I'm trying to find the number of overlapping 1 bits between 2 given numbers.
For example, given 5 and 6:
5 // 101
6 // 110
There is 1 overlapping 1 bit (the first bit)
I have the following code:
#include <iostream>
using namespace std;

int main() {
    int a, b;
    int count = 0;
    cin >> a >> b;
    while (a & b != 0) {
        count++;
        a >>= 1;
        b >>= 1;
    }
    cout << count << endl;
    return 0;
}
But when I entered 335 and 123 it returned 7, which I don't think is correct.
Can someone see a problem with my code?
The problem is that you're just counting the number of iterations in which any of the remaining bits match, as you lop off the least significant bit each time (which will at most be the number of bits in the smaller number). Each iteration compares all the bits of a BITWISE AND b, not just the pair you shifted down to. You can fix this by masking with 1, i.e. a & b & 1, so that as you shift the bits rightward each time, only the least significant bit is checked:
while (a && b) {
    count += a & b & 1;
    a >>= 1;
    b >>= 1;
}
Your existing algorithm counts an iteration as long as any of the remaining bits match; since 335 shares 123's most significant bit, the condition stays true until the smaller number has been shifted away completely. 123 is the smaller number with 7 bits, so it's true 7 times. As an extreme case, 128 (10000000) and 255 (11111111) would return 8 with your method, even though the answer is actually 1.
You want to AND the two numbers together to start with and then count the number of 1s in that result
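A minimal sketch of that idea (my own illustration, not the answerer's code), using std::bitset to do the counting:

#include <bitset>
#include <iostream>

int main() {
    unsigned a = 335, b = 123;
    // AND first, then count the 1 bits in the single combined result.
    std::cout << std::bitset<32>(a & b).count() << std::endl;   // prints 4
    return 0;
}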
You want to count the number of bits that are set. Instead, your code is sort of computing the binary logarithm.
Only increment the count if the lowest order bit is set.
for (int c = a & b; c != 0; c >>= 1) {
    if (c & 1)
        ++count;
}
Slightly shorter form:
int countCommonBits(int a, int b) {
    int n = 0;
    for (unsigned v = (unsigned)(a & b); v; v >>= 1) {
        n += 1 & v;
    }
    return n;
}
If you know both numbers are positive, you can omit the use of the unsigned type. Note that when using int, sign extension on the right shift of a negative number keeps the sign bit set forever, so instead of a mere overcount you'd get an infinite loop.
Much later...
Reviewing an old answer, I came across this. The current "accepted" answer is 1) inefficient, and 2) an infinite loop if the numbers are negative. FWIW.
Related
I was trying to find the number of different bits in two numbers. I found a solution here but couldn't understand how it works. It right-shifts by i and then ANDs with 1; what is actually happening behind that? And why does it loop up to 32?
void solve(int A, int B)
{
    int count = 0;

    // since the numbers are less than 2^31,
    // run the loop from '0' to '31' only
    for (int i = 0; i < 32; i++) {
        // right shift both the numbers by 'i' and
        // check if the bit at the 0th position is different
        if (((A >> i) & 1) != ((B >> i) & 1)) {
            count++;
        }
    }

    cout << "Number of different bits : " << count << endl;
}
The loop runs from 0 up to and including 31 (not through 32) because these are all of the possible bits that comprise a 32-bit integer and we need to check them all.
Inside the loop, the code
if (((A >> i) & 1) != ((B >> i) & 1)) {
    count++;
}
works by shifting each of the two integers rightward by i (discarding bits when i > 0), extracting the rightmost bit of each shifted value (& 1) and checking whether those two bits differ; count is incremented whenever they do.
Let's walk through an example: solve(243, 2182). In binary:
 243      =             000011110011
2182      =             100010000110
diff bits =             ^    ^^^ ^ ^
int bits  = 00000000000000000000000000000000
i         = 31                             0
                          <-- loop direction
The indices of i that yield differences are 0, 2, 4, 5, 6 and 11 (we check from the right to the left--in the first iteration, i = 0 and nothing gets shifted, so & 1 gives us the rightmost bit, etc). The padding to the left of each number is all 0s in the above example.
Also, note that there are better ways to do this without a loop: take the XOR of the two numbers and run a popcount on the result (count the bits that are set):
__builtin_popcount(243 ^ 2182); // => 6
Or, more portably (needs <bitset> and <climits>):
std::bitset<CHAR_BIT * sizeof(int)>(243 ^ 2182).count()
Another note: it's best to avoid using namespace std;, to return a value instead of printing as a side effect, and to give the method a clearer name than solve, for example bit_diff (I realize this is from GeeksforGeeks).
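Putting those suggestions together, a possible cleanup might look like this (just a sketch; bit_diff is the illustrative name suggested above, not code from the linked site):

#include <bitset>
#include <climits>

// Return the number of bit positions in which a and b differ.
int bit_diff(int a, int b) {
    return static_cast<int>(std::bitset<CHAR_BIT * sizeof(int)>(a ^ b).count());
}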
Suppose I have two numbers, a minimum and a maximum,
for example 0 and 9999999999.
The maximum could be huge. Now I also have some other number between that minimum and maximum, let's say 15. What I need to do is get all the multiples of 15 (15, 30, 45 and so on, until it reaches the maximum number), and for each of these numbers count how many 1 bits there are in its binary representation. For example, 15 has 4 (because it has only four 1 bits).
The problem is that I need a loop inside a loop to get the result: the first loop walks over all multiples of that specific number (in our example 15), and then for each multiple I need another loop to count its 1 bits. My solution takes too much time. Here is how I do it.
unsigned long long int min = 0;
unsigned long long int max = 99999999;
unsigned long long int other_num = 15;
unsigned long long int count = 0;
unsigned long long int other_num_helper = other_num;

while (true) {
    if (other_num_helper > max) break;
    for (int i = 0; i < sizeof(int) * 4; i++) {
        int buff = other_num_helper & 1 << i;
        if (buff != 0) count++;   // if the bit isn't 0, it's a set bit
    }
    other_num_helper += other_num;
}
cout << count << endl;
Look at the bit patterns for the numbers between 0 and 2^3
000
001
010
011
100
101
110
111
What do you see?
Every bit position is 1 in exactly 4 of the 8 numbers.
If you generalize, you find that the numbers from 0 up to 2^n - 1 have n*2^(n-1) bits set in total.
I am sure you can extend this reasoning for arbitrary bounds.
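As an illustration of that generalization (my own sketch, not part of the answer, and it only covers counting bits over a whole range, not over multiples of a given number): count_set_bits_upto is a hypothetical helper that totals the set bits of 0..N by handling each bit position separately instead of visiting every number.

#include <iostream>

// Total number of 1 bits in the binary representations of 0..N inclusive
// (handles N < 2^63, which is plenty here).
unsigned long long count_set_bits_upto(unsigned long long N) {
    unsigned long long total = 0;
    for (int i = 0; i < 63; ++i) {
        unsigned long long half = 1ULL << i;      // ones contributed per full period of bit i
        if (half > N) break;                      // higher bits are never set in 0..N
        unsigned long long block = half << 1;     // bit i repeats with period 2^(i+1)
        unsigned long long full = (N + 1) / block * half;
        unsigned long long rem = (N + 1) % block;
        total += full + (rem > half ? rem - half : 0);
    }
    return total;
}

int main() {
    std::cout << count_set_bits_upto(7) << "\n";  // 12, i.e. n*2^(n-1) with n = 3
}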
Here's how I do it for a 32 bit number.
#include <cstdint>

std::uint16_t bitcount(
    std::uint32_t n
)
{
    // the intermediate value needs all 32 bits; a 16-bit temporary would truncate it
    std::uint32_t reg;
    reg = n - ((n >> 1) & 033333333333)
            - ((n >> 2) & 011111111111);
    return ((reg + (reg >> 3)) & 030707070707) % 63;
}
And the supporting comments from the program:
Consider a 3 bit number as being 4a + 2b + c. If we shift it right 1 bit, we have 2a + b. Subtracting this from the original gives 2a + b + c. If we right-shift the original 3-bit number by two bits, we get a, and so with another subtraction we have a + b + c, which is the number of bits in the original number.
The first assignment statement in the routine computes 'reg'. Each digit in the octal representation is simply the number of 1’s in the corresponding three bit positions in 'n'.
The last return statement sums these octal digits to produce the final answer. The key idea is to add adjacent pairs of octal digits together and then compute the remainder modulo 63.
This is accomplished by right-shifting 'reg' by three bits, adding it to 'reg' itself and ANDing with a suitable mask. This yields a number in which groups of six adjacent bits (starting from the LSB) contain the number of 1's among those six positions in n. This number modulo 63 yields the final answer. For 64-bit numbers, we would have to add triples of octal digits and use modulo 1023.
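A quick sanity check of that routine (my own usage example, assuming the bitcount definition above is in scope):

#include <iostream>

int main() {
    std::cout << bitcount(15u) << "\n";          // 4
    std::cout << bitcount(0xFFFFFFFFu) << "\n";  // 32
}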
I am confused by the below question.
Flipping a bit means changing the bit from 0 to 1 and vice versa. An operation OP(i) results in flipping of binary digits as follows.
Performing OP(i) flips every ith bit, counting from the start (i > 0).
An n-bit number is given as input and OP(i) and OP(j) are applied to it one after the other. The objective is to specify how many bits will remain the same after applying these two operations.
When I applied the logic floor(n/i) + floor(n/j) - 2 it didn't give me the expected solution.
example:
binary number:10110101101
i:3
j:4
expected output:6
But I got 3. Please tell me how to approach this problem.
I have also checked this solution: Flipping bits in binary number. But they have also mentioned the same logic.
Let the register comprise N bits, bits 1 to N.
(1) OP(i) implies every ith bit is flipped. That is bits at i, 2*i, 3*i ...
are flipped. Total bits flipped = floor(N/i)
(2) OP(j) implies every jth bit is flipped. That is bits at j, 2*j, 3*j ...
are flipped. Total bits flipped = floor(N/j)
(3) Let L = LCM(i,j). Therefore, bits at L, 2*L, 3*L, ... will be
flipped twice, implies bits unchanged are floor(N/L)
So, after OP(i) and OP(j), the total bits changed will be
floor(N/i) + floor(N/j) - 2*floor(N/L)
Number of bits unchanged = N - floor(N/i) - floor(N/j) + 2*floor(N/L)
For N=11, i=3, j=4, L = LCM(3,4) = 12,
Number of unchanged bits = 11 - 11/3 - 11/4 + 2*(11/12) = 11 - 3 - 2 + 0 = 6
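A small sketch of that formula in C++ (my own illustration; unchanged_bits is a hypothetical name, and std::gcd needs C++17):

#include <iostream>
#include <numeric>   // std::gcd

// Bits of an N-bit register left unchanged after OP(i) followed by OP(j).
long long unchanged_bits(long long N, long long i, long long j) {
    long long L = i / std::gcd(i, j) * j;           // LCM(i, j)
    return N - N / i - N / j + 2 * (N / L);
}

int main() {
    std::cout << unchanged_bits(11, 3, 4) << "\n";  // 6, as in the example above
}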
public static int nochange_bits(String input1, int i1, int i2) {
    try {
        int len = input1.length();
        if (i1 < 1 || i2 < 1) {
            return -1;
        } else if (i1 > len && i2 > len) {
            return len;
        } else if (i1 == i2) {
            return len;
        } else {
            return (int) (len - Math.floor(len / i1) - Math.floor(len / i2)
                    + 2 * Math.floor(len / (Math.abs(i1 * i2) / GCF(i1, i2))));
        }
    } catch (Exception e) {
        e.printStackTrace();
        return -1;
    }
}

public static int GCF(int a, int b) {
    if (b == 0) return a;
    else return GCF(b, a % b);
}
a) First, we check all the conditions and invalid inputs.
b) Then we calculate the LCM to get the output.
Explanation: it's similar to the flipping-switches problem:
on the first pass we flip every i1-th bit,
on the second pass we flip every i2-th bit;
in the process, the bits at positions that are multiples of LCM(i1, i2) are flipped twice and end up back where they started,
so we add those positions back to the total of unchanged bits.
I have a problem wherein I need to find the number of matching bits (from left to right) between two integers
Inputs: 2 Variable A and B to store my decimal numbers
Output: Numbers of bits in A and B that match (starting from the left)
Some Examples:
A = 3 and B = 2, A and B bits match up to 7 bits from the left bit
A = 3 and B = 40, A and B bits match up to 7 bits from the left bit.
How can I do that using bitwise operation (AND,OR,XOR)?
Thanks
XOR the two together (to produce a number which has all zeroes from the left until the first non-matching bit), then shift the result right until it equals 0, counting the shifts. Subtract that count from the bit length of the integers you are dealing with (e.g., you seem to be implying 8 bits).
pseudocode:
int matchingBits(A, B) {
    result = A XOR B
    int shifts = 0
    while (result != 0) {
        result = result >> 1   (shift the result right by 1)
        shifts++
    }
    return integer_bit_length - shifts
}
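A possible C++ rendering of that pseudocode (my own sketch; the default width of 8 bits is an assumption taken from the question's examples):

#include <iostream>

// Number of leading bits (from the left) on which a and b agree,
// treating both as bit_length-bit values.
int matchingBits(unsigned a, unsigned b, int bit_length = 8) {
    unsigned result = a ^ b;   // all zero from the left down to the first mismatch
    int shifts = 0;
    while (result != 0) {
        result >>= 1;
        ++shifts;
    }
    return bit_length - shifts;
}

int main() {
    std::cout << matchingBits(3, 2) << "\n";   // 7, as in the question's first example
}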
Do (A XNOR B) to find the matching digits:
10101010
01001011
--XNOR--
00011110
Then use a Hamming-weight (popcount) algorithm to count the ones: Count number of 1's in binary representation
(btw: XNOR is NOT XOR, i.e. ~(a ^ b) bitwise)
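A minimal sketch of that approach (my own illustration; the assumption is that only the low 8 bits matter, so the XNOR result is masked to 0xFF):

#include <bitset>
#include <iostream>

int matching_digits(unsigned a, unsigned b) {
    unsigned xnor = ~(a ^ b) & 0xFFu;                        // bitwise XNOR, limited to 8 bits
    return static_cast<int>(std::bitset<8>(xnor).count());   // popcount of the matching positions
}

int main() {
    std::cout << matching_digits(0b10101010, 0b01001011) << "\n";   // 4
}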
Try this, maybe it will work:
int matchingBitsCount(unsigned char val1, unsigned char val2)
{
    int i, cnt = 0;
    for (i = 7; i >= 0; i--)
    {
        // shift a 1 into position i so we test from the left (MSB) first,
        // then XOR the two extracted bits to see whether they differ
        if ((((1 << i) & val1) ^ ((1 << i) & val2)) == 0)
        {
            cnt++;
        }
        else
        {
            break;
        }
    }
    return cnt;
}
I take val1 and val2 as char; if you want to check an int, just start with i = 31.
Sorry for the stupid question, but how would I go about figuring out, mathematically or using C++, how many bytes it would take to store an integer?
If you mean from an information theory point of view, then the easy answer is:
log(number) / log(2)
(It doesn't matter if those are natural, binary, or common logarithms, because of the division by log(2), which calculates the logarithm with base 2.)
This gives, up to rounding, the number of bits necessary to store your number; take the floor and add 1 to get a whole bit count.
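A quick sketch of that formula in C++ (my own illustration; assumes the value is at least 1, and floating-point rounding can bite right at exact powers of two):

#include <cmath>
#include <iostream>

int main() {
    unsigned long long n = 1000;
    int bits  = static_cast<int>(std::log2(static_cast<double>(n))) + 1;   // floor(log2 n) + 1
    int bytes = (bits + 7) / 8;                                            // round up to whole bytes
    std::cout << bits << " bits, " << bytes << " bytes\n";                 // 10 bits, 2 bytes
}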
If you're interested in how much memory is required for the efficient or usual encoding of your number in a specific language or environment, you'll need to do some research. :)
The typical C and C++ sizes for the integer types are:
char    1 byte
short   2 bytes
int     4 bytes
long    4 or 8 bytes (platform-dependent)
If you're interested in arbitrary-sized integers, special libraries are available, and every library will have its own internal storage mechanism, but they'll typically store numbers via 4- or 8- byte chunks up to the size of the number.
You could find the first power of 2 that's larger than your number, divide that power's exponent by 8, then round the result up to the nearest integer. So for 1000, the power of 2 is 1024, or 2^10; divide 10 by 8 to get 1.25, and round up to 2. You need two bytes to hold 1000!
If you mean "how large is an int" then sizeof(int) is the answer.
If you mean "how small a type can I use to store values of this magnitude" then that's a bit more complex. If you already have the value in integer form, then presumably it fits in 4, 3, 2, or 1 bytes. For unsigned values, if it's 16777216 or over you need 4 bytes, 65536-16777216 requires 3 bytes, 256-65535 needs 2, and 0-255 fits in 1 byte. The formula for this comes from the fact that each byte can hold 8 bits, and each bit holds 2 digits, so 1 byte holds 2^8 values, ie. 256 (but starting at 0, so 0-255). 2 bytes therefore holds 2^16 values, ie. 65536, and so on.
You can generalise that beyond the normal 4 bytes used for a typical int if you like. If you need to accommodate signed integers as well as unsigned, bear in mind that 1 bit is effectively used to store whether it is positive or negative, so the magnitude is 1 power of 2 less.
You can calculate the number of bits you need iteratively from an integer by dividing it by two and discarding the remainder. Each division you can make and still have a non-zero value means you have one more bit of data in use - and every 8 bits you're using means 1 byte.
A quick way of calculating this is to use the shift right function and compare the result against zero.
int value = 23534; // or whatever
int bits = 0;

while (value)
{
    value >>= 1;   // note >>= : "value >> 1" alone would never change value and loop forever
    ++bits;
}

std::cout << "Bits used = " << bits << std::endl;
std::cout << "Bytes used = " << (bits + 7) / 8 << std::endl;   // round up to whole bytes
This is basically the same question as "how many binary digits would it take to store a number x?" All you need is the logarithm.
An n-bit integer can store numbers up to 2^n - 1. So, given a number x, ceil(log2(x + 1)) gets you the number of digits you need (the simpler ceil(log2 x) also works, except when x is an exact power of 2).
It's exactly the same thing as figuring out how many decimal digits you need to write a number by hand. For example, log10(123456) = 5.09151220..., so ceil(log10(123456)) = 6, six digits.
Since nobody has put up the simplest code that works yet, I might as well do it:
unsigned int get_number_of_bytes_needed(unsigned int N) {
    unsigned int bytes = 0;
    while (N) {
        N >>= 8;
        ++bytes;
    }
    return bytes;
}
assuming sizeof(long int) = 4.
int nbytes(long int x)
{
    unsigned long int n = (unsigned long int) x;
    if (n <= 0xFFFF)
    {
        if (n <= 0xFF) return 1;
        else return 2;
    }
    else
    {
        if (n <= 0xFFFFFF) return 3;
        else return 4;
    }
}
The shortest code way to do this is as follows:
int bytes = (int)Math.Log(num, 256) + 1;
The code is small enough to be inlined, which helps offset the "slow" FP code. Also, there are no branches, which can be expensive.
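A rough C++ equivalent of that one-liner (my own sketch; the original looks like C#, num is assumed to be greater than 0, and the same floating-point caveats apply near exact powers of 256):

#include <cmath>
#include <iostream>

int main() {
    unsigned long long num = 1000;
    int bytes = static_cast<int>(std::log(static_cast<double>(num)) / std::log(256.0)) + 1;
    std::cout << bytes << "\n";   // 2
}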
Try this code:
// works for num >= 0
int numberOfBytesForNumber(int num) {
    if (num < 0)
        return 0;
    else if (num == 0)
        return 1;
    else {               // num > 0
        int n = 0;
        while (num != 0) {
            num >>= 8;
            n++;
        }
        return n;
    }
}
/**
 * Assumes i is non-negative.
 * Note that this returns 0 for 0, when perhaps it should be special-cased?
 */
int numberOfBytesForNumber(int i) {
    int bytes = 0;
    long long div = 1;      // wide enough that div doesn't overflow for large i
    while (i / div) {
        bytes++;            // one more base-256 digit in use
        div *= 256;
    }
    return bytes;
}
This code runs at 447 million tests / sec on my laptop where i = 1 to 1E9. i is a signed int:
n = (i > 0xffffff || i < 0) ? 4 : (i <= 0xffff) ? (i <= 0xff) ? 1 : 2 : 3;
Python example: no logs or exponents, just bit shifting.
Note: 0 counts as 0 bits and only non-negative ints are valid.
from itertools import count


def bits(num):
    """Return the number of bits required to hold an int value."""
    if not isinstance(num, int):
        raise TypeError("Argument must be of type int.")
    if num < 0:
        raise ValueError("Argument cannot be less than 0.")
    for i in count(start=0):
        if num == 0:
            return i
        num = num >> 1