I made an observation by testing in C++.
The observation is:
1) If both numbers have an odd number of set bits, then their XOR has an even number of set bits.
2) If both numbers have an even number of set bits, then their XOR has an even number of set bits.
3) If one number has an even number of set bits and the other has an odd number of set bits, then their XOR has an odd number of set bits.
I could not prove it, but I want to prove it. Please help me.
The code that I executed on my computer is:
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> vec[4];                          // bucket pairs by the parities of their popcounts
    for (int i = 1; i <= 100; i++) {
        for (int j = i + 1; j <= 100; j++) {
            int x = __builtin_popcount(i) % 2;   // parity of set bits in i
            int y = __builtin_popcount(j) % 2;   // parity of set bits in j
            int in = 0;
            in |= (x << 1);
            in |= (y << 0);
            int v = __builtin_popcount(i ^ j) % 2;   // parity of set bits in i XOR j
            vec[in].push_back(v);
        }
    }
    for (int i = 0; i < 4; i++) {
        for (size_t j = 0; j < vec[i].size(); j++) cout << vec[i][j] << " ";
        cout << endl;
    }
    return 0;
}
It gives me:
100 zeros in the first line
100 ones in the second line
100 ones in the third line
100 zeros in the fourth line
If anything about the code is unclear, please ask in the comments.
This behavior mirrors an easy-to-prove arithmetical fact:
When you add two odd numbers, you get an even number,
When you add two even numbers, you get an even number,
When you add an odd number to an even number, you get an odd number.
With this fact in hand, consider the truth table of XOR, and note that for each of the four rows in the table ({0, 0 => 0}, {0, 1 => 1}, {1, 0 => 1}, {1, 1 => 0}) the odd/even parity of the count of 1s remains invariant. In other words, if the input has an odd number of 1s, the output will have an odd number of 1s as well, and vice versa.
This explains the result you observe: XORing two numbers whose set-bit counts are N and M yields a number whose set-bit count has the same odd/even parity as N + M.
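For what it's worth, here is a quick brute-force check of that claim (my own snippet, using the same GCC/Clang builtins as the code in the question):

#include <cstdio>

int main() {
    // Verify: parity of popcount(x ^ y) equals parity of popcount(x) + popcount(y).
    for (unsigned x = 0; x < 256; ++x)
        for (unsigned y = 0; y < 256; ++y)
            if (__builtin_parity(x ^ y) !=
                (__builtin_popcount(x) + __builtin_popcount(y)) % 2)
                std::printf("counterexample: %u %u\n", x, y);   // never prints
    return 0;
}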
Thanks to all who tried to answer.
We can give a proof like this.
Suppose N is the number of set bits in the first number and M is the number of set bits in the second number.
Then the number of set bits in the XOR of these two numbers is N + M - 2Δ, where Δ is the total number of bit positions where both numbers have a set bit. Now this expression explains everything:
even + odd - even = odd
odd + odd - even = even
even + even - even = even
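As a small worked example (numbers chosen by me):

12 = 1100 has N = 2 set bits, and 10 = 1010 has M = 2 set bits
they share one set bit (bit 3), so Δ = 1
12 XOR 10 = 0110, which has N + M - 2Δ = 2 + 2 - 2 = 2 set bits: even + even - even = even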
XOR just clears out the common bits. It doesn't matter how many bits are set, just how many bits are common.
With all bits common, the result is zero. With no bits in common, the number of set bits in the result is the sum of the set bits in the inputs.
You can't draw conclusions based on the parity of the inputs unless you also account for the parity of the common bits.
A possible proof is based on the observation that XOR is a commutative (and associative) operator, so (xor of the bits of x) xor (xor of the bits of y) = xor of the bits of (x xor y).
Is there a way to set every nth bit in an integer without using a for loop?
For example, if n = 3, then the result should be ...100100100100. This is easy enough with a for loop, but I am curious if this can be done without one.
--
For my particular application, I need to do this with a custom 256-bit integer type that has all the bit operations that a built-in integer has. I'm currently using lazily initialized tables (built with for loops) and that is good enough for what I'm doing. This was mostly an exercise in bit-twiddling for me, but I couldn't figure out how to do it in a few steps/instructions, and couldn't easily find anything online about this.
… I need to do this with a custom 256-bit integer type.
Set r to 256 % n.
Set d to ((uint256_t) 1 << n) - 1. Then the binary representation of d is a string of n 1 bits.
Set t to UINT256_MAX << r >> r. This removes the top r bits from UINT256_MAX. UINT256_MAX is of course 2^256 - 1. This leaves t as a string of 256 - r 1 bits, and 256 - r is some multiple of n, say k*n.
Set t to t/d. As a string of k*n 1 bits divided by a string of n 1 bits, this produces a quotient that is 000…0001 repeated k times, where each 000…0001 is n-1 0 bits followed by one 1 bit.
Now t is the desired bit pattern except the highest desired bit may be missing if r is not zero. To add this bit, if needed, OR t with t << n.
Now t is the desired value.
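For concreteness, here is a sketch of those steps at 64-bit width (UINT64_MAX in place of UINT256_MAX; the 256-bit version would be identical apart from the type, and the function name is mine):

#include <cstdint>
#include <cstdio>

// Assumes 1 <= n <= 64.
uint64_t every_nth_bit(unsigned n) {
    unsigned r = 64 % n;                               // leftover bits at the top
    uint64_t d = (n == 64) ? UINT64_MAX
                           : (UINT64_C(1) << n) - 1;   // a string of n 1 bits
    uint64_t t = UINT64_MAX;
    if (r != 0) t = t << r >> r;                       // remove the top r bits
    t /= d;                                            // 000...0001 repeated (64 - r)/n times
    if (r != 0) t |= t << n;                           // add the highest desired bit back
    return t;
}

int main() {
    std::printf("%016llx\n", (unsigned long long)every_nth_bit(3));   // prints 9249249249249249
}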
Alternately:
Set t to 1.
OR t with t << n.
OR t with t << 2*n.
OR t with t << 4*n.
OR t with t << 8*n.
OR t with t << 16*n.
OR t with t << 32*n.
OR t with t << 64*n.
OR t with t << 128*n.
Those shifts must be defined (shifting by zero would suffice) or suppressed when the shift amount reaches or exceeds the integer width, 256 bits.
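Again as a sketch at 64-bit width (so only the shifts up to 32*n apply; at 256 bits the 64*n and 128*n lines would follow), with each shift guarded so its amount stays below the width:

#include <cstdint>

// Assumes n >= 1.
uint64_t every_nth_bit_doubling(unsigned n) {
    uint64_t t = 1;
    if (     n < 64) t |= t <<      n;
    if ( 2 * n < 64) t |= t <<  2 * n;
    if ( 4 * n < 64) t |= t <<  4 * n;
    if ( 8 * n < 64) t |= t <<  8 * n;
    if (16 * n < 64) t |= t << 16 * n;
    if (32 * n < 64) t |= t << 32 * n;
    return t;                            // bits set at positions 0, n, 2n, ...
}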
I have a question, which is to find a large number modulo 11. The number is stored in a string whose maximum length is 1000. I want to code it in C++. How should I go about it?
I tried doing it with long long int, but that cannot hold the largest possible values.
A number written in the decimal positional system as a_n a_{n-1} ... a_0 is the number
a_n*10^n + a_{n-1}*10^{n-1} + ... + a_0
Note first that this number and the number
a_0 - a_1 + a_2 - ... + (-1)^n * a_n
which is the sum of its digits with alternating signs, have the same remainder after division by 11. You can check that by subtracting the two numbers: the difference is a sum of terms a_k*(10^k - (-1)^k), and each 10^k - (-1)^k is a multiple of 11 because 10 ≡ -1 (mod 11).
Based on this, if you are given a string consisting of the decimal representation of a number, then you can compute the remainder modulo 11 like this:
#include <string>

int remainder11(const std::string& s) {
    int result{0};
    bool even{true};                       // true while the digit position is even (10^k ≡ 1 mod 11 for even k)
    for (int i = (int)s.length() - 1; i >= 0; --i) {   // walk from the least significant digit
        result += (even ? 1 : -1) * ((int)(s[i] - '0'));
        even = !even;
    }
    return ((result % 11) + 11) % 11;      // normalize a possibly negative sum into 0..10
}
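As a quick sanity check (worked by hand): for the string "1000" the alternating sum is 0 - 0 + 0 - 1 = -1, and ((-1 % 11) + 11) % 11 = 10, which matches 1000 = 11*90 + 10.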
Ok, here is the magic (math) trick.
First imagine you have a decimal number that consists only of 1s.
Say 111111, for example. It is obvious that 111111 % 11 is 0 (since you can always write it as the sum of a series of 11*10^n). This generalizes to all integers consisting purely of an even number of ones (e.g. 11, 1111, 11111111). For those with an odd number of ones, just subtract one and you get 10 times some number that consists of an even number of ones (e.g. 111 = 1 + 11*10), so their remainder modulo 11 is 1.
A decimal number can always be written in the form
a_n*10^n + ... + a_1*10 + a_0
where a_0 is the least significant digit and a_n is the most significant digit. Note that 10^n can be written as (10^n - 1) + 1, and 10^n - 1 is a number consisting of n nines. If n is even, then you get 9 times a number with an even number of ones, and its remainder modulo 11 is always 0. If n is odd, then you get 9 times a number with an odd number of ones, and its remainder modulo 11 is always 9. And don't forget we still have the +1 from (10^n - 1) + 1, so we need to add the digit itself to the result.
We are very close to the result now: we just have to add things up and take a final remainder modulo 11. The pseudo-code would be like:
Initialize sum to 0.
Initialize index to 0.
For every digit d from the least to most significant:
If the index is even, sum += d
Otherwise, sum += 10 * d
++index
sum %= 11
Return sum % 11
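Here is that pseudo-code turned into a small C++ function (the function name is mine):

#include <string>

int mod11(const std::string& s) {
    int sum = 0;
    int index = 0;                               // 0 for the least significant digit
    for (auto it = s.rbegin(); it != s.rend(); ++it) {
        int d = *it - '0';
        sum += (index % 2 == 0) ? d : 10 * d;    // 10^index mod 11 is 1 or 10
        ++index;
        sum %= 11;                               // keep the running sum small
    }
    return sum % 11;
}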
Suppose I have two numbers (a minimum and a maximum),
for example 0 and 9999999999.
The maximum could be huge. I also have another number somewhere between the minimum and the maximum, let's say 15. What I need to do is take all the multiples of 15 (15, 30, 45 and so on, until it reaches the maximum) and, for each of these numbers, count how many 1 bits there are in its binary representation. For example, 15 has 4 (because it has exactly four 1 bits).
The problem is that I need a loop inside a loop to get the result: the first loop walks over all multiples of that specific number (in our example, 15), and for each multiple I need another loop to count its 1 bits. My solution takes too much time. Here is how I do it:
unsigned long long int min = 0;
unsigned long long int max = 99999999;
unsigned long long int other_num = 15;
unsigned long long int count = 0;
unsigned long long int other_num_helper = other_num;
while (true) {
    if (other_num_helper > max) break;
    for (int i = 0; i < 64; i++) {                        // test every bit of the 64-bit value
        unsigned long long buff = other_num_helper & (1ULL << i);
        if (buff != 0) count++;                           // the bit at position i is set
    }
    other_num_helper += other_num;
}
cout << count << endl;
Look at the bit patterns for the numbers between 0 and 2^3
000
001
010
011
100
101
110
111
What do you see?
Every bit position is 1 in exactly 4 of the 8 numbers.
If you generalize, you find that the numbers between 0 and 2^n have n*2^(n-1) bits set in total.
I am sure you can extend this reasoning for arbitrary bounds.
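As a hedged sketch of that per-bit-position idea (my own helper; it counts the total set bits over the plain range [0, N), not over multiples of a step, which needs extra work as the answer hints):

#include <cstdint>

// Bit b repeats the pattern "2^b zeros, then 2^b ones" with period 2^(b+1):
// each full period contributes 2^b set bits, a partial one contributes max(0, rem - 2^b).
// Handles N up to 2^63.
uint64_t total_set_bits_below(uint64_t N) {
    uint64_t total = 0;
    for (unsigned b = 0; b < 63 && (UINT64_C(1) << b) < N; ++b) {
        uint64_t half   = UINT64_C(1) << b;       // 2^b
        uint64_t period = half << 1;              // 2^(b+1)
        uint64_t rem    = N % period;
        total += (N / period) * half + (rem > half ? rem - half : 0);
    }
    return total;                                 // e.g. N = 8 gives 3 * 2^2 = 12
}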
Here's how I do it for a 32-bit number.
#include <cstdint>

std::uint16_t bitcount(std::uint32_t n)
{
    std::uint32_t reg;                    // must be 32 bits wide to hold the intermediate value
    reg = n - ((n >> 1) & 033333333333)
            - ((n >> 2) & 011111111111);
    return ((reg + (reg >> 3)) & 030707070707) % 63;
}
And the supporting comments from the program:
Consider a 3 bit number as being 4a + 2b + c. If we shift it right 1 bit, we have 2a + b. Subtracting this from the original gives 2a + b + c. If we right-shift the original 3-bit number by two bits, we get a, and so with another subtraction we have a + b + c, which is the number of bits in the original number.
The first assignment statement in the routine computes 'reg'. Each digit in the octal representation is simply the number of 1’s in the corresponding three bit positions in 'n'.
The last return statement sums these octal digits to produce the final answer. The key idea is to add adjacent pairs of octal digits together and then compute the remainder modulo 63.
This is accomplished by right-shifting 'reg' by three bits, adding it to 'reg' itself and ANDing with a suitable mask. This yields a number in which groups of six adjacent bits (starting from the LSB) contain the number of 1’s among those six positions in n. This number modulo 63 yields the final answer. For 64-bit numbers, we would have to add triples of octal digits and use modulus 1023.
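A tiny worked example (my own number) may help: take n = 45 = 101101 in binary = 055 in octal.

reg = 055 - (026 & 033333333333) - (013 & 011111111111) = 055 - 022 - 011 = 022
      (each octal digit of reg counts the 1s in that 3-bit group of n: 101 -> 2, 101 -> 2)
reg + (reg >> 3) = 022 + 002 = 024; masking with 030707070707 keeps 004, and 4 % 63 = 4,
which is indeed the number of set bits in 45.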
I have to write a function that counts the number of bits required to represent an int in 2's complement form. The requirements:
1. can only use: ! ~ & ^ | + << >>
2. no loops and no conditional statements
3. at most 90 operators may be used
Currently, I am thinking of something like this:
int howManyBits(int x) {
    int mostdigit1 = !!(0x80000000 & x);
    int mostdigit2 = mostdigit1 | !!(0x40000000 & x);
    int mostdigit3 = mostdigit2 | !!(0x20000000 & x);
    // and so on until it reaches the least significant digit
    return mostdigit1 + mostdigit2 + ... + mostdigit32 + 1;
}
However, this algorithm doesn't work, and it also exceeds the 90-operator limit. Any suggestions on how I can fix and improve this algorithm?
With 2's complement integers, the problem is the negative numbers. A negative number is indicated by the most significant bit: if it is set, the number is negative.
The negative of a 2's complement integer n, i.e. -n, is (1's complement of n) + 1.
Thus, I would first test for the sign bit. If it is set, the number of bits required is simply the number of bits available to represent an integer, e.g. 32 bits. If not, you can count the number of bits required by repeatedly shifting n right by one bit until the result is zero. If n were, e.g., +1 (000…001), you would have to shift it right once to make the result zero, so by this count you need 1 bit to represent it.
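A sketch of that procedure (mine; it deliberately uses a loop and a conditional, so it ignores the question's operator restrictions and only illustrates the counting idea described above):

int bitsRequired(int x) {
    if (x < 0)
        return 32;                     // sign bit set: use the full integer width
    int bits = 0;
    unsigned u = (unsigned)x;
    while (u != 0) {                   // shift right until nothing remains
        u >>= 1;
        ++bits;
    }
    return bits;                       // e.g. x = 1 needs one shift, so 1 bit
}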
We have two numbers whose bit patterns match in their low-order bits.
For example, 01001110110 and 10110 are two such numbers; they match in their low-order bits.
Is there a simple way to find this out?
I have a solution that shifts the bits and then compares. Is there a better way?
You can XOR them together and check if the last N lower order bits are all zero (where N is the number of bits in the smaller of the two numbers).
For example, using the sample numbers you gave, 01001110110 and 10110:
01001110110 XOR 10110 = 01001100000
Notice that the last 5 bits are all zero in the result.
In C/C++/Java you can use the ^ operator for this purpose and then extract the last N bits with a mask like so:
int a = 0x276; // 01001110110
int b = 0x16; // 10110
if (((a ^ b) & 0x1F) == 0) { // Mask 0x1F assumes least significant 5 bits for match
// match!
}
Of course, this assumes you know the number of significant bits in each number (5 in this example). If instead the number of matching bits is unspecified, you will need to count the number of consecutive trailing 0s in the XOR result to figure out how many bits match. There may be some other trickery you could perform in this case.
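For that last case, a possible sketch (my own helper, using a GCC/Clang builtin like the code in the first question): the number of matching low-order bits is the number of consecutive trailing zeros of a ^ b.

#include <cstdint>

int matchingLowBits(std::uint32_t a, std::uint32_t b) {
    std::uint32_t x = a ^ b;
    if (x == 0) return 32;             // identical values: all 32 bits match
    return __builtin_ctz(x);           // count trailing zeros of the difference mask
}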
Mask the numbers with & (note the parentheses, since == binds tighter than &):
if ((number1 & 0x1f) == (number2 & 0x1f))