Generating bit combinations without repetitions (not permutations) - C++

Here is my previous question about finding the next bit permutation. It occurred to me that I have to modify my code to achieve something similar to the next bit permutation, but quite different.
I am encoding information about the neighbors of each vertex of a graph in the bit representation of an int. For example, if n = 4 (n = number of graph vertices) and the graph is complete, my array of vertices looks like this:
vertices[0]=14 // 1110 - it means vertex no. 1 is connected with vertices no. 2, 3, and 4
vertices[1]=13 // 1101 - it means vertex no. 2 is connected with vertices no. 1, 3, and 4
vertices[2]=11 // 1011 - it means vertex no. 3 is connected with vertices no. 1, 2, and 4
vertices[3]=7 // 0111 - it means vertex no. 4 is connected with vertices no. 1, 2, and 3
The first (main) for loop runs from 0 to 2^n (because 2^n is the number of subsets of an n-element set).
So if n = 4, then there are 16 subsets:
{}, {1}, ..., {4}, {1,2}, {1,3}, ..., {3,4}, {1,2,3}, ..., {2,3,4}, {1,2,3,4}
Each subset is represented by the index value i in the for loop:
for(int i=0; i < 2^n; ++i) // i - represents value of subset
Let's say n = 4 and currently i = 5 // 0101. I'd like to check the subsets of this subset, so I would like to check:
0000
0001
0100
0101
Right now I'm generating all bit permutations with 1 bit set, then all permutations with 2 bits set, and so on (until I reach BitCount(5) = 2), and I only keep the permutations I want (via an if statement). That's too many unneeded computations.
So my question is: how do I generate all possible COMBINATIONS WITHOUT REPETITIONS C(n, k), where n is the number of graph vertices and k is the number of bits set in i (as stated above)?
My current code (which generates all bit permutations and then filters out the wrong ones):
for (int i = 0; i < PowerNumber; i++)
{
    int independentSetsSum = 0;
    int bc = BitCount(i);
    if (bc == 1) independentSetsSum = 1;
    else if (bc > 1)
    {
        for (int j = 1; j <= bc; ++j)
        {
            unsigned int v = (1 << j) - 1; // current permutation of bits
            int bc2 = BitCount(j);
            while (v <= i)
            {
                if ((i & v) == v)
                    for (int neigh = 1; neigh <= bc2; neigh++)
                        if ((v & vertices[GetBitPositionByNr(v, neigh) - 1]) == 0)
                            independentSetsSum++;
                unsigned int t = (v | (v - 1)) + 1;
                v = t | ((((t & -t) / (v & -v)) >> 1) - 1);
            }
        }
    }
}
All of this is because I have to count the number of independent sets within every subset of the n vertices.
EDIT
I'd like to do this without creating any arrays; in general, I'd like to avoid allocating any memory (no vectors either).
A little bit of an explanation:
n = 5 // 00101 - n is the bit count of the number i stated above; k = 3; the numbers in the set represent the bit positions that are set to 1:
{
1, // 0000001
2, // 0000010
4, // 0001000
6, // 0100000
7 // 1000000
}
So a correct combination is {1,2,6} // 0100011, but {1,3,6} // 0100101 is a wrong combination, because 3 is not in the set. In my code there are plenty of wrong combinations which I have to filter out.

I'm not sure I understand exactly what you want, but based on your example (where i == 5) it seems you want all the subsets of a given subset.
If that's the case, you can generate all these subsets directly:
int subset = 5;
int x = subset;
while (x) {
    // at this point x is a valid subset
    doStuff(x);
    x = (x - 1) & subset;
}
doStuff(0); // 0 is always valid
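For example, with subset = 5 (0101) the loop visits x = 5 (0101), then 4 (0100), then 1 (0001), and the final doStuff(0) covers the empty subset, which is exactly the list from the question.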
Hope this helps.

My first guess for generating all the possible combinations would be the following rules (sorry if it's a bit hard to read):
start from the combination where all the 1s are on the left, all the 0s are on the right
move the leftmost 1 with a 0 on its immediate right to the right
if that bit had a 1 on its immediate left then
move all the 1s on its left all the way to the left
you're finished when you reach the combination with all the 1s on the right, and all the 0s on the left
Applying these rules for n=5 and k=3 would give this:
11100
11010
10110
01110
11001
10101
01101
10011
01011
00111
But that doesn't strike me as really efficient (and/or elegant).
A better way would be to find a way to iterate through these numbers by flipping only a bounded number of bits (I mean, you'd only ever need to flip O(1) bits to reach the next combination, rather than O(n)); that might allow a more efficient iteration (a bit like the https://en.wikipedia.org/wiki/Gray_code ).
I'll edit or post another answer if I find something better.
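For reference, the "next bit permutation" step already used in the question's code (Gosper's hack) enumerates exactly these C(n, k) patterns in increasing order with a constant number of operations per step, although it can flip more than O(1) bits at a time. A minimal sketch, with names of my own choosing, assuming 0 < k <= n < 32 and a compiler that provides __builtin_ctz (GCC/Clang):
#include <cstdio>

// Print every n-bit value with exactly k bits set, in increasing order.
void forEachCombination(unsigned n, unsigned k)
{
    unsigned v = (1u << k) - 1;               // smallest value with k bits set
    while (v < (1u << n)) {
        std::printf("%u\n", v);               // v is the current combination
        unsigned t = v | (v - 1);             // set all bits below the lowest set bit
        v = (t + 1) | (((~t & (t + 1)) - 1) >> (__builtin_ctz(v) + 1));
    }
}

int main() { forEachCombination(5, 3); }      // prints the ten 3-out-of-5 patterns (in a different order than the list above)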

Related

Maximize XOR Equation

Problem statement:
Given an array of n elements and an integer k, find an integer x in the range [0, k] such that Xor-sum(x) is maximized. Print the maximum value of the equation.
Xor-sum(x) = (x XOR A[1]) + (x XOR A[2]) + (x XOR A[3]) + ... + (x XOR A[N])
Input Format
The first line contains an integer N denoting the number of elements in A. The next line contains an integer, k, denoting the maximum value of x. Each line i of the N subsequent lines (where 1 <= i <= N) contains an integer describing A[i].
Constraints
1<=n<=10^5
0<=k<=10^9
0<=A[i]<=10^9
Sample Input
3
7
1
6
3
Sample Output
14
Explanation
Xor_sum(4)=(4^1)+(4^6)+(4^3)=14.
This problem was asked in an Infosys recruitment test. I was going through previous years' papers and came across this problem.
I was only able to come up with a brute-force solution, which simply evaluates the equation for every x in the range [0, k] and prints the maximum. But that solution won't work within the given constraints.
My solution
#include <bits/stdc++.h>
using namespace std;
int main()
{
    int n, k, ans = 0;
    cin >> n >> k;
    vector<int> a(n);
    for (int i = 0; i < n; i++) cin >> a[i];
    for (int i = 0; i <= k; i++) {
        int temp = 0;
        for (int j = 0; j < n; j++) {
            temp += (i ^ a[j]);
        }
        ans = max(temp, ans);
    }
    cout << ans;
    return 0;
}
I found a solution on a website, but I was unable to understand what the code does; besides, that solution gives incorrect answers for some test cases.
Scroll down to question 3
The trick here is that XOR works on bits in parallel, independently. You can optimize each bit of x. Brute-forcing this takes 2*32 tries, given the constraints.
As said in other comments, each bit of x gives an independent contribution to the sum, so the first step is to calculate the added value of each possible bit.
To do this for the i-th bit of x, count the number of 0s and 1s at that position across the numbers in the array; if the difference N0 - N1 is positive, then the added value is also positive and equal to (N0 - N1) * 2^i. Let's call such bits "useful".
The number x will be a combination of useful bits only.
Since k is not necessarily of the form 2^n - 1, we need a strategy to find the best combination (if you don't want to use brute force on the k possible values).
Consider then the binary representation of k and loop over its bits starting from the MSB, initializing two variables: CAV (current added value) = 0 and BAV (best added value) = 0.
If the current bit is 0, move on to the next bit.
If the current bit is 1:
a) calculate the sum of the added values of all useful bits with a lower index, plus the CAV; if the result is greater than the BAV, replace the BAV
b) if the current bit is not useful, quit the loop
c) add the current bit's added value to the CAV
When the loop is over, if the CAV is greater than the BAV, replace the BAV.
EDIT: A sample implementation (in Java, sorry :) )
import java.util.Scanner;

public class XorSum {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        int k = sc.nextInt();
        int[] a = new int[n];
        for (int i = 0; i < n; i++) {
            a[i] = sc.nextInt();
        }
        // Determine the number of bits needed to represent k (position of the most significant 1, plus 1)
        int msb = 0;
        for (int kcopy = k; kcopy != 0; kcopy = kcopy >>> 1) {
            msb++;
        }
        // Compute the added value of each possible bit in x
        int[] av = new int[msb];
        int bmask = 1;
        for (int bit = 0; bit < msb; bit++) {
            int count0 = 0;
            for (int i = 0; i < n; i++) {
                if ((a[i] & bmask) == 0) {
                    count0++;
                }
            }
            av[bit] = (count0 * 2 - n) * bmask;
            bmask = bmask << 1;
        }
        // Accumulated added value: the value of all positive av bits up to (and including) the index
        int[] aav = new int[msb];
        for (int bit = 0; bit < msb; bit++) {
            if (av[bit] > 0) {
                aav[bit] = av[bit];
            }
            if (bit > 0) {
                aav[bit] += aav[bit - 1];
            }
        }
        // Explore the space of possible combinations moving on the k boundary
        int cval = 0;
        int bval = 0;
        bmask = bmask >>> 1;
        // Start from the msb
        for (int bit = msb - 1; bit >= 0; bit--) {
            // Exploring the space of bit combinations we have 3 possible cases:
            // bit of k is 0: then we must choose 0 as well, since setting it to 1 would make x greater than k, so just move to the next bit
            if ((k & bmask) == 0) {
                bmask = bmask >>> 1;
                continue;
            }
            // bit of k is 1: we can choose between 0 and 1:
            // - choosing 0, we can immediately explore the complete branch considering that all lower bits can be set to 1,
            //   so just set to 1 all lower bits with positive av and get the maximum possible value for this branch
            int val = cval + (bit > 0 ? aav[bit - 1] : 0);
            if (val > bval) {
                bval = val;
            }
            // - choosing 1, if the bit has no positive av, then it's forced to 0 and the solution is found on the other branch, so we can stop here
            if (av[bit] <= 0) break;
            // - choosing 1, with a positive av, then store the value and go on with this branch
            cval += av[bit];
            bmask = bmask >>> 1;
        }
        if (cval > bval) {
            bval = cval;
        }
        // Final sum
        for (int i = 0; i < n; i++) {
            bval += a[i];
        }
        System.out.println(bval);
    }
}
I think you can consider solving it bit by bit. The number X should be the one that can turn on many high-order bits across the array. So count the number of 1 bits at each position 2^0, 2^1, ..., and for each of the 32 positions consider turning on the bits where many of the numbers have a 0 at that position.
Combining this with the limit K should give you an answer that runs in O(log K) time.
Assuming k is unbounded, this problem is trivial.
For each bit (assuming 64-bit words there would be 64 for example) accumulate the total count of 1's and 0's in all values in the array (for that bit), with c1_i and c0_i representing the former and latter respectively for bit i.
Then define each bit x_i of x as
x_i = 1 if c0_i > c1_i else 0
Constructing x as described above is guaranteed to give you the value of x that maximizes the sum of interest.
When k is a specific number, this can be solved using a dynamic programming solution. To understand how, first derive a recurrence.
Let z_0, z_1, ..., z_n be the positions of the ones occurring in k's binary representation, with z_0 being the most significant position.
Let M[t] represent the maximum sum possible given the problem's array and defining any x such that x < t.
Important note: the optimal value of M[t] for t a power of 2 is obtained by following the procedure described above for an unbounded k, but limiting the largest bit used.
To solve this problem, we want to find
M[k] = max(M[2^z_0],M[k - 2^z_0] + C_0)
where C_i is defined to be the contribution to the final sum by setting the position z_i to one.
This of course continues as a recursion, with the next step being:
M[k - 2^z_0] = max(M[2^z_1],M[k - 2^z_0 - 2^z_1] + C_1)
and so on and so forth. The dynamic programming solution arises by converting this recursion to the appropriate DP algorithm.
Note, that due to the definition of M[k], it is still necessary to check if the sum of x=k is greater than M[k], as it may still be so, but this requires one pass.
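For what it's worth, here is a rough C++ sketch (my own code and naming, not from this answer) of evaluating that recurrence iteratively along the set bits of k, assuming non-negative values that fit comfortably in 64-bit arithmetic:
#include <algorithm>
#include <iostream>
#include <vector>

// Evaluate M[k] = max(M[2^z0], M[k - 2^z0] + C_0) iteratively, walking k's
// set bits from the most significant one downwards.
long long maxXorSum(const std::vector<long long>& a, long long k)
{
    const int BITS = 31;                           // enough for values up to 10^9
    const long long n = (long long)a.size();

    // contribution[i]: change in Xor-sum when bit i of x is set,
    // i.e. (zeroes at bit i - ones at bit i) * 2^i
    std::vector<long long> contribution(BITS, 0), bestBelow(BITS + 1, 0);
    for (int i = 0; i < BITS; ++i) {
        long long zeros = 0;
        for (long long v : a)
            if (!((v >> i) & 1)) ++zeros;
        contribution[i] = (2 * zeros - n) * (1LL << i);
    }
    // bestBelow[i]: best total contribution achievable using bits < i only
    // (this is M[2^i] minus the baseline Xor-sum(0))
    for (int i = 0; i < BITS; ++i)
        bestBelow[i + 1] = bestBelow[i] + std::max(contribution[i], 0LL);

    long long onPrefix = 0;  // contribution of k's set bits kept so far (x follows k)
    long long best = 0;      // best contribution over any x <= k (x = 0 gives 0)
    for (int i = BITS - 1; i >= 0; --i) {
        if (!((k >> i) & 1)) continue;                  // x must keep this bit at 0 too
        best = std::max(best, onPrefix + bestBelow[i]); // drop this bit, bits below are free
        onPrefix += contribution[i];                    // or keep following k
    }
    best = std::max(best, onPrefix);                    // x == k itself

    long long baseline = 0;                             // Xor-sum(0) = sum of the array
    for (long long v : a) baseline += v;
    return baseline + best;
}

int main()
{
    std::cout << maxXorSum({1, 6, 3}, 7) << '\n'; // prints 14, matching the sample
}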
At the bit level it is simple: 0 XOR 0 = 0, 1 XOR 1 = 0, and 0 XOR 1 = 1. But when these bits belong to a number, XOR has an addition or subtraction effect. For example, if the third bit of a number is set and the number is XORed with 4 (0100), which also has its third bit set, then the result is a subtraction of 2^(3-1): for num = 5, 0101 XOR 0100 = 0001, i.e. 4 is subtracted from 5. Similarly, if the third bit of a number is not set and it is XORed with 4, the result is an addition: for num = 2, 0010 XOR 0100 = 0110, i.e. 4 is added to 2. Now let's look at this problem.
This problem can't be solved efficiently by applying XOR to each number individually; rather, the approach is to perform the XOR on a particular bit of all the numbers in one go. Let's see how it can be done.
Fact 1: Suppose we have X and we want to XOR all the numbers with X, and we know the second bit of X is set. If we somehow also know how many of the numbers have their second bit set, then we know the result for those bits (1 XOR 1 = 0) and we don't have to perform the XOR on each number individually.
Fact 2: From Fact 1 we know how many numbers have a particular bit set; let's call it M. If X also has that particular bit set, then M * 2^(pos-1) will be subtracted from the sum of all numbers. If N is the total number of elements in the array, then N - M numbers don't have that particular bit set, and because of that (N - M) * 2^(pos-1) will be added to the sum of all numbers.
From Fact 1 and Fact 2 we can calculate the overall XOR effect on a particular bit of all the numbers as effect = (N - M) * 2^(pos-1) - M * 2^(pos-1), and we can do the same for all bits.
Now it's time to see the above theory in action. If we have array = {1, 6, 3} and k = 7, then:
1 = 0001 (there are 32 bits in total but I am showing only the relevant bits; the other bits are zero)
6 = 0110
3 = 0011
So our bit count list = [0, 1, 2, 2]: as you can see, 1 and 3 have the first bit set, 6 and 3 have the second bit set, and only 6 has the third bit set.
X ranges over 0, ..., 7, but X = 0 has effect = 0 on the sum, because a bit that is not set does not affect the other bit in an XOR operation. So let's start from X = 1, which is 0001:
[0, 1, 2, 2] = count list,
[0, 0, 0, 1] = X
As visible in the count list, two numbers have the first bit set and X also has its first bit set; this means 2 * 2^(1-1) will be subtracted from the sum, and since there are three numbers in the array in total, (3 - 2) * 2^(1-1) will be added to the sum. So the XOR effect on the first bit is effect = (3 - 2) * 2^(1-1) - 2 * 2^(1-1) = 1 - 2 = -1. This is also the overall effect of X = 1, because only its first bit is set and the rest of its bits are zero. At this point we compare the effect produced by X = 1 with X = 0: -1 < 0, which means X = 1 reduces the sum of all numbers by 1, while X = 0 does not reduce the sum at all. So, so far, X = 0 produces the maximum sum.
The way the XOR is computed for X = 1 can be repeated for all other values, so let me jump directly to X = 4, which is 0100:
[0, 1, 2, 2] = count list,
[0, 1, 0, 0] = X
As visible, X has only its third bit set, and only one number in the array has its third bit set, so 1 * 2^(3-1) will be subtracted and (3 - 1) * 2^(3-1) will be added, giving an overall effect = (3 - 1) * 2^(3-1) - 1 * 2^(3-1) = 8 - 4 = 4. At this point we compare the effect of X = 4 with the best known effect, which is 0; since 4 > 0, X = 4 produces the maximum sum so far, so we keep it. When you perform this for all X = 0, ..., 7, you will find that X = 4 produces the maximum effect on the sum, so the answer is X = 4.
So
(x XOR arr[0]) + (x XOR arr[1]) + ... + (x XOR arr[n-1]) = effect + (arr[0] + arr[1] + ... + arr[n-1])
The complexity is:
O(32 n) to find, for all 32 bits, how many numbers have a particular bit set, plus
O(32 k) to find the effect of every X in [0, k],
so Complexity = O(32 n) + O(32 k) = O(c n) + O(c k), where c is a constant,
and finally
Complexity = O(n + k)
#include <iostream>
#include <cmath>
#include <bitset>
#include <cstdint>
#include <numeric>
#include <utility>
#include <vector>

std::vector<std::uint32_t> bitCount(const std::vector<std::uint32_t>& numList){
    std::vector<std::uint32_t> countList(32, 0);
    for(std::uint32_t num : numList){
        std::bitset<32> bitList(num);
        for(unsigned i = 0; i < 32; ++i){
            if(bitList[i]){
                countList[i] += 1;
            }
        }
    }
    return countList;
}

std::pair<std::uint32_t, std::int64_t> prefXAndMaxEffect(std::uint32_t n, std::uint32_t k,
                                                         const std::vector<std::uint32_t>& bitCountList){
    std::uint32_t prefX = 0;
    std::int64_t xorMaxEffect = 0;
    std::vector<std::int64_t> xorBitEffect(32, 0);
    for(std::uint32_t x = 1; x <= k; ++x){
        std::bitset<32> xBitList(x);
        std::int64_t xorEffect = 0;
        for(unsigned i = 0; i < 32; ++i){
            if(xBitList[i]){
                if(0 != xorBitEffect[i]){
                    xorEffect += xorBitEffect[i];
                }
                else{
                    std::int64_t num = std::exp2(i);
                    xorBitEffect[i] = (n - bitCountList[i]) * num - (bitCountList[i] * num);
                    xorEffect += xorBitEffect[i];
                }
            }
        }
        if(xorEffect > xorMaxEffect){
            prefX = x;
            xorMaxEffect = xorEffect;
        }
    }
    return {prefX, xorMaxEffect};
}

int main(int, char *[]){
    std::uint32_t k = 7;
    std::vector<std::uint32_t> numList{1, 6, 3};
    std::pair<std::uint32_t, std::int64_t> xAndEffect = prefXAndMaxEffect(numList.size(), k, bitCount(numList));
    std::int64_t sum = 0;
    sum = std::accumulate(numList.cbegin(), numList.cend(), sum) + xAndEffect.second;
    std::cout << sum << '\n';
}
Output :
14

Minimum XOR value : Given an integer array A of N integers, find the pair of integers in the array which have minimum XOR value

Here is the brute-force solution, where we form every possible pair, compute its XOR, and take the minimum over all pairs:
int minXOR(int arr[], int n)
{
    int min_xor = INT_MAX; // Initialize result
    // Generate all pairs of the given array
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            // update minimum xor value if required
            min_xor = min(min_xor, arr[i] ^ arr[j]);
    return min_xor;
}
Here is the code with O(n*logn) complexity :
int Solution::findMinXor(vector<int> &A) {
    sort(A.begin(), A.end());
    int min = INT_MAX;
    int val;
    for (int i = 0; i < A.size() - 1; i++)
    {
        val = A[i] ^ A[i + 1];
        if (val < min)
            min = val;
    }
    return min;
}
My doubt is, how does sorting help in finding the minimum xor valued pairs ? In this solution we are finding the xor of only the consecutive sorted elements. Will we not be missing out other potential minimum xor value pairs that are not consecutive ? I'm still learning bit manipulations, so forgive me if this doubt seems too stupid.
XOR is monotonic in the absolute difference between numbers. (If the numbers are identical, then the XOR is zero). If you ignore the possibility of negative numbers and write the numbers in binary, it becomes obvious.
So the minimum value in a sorted list will always be between a particular adjacent pair. And finding that pair is an O(N) traversal.
I must admit that I don't understand the most upvoted answer by @Bathseba: XOR is not monotonic in the absolute difference between its arguments; see the comment by @OfekShilon.
The property can be proven e.g. by complete induction.
Here is the main idea:
Consider a few different numbers in binary representation, in ascending order:
N = 6
A[1] = 10001
A[2] = 10011
A[3] = 11000
A[4] = 11100
A[5] = 11110
A[6] = 11111
Let x = A[1] xor A[N]. Let k be the position of the leftmost 1 in x's bit representation, counting from the right. Here: x = 10001 xor 11111 = 01110, and k = 4 (using the convention k = 1 for the least significant bit). All bits to the left of position k (that is, at more significant positions) are the same in every number, hence they can be neglected or even set to zero. In our example, all bits at positions 5 (ones), 6 (zeroes), 7 (zeroes), etc., are irrelevant. We can consider only bits 1, ..., k.
The case N=2 is trivial, so assume we have at least 3 numbers.
We can divide the numbers into two disjoint subsets (or, actually, subsequences): B_0 = {numbers with the k-th bit set to 0}, B_1 = {numbers with the k-th bit set to 1}.
B_0:
A[1] = 10001
A[2] = 10011
B_1:
A[3] = 11000
A[4] = 11100
A[5] = 11110
A[6] = 11111
None of them is empty. Each has fewer than N elements. One of them has at least 2 elements (recall that N > 2). In the pair that minimizes A[i] xor A[j], both numbers must belong to the same subset, either B_0 or B_1, because only such a combination produces a number with the k-th bit (and all more significant bits) set to 0. This suffices to prove the statement that the pair minimizing the xor must be a pair of consecutive elements of A (we can reduce the N-element problem to a problem with a smaller number of elements, and the "theorem" is trivially true for N = 2, so complete induction will do the job).
The XOR of closer numbers is smaller. Think of it with two numbers, 2 and 3, whose binary representations are 010 and 011: if you XOR these two, the answer is 001, which is 1. In the same way, if you XOR 2 (010) and 4 (100), the answer is 110, which is 6. So, roughly speaking, as the gap between the numbers increases, their XOR value also increases. Hence, sorting the array gives us the minimum-XOR pair in the least number of iterations.
Java code with O(n*logn) complexity
public int findMinXor(ArrayList<Integer> A) {
    Collections.sort(A);
    int res = Integer.MAX_VALUE;
    for (int i = 0; i < A.size() - 1; i++) {
        int temp = A.get(i) ^ A.get(i + 1);
        if (res > temp) {
            res = temp;
        }
    }
    return res;
}

Number of pairs with constant difference and bitwise AND zero

How to find the number of pairs whose difference is a given constant and their bitwise AND is zero? Basically, all (x,y) such that
x-y = k; where k is a given constant and
x&y = 0;
An interesting problem.
Let k_(n-1) ... k_1 k_0 be the binary representation of k.
Let l be the smallest index i such that k_i = 1.
We can remark that a potential solution pair x and y must have all their bits at positions i < l equal to zero.
Otherwise, the only way to have the difference x-y with its i-th bit unset would be to have x_i = y_i = 1, and then x & y would not have its i-th bit unset.
Now we arrive at the first bit set to one, at index l.
The situation is more complex here, as we have several ways to produce this bit in the result of x-y.
For that we must consider the run of bits l..m such that k_i = 1 for all l <= i < m, and k_m = 0.
For instance, if l=0 and m=1, the two LSBs of k are 01 and we can get this result by computing either 01-00 (1-0) or 10-01 (2-1). In either case, the result is correct (1) and the bits of x and y are opposite, giving zero when ANDed.
When the sequence is composed of several ones, the replacement must be done, starting from the LSB, for every pair of consecutive ones.
Here is an example. To simplify, we assume that the sequence starts at bit 0, but the generalization is immediate :
k=0111
Trivial solution x=k=0111 y=0=0000
Rewrite 1 at LSB as 2-1: add 1 to x and 1 to y
x=0111+0001=1000=8 y=0000+0001=0001
Rewrite the 1 at index 1 (2^1) as 4-2: add 2 to x and add 2 to y
x=0111+0010=1001=9 y=0000+0010=0010
Rewrite the 1 at index 2 (2^2) as 8-4: add 4 to x and add 4 to y
x=0111+0100=1011=11 y=0000+0100=0100
So, for a sequence of ones followed by a zero :
Compute the trivial solution where x=<sequence> and y=0
for every one in the sequence
let i be the position of this one
generate a new solution by adding 2^i to x and y of the trivial solution
To sum up, one must decompose the number into two kinds of sequences, starting at the LSB:
* zeroes: a sequence of consecutive zeroes
* ones: a sequence of ones followed by a zero
The results are obtained by replacing
* zeroes by a set of zeroes
* ones by adding 0, 1, 2, 4, ..., 2^i to the trivial solution 0111..11/000...000
Example :
k = 22 = 16+4+2 = 00010110
Rewrite first sequence
011 -> 011/000 (trivial solution)
100/001 (trivial solution+1)
101/010 (trivial solution+2)
Rewrite second sequence
01 -> 01/00 (trivial solution)
10/01 (trivial solution + 1)
And so there are 3*2=6 solutions
010110/000000 22/0
011000/000010 24/2
011010/000100 26/4
100110/010000 38/16
101000/010010 40/18
101010/010100 42/20
Java implementation would be like this ...
import java.util.ArrayList;

public class FindPairs {
    public static void main(String args[]) {
        int arr[] = {1, 3, 4, 5, 6, 9};
        int k = 3;
        ArrayList<String> out = new ArrayList<String>();
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                if ((Math.abs(arr[i] - arr[j]) == k) && ((arr[i] & arr[j]) == 0)) {
                    out.add("(" + arr[i] + "," + arr[j] + ")");
                }
            }
        }
        if (out.size() > 0) {
            for (String pair : out) {
                System.out.println(pair);
            }
        } else {
            System.out.println("No such pair !");
        }
    }
}

All pair Bitwise OR sum

Is there an algorithm to find the all-pair bitwise OR sum of an array in linear time complexity?
Suppose the array is {1,2,3}; then the all-pair sum is 1|2 + 2|3 + 1|3 = 9.
I can find the all-pair AND sum in O(n) using the following algorithm. How can I change this to get the all-pair OR sum?
int ans = 0; // Initialize result
// Traverse over all bits
for (int i = 0; i < 32; i++)
{
    // Count number of elements with i'th bit set
    int k = 0; // Initialize the count
    for (int j = 0; j < n; j++)
        if ((arr[j] & (1 << i)))
            k++;
    // There are k set bits, means k(k-1)/2 pairs.
    // Every pair adds 2^i to the answer. Therefore,
    // we add "2^i * [k*(k-1)/2]" to the answer.
    ans += (1 << i) * (k * (k - 1) / 2);
}
From here: http://www.geeksforgeeks.org/calculate-sum-of-bitwise-and-of-all-pairs/
You can do it in linear time. The idea is as follows:
For each bit position, record the number of entries in your array that have that bit set to 1. In your example, you have 2 entries (1 and 3) with the ones bit set, and 2 entries with the two's bit set (2 and 3).
For each number, compute the sum of the number's bitwise OR with all other numbers in the array by looking at each bit individually and using your cached sums. For example, consider the sum 1|1 + 1|2 + 1|3 = 1 + 3 + 3 = 7.
Because 1's last bit is 1, the result of a bitwise or with 1 will have the last bit set to 1. Thus, all three of the numbers 1|1, 1|2, and 1|3 will have last bit equal to 1. Two of those numbers have the two's bit set to 1, which is precisely the number of elements which have the two's bit set to 1. By grouping the bits together, we can obtain the sum 3*1 (three ones bits) + 2*2 (two two's bits) = 7.
Repeating this procedure for each element lets you compute the sum of all bitwise ors for all ordered pairs of elements in the array. So in your example, 1|2 and 2|1 will be computed, as will 1|1. So you'll have to subtract off all cases like 1|1 and then divide by 2 to account for double counting.
Let's try this algorithm out for your example.
Writing the numbers in binary, {1,2,3} = {01, 10, 11}. There are 2 numbers with the one's bit set, and 2 with the two's bit set.
For 01 we get 3*1 + 2*2 = 7 for the sum of ors.
For 10 we get 2*1 + 3*2 = 8 for the sum of ors.
For 11 we get 3*1 + 3*2 = 9 for the sum of ors.
Summing these, 7+8+9 = 24. We need to subtract off 1|1 = 1, 2|2 = 2 and 3|3 = 3, as we counted these in the sum. 24-1-2-3 = 18.
Finally, as we counted things like 1|3 twice, we need to divide by 2. 18/2 = 9, the correct sum.
This algorithm is O(n * max number of bits in any array element).
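For concreteness, a minimal C++ sketch of the per-element procedure just described (the function name and structure are mine, not taken from any posted code):
#include <iostream>
#include <vector>

// Sum of a|b over all unordered pairs of distinct positions, computed with
// the cached per-bit counts described above: O(n * 32).
long long allPairOrSum(const std::vector<int>& arr)
{
    const int n = (int)arr.size();
    std::vector<long long> ones(32, 0);          // ones[i] = elements with bit i set
    for (int x : arr)
        for (int i = 0; i < 32; ++i)
            if ((x >> i) & 1) ++ones[i];

    long long total = 0;                         // sum of x|y over all ordered pairs
    for (int x : arr)
        for (int i = 0; i < 32; ++i) {
            // if x has bit i, the OR has bit i for all n partners;
            // otherwise only for the ones[i] partners that have it
            long long partners = ((x >> i) & 1) ? n : ones[i];
            total += partners * (1LL << i);
        }

    for (int x : arr) total -= x;                // remove the x|x terms
    return total / 2;                            // each unordered pair counted twice
}

int main()
{
    std::cout << allPairOrSum({1, 2, 3}) << '\n'; // prints 9, as in the example
}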
Edit: We can modify your posted algorithm by simply subtracting the count of all 0-0 pairs from all pairs to get all 0-1 or 1-1 pairs for each bit position. Like so:
int ans = 0; // Initialize result
// Traverse over all bits
for (int i = 0; i < 32; i++)
{
    // Count number of elements with i'th bit not set
    int k = 0; // Initialize the count
    for (int j = 0; j < n; j++)
        if (!(arr[j] & (1 << i)))
            k++;
    // There are k not set bits, which means k(k-1)/2 pairs that don't contribute
    // to the total sum, out of n*(n-1)/2 pairs.
    // So we subtract the ones that don't contribute from the ones that do.
    ans += (1 << i) * (n * (n - 1) / 2 - k * (k - 1) / 2);
}

Algorithm to determine that a 2x2 square contains the numbers 1-4 (no repeats)

What would be an applicable C++ algorithm to determine that a 2x2 square (say, represented by a 1d vector) contains the numbers 1-4? I can't think of one, although it is probably quite simple. I would prefer not to have a giant if statement.
Examples of appropriate squares
1 2
3 4
2 3
4 1
1 3
2 4
Inappropriate squares:
1 1
2 3
1 2
3 3
1 2
4 4
I would probably start with an unsigned int set to 0 (e.g., call it x). I'd assign one bit in x to each possible input number (e.g., 1->bit 0, 2->bit 1, 3->bit 2, 4->bit 3). As I read the numbers, I'd verify that the number was in range, and if it was, set the corresponding bit in x.
At the end, if all the numbers are different, I should have 4 bits of x set. If any of the numbers was repeated, some of those bits won't be set.
If you prefer, you could use std::bitset or std::vector<bool> instead of the bits in a single number. In this case a single number is probably easier though, because you can verify the presence of all four desired bits with a single comparison.
bool valid(const unsigned square[]) {
    unsigned r = 0;
    for (int i = 0; i < 4; ++i)
        r |= 1 << square[i];
    return r == 30; // 30 == 11110 in binary: bits 1-4 set
}
Just set the appropriate bits, and check whether all are set at the end.
Though it assumes the numbers are smaller than sizeof(unsigned) * CHAR_BIT.
Well if it's represented by a vector and we just want something that works:
bool isValidSquare(const std::vector<int>& square) {
    if (square.size() == 4) {
        std::set<int> uniqs(square.begin(), square.end());
        return uniqs.count(1) && uniqs.count(2) && uniqs.count(3) && uniqs.count(4);
    }
    return false;
}
Create a static bitset for corresponding bit 1-4 set, and another one with all bits unset.
Traverse through the vector, setting the respective bit in the 2nd set for current vector element.
Compare the 1st and 2nd set. If they match, the square is appropriate. Otherwise, it isn't.
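As an illustration of that description, a minimal sketch using std::bitset (the function name and the 5-bit size are my own choices):
#include <bitset>
#include <vector>

// Two bitsets: a fixed one with bits 1-4 set, and one filled while traversing.
bool isAppropriateSquare(const std::vector<int>& square)
{
    static const std::bitset<5> expected("11110"); // bits 1..4 set, bit 0 unused
    std::bitset<5> seen;                           // all bits initially unset
    for (int v : square) {
        if (v < 1 || v > 4)
            return false;                          // out-of-range value
        seen.set(v);
    }
    return seen == expected;
}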
You can use the standard library for this:
#include <iostream>
#include <algorithm>
#include <vector>

int main()
{
    std::vector<int> input{1, 5, 2, 4};
    std::sort(std::begin(input), std::end(input));
    std::cout << std::boolalpha
              << std::equal(std::begin(input), std::end(input), std::begin({1, 2, 3, 4}));
}
Assuming your inputs can only be the numbers 1 to 4 (an assumption based on your examples), you can actually XOR them and check whether the result is 4:
if ((tab[0] ^ tab[1] ^ tab[2] ^ tab[3]) == 4)
    // Matches!
I had the feeling this would work; I'm too tired to prove it mathematically, but this Python program shows it is right:
numbers = [1, 2, 3, 4]
good_results = []
bad_results = []
for i in numbers:
    for j in numbers:
        for k in numbers:
            for l in numbers:
                res = i ^ j ^ k ^ l
                print "%i %i %i %i -> %i" % (i, j, k, l, res)
                if len(set([i, j, k, l])) == 4:  # this condition checks if i, j, k and l are different
                    good_results.append(res)
                else:
                    bad_results.append(res)
print set(good_results)  # => set([4])
print set(bad_results)   # => set([0, 1, 2, 3, 5, 6, 7])