Number of pairs with constant difference and bitwise AND zero - bit-manipulation

How to find the number of pairs whose difference is a given constant and their bitwise AND is zero? Basically, all (x,y) such that
x-y = k; where k is a given constant and
x&y = 0;

An interesting problem.
Let k_{n-1}...k_1k_0 be the binary representation of k.
Let l be the smallest index i such that k_i = 1.
We can remark that a potential solution pair x and y must have all their bits i, i < l, at zero.
Otherwise, the only other way to keep the ith bit of x-y unset (as k_i = 0 requires) would be to have x_i = y_i = 1, and then x&y would have its ith bit set.
Now we arrive at the first bit set to one, at index l.
The situation is more complex here, as there are several ways to get this bit set in the result of x-y.
For that we must consider the run of bits l..m such that k_i = 1 for all l <= i < m and k_m = 0.
For instance, if l=0 and m=1, the two LSBs of k are 01 and we can get this result by computing either 01-00 (1-0) or 10-01 (2-1). In either case, the difference is correct (1) and the bits of x and y are opposite, so they give zero when ANDed.
When the run contains several ones, the same rewriting can be applied at each bit of the run, the carry rippling through the consecutive ones above it.
Here is an example. To simplify, we assume that the run starts at bit 0, but the generalization is immediate:
k=0111
Trivial solution x=k=0111 y=0=0000
Rewrite the 1 at the LSB (2^0) as 2-1: add 1 to x and 1 to y
x=0111+0001=1000=8 y=0000+0001=0001=1
Rewrite the 1 at index 1 (2^1) as 4-2: add 2 to x and add 2 to y
x=0111+0010=1001=9 y=0000+0010=0010=2
Rewrite the 1 at index 2 (2^2) as 8-4: add 4 to x and add 4 to y
x=0111+0100=1011=11 y=0000+0100=0100=4
So, for a sequence of ones followed by a zero :
Compute the trivial solution where x=<sequence> and y=0
for every one in the sequence
let i be the position of this one
generate a new solution by adding 2^i to x and y of the trivial solution
To summarize, one must decompose the number, starting at the LSB, into two kinds of runs:
* runs of consecutive zeroes
* runs of ones followed by a zero
The results are obtained by
* keeping every run of zeroes as zeroes in both x and y
* and, for every run of ones spanning bits l..m-1, taking either the trivial rewrite <run of ones>/000...000 or the solution obtained by adding one of 2^l, 2^(l+1), ..., 2^(m-1) to both halves (a counting sketch follows below)
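If only the number of pairs is needed, this decomposition already gives it: every maximal run of r consecutive ones contributes a factor of r+1. Here is a small C++ sketch of that count (countPairs is a made-up name, not code from the original discussion):
#include <cstdint>

// Count the pairs (x, y) with x - y == k and (x & y) == 0: every maximal run of
// r consecutive ones in k contributes r + 1 independent choices (keep the run
// as is, or rewrite it at one of its r bits), so the answer is their product.
std::uint64_t countPairs(std::uint64_t k) {
    std::uint64_t count = 1;
    int run = 0;                        // length of the current run of ones
    for (int i = 0; i < 64; ++i) {
        if ((k >> i) & 1) {
            ++run;
        } else if (run > 0) {           // a run just ended at this zero bit
            count *= run + 1;
            run = 0;
        }
    }
    if (run > 0) count *= run + 1;      // a run reaching the top bit of the word
    return count;
}
For k = 7 (above) this gives 4, and for k = 22 (below) it gives 3*2 = 6.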
Example :
k = 22 = 16+4+2 = 0 0 0 1 0 1 1 0
Rewrite first sequence
011 -> 011/000 (trivial solution)
100/001 (trivial solution+1)
101/010 (trivial solution+2)
Rewrite second sequence
01 -> 01/00 (trivial solution)
10/01 (trivial solution + 1)
And so there are 3*2=6 solutions
010110/000000 22/0
011000/000010 24/2
011010/000100 26/4
100110/010000 38/16
101000/010010 40/18
101010/010100 42/20
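The decomposition can also enumerate the pairs themselves. The following is a rough C++ sketch (allPairs is a made-up name): for each run of ones, every partial solution is extended either unchanged or with one power of two from the run added to both halves.
#include <cstdint>
#include <iostream>
#include <utility>
#include <vector>

// Enumerate all (x, y) with x - y == k and (x & y) == 0 by walking the maximal
// runs of ones in k; for a run covering bits l..i-1 we may add nothing, or one
// of 2^l, ..., 2^(i-1), to both halves of the trivial solution x = k, y = 0.
std::vector<std::pair<std::uint64_t, std::uint64_t>> allPairs(std::uint64_t k) {
    std::vector<std::pair<std::uint64_t, std::uint64_t>> sols{{k, 0}};
    int i = 0;
    while (i < 64) {
        if (((k >> i) & 1) == 0) { ++i; continue; }
        int l = i;                                   // start of a run of ones
        while (i < 64 && ((k >> i) & 1)) ++i;        // i = first zero above the run
        std::vector<std::pair<std::uint64_t, std::uint64_t>> next;
        for (const auto& s : sols) {
            next.push_back(s);                       // keep this run unchanged
            for (int b = l; b < i; ++b)              // or add 2^b to both sides
                next.push_back({s.first + (1ULL << b), s.second + (1ULL << b)});
        }
        sols.swap(next);
    }
    return sols;
}

int main() {
    for (const auto& p : allPairs(22))               // the six pairs listed above
        std::cout << p.first << "/" << p.second << "\n";
}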

A brute-force Java implementation, checking pairs taken from a given array, would be like this ...
import java.util.ArrayList;
public class FindPairs {
public static void main(String args[]) {
int arr[] = {1,3,4,5,6,9};
int k = 3;
ArrayList<String> out = new ArrayList<String>();
for(int i=0; i<arr.length; i++) {
for(int j=i+1; j<arr.length; j++) {
if((Math.abs(arr[i]-arr[j]) == k) && ((arr[i]&arr[j]) == 0)) {
out.add("("+arr[i]+","+arr[j]+")");
}
}
}
if(out.size()>0) {
for(String pair:out) {
System.out.println(pair);
}
}else {
System.out.println("No such pair !");
}
}
}


Maximize XOR Equation

Problem statement:
Given an array of n elements and an integer k, find an integer x in
the range [0,k] such that Xor-sum(x) is maximized. Print the maximum
value of the equation.
Xor-sum(x) = (x XOR A[1]) + (x XOR A[2]) + (x XOR A[3]) + ... + (x XOR A[N])
Input Format
The first line contains integer N denoting the number of elements in
A. The next line contains an integer, k, denoting the maximum value
of x. Each line i of the N subsequent lines (where 1 <= i <= N) contains
an integer describing A[i].
Constraints
1<=n<=10^5
0<=k<=10^9
0<=A[i]<=10^9
Sample Input
3
7
1
6
3
Sample Output
14
Explanation
Xor_sum(4)=(4^1)+(4^6)+(4^3)=14.
This problem was asked in an Infosys recruitment test. I was going through previous years' papers and
I came across this problem.
I was only able to come up with a brute-force solution which is just to calculate the equation
for every x in the range [0,k] and print the maximum. But, the solution won't work for the
given constraints.
My solution
#include <bits/stdc++.h>
using namespace std;
int main()
{
int n, k, ans = 0;
cin >> n >> k;
vector<int> a(n);
for (int i = 0; i < n; i++) cin >> a[i];
for (int i = 0; i <= k; i++) {
int temp = 0;
for (int j = 0; j < n; j++) {
temp += (i ^ a[j]);
}
ans = max(temp, ans);
}
cout << ans;
return 0;
}
I found a solution on a website. I was unable to understand what the code does, but that solution gives an incorrect answer for some test cases.
Scroll down to question 3
The trick here is that XOR works on bits in parallel, independently. You can optimize each bit of x separately. Brute-forcing each bit that way takes only 2*32 tries in total, given the constraints.
As said in other comments each bit of x will give an independent contribution to the sum, so the first step is to calculate the added value for each possible bit.
To do this for the i-th bit of x count the number of 0s and 1s in the same position of each number in the array, if the difference N0 - N1 is positive then the added value is also positive and equal to (N0-N1) * 2^i, let's call such bits "useful".
The number x will be a combination of useful bits only.
Since k is not necessarily of the form 2^n - 1, we need a strategy to find the best combination (if you don't want to use brute force on the k possible values).
Consider then the binary representation of k and loop over its bits starting from the MSB, initializing two variables: CAV (current added value) = 0, BAV (best added value) = 0.
If the current bit is 0 loop over.
If the current bit is 1:
a) calculate the AV sum of all useful bits with lower index plus the CAV; if the result is greater than the BAV then replace BAV
b) if the current bit is not useful quit loop
c) add the current bit added value to CAV
When the loop is over, if CAV is greater than BAV replace BAV
EDIT: A sample implementation (in Java, sorry :) )
import java.util.Scanner;
public class XorSum {
public static void main(String[] args) {
Scanner sc=new Scanner(System.in);
int n=sc.nextInt();
int k=sc.nextInt();
int[] a=new int[n];
for (int i=0;i<n;i++) {
a[i]=sc.nextInt();
}
//Determine the number of bits to represent k (position of most significant 1 + 1)
int msb=0;
for (int kcopy=k; kcopy!=0; kcopy=kcopy>>>1) {
msb++;
}
//Compute the added value of each possible bit in x
long[] av=new long[msb];
int bmask=1;
for (int bit=0;bit<msb;bit++) {
int count0=0;
for (int i=0;i<n;i++) {
if ((a[i]&bmask)==0) {
count0++;
}
}
av[bit]=(long)(count0*2-n)*bmask;
bmask = bmask << 1;
}
//Accumulated added value, the value of all positive av bits up to the index
long[] aav=new long[msb];
for (int bit=0;bit<msb;bit++) {
if (av[bit]>0) {
aav[bit]=av[bit];
}
if (bit>0) {
aav[bit]+=aav[bit-1];
}
}
//Explore the space of possible combinations moving on the k boundary
long cval=0;
long bval=0;
//Start from the msb
for (int bit=msb-1;bit>=0;bit--) {
bmask = 1 << bit; //mask of the current bit of k
//Exploring the space of bit combination we have 3 possible cases:
//bit of k is 0, then we must choose 0 as well, setting it to 1 will get x to be greater than k, so in this case just loop over
if ((k&bmask)==0) {
continue;
}
//bit of k is 1, we can choose between 0 and 1:
//- choosing 0, we can immediately explore the complete branch considering that all following bits can be set to 1, so just set to 1 all bits with positive av
// and get the maximum possible value for this branch
long val=cval+(bit>0?aav[bit-1]:0);
if (val>bval) {
bval=val;
}
//- choosing 1, if the bit has no positive av, then it's forced to 0 and the solution is found on the other branch, so we can stop here
if (av[bit]<=0) break;
//- choosing 1, with a positive av, then store the value and go on with this branch
cval+=av[bit];
}
if (cval>bval) {
bval=cval;
}
//Final sum
for (int i=0;i<n;i++) {
bval+=a[i];
}
System.out.println(bval);
}
}
I think you can consider solving for each bit. The number X should be the one that turns on the high-order bits that many array elements have as 0. So you can count the number of 1 bits at positions 2^0, 2^1, ..., and for each of the 32 positions consider turning on in X the bits where many numbers have a 0 at that position.
Combining this with the limit K should give you an answer that runs in O(log K) time.
Assuming k is unbounded, this problem is trivial.
For each bit (assuming 64-bit words there would be 64 for example) accumulate the total count of 1's and 0's in all values in the array (for that bit), with c1_i and c0_i representing the former and latter respectively for bit i.
Then define each bit b_i in x as
x_i = 1 if c0_i > c1_i else 0
Constructing x as described above is guaranteed to give you the value of x that maximizes the sum of interest.
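As a minimal sketch of this construction (bestXUnbounded is a made-up name; it assumes 64-bit values):
#include <cstddef>
#include <cstdint>
#include <vector>

// Unbounded-k construction: set bit i of x exactly when more values have a 0
// than a 1 at that position (c0_i > c1_i), since that bit then adds more to the
// sum than it removes.
std::uint64_t bestXUnbounded(const std::vector<std::uint64_t>& a) {
    std::uint64_t x = 0;
    for (int i = 0; i < 64; ++i) {
        std::size_t c1 = 0;                       // how many values have bit i set
        for (std::uint64_t v : a) c1 += (v >> i) & 1;
        if (a.size() - c1 > c1)                   // c0_i > c1_i
            x |= std::uint64_t(1) << i;
    }
    return x;
}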
When k is a specific number, this can be solved using a dynamic programming solution. To understand how, first derive a recurrence.
Let z_0,z_1,...,z_n be the positions of the ones occurring in k's binary representation, with z_0 being the most significant position.
Let M[t] represent the maximum sum possible given the problem's array and defining any x such that x < t.
Important note: the optimal value of M[t] for t a power of 2 is obtained by following the procedure described above for an unbounded k, but limiting the largest bit used.
To solve this problem, we want to find
M[k] = max(M[2^z_0],M[k - 2^z_0] + C_0)
where C_i is defined to be the contribution to the final sum by setting the position z_i to one.
This of course continues as a recursion, with the next step being:
M[k - 2^z_0] = max(M[2^z_1],M[k - 2^z_0 - 2^z_1] + C_1)
and so on and so forth. The dynamic programming solution arises by converting this recursion to the appropriate DP algorithm.
Note that, due to the definition of M[k], it is still necessary to check whether the sum for x = k itself is greater than M[k], as it may be; this requires only one pass.
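One way to turn this recursion into code is sketched below. This is only an illustrative sketch, not the answerer's implementation; __builtin_clzll is a GCC/Clang builtin, and av[] / pos[] follow the definitions above (av[i] is the contribution of bit i, pos[i] equals M[2^i]).
#include <algorithm>
#include <iostream>
#include <vector>

// av[i]  : added value of bit i, (#zeros - #ones at that bit) * 2^i
// pos[i] : sum of the positive av[j] for j < i, i.e. the value of M[2^i]
std::vector<long long> av(32), pos(33, 0);

long long M(long long t) {                     // best added value over 0 <= x < t
    if (t <= 1) return 0;                      // only x = 0 fits below t
    int z = 63 - __builtin_clzll(t);           // most significant set bit of t
    long long best = pos[z];                   // branch: leave bit z of x at 0
    if (t - (1LL << z) >= 1)                   // branch: set bit z, recurse on the rest
        best = std::max(best, av[z] + M(t - (1LL << z)));
    return best;
}

int main() {
    long long n, k, sum = 0;
    std::cin >> n >> k;
    std::vector<long long> a(n);
    for (auto& v : a) { std::cin >> v; sum += v; }

    for (int i = 0; i < 32; ++i) {
        long long ones = 0;
        for (long long v : a) ones += (v >> i) & 1;
        av[i] = (n - 2 * ones) * (1LL << i);
        pos[i + 1] = pos[i] + std::max(av[i], 0LL);
    }
    long long atK = 0;                         // added value of choosing x = k itself
    for (int i = 0; i < 32; ++i)
        if ((k >> i) & 1) atK += av[i];

    std::cout << sum + std::max(M(k), atK) << "\n";   // 14 for the sample input
}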
At the bit level it is simple: 0 XOR 0 = 0, 1 XOR 1 = 0, and 0 XOR 1 = 1. But when these bits belong to a number, the XOR operation has an addition or subtraction effect. For example, if the third bit of a number is set and the number is XORed with 4 (0100), which also has its third bit set, the result is the number reduced by 2^(3-1): for num = 5, 0101 XOR 0100 = 0001, i.e. 4 is subtracted from 5. Similarly, if the third bit of a number is not set and the number is XORed with 4, the result is an addition: for num = 2, 0010 XOR 0100 = 0110, i.e. 4 is added to 2. Now let's see this problem.
This problem shouldn't be solved by applying XOR to each number individually; rather, the approach is to handle the XOR of a particular bit across all numbers in one go. Let's see how it can be done.
Fact 1: Suppose we have X and we want to XOR all numbers with X, and we know the second bit of X is set. If we somehow also know how many of the numbers have their second bit set, then we already know the outcome for that bit (1 XOR 1 = 0) and we don't have to XOR each number individually.
Fact 2: From Fact 1 we know how many numbers have a particular bit set; let's call it M. If X also has that particular bit set, then M * 2^(pos-1) will be subtracted from the sum of all numbers. If N is the total number of elements in the array, then N - M numbers don't have that particular bit set, and because of that (N - M) * 2^(pos-1) will be added to the sum of all numbers.
From Fact 1 and Fact 2 we can calculate the overall XOR effect of a particular bit on all numbers as effect = (N - M) * 2^(pos-1) - M * 2^(pos-1), and we can do the same for all bits.
Now it's time to see the above theory in action. If we have array = {1, 6, 3} and k = 7, then:
1 = 0001 (There are total 32 bits but I am showing only relevant bits other bits are zero)
6 = 0110
3 = 0011
So our bit count list = [0, 1, 2, 2] as you can see 1 and 3 have first bit set, 6 and 3 have second bit set and only 6 have third bit set.
X ranges over 0, ..., 7, but X = 0 has effect = 0 on the sum, because a bit that is not set does not affect the corresponding bit in an XOR operation. So let's start from X = 1, which is 0001:
[0, 1, 2, 2] = count list,
[0, 0, 0, 1] = X
As is visible in the count list, two numbers have the first bit set and X also has the first bit set, so 2 * 2^(1-1) will be subtracted from the sum; the array has three numbers in total, so (3 - 2) * 2^(1-1) will be added to the sum. The conclusion is that the XOR effect on the first bit is effect = (3 - 2) * 2^(1-1) - 2 * 2^(1-1) = 1 - 2 = -1. This is also the overall effect of X = 1, because only its first bit is set and the rest of its bits are zero. At this point we compare the effect produced by X = 1 with that of X = 0: since -1 < 0, X = 1 would reduce the sum by 1 while X = 0 leaves it unchanged, so up to now X = 0 produces the maximum sum.
The same procedure applied for X = 1 can be performed for all other values; let me jump directly to X = 4, which is 0100:
[0, 1, 2, 2] = count list,
[0, 1, 0, 0] = X
As is visible, X has only its third bit set, and only one number in the array has the third bit set, so 1 * 2^(3-1) will be subtracted and (3 - 1) * 2^(3-1) will be added, giving an overall effect = (3 - 1) * 2^(3-1) - 1 * 2^(3-1) = 8 - 4 = 4. At this point we compare the effect of X = 4 with the known maximum effect, which was 0; since 4 > 0, X = 4 produces the maximum sum so far and we keep it. When you perform this for all X = 0, ..., 7, you will find that X = 4 produces the maximum effect on the sum, so the answer is X = 4.
So
(x XOR arr[0]) + (x XOR arr[1]) + ... + (x XOR arr[n-1]) = effect + (arr[0] + arr[1] + ... + arr[n-1])
The complexity is:
O(32 n) to find, for all 32 bits, how many numbers have a particular bit set, plus
O(32 k) to find the effect of every X in [0, k].
Complexity = O(32 n) + O(32 k) = O(c n) + O(c k), where c is a constant, so finally
Complexity = O(n + k)
#include <iostream>
#include <cmath>
#include <bitset>
#include <cstdint>
#include <utility>
#include <vector>
#include <numeric>
std::vector<std::uint32_t> bitCount(const std::vector<std::uint32_t>& numList){
std::vector<std::uint32_t> countList(32, 0);
for(std::uint32_t num : numList){
std::bitset<32> bitList(num);
for(unsigned i = 0; i< 32; ++i){
if(bitList[i]){
countList[i] += 1;
}
}
}
return countList;
}
std::pair<std::uint32_t, std::int64_t> prefXAndMaxEffect(std::uint32_t n, std::uint32_t k,
const std::vector<std::uint32_t>& bitCountList){
std::uint32_t prefX = 0;
std::int64_t xorMaxEffect = 0;
std::vector<std::int64_t> xorBitEffect(32, 0);
for(std::uint32_t x = 1; x<=k; ++x){
std::bitset<32> xBitList(x);
std::int64_t xorEffect = 0;
for(unsigned i = 0; i< 32; ++i){
if(xBitList[i]){
if(0 != xorBitEffect[i]){
xorEffect += xorBitEffect[i];
}
else{
std::int64_t num = std::exp2(i);
xorBitEffect[i] = (n - bitCountList[i])* num - (bitCountList[i] * num);
xorEffect += xorBitEffect[i];
}
}
}
if(xorEffect > xorMaxEffect){
prefX = x;
xorMaxEffect = xorEffect;
}
}
return {prefX, xorMaxEffect};
}
int main(int , char *[]){
std::uint32_t k = 7;
std::vector<std::uint32_t> numList{1, 6, 3};
std::pair<std::uint32_t, std::int64_t> xAndEffect = prefXAndMaxEffect(numList.size(), k, bitCount(numList));
std::int64_t sum = 0;
sum = std::accumulate(numList.cbegin(), numList.cend(), sum) + xAndEffect.second;
std::cout<< sum<< '\n';
}
Output :
14

ALL solutions to Magic square using no array

Yes, this is for a homework assignment. However, I do not expect an answer.
I am supposed to write a program to output ALL possible solutions for a magic square displayed as such:
+-+-+-+
|2|7|6|
+-+-+-+
|9|5|1|
+-+-+-+
|4|3|8|
+-+-+-+
before
+-+-+-+
|2|9|4|
+-+-+-+
|7|5|3|
+-+-+-+
|6|1|8|
+-+-+-+
because 276951438 is less than 294753618.
I can use for loops (not nested) and if/else. The solutions must be in ascending order. I also need sleep, but those things sometimes look more interesting than sleep.
Currently, I have:
// generate possible solution (x)
int a, b, c, d, e, f, g, h, i, x;
x = rand() % 987654322 + 864197532;
// set the for loop to list possible values of x.
// This part needs revision
for (x = 123456788; ((x < 987654322) && (sol == true)); ++x)
{
// split into integers to evaluate
a = x / 100000000;
b = x % 100000000 / 10000000;
c = x % 10000000 / 1000000;
d = x % 1000000 / 100000;
e = x % 100000 / 10000;
f = x % 10000 / 1000;
g = x % 1000 / 100;
h = x % 100 / 10;
i = x % 10;
// Could this be condensed somehow?
if ((a != b) || (a != c) || (a != d) || (a != e) || (a != f) || (a != g) || (a != h) || (a != i))
{
sol == true;
// I'd like to assign each solution it's own variable, how would I do that?
std::cout << x;
}
}
How would I output in ascending order?
I have previously written a program that puts a user-entered nine-digit number in the specified table and verifies whether it meets the conditions (n is a magic-square solution if the sum of each row = 15, the sum of each column = 15, and the sum of each diagonal = 15), so I can handle that part. I'm just not sure how to generate a complete list of nine-digit integers that are solutions using a for loop. Could someone give me an idea of how I would do that and how I could improve my current work?
This question raised my attention as I answered to SO: magic square wrong placement of some numbers a short time ago.
// I'd like to assign each solution it's own variable, how would I do that?
I wouldn't consider this. Each found solution can be printed immediately (instead of stored). The upwards-counting loop guarantees that the output is in order.
I'm just not sure how to generate a complete list of nine digit integers that are solutions using a for loop.
The answer is Permutation.
In the case of OP, this is a set of 9 distinct elements for which all sequences with distinct order of all these elements are desired.
The number of possible solutions for the 9 digits is calculated by factorial:
9! = 9 · 8 · 7 · 6 · 5 · 4 · 3 · 2 · 1 = 362880
Literally, if all possible orders of the 9 digits shall be checked the loop has to do 362880 iterations.
Googling for a ready algorithm (or at least some inspiration), I found out (to my surprise) that the C++ std Algorithms library is actually well prepared for this:
std::next_permutation()
Transforms the range [first, last) into the next permutation from the set of all permutations that are lexicographically ordered with respect to operator< or comp. Returns true if such permutation exists, otherwise transforms the range into the first permutation (as if by std::sort(first, last)) and returns false.
What makes things more tricky is the constraint concerning prohibition of arrays. Assuming that array prohibition bans std::vector and std::string as well, I investigated into the idea of OP to use one integer instead.
A 32 bit int covers the range [-2147483648, 2147483647], enough to store even the largest permutation of the digits 1 ... 9: 987654321. (Maybe std::int32_t would be the better choice.)
The extraction of individual digits with division and modulo by powers of 10 is a bit tedious. Storing the set instead as a number with base 16 simplifies things a lot. The isolation of individual elements (aka digits) becomes a combination of bitwise operations (&, |, ~, <<, and >>). The drawback is that 32 bits are no longer sufficient for nine digits, so I used std::uint64_t.
I encapsulated things in a class Set16. I considered providing a reference type and bidirectional iterators. After fiddling a while, I came to the conclusion that it's not that easy (if not impossible). Re-implementing std::next_permutation() according to the sample code provided on cppreference.com was the easier choice.
362880 lines of output are a bit much for a demonstration. Hence, my sample does it for the smaller set of 3 digits, which has 3! (= 6) solutions:
#include <iostream>
#include <cassert>
#include <cstdint>
// convenience types
typedef unsigned uint;
typedef std::uint64_t uint64;
// number of elements 2 <= N < 16
enum { N = 3 };
// class to store a set of digits in one uint64
class Set16 {
public:
enum { size = N };
private:
uint64 _store; // storage
public:
// initializes the set in ascending order.
// (This is a premise to start permutation at first result.)
Set16(): _store()
{
for (uint i = 0; i < N; ++i) elem(i, i + 1);
}
// get element with a certain index.
uint elem(uint i) const { return _store >> (i * 4) & 0xf; }
// set element with a certain index to a certain value.
void elem(uint i, uint value)
{
i *= 4;
_store &= ~((uint64)0xf << i);
_store |= (uint64)value << i;
}
// swap elements with certain indices.
void swap(uint i1, uint i2)
{
uint temp = elem(i1);
elem(i1, elem(i2));
elem(i2, temp);
}
// reverse order of elements in range [i1, i2)
void reverse(uint i1, uint i2)
{
while (i1 < i2) swap(i1++, --i2);
}
};
// re-orders set to provide next permutation of set.
// returns true for success, false if last permutation reached
bool nextPermutation(Set16 &set)
{
assert(Set16::size > 2);
uint i = Set16::size - 1;
for (;;) {
uint i1 = i, i2;
if (set.elem(--i) < set.elem(i1)) {
i2 = Set16::size;
while (set.elem(i) >= set.elem(--i2));
set.swap(i, i2);
set.reverse(i1, Set16::size);
return true;
}
if (!i) {
set.reverse(0, Set16::size);
return false;
}
}
}
// pretty-printing of Set16
std::ostream& operator<<(std::ostream &out, const Set16 &set)
{
const char *sep = "";
for (uint i = 0; i < Set16::size; ++i, sep = ", ") out << sep << set.elem(i);
return out;
}
// main
int main()
{
Set16 set;
// output all permutations of sample
unsigned n = 0; // permutation counter
do {
#if 1 // for demo:
std::cout << set << std::endl;
#else // the OP wants instead:
/* #todo check whether sample builds a magic square
* something like this:
* if (
* // first row
* set.elem(0) + set.elem(1) + set.elem(2) == 15
* etc.
*/
#endif // 1
++n;
} while(nextPermutation(set));
std::cout << n << " permutations found." << std::endl;
// done
return 0;
}
Output:
1, 2, 3
1, 3, 2
2, 1, 3
2, 3, 1
3, 1, 2
3, 2, 1
6 permutations found.
Live demo on ideone
So, here I am: permutations without arrays.
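For completeness, the #todo above could be filled in with a check along the following lines (my sketch, assuming N is raised to 9 so that elem(0)..elem(8) hold the square row by row; isMagicSquare is a made-up name):
// Possible body for the #todo above, with N == 9 and the digits stored row-major.
bool isMagicSquare(const Set16 &set)
{
    return set.elem(0) + set.elem(1) + set.elem(2) == 15   // rows
        && set.elem(3) + set.elem(4) + set.elem(5) == 15
        && set.elem(6) + set.elem(7) + set.elem(8) == 15
        && set.elem(0) + set.elem(3) + set.elem(6) == 15   // columns
        && set.elem(1) + set.elem(4) + set.elem(7) == 15
        && set.elem(2) + set.elem(5) + set.elem(8) == 15
        && set.elem(0) + set.elem(4) + set.elem(8) == 15   // diagonals
        && set.elem(2) + set.elem(4) + set.elem(6) == 15;
}
Printing set whenever this returns true, instead of the demo output, would list the 8 equivalent 3x3 solutions in ascending order, since nextPermutation() enumerates the digit sequences lexicographically.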
Finally, another idea hit me. Maybe the intention of the assignment was rather meant to teach "the look from outside"... It could be worth studying the description of Magic Squares again:
Equivalent magic squares
Any magic square can be rotated and reflected to produce 8 trivially distinct squares. In magic square theory, all of these are generally deemed equivalent and the eight such squares are said to make up a single equivalence class.
Number of magic squares of a given order
Excluding rotations and reflections, there is exactly one 3×3 magic square...
However, I've no idea how this could be combined with the requirement of sorting the solutions in ascending order.

Generating bit combinations without repetitions (not permutations)

Here is my previous question about finding the next bit permutation. It occurs to me that I have to modify my code to achieve something similar to the next bit permutation, but quite different.
I am encoding information about the neighbors of each vertex in a graph in the bit representation of an int. For example, if n = 4 (n - graph vertices) and the graph is complete, my array of vertices looks like:
vertices[0]=14 // 1110 - it means vertex no. 1 is connected with vertices no. 2, 3, and 4
vertices[1]=13 // 1101 - it means vertex no. 2 is connected with vertices no. 1, 3, and 4
vertices[2]=11 // 1011 - it means vertex no. 3 is connected with vertices no. 1, 2, and 4
vertices[3]=7 // 0111 - it means vertex no. 4 is connected with vertices no. 1, 2, and 3
The first (main) for loop runs from 0 to 2^n (because 2^n is the number of subsets of an n-element set).
So if n = 4, then there are 16 subsets:
{}, {1}, ..., {4}, {1,2}, {1,3}, ..., {3,4}, {1,2,3}, ..., {2,3,4}, {1,2,3,4}
These subsets are represented by index value in for loop
for(int i=0; i < 2^n; ++i) // i - represents value of subset
Let's say n = 4, and actually i = 5 //0101. I'd like to check subsets of this subset, so I would like to check:
0000
0001
0100
0101
Now I'm generating all bit permutations with 1 bit set, then all permutations with 2 bits set, and so on (until I reach BitCount(5) = 2), and I only keep the permutations I want (via an if statement). That's too many unneeded computations.
So my question is: how do I generate all possible COMBINATIONS WITHOUT REPETITIONS (n,k), where n - graph vertices and k - number of bits in i (stated above)?
My actual code (which generates all bit permutations and then filters out the unwanted ones):
for (int i = 0; i < PowerNumber; i++)
{
int independentSetsSum = 0;
int bc = BitCount(i);
if(bc == 1) independentSetsSum = 1;
else if (bc > 1)
{
for(int j = 1; j <= bc; ++j)
{
unsigned int v = (1 << j) - 1; // current permutation of bits
int bc2 = BitCount(j);
while(v <= i)
{
if((i & v) == v)
for(int neigh = 1; neigh <= bc2; neigh++)
if((v & vertices[GetBitPositionByNr(v, neigh) - 1]) == 0)
independentSetsSum ++;
unsigned int t = (v | (v - 1)) + 1;
v = t | ((((t & -t) / (v & -v)) >> 1) - 1);
}
}
}
}
All of this is because I have to count independent set number of every subset of n.
EDIT
I'd like to do it without creating any arrays; more generally, I'd like to avoid allocating any memory (no vectors either).
A little bit of an explanation:
n=5 //00101 - it is bit count of a number i - stated above, k=3, numbers in set (number represents bit position set to 1)
{
1, // 0000001
2, // 0000010
4, // 0001000
6, // 0100000
7 // 1000000
}
So a correct combination is {1,2,6} // 0100011, but {1,3,6} // 0100101 is a wrong combination (3 is not in the set). In my code there are plenty of wrong combinations which I have to filter out.
Not sure I correctly understand what exactly you want, but based on your example (where i==5) you want all the subsets of a given subset.
If it's the case you can directly generate all these subsets.
int subset = 5;
int x = subset;
while(x) {
//at this point x is a valid subset
doStuff(x);
x = (x-1)&subset;
}
doStuff(0); // 0 is always valid
Hope this helps.
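If only the subsets of i with exactly k bits set are wanted, the same walk can be combined with a popcount filter. A small sketch (forEachKSubset is a made-up name; __builtin_popcount is a GCC/Clang builtin):
#include <cstdio>

// Visit every subset x of 'subset' that has exactly k bits set, using the same
// x = (x-1) & subset walk as above plus a popcount filter.
void forEachKSubset(unsigned subset, int k) {
    unsigned x = subset;
    for (;;) {
        if (__builtin_popcount(x) == k)
            std::printf("%u\n", x);      // x is one of the wanted combinations
        if (x == 0) break;
        x = (x - 1) & subset;
    }
}
Called with subset = 5 and k = 1 this prints 4 and 1. It still walks all 2^BitCount(subset) subsets, but that is far fewer than all the C(n,j) patterns the original code generates and filters.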
My first guess to generate all the possible combinations would be the following rules (sorry if it's a bit hard to read)
start from the combination where all the 1s are on the left, all the 0s are on the right
move the leftmost 1 with a 0 on its immediate right to the right
if that bit had a 1 on its immediate left then
move all the 1s on its left all the way to the left
you're finished when you reach the combination with all the 1s on the right, and all the 0s on the left
Applying these rules for n=5 and k=3 would give this:
11100
11010
10110
01110
11001
10101
01101
10011
01011
00111
But that doesn't strike me as really efficient (and/or elegant).
A better way would be to find a way to iterate through these numbers by flipping only a bounded number of bits (I mean, you'd always need to flip O(1) bits to reach the next combination, rather than O(n)); that may allow a more efficient iteration (a bit like the https://en.wikipedia.org/wiki/Gray_code ).
I'll edit or post another answer if I find something better.

Algorithm to determine that a 2x2 square contains the numbers 1-4 (no repeats)

What would be an applicable C++ algorithm to determine that a 2x2 square (say, represented by a 1d vector) contains the numbers 1-4? I can't think of this, although it is quite simple. I would prefer to not have a giant if statement.
Examples of appropriate squares
1 2
3 4
2 3
4 1
1 3
2 4
Inappropriate squares:
1 1
2 3
1 2
3 3
1 2
4 4
I would probably start with an unsigned int set to 0 (e.g., call it x). I'd assign one bit in x to each possible input number (e.g., 1->bit 0, 2->bit 1, 3->bit 2, 4->bit 3). As I read the numbers, I'd verify that the number was in range, and if it was, set the corresponding bit in x.
At the end, if all the numbers are different, I should have 4 bits of x set. If any of the numbers was repeated, some of those bits won't be set.
If you prefer, you could use std::bitset or std::vector<bool> instead of the bits in a single number. In this case a single number is probably easier though, because you can verify the presence of all four desired bits with a single comparison.
bool valid(const unsigned square[]) {
unsigned r = 0;
for(int i = 0; i < 4; ++i)
r |= 1 << square[i];
return r == 30;
}
Just set the appropriate bits, and check whether all are set at the end.
Though it assumes the numbers are smaller than sizeof(unsigned) * CHAR_BIT.
Well if it's represented by a vector and we just want something that works:
bool isValidSquare(const std::vector<int>& square) {
if (square.size() == 4) {
std::set<int> uniqs(square.begin(), square.end());
return uniqs.count(1) && uniqs.count(2) && uniqs.count(3) && uniqs.count(4);
}
return false;
}
Create a static bitset with the bits corresponding to 1-4 set, and another one with all bits unset.
Traverse the vector, setting the respective bit in the second bitset for each vector element.
Compare the two bitsets. If they match, the square is appropriate; otherwise it isn't.
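A possible reading of that suggestion in code (checkWithBitsets is a made-up name; the expected pattern has bits 1-4 set):
#include <bitset>
#include <vector>

// Two-bitset variant: 'expected' has bits 1..4 set, 'seen' collects the bits of
// the elements actually present; the square is appropriate iff they end up equal.
bool checkWithBitsets(const std::vector<unsigned>& square) {
    static const std::bitset<5> expected("11110");   // bit 0 unused, bits 1..4 set
    std::bitset<5> seen;
    for (unsigned v : square)
        if (v >= 1 && v <= 4)
            seen.set(v);
    return seen == expected;
}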
You can use the standard library for this
#include <iostream>
#include <algorithm>
#include <vector>
int main()
{
std::vector<int> input{1,5,2,4};
std::sort(std::begin(input), std::end(input));
std::cout << std::boolalpha
<< std::equal(std::begin(input), std::end(input), std::begin({1,2,3,4}));
}
Assuming your inputs are only 1 to 4 numbers (assumption based on your examples), you can actually xor them and check if the result is 4 :
if ((tab[0] ^ tab[1] ^ tab[2] ^ tab[3]) == 4)
// Matches !
I had the feeling this would work but am too tired to prove it mathematically; this Python program shows it is right:
numbers = [1, 2, 3, 4]
good_results = []
bad_results = []
for i in numbers:
    for j in numbers:
        for k in numbers:
            for l in numbers:
                res = i ^ j ^ k ^ l
                print "%i %i %i %i -> %i" % (i, j, k, l, res)
                if len(set([i, j, k, l])) == 4: # this condition checks if i, j, k and l are different
                    good_results.append(res)
                else:
                    bad_results.append(res)
print set(good_results) # => set([4])
print set(bad_results) # => set([0, 1, 2, 3, 5, 6, 7])

Logical Question

Consider a [4x8] matrix "A" and a [1x8] matrix "B". I need to check whether there exists a value "X" such that
[A]^T * [X] = [B]^T, with X >= 0 element-wise { X is a [4x1] matrix, T = transpose }
Now here is the tricky/tedious part. The matrix A always has 1 on its diagonal: A11, A22, A33, A44 = 1. This matrix can be considered as two halves, the first half being the first 4 columns and the second half being the last 4 columns, something like this:
     1 -1 -1 -1  1 0 0 1
A = -1  1 -1  0  0 1 0 0
    -1 -1  1  0  1 0 0 0
    -1 -1 -1  1  1 1 0 0
Each row in the first half can have either two or three -1's; if a row has two -1's then the corresponding row in the second half should have one 1, and if a row has three -1's then the corresponding row in the second half should have two 1's. The overall objective is to have the sum of each row be 0.
Now B is a [1x8] matrix which can also be considered as two halves as follows:
B = -1 -1 0 0 0 0 1 1
Here there can be one, two, three or four -1's in the first half, and there should be an equal number of 1's in the second half, taken in all combinations. For example, if there are two -1's in the first half, they can be placed in 4 choose 2 = 6 ways, and for each of them there are 6 ways to place the 1's in the second half, for a total of 6*6 = 36 ways, i.e. 36 different values of B when there are two -1's in the first half. The placement of 1's in the matrix A should be handled the same way. The way I could think of doing this is to use a valarray or something of that sort to build the matrices A and B, but I don't know what to do.
Now, for every A, I have to test it against every combination of B to see if there exists an X with
[A]^T * [X] = [B]^T
I'm trying to prove a result that I got; I need to know whether such an X exists or not. I'm very confused about implementing this. Any suggestions are welcome. This falls under the linear programming concept in math. I want it either in C++ or in Matlab; any other languages are also acceptable, but I'm familiar with only these two. Thanks in advance.
Update:
Here is my answer for this problem :
clear;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%# Generating all possible values of vector B
%# permutations using dec2bin (start from 17 since it's the first solution)
vectorB = str2double(num2cell(dec2bin(17:255)));
%# changing the sign in the first half, then check that the total is zero
vectorB(:,1:4) = - vectorB(:,1:4);
vectorB = vectorB(sum(vectorB,2)==0,:);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%# generate all possible variation of first/second halves
z = -[0 1 1; 1 0 1; 1 1 0; 1 1 1]; n = -sum(z,2);
h1 = {
[ ones(4,1) z(:,1:3)] ;
[z(:,1:1) ones(4,1) z(:,2:3)] ;
[z(:,1:2) ones(4,1) z(:,3:3)] ;
[z(:,1:3) ones(4,1) ] ;
};
h2 = arrayfun(@(i) unique(perms([zeros(1,4-i) ones(1,i)]),'rows'), (1:2)', ...
'UniformOutput',false);
%'# generate all possible variations of complete rows
rows = cell(4,1);
for r=1:4
rows{r} = cell2mat( arrayfun( ...
@(i) [ repmat(h1{r}(i,:),size(h2{n(i)-1},1),1) h2{n(i)-1} ], ...
(1:size(h1{r},1))', 'UniformOutput',false) );
end
%'# generate all possible matrices (pick one row from each to form the matrix)
sz = cellfun(@(M)1:size(M,1), rows, 'UniformOutput',false);
[X1 X2 X3 X4] = ndgrid(sz{:});
matrices = cat(3, ...
rows{1}(X1(:),:), ...
rows{2}(X2(:),:), ...
rows{3}(X3(:),:), ...
rows{4}(X4(:),:) );
matrices = permute(matrices, [3 2 1]); %# 4-by-8-by-104976
A = matrices;
clear matrices X1 X2 X3 X4 rows h1 h2 sz z n r
options = optimset('LargeScale','off','Display','off');
for i = 1:size(A,3),
for j = 1:size(vectorB,1),
X = linprog([],[],[],A(:,:,i)',vectorB(j,:)');
if(size(X,1)>0) %# To check that it's not an empty matrix
if((size(find(X < 0),1)== 0)) %# to check the condition X>=0
if (A(:,:,i)'* X == (vectorB(j,:)'))
X
end
end
end
end
end
I got it with the help of the Stack Overflow folks. The only problem is that the linprog function emits a lot of these messages in every iteration, along with the answers produced:
(1) Exiting due to infeasibility: an all-zero row in the constraint matrix does not have a zero in the corresponding right-hand-side entry.
(2) Exiting: One or more of the residuals, duality gap, or total relative error has stalled: the primal appears to be infeasible (and the dual unbounded). (The dual residual < TolFun = 1.00e-008.)
What does this mean? How can I overcome it?
It is not clear from your question whether you are familiar with systems of linear equations and their solution, or whether that is what you are trying to "invent". See also here for a Matlab-specific explanation.
If you are familiar with that, you should be clearer in your question about what makes your problem different.