Backtracking - Filling a grid with coins - c++

I was trying to solve a question I came across while looking up interview questions. We are asked for the number of ways of placing r coins on an n*m grid such that each row and each column contains at least one coin.
I thought of a backtracking solution, processing each cell of the grid in row-major order, and I have set up my recursion as shown below. My approach seems to be faulty because it outputs 0 every time. Could someone please help me find the error in my approach? Thanks.
Constraints: n, m < 200 and r < n*m.
Here is the code I came up with.
#include <cstdio>
#define N 201

int n, m, r;
int used[N][N];
int grid[N][N]; // 1 if a coin is placed, 0 otherwise, -1 undecided.

bool isOk()
{
    int rows[N];
    int cols[N];
    for (int i = 0; i < n; i++) rows[i] = 0;
    for (int i = 0; i < m; i++) cols[i] = 0;
    int sum = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
        {
            if (grid[i][j] == 1)
            {
                rows[i]++;
                cols[j]++;
                sum++;
            }
        }
    for (int i = 0; i < n; i++)
    {
        if (rows[i] == 0) return false;
    }
    for (int j = 0; j < n; j++)
    {
        if (cols[j] == 0) return false;
    }
    if (sum == r) return true;
    else return false;
}

int calc_ways(int row, int col, int coins)
{
    if (row >= n) return 0;
    if (col >= m) return 0;
    if (coins > r) return 0;
    if (coins == r)
    {
        bool res = isOk();
        if (res) return 1;
        else 0;
    }
    if (row == n - 1 and col == m - 1)
    {
        bool res = isOk();
        if (res) return 1;
        else return 0;
    }
    int nrow, ncol;
    if (col + 1 >= m)
    {
        nrow = row + 1;
        ncol = 0;
    }
    else
    {
        nrow = row;
        ncol = col + 1;
    }
    if (used[row][col]) return calc_ways(nrow, ncol, coins);
    int ans = 0;
    used[row][col] = 1;
    grid[row][col] = 0;
    ans += calc_ways(nrow, ncol, coins);
    grid[row][col] = 1;
    ans += calc_ways(nrow, ncol, coins + 1);
    return ans;
}

int main()
{
    int t;
    scanf("%d", &t);
    while (t--)
    {
        scanf("%d %d %d", &n, &m, &r);
        for (int i = 0; i <= n; i++)
        {
            for (int j = 0; j <= m; j++)
            {
                used[i][j] = 0;
                grid[i][j] = -1;
            }
        }
        printf("%d\n", calc_ways(0, 0, 0));
    }
    return 0;
}

You barely need a program to solve this at all.
Without loss of generality, let m <= n.
To begin with, we must have n <= r, otherwise no solution is possible.
Then, we subdivide the problem into a square of size m x m, on to which we will place m coins along the major diagonal, and a remainder, on to which we will place n - m coins so as to fulfil the remaining condition.
There is one way to place the coins along the major diagonal of the square.
There are m^(n - m) possibilities for the remainder.
We can permute the total so far in n! ways, although some of those will be duplicates (how many is left as an exercise for the student).
Furthermore, there are r - n coins left to place and (m - 1)n places left to put them.
Putting these all together we have an upper bound of
1 x m^(n - m) x n! x C((m - 1)n, r - n)
solutions to the problem. Divide this number by the number of duplicate permutations and you're done.

Problem 1
The code will start by placing a coin on each square and marking each square as used.
It will then test the final position and decide that the final position does not meet the goal of r coins.
Next it will start backtracking, but it will never actually try another choice, because used[row][col] is set to 1 and this short-circuits the code that places coins.
In other words, one problem is that entries in "used" are set, but never cleared during the recursion.
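A minimal sketch of the repair this diagnosis points to (an adaptation of the tail of the question's calc_ways, not the original poster's code): restore both markers after exploring the two choices.
int ans = 0;
used[row][col] = 1;

grid[row][col] = 0;                     // choice 1: leave the cell empty
ans += calc_ways(nrow, ncol, coins);

grid[row][col] = 1;                     // choice 2: place a coin here
ans += calc_ways(nrow, ncol, coins + 1);

used[row][col] = 0;                     // undo the markers so that earlier
grid[row][col] = -1;                    // calls can revisit this cell
return ans;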
Problem 2
Another problem with the code is that if n and m are of size 200, then it will never complete.
The issue is that this backtracking code has complexity O(2^(n*m)) as it will try all possible combinations of placing coins (many universe lifetimes for n=m=200...).
I would recommend you look at a different approach. For example, you might want to consider dynamic programming to compute how many ways there are of placing "k" coins on the remaining "a" columns of the board such that we make sure that we place coins on the "b" rows of the board that currently have no coins.
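To make that suggestion concrete, here is a rough sketch of such a DP (my own formulation of the state described above, assuming identical coins and at most one coin per cell; it only illustrates the recurrence and would need modular arithmetic and a tighter memo for the stated limits):
#include <map>
#include <tuple>
#include <vector>

int n, m;                                       // grid dimensions (rows, columns)
std::vector<std::vector<long long>> C;          // C[i][j] = binomial coefficient
std::map<std::tuple<int, int, int>, long long> memo;

void buildBinomials(int upTo) {
    C.assign(upTo + 1, std::vector<long long>(upTo + 1, 0));
    for (int i = 0; i <= upTo; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }
}

// ways(a, b, k): ways to put k coins into the remaining a columns so that every
// one of those columns receives a coin and all b still-uncovered rows get covered.
long long ways(int a, int b, int k) {
    if (a == 0) return (b == 0 && k == 0) ? 1 : 0;
    auto key = std::make_tuple(a, b, k);
    auto it = memo.find(key);
    if (it != memo.end()) return it->second;
    long long total = 0;
    // Put t >= 1 coins into the next column; s of them land in uncovered rows.
    for (int t = 1; t <= n && t <= k; t++)
        for (int s = 0; s <= t && s <= b; s++)
            total += C[b][s] * C[n - b][t - s] * ways(a - 1, b - s, k - t);
    return memo[key] = total;
}
// usage: read n, m, r; call buildBinomials(n); the answer is ways(m, n, r).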

It can be treated as the total number of ways in which the grid can be filled with r coins, minus the ways leaving a single row empty and filling the rest, minus the ways leaving a single column empty and filling the rest, minus the ways leaving a row and a column together empty and filling the rest, which implies
p(n*m, r) - ( (p((n-1)*m, r) * c(n,1)) + (p((m-1)*n, r) * c(m,1)) + (p((n-1)*(m-1), r) * c(n,1)*c(m,1)) )
I just think so, but I am not sure of it!

Related

How do I reduce the repeated use of the % operator for faster execution in C

This is the code:
for (i = 1; i <= 1000000; i++) {
    for (j = 1; j <= 1000000; j++) {
        for (k = 1; k <= 1000000; k++) {
            if (i % j == k && j % k == 0)
                count++;
        }
    }
}
Or is it better to reduce any % operation that runs up to a million times in any program?
edit - I am sorry, the loops were initialized from 0; let's say i = 1 instead, OK!
Now, if I reduce the third loop as in #darshan's answer, then both the first
and the second loop can still run up to N times,
and it is still calculating % about n*n times, e.g. 2021 mod 2022, then 2021 mod 2023, and so on.
So my question is: % (modulus) is twice (and maybe more) as heavy as + and -, so is there any other logic that can be implemented here, as an alternative for this question, which gives the same answer as this logic would give?
Thank you so much for the knowledgeable comments & help.
The question is:
A triple of 3 integers (A, B, C) is considered to be special if it satisfies the
following properties for a given integer N:
A mod B = C
B mod C = 0
1 ≤ A, B, C ≤ N
I'm curious whether there is any smarter solution which can greatly reduce the time complexity.
A much more efficient version is the one below, but I think it can be optimized much further.
First of all, the modulo (%) operator is quite expensive, so try to avoid it on a large scale.
for (i = 0; i <= 1000000; i++)
    for (j = 0; j <= 1000000; j++)
    {
        a = i % j;
        for (k = 0; k <= j; k++)
            if (a == k && j % k == 0)
                count++;
    }
We placed a = i%j in the second loop because there is no need to calculate it every time k changes, as it is independent of k; and for the condition j%k == 0 to be true, k should be <= j, hence the changed loop bounds.
First of all, your code has undefined behavior due to division by zero: when k is zero then j%k is undefined, so I assume that all your loops should start with 1 and not 0.
Usually the % and the / operators are much slower to execute than any other operation. It is possible to get rid of most invocations of the % operators in your code by several simple steps.
First, look at the if line:
if (i % j == k && j%k == 0)
The i % j == k has a very strict constraint on k which plays into your hands. It means that it is pointless to iterate k at all, since there is only one value of k that passes this condition.
for (i = 1; i <= 1000000; i++) {
    for (j = 1; j <= 1000000; j++) {
        k = i % j;
        // Constrain k to the range of the original loop.
        if (k <= 1000000 && k > 0 && j % k == 0)
            count++;
    }
}
To get rid of "i % j", switch the loops. This change is possible since the result depends only on which combinations of i, j are tested, not on the order in which they are visited.
for (j = 1; j <= 1000000; j++) {
    for (i = 1; i <= 1000000; i++) {
        k = i % j;
        // Constrain k to the range of the original loop.
        if (k <= 1000000 && k > 0 && j % k == 0)
            count++;
    }
}
Here it is easy to observe how k behaves, and use that in order to iterate on k directly without iterating on i and so getting rid of i%j. k iterates from 1 to j-1 and then does it again and again. So all we have to do is to iterate over k directly in the loop of i. Note that i%j for j == 1 is always 0, and since k==0 does not pass the condition of the if we can safely start with j=2, skipping 1:
for (j = 2; j <= 1000000; j++) {
    for (i = 1, k = 1; i <= 1000000; i++, k++) {
        if (k == j)
            k = 0;
        // Constrain k to the range of the original loop.
        if (k <= 1000000 && k > 0 && j % k == 0)
            count++;
    }
}
It is still wasteful to run j%k repeatedly for the same values of j, k (remember that k repeats several times in the inner loop). For example, for j=3 the values of i and k go {1,1}, {2,2}, {3,0}, {4,1}, {5,2}, {6,0}, ..., {n*3, 0}, {n*3+1, 1}, {n*3+2, 2}, ... (for any value of n in the range 0 < n <= (1000000-2)/3).
The values beyond n= floor((1000000-2)/3)== 333332 are tricky - let's have a look. For this value of n, i=333332*3=999996 and k=0, so the last iteration of {i,k}: {n*3,0},{n*3+1,1},{n*3+2, 2} becomes {999996, 0}, {999997, 1}, {999998, 2}. You don't really need to iterate over all these values of n since each of them does exactly the same thing. All you have to do is to run it only once and multiply by the number of valid n values (which is 999996+1 in this case - adding 1 to include n=0).
Since that did not cover all elements, you need to continue the remainder of the values: {999999, 0}, {1000000, 1}. Notice that unlike other iterations, there is no third value, since it would set i out-of-range.
for (int j = 2; j <= 1000000; j++) {
    if (j % 1000 == 0) std::cout << std::setprecision(2) << (double)j*100/1000000 << "% \r" << std::flush;
    int innerCount = 0;
    for (int k = 1; k < j; k++) {
        if (j % k == 0)
            innerCount++;
    }
    int innerLoopRepeats = 1000000 / j;
    count += innerCount * innerLoopRepeats;
    // complete the remainder:
    for (int k = 1, i = j * innerLoopRepeats + 1; i <= 1000000; k++, i++) {
        if (j % k == 0)
            count++;
    }
}
This is still extremely slow, but at least it completes in less than a day.
It is possible to have a further speed up by using an important property of divisibility.
Consider the first inner loop (it's almost the same for the second inner loop),
and notice that it does a lot of redundant work, and does it expensively.
Namely, if j%k==0, it means that k divides j and that there is pairK such that pairK*k==j.
It is trivial to calculate the pair of k: pairK=j/k.
Obviously, for k > sqrt(j) there is pairK < sqrt(j). This implies that any k > sqrt(j) can be extracted simply
by scanning all k < sqrt(j). This feature lets you loop over only a square root of all interesting values of k.
Searching only sqrt(j) values gives a huge performance boost, and the whole program can finish in seconds.
Here is a view of the second inner loop:
// complete the remainder:
for (int k=1, i= j * innerLoopRepeats+1; i <= 1000000 && k*k <= j; k++, i++ ) {
if (j%k == 0)
{
count++;
int pairI = j * innerLoopRepeats + j / k;
if (pairI != i && pairI <= 1000000) {
count++;
}
}
}
The first inner loop has to go over a similar transformation.
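One possible shape for that transformation (a sketch, not code from the original answer; it assumes j >= 2, as in the loops above):
// Count the divisors of j that are strictly smaller than j, scanning only up
// to sqrt(j); each small divisor k pairs with the large divisor j / k.
int countProperDivisorsBelow(int j) {   // assumes j >= 2
    int innerCount = 0;
    for (int k = 1; k * k <= j; k++) {
        if (j % k == 0) {
            innerCount++;               // k itself (k <= sqrt(j) < j)
            int pairK = j / k;
            if (pairK != k && pairK != j)
                innerCount++;           // the paired divisor, skipping j itself
        }
    }
    return innerCount;
}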
Just reorder the iteration and calculate A based on the constraints:
void findAllSpecial(int N, void (*f)(int A, int B, int C))
{
    // 1 ≤ A,B,C ≤ N
    for (int C = 1; C < N; ++C) {
        // B mod C = 0
        for (int B = C; B < N; B += C) {
            // A mod B = C
            for (int A = C; A < N; A += B) {
                f(A, B, C);
            }
        }
    }
}
No divisions or modulo operations are needed at all; just for loops and additions.
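A possible driver for it (a sketch, not part of the answer; the callback is a plain function pointer and cannot capture state, so the counter lives in a global here):
#include <cstdio>

long long specialCount = 0;

void countTriple(int /*A*/, int /*B*/, int /*C*/) { specialCount++; }

int main()
{
    findAllSpecial(1000000, countTriple);   // the function from the answer above
    std::printf("%lld\n", specialCount);
    return 0;
}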
Below is the obvious optimization:
The 3rd loop with 'k' is really not needed, as there is already a many-to-one mapping from (i, j) -> k.
What I understand from the code is that you want to calculate the number of (i, j) pairs such that (i%j) is a factor of j. Is this correct, or am I missing something?

couldn't find the determinant of n*n matrix

I have written the following code to calculate the determinant of an N*N matrix. It works perfectly for 4*4 and 5*5 matrices, but it could not find the determinant of the 40*40 matrix named Z_z. The elements of matrix Z_z are given below.
#include <iostream>
#include <cmath>   // for pow
#include <cstdlib> // for system("pause")
using namespace std;

int const N_M = 40;

void getCofactor(double A[N_M][N_M], double A_rc[N_M][N_M], int r, int c, int n)
{
    int i = 0, j = 0;
    // Looping for each element of the matrix
    for (int row = 0; row < n; row++)
    {
        for (int col = 0; col < n; col++)
        {
            // Copying into temporary matrix only those elements
            // which are not in given row and column
            if (row != r && col != c)
            {
                A_rc[i][j] = A[row][col];
                j = j + 1;
                // Row is filled, so increase row index and
                // reset col index
                if (j == n - 1)
                {
                    j = 0;
                    i = i + 1;
                }
            }
        }
    }
}

double determinant(double A[N_M][N_M], int n)
{
    double D = 0.0; // Initialize result
    // Base case : if matrix contains single element
    if (n == 1) return A[0][0];
    else if (n == 2) return (A[0][0]*A[1][1]) - (A[0][1]*A[1][0]);
    else {
        double sub_Matrix_A_0c[N_M][N_M]; // To store cofactors
        // Iterate for each element of first row
        for (int c = 0; c < n; c++)
        {
            // Getting Cofactor of A[0][f]
            getCofactor(A, sub_Matrix_A_0c, 0, c, n);
            D += pow(-1.0, c) * A[0][c] * determinant(sub_Matrix_A_0c, n - 1);
        }
        return D;
    }
}
int main () {
double Z_z[N_M][N_M]=
{{-64,16,-4,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{-43.7213019529827,12.4106746539480,-3.52287874528034,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,-43.7213019529827,12.4106746539480,-3.52287874528034,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,-27,9,-3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,-27,9,-3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,-16.0579142269798,6.36491716338729,-2.52287874528034,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,-16.0579142269798,6.36491716338729,-2.52287874528034,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,-8,4,-2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-8,4,-2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-3.53179897265895,2.31915967282662,-1.52287874528034,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-3.53179897265895,2.31915967282662,-1.52287874528034,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,1,-1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,1,-1,1,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-0.142956190020121,0.273402182265940,-0.522878745280338,1,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-0.142956190020121,0.273402182265940,-0.522878745280338,1,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2.20222621658426,1.69267904961742,1.30102999566398,1},
{37.2320239618439,-7.04575749056068,1,0,-37.2320239618439,7.04575749056068,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,27,-6,1,0,-27,6,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,19.0947514901619,-5.04575749056068,1,0,-19.0947514901619,5.04575749056068,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,12,-4,1,0,-12,4,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6.95747901847985,-3.04575749056068,1,0,-6.95747901847985,3.04575749056068,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,-2,1,0,-3,2,-1,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.820206546797821,-1.04575749056068,1,0,-0.820206546797821,1.04575749056068,-1,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,-1,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,2,1,0,-3,-2,-1,0},
{-21.1372724716820,2,0,0,21.1372724716820,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,-18,2,0,0,18,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,-15.1372724716820,2,0,0,15.1372724716820,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,-12,2,0,0,12,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-9.13727247168203,2,0,0,9.13727247168203,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-6,2,0,0,6,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-3.13727247168203,2,0,0,3.13727247168203,-2,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,-2,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,2,0,0,-6,-2,0,0},
{24,-2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-7.80617997398389,-2,0,0}};
double det=determinant(Z_z, 40);
cout<<det;
system ("pause");
return 0;}
You recursively call the determinant() function n times at the first stage, then n - 1 times for each of those n calls, and so on. So the total number of calls will be close to n! (factorial).
When n = 4 or n = 5 the number of calls is still acceptable, but at n = 40 you try to make 40! = 815915283247897734345611269596115894272000000000 recursive calls, to say nothing of as many operations of every other kind. I don't think you can find a machine to handle that.
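For a matrix of this size an O(n^3) method is needed instead of cofactor expansion. A minimal sketch of one such method (not the poster's code, and not a fix of the recursive version): Gaussian elimination with partial pivoting, where the determinant is the product of the pivots with one sign flip per row swap. It takes a std::vector copy of the matrix, so Z_z would have to be copied into one.
#include <cmath>
#include <utility>
#include <vector>

double determinant_lu(std::vector< std::vector<double> > a) {
    const int n = (int)a.size();
    double det = 1.0;
    for (int col = 0; col < n; col++) {
        int pivot = col;                       // partial pivoting for stability
        for (int row = col + 1; row < n; row++)
            if (std::fabs(a[row][col]) > std::fabs(a[pivot][col])) pivot = row;
        if (a[pivot][col] == 0.0) return 0.0;  // singular (or numerically so)
        if (pivot != col) { std::swap(a[pivot], a[col]); det = -det; }
        det *= a[col][col];
        for (int row = col + 1; row < n; row++) {
            double factor = a[row][col] / a[col][col];
            for (int k = col; k < n; k++)
                a[row][k] -= factor * a[col][k];
        }
    }
    return det;
}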

Equality test function

Below is a function which aims to perform an equality test between adjacent numbers in a one-dimensional vector.
This 1D vector holds values which represent an nxn grid. [v is the vector]
When two adjacent values are equal it returns false.
For example consider this 3x3 grid:
i\j| 0 | 1 | 2
0 | 1 | 2 | 3
1 | 4 | 5 | 6
2 | 7 | 8 | 9
The issue with the code I wrote is that not all of the numbers in the grid have 4 adjacent numbers, and testing indices which don't exist (e.g. when trying to compare against the number above the top-left number in the grid, 1 in the example) might lead to inaccurate outcomes.
In addition, what I wrote does not seem to be the most efficient way to go about it. Surely there is a simpler way to do this than having to introduce 5 new variables?
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        int x = v[convert(i, j, n)];
        int c = v[convert(i-1, j, n)];
        int s = v[convert(i+1, j, n)];
        int b = v[convert(i, j+1, n)];
        int n = v[convert(i, j-1, n)];
        if (x == c || x == s || x == b || x == n) {
            return false;
        }
    }
}

// another function used to convert 2d into 1D index
int convert(int row, int col, int rowlen) {
    return row * rowlen + col;
}
I would appreciate any help.
If you want an efficient way to do this, you should consider the cache locality of your values, how much index conversion you do, how many bounds tests you do, and how many comparisons are needed.
First thing to note is that you do not need to compare to the left and above when you're already comparing to the right and below. This is because the left/up test will happen when testing to the right/down on the next iteration. So immediately, that halves the amount of testing.
A first optimization would be to split the operation into row tests and column tests:
// Test adjacency in rows
for (const int *rowptr = v, *end = v + n * n;
     rowptr != end;
     rowptr += n)
{
    for (int col = 1; col < n; col++) {
        if (rowptr[col-1] == rowptr[col]) return false;
    }
}

// Test adjacency in columns
for (const int *row0ptr = v, *row1ptr = v + n, *end = v + n * n;
     row1ptr != end;
     row0ptr = row1ptr, row1ptr += n)
{
    for (int col = 0; col < n; col++) {
        if (row0ptr[col] == row1ptr[col]) return false;
    }
}
To avoid making two passes through the entire array, you'd need to combine these, but it starts getting a bit messy. Notice how the two separate passes currently have different bounds (the row-tests loop from column 1 to n, whereas the column tests loop from row 0 to n-1).
Combining the loops would only make sense if n is quite large and if it's absolutely critical that this piece of code is fast. The idea is to perform a single pass through the entire array, avoiding any issues with stuff like L1 cache misses on the second pass.
It would look something like this:
const int *row0ptr = v, *row1ptr = v + n, *end = v + n * n;
for ( ; row1ptr != end; row0ptr = row1ptr, row1ptr += n)
{
    // Test first column
    if (row0ptr[0] == row1ptr[0]) return false;
    // Test row0 and remaining columns
    for (int col = 1; col < n; col++) {
        if (row0ptr[col-1] == row0ptr[col]) return false;
        if (row0ptr[col] == row1ptr[col]) return false;
    }
}
// Test last row
for (int col = 1; col < n; col++) {
    if (row0ptr[col-1] == row0ptr[col]) return false;
}
First I'd recommend breaking up the logic, because it's getting quite convoluted. But something like this works: it avoids going outside the grid by adding extra checks on i and j, and it may avoid unnecessary calls to convert, since if one of the earlier tests is true the later tests aren't performed.
int x = v[convert(i, j, n)];
if (i > 0 && x == v[convert(i-1, j, n)])
    return false;
if (i < n - 1 && x == v[convert(i+1, j, n)])
    return false;
if (j > 0 && x == v[convert(i, j-1, n)])
    return false;
if (j < n - 1 && x == v[convert(i, j+1, n)])
    return false;

Algorithm on hexagonal grid

A hexagonal grid is represented by a two-dimensional array with R rows and C columns. The first row always comes "before" the second in the hexagonal grid construction (see image below). Let k be the number of turns. Each turn, an element of the grid is 1 if and only if the number of neighbours of that element that were 1 the turn before is odd. Write C++ code that outputs the grid after k turns.
Limitations:
1 <= R <= 10, 1 <= C <= 10, 1 <= k <= 2^(63) - 1
An example with input (in the first row are R, C and k, then comes the starting grid):
4 4 3
0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0
Simulation (see image): yellow elements represent '1' and blank elements represent '0'.
This problem is easy to solve if I simulate and produce a grid each turn, but with a big enough k it becomes too slow. What is a faster solution?
EDIT: code (n and m are used instead of R and C):
#include <cstdio>
#include <cstring>
using namespace std;

int old[11][11];
int _new[11][11];
int n, m;
long long int k;

int main() {
    scanf("%d %d %lld", &n, &m, &k);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) scanf("%d", &old[i][j]);
    }
    printf("\n");
    while (k) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                int count = 0;
                if (i % 2 == 0) {
                    if (i) {
                        if (j) count += old[i-1][j-1];
                        count += old[i-1][j];
                    }
                    if (j) count += (old[i][j-1]);
                    if (j < m-1) count += (old[i][j+1]);
                    if (i < n-1) {
                        if (j) count += old[i+1][j-1];
                        count += old[i+1][j];
                    }
                }
                else {
                    if (i) {
                        if (j < m-1) count += old[i-1][j+1];
                        count += old[i-1][j];
                    }
                    if (j) count += old[i][j-1];
                    if (j < m-1) count += old[i][j+1];
                    if (i < n-1) {
                        if (j < m-1) count += old[i+1][j+1];
                        count += old[i+1][j];
                    }
                }
                if (count % 2) _new[i][j] = 1;
                else _new[i][j] = 0;
            }
        }
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) old[i][j] = _new[i][j];
        }
        k--;
    }
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            printf("%d", old[i][j]);
        }
        printf("\n");
    }
    return 0;
}
For a given R and C, you have N=R*C cells.
If you represent those cells as a vector of elements in GF(2), i.e, 0s and 1s where arithmetic is performed mod 2 (addition is XOR and multiplication is AND), then the transformation from one turn to the next can be represented by an N*N matrix M, so that:
turn[i+1] = M*turn[i]
You can exponentiate the matrix to determine how the cells transform over k turns:
turn[i+k] = (M^k)*turn[i]
Even if k is very large, like 2^63-1, you can calculate M^k quickly using exponentiation by squaring: https://en.wikipedia.org/wiki/Exponentiation_by_squaring This only takes O(log(k)) matrix multiplications.
Then you can multiply your initial state by the matrix to get the output state.
From the limits on R, C, k, and time given in your question, it's clear that this is the solution you're supposed to come up with.
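A sketch of what that could look like (not code from the answer; building the step matrix M from the hexagonal neighbour relation, i.e. the same case analysis as in the question's loops, is left out):
#include <bitset>
#include <vector>

const int MAXN = 100;                            // at most 10 x 10 cells
typedef std::vector< std::bitset<MAXN> > Matrix; // rows of an N x N matrix over GF(2)

// C = A * B over GF(2): adding a row is xor, multiplying entries is and.
Matrix multiply(const Matrix& a, const Matrix& b, int n) {
    Matrix c(n);
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++)
            if (a[i][k]) c[i] ^= b[k];
    return c;
}

Matrix identity(int n) {
    Matrix id(n);
    for (int i = 0; i < n; i++) id[i][i] = 1;
    return id;
}

// M^k by exponentiation by squaring: O(log k) matrix multiplications.
Matrix power(Matrix m, unsigned long long k, int n) {
    Matrix result = identity(n);
    while (k) {
        if (k & 1) result = multiply(result, m, n);
        m = multiply(m, m, n);
        k >>= 1;
    }
    return result;
}

// next state = M * state, where state is the column vector of cells over GF(2).
std::bitset<MAXN> apply(const Matrix& m, const std::bitset<MAXN>& state, int n) {
    std::bitset<MAXN> out;
    for (int i = 0; i < n; i++)
        out[i] = (m[i] & state).count() % 2;     // dot product mod 2
    return out;
}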
There are several ways to speed up your algorithm.
You do the neighbour calculation with the out-of-bounds checking in every turn. Do some preprocessing and calculate the neighbours of each cell once at the beginning. (Aziuth has already proposed that.)
Then you don't need to count the neighbours of all cells. Each cell is on if an odd number of neighbouring cells were on in the last turn and it is off otherwise.
You can think of this differently: Start with a clean board. For each active cell of the previous move, toggle the state of all surrounding cells. When an odd number of neighbours cause a toggle, the cell ends up on; when the number is even, the toggles cancel each other out. Look at the first step of your example. It's like playing Lights Out, really.
This method is faster than counting the neighbours if the board has only few active cells. Its worst case is a board whose cells are all on, in which case it is as good as neighbour-counting, because you have to touch each neighbour of each cell.
The next logical step is to represent the board as a sequence of bits, because bits already have a natural way of toggling, the exclusive-or (xor) operator, ^. If you keep the list of neighbours for each cell as a bit mask m, you can then toggle the board b via b ^= m.
These are the improvements that can be made to the algorithm. The big improvement is to notice that the patterns will eventually repeat. (The toggling bears resemblance with Conway's Game of Life, where there are also repeating patterns.) Also, the given maximum number of possible iterations, 2⁶³ is suspiciously large.
The playing board is small. The example in your question will repeat at least after 2¹⁶ turns, because the 4×4 board can have at most 2¹⁶ layouts. In practice, turn 127 reaches the ring pattern of the first move after the original and it loops with a period of 126 from then.
The bigger boards may have up to 2¹⁰⁰ layouts, so they may not repeat within 2⁶³ turns. A 10×10 board with a single active cell near the middle has a period of 2,162,622. This may indeed be a topic for a maths study, as Aziuth suggests, but we'll tackle it with profane means: keep a hash map of all previous states and the turns where they occurred, then check in each turn whether the pattern has occurred before.
We now have:
a simple algorithm for toggling the cells' state and
a compact bitwise representation of the board, which allows us to create a hash map of the previous states.
Here's my attempt:
#include <iostream>
#include <map>

/*
 * Bit representation of a playing board, at most 10 x 10
 */
struct Grid {
    unsigned char data[16];

    Grid() : data() {
    }

    void add(size_t i, size_t j) {
        size_t k = 10 * i + j;
        data[k / 8] |= 1u << (k % 8);
    }

    void flip(const Grid &mask) {
        size_t n = 13;
        while (n--) data[n] ^= mask.data[n];
    }

    bool ison(size_t i, size_t j) const {
        size_t k = 10 * i + j;
        return ((data[k / 8] & (1u << (k % 8))) != 0);
    }

    bool operator<(const Grid &other) const {
        size_t n = 13;
        while (n--) {
            if (data[n] > other.data[n]) return true;
            if (data[n] < other.data[n]) return false;
        }
        return false;
    }

    void dump(size_t n, size_t m) const {
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < m; j++) {
                std::cout << (ison(i, j) ? 1 : 0);
            }
            std::cout << '\n';
        }
        std::cout << '\n';
    }
};

int main()
{
    size_t n, m, k;
    std::cin >> n >> m >> k;
    Grid grid;
    Grid mask[10][10];
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) {
            int x;
            std::cin >> x;
            if (x) grid.add(i, j);
        }
    }
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) {
            Grid &mm = mask[i][j];
            if (i % 2 == 0) {
                if (i) {
                    if (j) mm.add(i - 1, j - 1);
                    mm.add(i - 1, j);
                }
                if (j) mm.add(i, j - 1);
                if (j < m - 1) mm.add(i, j + 1);
                if (i < n - 1) {
                    if (j) mm.add(i + 1, j - 1);
                    mm.add(i + 1, j);
                }
            } else {
                if (i) {
                    if (j < m - 1) mm.add(i - 1, j + 1);
                    mm.add(i - 1, j);
                }
                if (j) mm.add(i, j - 1);
                if (j < m - 1) mm.add(i, j + 1);
                if (i < n - 1) {
                    if (j < m - 1) mm.add(i + 1, j + 1);
                    mm.add(i + 1, j);
                }
            }
        }
    }
    std::map<Grid, size_t> prev;
    std::map<size_t, Grid> pattern;
    for (size_t turn = 0; turn < k; turn++) {
        Grid next;
        std::map<Grid, size_t>::const_iterator it = prev.find(grid);
        if (it != prev.end()) {
            size_t start = it->second;
            size_t period = turn - start;
            size_t index = (k - turn) % period;
            grid = pattern[start + index];
            break;
        }
        prev[grid] = turn;
        pattern[turn] = grid;
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < m; j++) {
                if (grid.ison(i, j)) next.flip(mask[i][j]);
            }
        }
        grid = next;
    }
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) {
            std::cout << (grid.ison(i, j) ? 1 : 0);
        }
        std::cout << '\n';
    }
    return 0;
}
There is probably room for improvement. In particular, I'm not sure how it fares for big boards. (The code above uses an ordered map. We don't need the order, so using an unordered map will yield faster code. The example above with a single active cell on a 10×10 board took significantly longer than a second with an ordered map.)
Not sure about how you did it - and you should really always post code here - but let's try to optimize things here.
First of all, there is not really a difference between that and a quadratic grid. Different neighbor relationships, but I mean, that is just a small translation function. If you have a problem there, we should treat this separately, maybe on CodeReview.
Now, the naive solution is:
for all fields
    count neighbors
    if odd: add a marker to update to one, else to zero
for all fields
    update all fields by marker of former step
This is obviously O(N). Iterating twice roughly doubles the actual run time, but that should not be too bad. Try not to allocate space every time you do this; reuse existing structures.
I'd propose this solution:
at the start:
    create a std::vector or std::list "activated" of pointers to all fields that are activated
each iteration:
    create a vector "new_activated"
    for all items in activated
        count neighbors, if odd add to new_activated
    for all items in activated
        set to inactive
    replace activated by new_activated*
    for all items in activated
        set to active
*this can be done efficiently by putting them in a smart pointer and using move semantics
This code only works on the activated fields. As long as they stay within some smaller area, this is far more efficient. However, I have no idea when this changes - if there are activated fields all over the place, this might be less efficient. In that case, the naive solution might be the best one.
EDIT: now that you have posted your code... your code is quite procedural. This is C++, use classes and representations of things. You probably do the search for neighbors right, but you can easily make mistakes there and therefore should isolate that part in a function, or better, a method. Raw arrays are bad and variable names like n or k are bad. But before I start tearing your code apart, I instead repeat my recommendation: put the code on CodeReview and have people tear it apart until it is perfect.
This started off as a comment, but I think it could be helpful as an answer in addition to what has already been stated.
You stated the following limitations:
1 <= R <= 10, 1 <= C <= 10
Given these restrictions, I'll take the liberty of representing the grid/matrix M of R rows and C columns in constant space (i.e. O(1)), and also of checking its elements in O(1) instead of O(R*C) time, thus removing this part from our time-complexity analysis.
That is, the grid can simply be declared as bool grid[10][10];.
The key input is the large number of turns k, stated to be in the range:
1 <= k <= 2^(63) - 1
The problem is that, AFAIK, you're required to perform k turns. This makes the algorithm O(k). Thus, no proposed solution can do better than O(k)[1].
To improve the speed in a meaningful way, this upper-bound must be lowered in some way[1], but it looks like this cannot be done without altering the problem constraints.
The fact that k can be so large is the main issue. The most anyone can do is improve the rest of the implementation, but this will only improve by a constant factor; you'll have to go through k turns regardless of how you look at it.
Therefore, unless some clever fact and/or detail is found that allows this bound to be lowered, there's no other choice.
[1] For example, it's not like trying to determine if some number n is prime, where you can check all numbers in the range(2, n) to see if they divide n, making it a O(n) process, or notice that some improvements include only looking at odd numbers after checking n is not even (constant factor; still O(n)), and then checking odd numbers only up to √n, i.e., in the range(3, √n, 2), which meaningfully lowers the upper-bound down to O(√n).

Why does this give Segmentation fault?

So, I was solving a question which goes like this:
Given a list of n integers, A={a1,a2,…,an}, and another integer, k, representing the expected sum. Select zero or more numbers from A such that the sum of these numbers is as near as possible to, but does not exceed, the expected sum (k).
Note
Each element of A can be selected multiple times.
If no element is selected then the sum is 0.
Input Format
The first line contains T the number of test cases.
Each test case comprises two lines. The first line contains two integers, n and k, representing the length of list A and the expected sum, respectively. The second line consists of n space-separated integers, a1,a2,…,an, representing the elements of list A.
Constraints:
1 ≤ T ≤ 10
1 ≤ n ≤ 2000
1 ≤ k ≤ 2000
1 ≤ ai ≤ 2000, where i∈[1,n]
Output Format
Output T lines, the maximum sum for each test case which is as near as possible to, but does not exceed, the expected sum (k).
Here is the problem link: https://www.hackerrank.com/challenges/unbounded-knapsack
Now, I developed a top-down approach for this as follows:
int knapsack(int arr[], int n, int Sum, int dp[][1000])
{
    if (n < 0 || Sum < 0)
        return 0;
    if (n == 0 || Sum == 0)
    {
        dp[Sum][n] = 0;
        return 0;
    }
    if (arr[n-1] == Sum)
    {
        dp[Sum][n-1] = arr[n-1];
        return arr[n-1];
    }
    else if (dp[Sum][n] != -1)
        return dp[Sum][n];
    else if (arr[n-1] > Sum)
    {
        dp[Sum][n] = knapsack(arr, n-1, Sum, dp);
        return dp[Sum][n];
    }
    else // gets selected or doesn't get selected
    {
        dp[Sum][n] = max( arr[n-1] + knapsack(arr, n, (Sum-arr[n-1]), dp), knapsack(arr, n-1, Sum, dp) );
        return dp[Sum][n];
    }
}
However, the above gives a Seg fault when the input is given as:
1
5 9
3 4 4 4 8
I tried debugging it, but it shows a seg-fault at the beginning of the function after many recursive calls. Am I missing any condition?
In your else:
dp[Sum][n] = max( arr[n-1] + knapsack(arr,n,(Sum-arr[n-1]),dp) , knapsack(arr,n-1,Sum,dp) );
It should be n - 1 as well, because you're done with the current element no matter what. As it is now, it will do more recursive calls than necessary. With this fix, the segfault is gone on my PC and the function returns 0.
dp[Sum][n] = max( arr[n-1] + knapsack(arr,n-1,(Sum-arr[n-1]),dp) , knapsack(arr,n-1,Sum,dp) );
This program correctly returns 8 as the answer for your example:
#include <iostream>
using namespace std;

int knapsack(int arr[], int n, int Sum, int dp[][1000]);

int main()
{
    int t;
    int n, k;
    cin >> t;
    int i, j;
    int dp[1000][1000];
    for (int i = 0; i < t; i++)
    {
        for (i = 0; i < 1000; i++)
            for (j = 0; j < 1000; j++)
                dp[i][j] = -1;
        int a[2000];
        cin >> n >> k;
        for (int j = 0; j < n; j++)
            cin >> a[j]; // you had i here
        while (knapsack(a, n - 1, k, dp) == 0) // lower k until we can build it exactly
            --k;
        cout << k << endl;
    }
    return 0;
}
// knapsack(n, Sum) = true if we can use the first n elements to build Sum exactly
int knapsack(int arr[], int n, int Sum, int dp[][1000])
{
    if (Sum < 0)
        return 0;
    if (n < 0)
    {
        return Sum == 0;
    }
    else if (dp[Sum][n] != -1)
        return dp[Sum][n];
    else // gets selected or doesn't get selected
    {
        dp[Sum][n] = knapsack(arr, n-1, (Sum-arr[n]), dp) || knapsack(arr, n-1, Sum, dp);
    }
    return dp[Sum][n];
}
If you can use the same element more than once, I suggest the following iterative approach with a simple one dimensional array:
dp[0] = true
s = 0
for i = 0 to number of elements:
    s += elements[i]
    for j = elements[i] to s:
        dp[j] = dp[j] || dp[j - elements[i]]
Where dp[x] = true if we can build sum x.
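A possible C++ version of that idea (a sketch; because elements may be reused, the inner loop here simply runs up to the target k rather than the running sum):
#include <vector>

// Largest sum <= k that can be built from the elements, reusing them freely.
int bestSum(const std::vector<int>& elements, int k)
{
    std::vector<char> dp(k + 1, 0);
    dp[0] = 1;                          // the empty selection builds 0
    for (int e : elements)
        for (int j = e; j <= k; j++)    // increasing j allows reuse of e
            if (dp[j - e]) dp[j] = 1;
    for (int best = k; ; best--)
        if (dp[best]) return best;      // dp[0] is true, so this terminates
}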
Your other bug is:
for(int j=0;j<n;j++)
cin>>a[i];
Notice the i where you meant j