How to optimize the Knight's tour algorithm? - C++
I coded the Knight's tour algorithm in C++ using the backtracking method,
but it seems too slow, or gets stuck in what looks like an infinite loop, for n > 7 (boards bigger than 7 by 7).
The question is: what is the time complexity of this algorithm, and how can I optimize it?
The Knight's Tour problem can be stated as follows:
Given a chess board with n × n squares, find a path for the knight that visits every square exactly once.
Here is my code:
#include <iostream>
#include <iomanip>
using namespace std;

int counter = 1;

class horse {
public:
    horse(int);
    bool backtrack(int, int);
    void print();
private:
    int size;
    int arr[8][8];
    void mark(int &);
    void unmark(int &);
    bool unvisited(int &);
};

horse::horse(int s) {
    int i, j;
    size = s;
    for (i = 0; i <= s - 1; i++)
        for (j = 0; j <= s - 1; j++)
            arr[i][j] = 0;
}

void horse::mark(int &val) {
    val = counter;
    counter++;
}

void horse::unmark(int &val) {
    val = 0;
    counter--;
}

void horse::print() {
    cout << "\n - - - - - - - - - - - - - - - - - -\n";
    for (int i = 0; i <= size - 1; i++) {
        cout << "| ";
        for (int j = 0; j <= size - 1; j++)
            cout << setw(2) << setfill('0') << arr[i][j] << " | ";
        cout << "\n - - - - - - - - - - - - - - - - - -\n";
    }
}
bool horse::backtrack(int x, int y) {
    if (counter > (size * size))
        return true;
    if (unvisited(arr[x][y])) {
        if ((x - 2 >= 0) && (y + 1 <= (size - 1))) {
            mark(arr[x][y]);
            if (backtrack(x - 2, y + 1))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x - 2 >= 0) && (y - 1 >= 0)) {
            mark(arr[x][y]);
            if (backtrack(x - 2, y - 1))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x - 1 >= 0) && (y + 2 <= (size - 1))) {
            mark(arr[x][y]);
            if (backtrack(x - 1, y + 2))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x - 1 >= 0) && (y - 2 >= 0)) {
            mark(arr[x][y]);
            if (backtrack(x - 1, y - 2))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x + 2 <= (size - 1)) && (y + 1 <= (size - 1))) {
            mark(arr[x][y]);
            if (backtrack(x + 2, y + 1))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x + 2 <= (size - 1)) && (y - 1 >= 0)) {
            mark(arr[x][y]);
            if (backtrack(x + 2, y - 1))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x + 1 <= (size - 1)) && (y + 2 <= (size - 1))) {
            mark(arr[x][y]);
            if (backtrack(x + 1, y + 2))
                return true;
            else
                unmark(arr[x][y]);
        }
        if ((x + 1 <= (size - 1)) && (y - 2 >= 0)) {
            mark(arr[x][y]);
            if (backtrack(x + 1, y - 2))
                return true;
            else
                unmark(arr[x][y]);
        }
    }
    return false;
}
bool horse::unvisited(int &val) {
    return val == 0;
}
int main() {
    horse example(7);
    if (example.backtrack(0, 0)) {
        cout << " >>> Successful! <<< " << endl;
        example.print();
    } else
        cout << " >>> Not possible! <<< " << endl;
}
The output for the example above (n = 7) looks like this:
Since at each step you have 8 possibilities to check, and this has to be done for each cell (minus the last one), the time complexity of this algorithm is O(8^(n^2-1)) = O(8^(n^2)), where n is the number of squares along each edge of the chessboard. To be precise, this is the worst-case time complexity (the time taken to explore all the possibilities if no solution is found, or if it is the last one found).
As for optimizations, there can be two types of improvements:
Implementation improvements
You're calculating x-2, x-1, x+1, x+2 (and the same for y) at least twice each.
I suggest rewriting things like this:
int sm1 = size - 1;
int xm2 = x - 2;
int yp1 = y + 1;
if ((xm2 >= 0) && (yp1 <= sm1)) {
    mark(arr[x][y]);
    if (backtrack(xm2, yp1))
        return true;
    else
        unmark(arr[x][y]);
}
int ym1 = y - 1;
if ((xm2 >= 0) && (ym1 >= 0)) {
    mark(arr[x][y]);
    if (backtrack(xm2, ym1))
        return true;
    else
        unmark(arr[x][y]);
}
Note the reuse of the precalculated values in the subsequent blocks as well.
I've found this to be more effective than I was expecting; variable assignment and recall is faster than redoing the arithmetic.
Also consider storing sm1 = s - 1; and area = s * s; in the constructor instead of recalculating them each time.
However, this (being an implementation improvement and not an algorithm improvement) will not change the time complexity order; it only divides the running time by a constant factor.
I mean that the O(8^(n^2)) time is really k*8^(n^2), and the difference will be a lower k factor.
Algorithm improvements
I can think of this:
For each tour starting in a cell on the diagonal (like starting at (0,0) as in your example), you can restrict the first move to one of the two half-boards created by the diagonal.
This is because of symmetry: either there exist two symmetric solutions, or none.
This gives O(4*8^(n^2-2)) for those cases, but the bound stays the same for the non-symmetric ones.
Note that again O(4*8^(n^2-2)) = O(8^(n^2)).
Try to interrupt the search early if some global condition implies that no solution is possible given the current markings.
For example, the knight cannot jump over two fully marked adjacent columns or rows, so if you have two fully marked columns (or rows) with unmarked cells on both sides, you can be sure there will be no solution. This can be checked in O(n) if you maintain an up-to-date count of marked cells per column/row. If you run the check after every marking you add O(n*8^(n^2)) time, which is not bad if n <= 8. A workaround is simply not to check every time, but maybe every few markings (checking counter % 8 == 4, for example, or better counter > 2*n && counter % 8 == 4).
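As a concrete sketch of the barrier check described above (the names here, such as rowMarks, are hypothetical and not from the question's code): a knight move changes the row index by at most 2, so two adjacent fully marked rows form a barrier the knight can never cross, and if unmarked cells remain on both sides of it the current partial tour can be abandoned. With a per-row counter maintained by mark()/unmark(), the check is one O(n) pass:

```cpp
#include <vector>

// rowMarks[i] holds the number of marked cells in row i (kept up to date
// incrementally). A knight move changes the row index by 1 or 2, so two
// ADJACENT fully marked rows are a barrier: landing beyond them would need
// a row change of 3 or more. If unmarked cells exist both above and below
// such a barrier, the tour can never be completed, so the search can prune.
bool barrierMakesTourImpossible(const std::vector<int>& rowMarks, int size) {
    bool unmarkedSeen = false;  // an unmarked cell exists above the current row
    bool barrierSeen = false;   // a full adjacent pair with unmarked cells above it
    for (int r = 0; r < size; ++r) {
        if (rowMarks[r] < size) {            // row r still has unmarked cells
            if (barrierSeen) return true;    // unmarked cells on both sides: prune
            unmarkedSeen = true;
        } else if (r + 1 < size && rowMarks[r + 1] == size && unmarkedSeen) {
            barrierSeen = true;              // rows r and r+1 are both full
        }
    }
    return false;
}
```

The same pass can be run over a column counter; together the two checks implement the row/column cut at O(n) per invocation.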
Find other ideas to cleverly interrupt the search early, but remember that a backtracking algorithm with 8 options will always keep its O(8^(n^2)) nature.
Bye
Here are my 2 cents. I started with the basic backtracking algorithm. It ran indefinitely for n > 7, as you mentioned. I then implemented Warnsdorff's rule and it works like magic, giving a result in less than a second for boards of sizes up to n = 31. For n > 31 it raised a stack overflow error because the recursion depth exceeded the limit. I found a better discussion here that talks about the problems with Warnsdorff's rule and possible further optimizations.
Just for reference, I am providing my Python implementation of the Knight's Tour problem with the Warnsdorff optimization:
def isValidMove(grid, x, y):
    maxL = len(grid) - 1
    if x < 0 or y < 0 or x > maxL or y > maxL or grid[x][y] > -1:
        return False
    return True

def getValidMoves(grid, x, y, validMoves):
    return [(i, j) for i, j in validMoves if isValidMove(grid, x + i, y + j)]

def movesSortedbyNumNextValidMoves(grid, x, y, legalMoves):
    nextValidMoves = [(i, j) for i, j in getValidMoves(grid, x, y, legalMoves)]
    # find the number of valid moves for each possible valid move from x, y
    withNumNextValidMoves = [(len(getValidMoves(grid, x + i, y + j, legalMoves)), i, j)
                             for i, j in nextValidMoves]
    # sort so that the move with the smallest number of onward moves comes first
    return [(t[1], t[2]) for t in sorted(withNumNextValidMoves)]

def _solveKnightsTour(grid, x, y, num, legalMoves):
    if num == pow(len(grid), 2):
        return True
    for i, j in movesSortedbyNumNextValidMoves(grid, x, y, legalMoves):
        # To test the advantage of the Warnsdorff heuristic, comment the
        # line above and uncomment the line below
        # for i, j in getValidMoves(grid, x, y, legalMoves):
        xN, yN = x + i, y + j
        if isValidMove(grid, xN, yN):
            grid[xN][yN] = num
            if _solveKnightsTour(grid, xN, yN, num + 1, legalMoves):
                return True
            grid[xN][yN] = -2
    return False

def solveKnightsTour(gridSize, startX=0, startY=0):
    legalMoves = [(2,1),(2,-1),(-2,1),(-2,-1),(1,2),(1,-2),(-1,2),(-1,-2)]
    # Initializing the grid
    grid = [x[:] for x in [[-1] * gridSize] * gridSize]
    grid[startX][startY] = 0
    if _solveKnightsTour(grid, startX, startY, 1, legalMoves):
        for row in grid:
            print(' '.join(str(e) for e in row))
    else:
        print('Could not solve the problem..')
Examine your algorithm. At each depth of recursion, you examine each of 8 possible moves, checking which are on the board, and then recursively process that position. What mathematical formula best describes this expansion?
You have a fixed board size, int[8][8]; maybe you should make it dynamic:
class horse
{
    ...
    int** board; //[s][s];
    ...
};
horse::horse(int s)
{
    int i, j;
    size = s;
    board = (int**)malloc(sizeof(int*) * size);
    for (i = 0; i < size; i++)
    {
        board[i] = (int*)malloc(sizeof(int) * size);
        for (j = 0; j < size; j++)
        {
            board[i][j] = 0;
        }
    }
}
Changing your tests a little by adding a function to check that a board move is legal,
bool canmove(int mx, int my)
{
    if ((mx >= 0) && (mx < size) && (my >= 0) && (my < size)) return true;
    return false;
}
Note that mark() and unmark() are very repetitive; you really only need to mark() the board once, check all legal moves, then unmark() the location if none of the backtrack() calls returns true,
And rewriting the function makes everything a bit clearer,
bool horse::backtrack(int x, int y)
{
    if (counter > (size * size))
        return true;
    if (unvisited(board[x][y]))
    {
        mark(board[x][y]);
        if (canmove(x-2, y+1))
        {
            if (backtrack(x-2, y+1)) return true;
        }
        if (canmove(x-2, y-1))
        {
            if (backtrack(x-2, y-1)) return true;
        }
        if (canmove(x-1, y+2))
        {
            if (backtrack(x-1, y+2)) return true;
        }
        if (canmove(x-1, y-2))
        {
            if (backtrack(x-1, y-2)) return true;
        }
        if (canmove(x+2, y+1))
        {
            if (backtrack(x+2, y+1)) return true;
        }
        if (canmove(x+2, y-1))
        {
            if (backtrack(x+2, y-1)) return true;
        }
        if (canmove(x+1, y+2))
        {
            if (backtrack(x+1, y+2)) return true;
        }
        if (canmove(x+1, y-2))
        {
            if (backtrack(x+1, y-2)) return true;
        }
        unmark(board[x][y]);
    }
    return false;
}
Now, think about how deep the recursion must be to visit every [x][y]? Fairly deep, huh?
So, you might want to think about a strategy that would be more efficient. Adding these two printouts to the board display should show you how many backtrack steps occurred,
int counter = 1; int stepcount = 0;
...
void horse::print()
{
    cout << "counter: " << counter << endl;
    cout << "stepcount: " << stepcount << endl;
    ...
bool horse::backtrack(int x, int y)
{
    stepcount++;
    ...
Here are the costs for 5x5, 6x6, and 7x7,
./knightstour 5
>>> Successful! <<<
counter: 26
stepcount: 253283
./knightstour 6
>>> Successful! <<<
counter: 37
stepcount: 126229019
./knightstour 7
>>> Successful! <<<
counter: 50
stepcount: 56342
Why did it take fewer steps for 7 than for 5? Think about the ordering of the moves in the backtrack - if you change the order, would the steps change? What if you made a list of the possible moves [ {1,2},{-1,2},{1,-2},{-1,-2},{2,1},{-2,1},{2,-1},{-2,-1} ], and processed them in a different order? We can make reordering the moves easier,
int moves[] =
    { -2,+1, -2,-1, -1,+2, -1,-2, +2,+1, +2,-1, +1,+2, +1,-2 };
...
for (int mdx = 0; mdx < 8*2; mdx += 2)
{
    if (canmove(x + moves[mdx], y + moves[mdx+1]))
    {
        if (backtrack(x + moves[mdx], y + moves[mdx+1])) return true;
    }
}
Changing the original move sequence to this one and running for 7x7 gives a different result,
{ +2,+1, +2,-1, +1,+2, +1,-2, -2,+1, -2,-1, -1,+2, -1,-2 };
./knightstour 7
>>> Successful! <<<
counter: 50
stepcount: -556153603 //sheesh, overflow!
But your original question was,
The question is: What is the Time complexity for this algorithm and how can I optimize it?!
The backtracking algorithm is approximately 8^(n^2), though it may find the answer after as few as n^2 moves. I'll let you convert that to O() complexity metric.
I think this guides you to the answer, without telling you the answer.
Here is a new solution:
In this method, we predict the probability of a deadlock at the knight's next move and choose the move whose tendency toward deadlock is lowest. At the first step this deadlock probability is zero for every cell, and it changes gradually as the board fills. A knight on the chessboard has between 2 and 8 available moves, so each cell has a predetermined value for the next move.
Selecting the cell that has the fewest available onward moves is the best choice, because it will tend toward a deadlock in the future unless it is filled now. There is an inverse relationship between the number of allowed moves and the chance of reaching an impasse.
The outer cells have the highest priority. Since in a knight's tour problem the knight has to cross each cell only once, these values change gradually during the tour.
Then, in the next step, a cell is chosen that satisfies these conditions:
The number of its adjacent empty cells is smaller than the others'; in other words, the probability that it gets cut off is higher.
After selecting it, its adjacent cells do not go into deadlock.
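For illustration, here is a minimal C++ sketch of the "fewest onward moves" selection rule the conditions above describe (essentially Warnsdorff's heuristic). The function names and board representation are illustrative, not taken from the linked article:

```cpp
#include <algorithm>
#include <array>
#include <utility>
#include <vector>

// The 8 knight moves.
static const std::array<std::pair<int,int>, 8> MOVES = {{
    {2,1},{2,-1},{-2,1},{-2,-1},{1,2},{1,-2},{-1,2},{-1,-2}
}};

// Number of legal onward moves from (x, y); board[i][j] == 0 means unvisited.
int countOnwardMoves(const std::vector<std::vector<int>>& board, int x, int y) {
    int n = static_cast<int>(board.size()), count = 0;
    for (auto [dx, dy] : MOVES) {
        int nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < n && ny >= 0 && ny < n && board[nx][ny] == 0)
            ++count;
    }
    return count;
}

// Legal target squares from (x, y), sorted so that the most constrained
// target (fewest onward moves, i.e. most likely future dead end) comes first.
std::vector<std::pair<int,int>> sortedMoves(
        const std::vector<std::vector<int>>& board, int x, int y) {
    int n = static_cast<int>(board.size());
    std::vector<std::pair<int,int>> result;
    for (auto [dx, dy] : MOVES) {
        int nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < n && ny >= 0 && ny < n && board[nx][ny] == 0)
            result.push_back({nx, ny});
    }
    std::sort(result.begin(), result.end(),
              [&](const std::pair<int,int>& a, const std::pair<int,int>& b) {
                  return countOnwardMoves(board, a.first, a.second)
                       < countOnwardMoves(board, b.first, b.second);
              });
    return result;
}
```

A backtracking search that tries the moves in this order visits the endangered squares first, which is what makes the heuristic cut the search so dramatically in practice.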
You can read my full article about this problem here:
Knight tour problem article
and you can find the full source from here
Full Source in GitHub
I hope it will be useful
Related
How to cout the longest monotone sequence (increasing or decreasing) in C++?
I have to make a program in C++ that can manage a sequence of between 2 and 1000 elements. At the end the program has to cout the number of elements in the longest increasing or decreasing run.
Examples:
6;1;2;3;2;4;1; output: 3 (because 1;2;3 is the longest, with 3 elements)
6;4;3;1;5;2;1; output: 4 (because 6;4;3;1 is the longest, with 4 elements)
I tried the following code and it is kind of working. The problem is that it can't give the longest run; it gives the length of the last run every time. Unfortunately I can't find the bug. Could anyone help, please?
int counting = 1;
int counting_max = 0, counting_min = 0;
for (int i = 1; i < n; ++i) {
    if (block[i] < block[i+1]) {
        if (block[i] - block[i-1] > 0) {
            counting++;
            if (counting > counting_max) {
                counting_max = counting;
            }
        } else {
            counting = 1;
        }
    }
    if (block[i] > block[i+1]) {
        if (block[i] - block[i-1] < 0) {
            counting++;
            if (counting > counting_min) {
                counting_min = counting;
            }
        } else {
            counting = 1;
        }
    }
}
if (counting_max >= counting_min) {
    cout << counting_max;
} else {
    cout << counting_min;
}
return 0;
}
I didn't share the first part of my code because I guess it works properly. It is just a while and a for loop that read the number of elements and then the numbers themselves into block. So in my code, block contains the numbers.
In the code you have posted, your outer loop creates an out-of-bounds access of the block array, since you're accessing block[i+1] in the loop. That's likely the reason your code produces correct answers in one direction and not in the other. Beyond that, there are some other problems you might come across with this approach:
You probably don't need to keep track of two separate counters if in the end you take the largest; you could just keep track of the longest run regardless of whether it increases or decreases.
Since you test the relationships between three elements of the array to see whether the run is increasing/decreasing, you will have to add extra logic to handle lists with fewer than three elements.
You need to be careful when the same number repeats, as this probably does not count as increasing or decreasing.
Here's a revised version that covers these points:
int counting = std::min(n, 1);
int counting_max = counting;
for (int i = 0; i < n - 1; ++i) {
    if (block[i] < block[i + 1] && (counting < 2 || block[i] > block[i - 1])) {
        counting++;
    } else if (block[i] > block[i + 1] && (counting < 2 || block[i] < block[i - 1])) {
        counting++;
    } else if (block[i] == block[i + 1]) {
        counting = 1;
    } else {
        counting = 2;
    }
    if (counting > counting_max) {
        counting_max = counting;
    }
}
cout << counting_max << "\n";
Try this alternate code: counting_max finds the longest ascending run and counting_min finds the longest descending run (by decrementing its loop counter), and at the end we compare them to find the ultimate longest (assuming we have n-1 elements; if not, change it accordingly):
for (int i = 1, j = n - 2; i < n && j >= 0; ++i, --j) {
    if (block[i] - block[i - 1] > 0) {
        counting++;
        if (counting > counting_max)
            counting_max = counting;
    } else
        counting = 1;
    if (block[j] - block[j + 1] > 0) {
        counting_back++;
        if (counting_back > counting_min)
            counting_min = counting_back;
    } else
        counting_back = 1;
}
if (counting_max >= counting_min)
    cout << counting_max;
else
    cout << counting_min;
Dynamic Programming: Calculate all possible end positions for a list of consecutive jumps
The problem consists of calculating all possible end positions and how many combinations exist for each one. Given a start position x = 0, a length m of the track, and a list of jumps, return the number of possible ends for each position in the interval [-m/2, +m/2]. The jumps must be done in the given order, but each one can be taken in the negative or the positive direction.
For example:
L = 40
jumps = 10, 10
Solution:
-20 : 1 (-10, -10)
0 : 2 (-10,+10 & +10,-10)
20 : 1 (+10, +10)
(The output needed is only the pair "position : #combinations".)
I did it with a simple recursion, and the result is OK. But on large data sets, the execution time is minutes or hours. I know that with dynamic programming I can get a solution in a few seconds, but I don't know how to apply dynamic programming in this case. Here is my current recursive function:
void escriuPosibilitats(queue<int> q, map<int,int> &res, int posicio, int m) {
    int salt = q.front();
    q.pop();
    if (esSaltValid(m, posicio, -salt)) {
        int novaPosicio = posicio - salt;
        if (q.empty()) {
            res[novaPosicio]++;
        } else {
            escriuPosibilitats(q, res, novaPosicio, m);
        }
    }
    if (esSaltValid(m, posicio, salt)) {
        int novaPosicio = posicio + salt;
        if (q.empty()) {
            res[novaPosicio]++;
        } else {
            escriuPosibilitats(q, res, novaPosicio, m);
        }
    }
}
Where q is the queue of the remaining jumps, res is the partial solution, posicio is the current position, m is the length of the track, and esSaltValid is a function that checks whether the jump stays within the range of the track length.
PD: Sorry for my English level. I tried to improve my question! Thanks =)
You can use the following idea. Let dp[x][i] be the number of ways to arrive at position x using jumps up to jump i. Then the answer would be dp[x][N] for each x, where N is the number of jumps. Moreover, you can notice that this dp depends only on the previous row, so you can keep just dp[x], save the next row in an auxiliary array, and swap them on each iteration. The code would be something like this:
#include <iostream>
using std::cout;

const int MOD = (int)(1e8+7);
const int L = 100;
int N = 36;
int dx[] = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1};
int dp[L+1];
int next[L+1];

int main() {
    int shift = L/2; // to handle negative indexes
    dp[shift] = 1;   // one way to arrive at the initial position, since you start there
    for (int i = 0; i < N; ++i) { // for each jump size
        for (int x = -L/2; x <= L/2; ++x) { // for each possible position
            if (-L/2 <= x + dx[i] && x + dx[i] <= L/2) // positive jump
                next[x + shift] = (next[x + shift] + dp[x + dx[i] + shift]) % MOD;
            if (-L/2 <= x - dx[i] && x - dx[i] <= L/2) // negative jump
                next[x + shift] = (next[x + shift] + dp[x - dx[i] + shift]) % MOD;
        }
        for (int x = -L/2; x <= L/2; ++x) { // update current dp to next and clear next
            dp[x+shift] = next[x+shift];
            next[x+shift] = 0;
        }
    }
    for (int x = -L/2; x <= L/2; ++x) // print the result
        if (dp[x+shift] != 0) {
            cout << x << ": " << dp[x+shift] << '\n';
        }
}
Of course, if L is too big to handle, you can compress the state space and save the results in a map instead of an array. The complexity of the approach is O(L*N). Hope it helped.
EDIT: just compute everything modulo 1e8+7 and that's it.
homework: fast manipulations in a set of bits (represented as a character array)
There is a certain practice question on the site Interviewstreet that has taken up a lot of my time. The following code passes only 8 of 11 test cases; on the rest it exceeds the time limit. I would really appreciate it if you could suggest some optimizations. The question is as follows:
There are two binary numbers of length n, and three kinds of operation:
set_a idx x : sets A[idx] = x
set_b idx x : sets B[idx] = x
get_c idx : prints C[idx], where C = A + B, and 0 <= idx
Sample Input:
5 5
00000
11111
set_a 0 1
get_c 5
get_c 1
set_b 2 0
get_c 5
Sample Output:
100
So I need to optimize the get_c operation:
void reverse(char *a, int len) {
    // this function reverses the string
}

void get_c(int i) {
    k = i - 1;
    f = 0;
    while (k >= 0) {
        if (a[k] == '1' && b[k] == '1') {
            f = 1;
            break;
        } else if (a[k] == '0' && b[k] == '0')
            break;
        --k;
    }
    if (f == 0)
        cout << (a[i] == b[i] ? 0 : 1);
    else if (f == 1)
        cout << (a[i] == b[i] ? 1 : 0);
}

int main() {
    scanf("%d %d", &n, &q); // n = number of bits in each number, q = number of operations
    // read the strings a and b
    reverse(a, n);
    reverse(b, n);
    while (q--) {
        scanf("%s", query);
        scanf("%d", &idx);
        if (query is get_c) {
            get_c(idx);
        } else if (query is set_a) {
            cin >> x;
            a[idx] = x;
        } else if (query is set_b) {
            cin >> x;
            b[idx] = x;
        }
    }
    return 0;
}
It seems that you've implemented your binary numbers using arrays when it would be faster to implement them simply as numbers and query/modify them with bit masks and bit shifts. That would remove the need for you to use an iterative approach in get_c; your get_c function would be constant time instead of linear time.
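A minimal sketch of that suggestion, assuming n < 64 so that A, B, and their sum all fit in a single 64-bit word (the real problem would need a multi-word big integer, but the idea is the same): set_a/set_b become single bit operations, and get_c reads one bit of A + B instead of scanning backwards for a carry. The struct and its names are illustrative, not part of the original code:

```cpp
#include <cstdint>

// Bit idx is the idx-th binary digit counted from the least significant end,
// matching the posted code's convention after it reverses the input strings.
struct BitNumbers {
    uint64_t a = 0, b = 0;

    void set_a(int idx, int x) {   // A[idx] = x (x is 0 or 1)
        a = (a & ~(uint64_t{1} << idx)) | (uint64_t(x) << idx);
    }
    void set_b(int idx, int x) {   // B[idx] = x
        b = (b & ~(uint64_t{1} << idx)) | (uint64_t(x) << idx);
    }
    int get_c(int idx) const {     // bit idx of C = A + B, computed in O(1)
        return static_cast<int>(((a + b) >> idx) & 1);
    }
};
```

With the sample input above (A = 00000, B = 11111, bit 0 being the rightmost digit), the three get_c queries return 1, 0, 0, matching the expected output 100.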
Determining the individual letters of Fibonacci strings?
The Fibonacci strings are defined as follows:
The first Fibonacci string is "a".
The second Fibonacci string is "bc".
The (n + 2)nd Fibonacci string is the concatenation of the two previous Fibonacci strings.
For example, the first few Fibonacci strings are:
a
bc
abc
bcabc
abcbcabc
The goal is, given a row and an offset, to determine what character is at that offset. More formally:
Input: two integers separated by a space, K and P (0 < K ≤ 10^9, 0 < P ≤ 10^9), where K is the line number of the Fibonacci string and P is the position number in the row.
Output: the desired character for the relevant test: "a", "b" or "c". If P is greater than the length of the Kth row, output «No solution».
Example:
input: 18 58
output: a
I wrote this code to solve the problem:
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
    int k, p;
    string s1 = "a";
    string s2 = "bc";
    vector<int> fib_numb;
    fib_numb.push_back(1);
    fib_numb.push_back(2);
    cin >> k >> p;
    k -= 1;
    p -= 1;
    while (fib_numb.back() < p) {
        fib_numb.push_back(fib_numb[fib_numb.size() - 1] + fib_numb[fib_numb.size() - 2]);
    }
    if (fib_numb[k] <= p) {
        cout << "No solution";
        return 0;
    }
    if ((k - fib_numb.size()) % 2 == 1)
        k = fib_numb.size() + 1;
    else
        k = fib_numb.size();
    while (k > 1) {
        if (fib_numb[k - 2] > p)
            k -= 2;
        else {
            p -= fib_numb[k - 2];
            k -= 1;
        }
    }
    if (k == 1)
        cout << s2[p];
    else
        cout << s1[0];
    return 0;
}
Is it correct? How would you have done it?
You can solve this problem without explicitly computing any of the strings, and this is probably the best way to solve it. After all, if you're asked to compute the 50th Fibonacci string, you're almost certain to run out of memory; F(50) is 12,586,269,025, so you'd need over 12 GB of memory just to hold it!
The intuition behind the solution is that because each line of the Fibonacci strings is composed of the characters of the previous lines, you can convert a (row, offset) pair into a different (row', offset') pair where the new row is always a smaller Fibonacci string than the one you started with. If you repeat this enough times, eventually you arrive back at the Fibonacci string for either row 1 or row 2, at which point the answer can immediately be read off.
In order to make this algorithm work, we need to establish a few facts. First, let's define the Fibonacci series to be zero-indexed; that is, the sequence is
F(0) = 0
F(1) = 1
F(n + 2) = F(n) + F(n + 1)
Given this, we know that the nth row (one-indexed) of the Fibonacci strings has a total of F(n + 1) characters in it. You can see this quickly by induction:
Row 1 has length 1 = F(2) = F(1 + 1).
Row 2 has length 2 = F(3) = F(2 + 1).
For some row n + 2, the length of that row is given by Size(n) + Size(n + 1) = F(n + 1) + F(n + 2) = F(n + 3) = F((n + 2) + 1).
Using this knowledge, let's suppose that we want to find the seventh character of the seventh row of the Fibonacci strings. We know that row seven is composed of the concatenation of rows five and six, so the string looks like this:
R(7) = R(5) R(6)
Row five has F(5 + 1) = F(6) = 8 characters in it, which means that the first eight characters of row seven come from R(5). Since we want the seventh character of this row, and since 7 ≤ 8, we know that we now need to look at the seventh character of row 5 to get this value.
Well, row 5 is the concatenation of rows 3 and 4:
R(5) = R(3) R(4)
We want to find the seventh character of this row. Now, R(3) has F(4) = 3 characters in it, which means that if we are looking for the seventh character of R(5), it's going to be in the R(4) part, not the R(3) part. Since we're looking for the seventh character of this row, we're looking for the 7 - F(4) = 7 - 3 = 4th character of R(4), so now we look there. Again, R(4) is defined as
R(4) = R(2) R(3)
R(2) has F(3) = 2 characters in it, so we don't want to look in it to find the fourth character of the row; that character is contained in R(3). The fourth character of the line must be the second character of R(3). Let's look there. R(3) is defined as
R(3) = R(1) R(2)
R(1) has one character in it, so the second character of this line must be the first character of R(2), so we look there. We know, however, that
R(2) = bc
So the first character of this string is b, which is our answer. Let's see if this is right. The first seven rows of the Fibonacci strings are
1 a
2 bc
3 abc
4 bcabc
5 abcbcabc
6 bcabcabcbcabc
7 abcbcabcbcabcabcbcabc
Sure enough, if you look at the seventh character of the seventh string, you'll see that it is indeed a b. Looks like this works!
More formally, the recurrence relation we are interested in looks like this:
char NthChar(int row, int index) {
    if (row == 1) return 'a';
    if (row == 2 && index == 1) return 'b';
    if (row == 2 && index == 2) return 'c';
    if (index <= Fibonacci(row - 1)) return NthChar(row - 2, index);
    return NthChar(row - 1, index - Fibonacci(row - 1));
}
Now, of course, there's a problem with the implementation as written here. Because the row index can range up to 10^9, we can't possibly compute Fibonacci(row) in all cases; the one billionth Fibonacci number is far too large to represent!
Fortunately, we can get around this.
If you look at a table of Fibonacci numbers, you'll find that F(45) = 1,134,903,170, which is greater than 10^9 (and no smaller Fibonacci number is greater than this). Moreover, since we know that the index we care about must also be no greater than one billion, if we're in row 46 or greater we will always take the branch that looks in the first half of the Fibonacci string. This means that we can rewrite the code as
char NthChar(int row, int index) {
    if (row == 1) return 'a';
    if (row == 2 && index == 1) return 'b';
    if (row == 2 && index == 2) return 'c';

    /* Avoid integer overflow! */
    if (row >= 46) return NthChar(row - 2, index);

    if (index <= Fibonacci(row - 1)) return NthChar(row - 2, index);
    return NthChar(row - 1, index - Fibonacci(row - 1));
}
At this point we're getting very close to a solution. There are still a few problems to address. First, the above code will almost certainly blow out the stack unless the compiler is good enough to use tail recursion to eliminate all the stack frames. While some compilers (gcc, for example) can detect this, it's probably not a good idea to rely on it, and so we should rewrite this recursive function iteratively. Here's one possible implementation:
char NthChar(int row, int index) {
    while (true) {
        if (row == 1) return 'a';
        if (row == 2 && index == 1) return 'b';
        if (row == 2 && index == 2) return 'c';

        /* Avoid integer overflow! */
        if (row >= 46 || index <= Fibonacci(row - 1)) {
            row -= 2;
        } else {
            index -= Fibonacci(row - 1);
            row--;
        }
    }
}
But of course we can still do much better than this. In particular, if you're given a row number that's staggeringly huge (say, one billion), it's really silly to keep looping over and over again subtracting two from the row until it becomes less than 46. It makes a lot more sense to just determine what value it's ultimately going to become after we do all the subtraction. But we can do this quite easily.
If we have an even row that's at least 46, we'll end up subtracting 2 until it becomes 44. If we have an odd row that's at least 46, we'll end up subtracting 2 until it becomes 45. Consequently, we can rewrite the above code to handle this case explicitly:
char NthChar(int row, int index) {
    /* Preprocess the row to make it a small value. */
    if (row >= 46) {
        if (row % 2 == 0)
            row = 44;
        else
            row = 45;
    }

    while (true) {
        if (row == 1) return 'a';
        if (row == 2 && index == 1) return 'b';
        if (row == 2 && index == 2) return 'c';

        if (index <= Fibonacci(row - 1)) {
            row -= 2;
        } else {
            index -= Fibonacci(row - 1);
            row--;
        }
    }
}
There's one last thing to handle, which is what happens if there isn't a solution to the problem because the character is out of range. But we can easily fix this up:
string NthChar(int row, int index) {
    /* Preprocess the row to make it a small value. */
    if (row >= 46) {
        if (row % 2 == 0)
            row = 44;
        else
            row = 45;
    }

    while (true) {
        if (row == 1 && index == 1) return "a";
        if (row == 2 && index == 1) return "b";
        if (row == 2 && index == 2) return "c";

        /* Bounds-checking. */
        if (row == 1) return "no solution";
        if (row == 2) return "no solution";

        if (index <= Fibonacci(row - 1)) {
            row -= 2;
        } else {
            index -= Fibonacci(row - 1);
            row--;
        }
    }
}
And we've got a working solution.
One further optimization you might do is precomputing all of the Fibonacci numbers that you'll need and storing them in a giant array.
You only need the Fibonacci values from F(2) through F(44), so you could do something like this:
const int kFibonacciNumbers[45] = {
    0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987,
    1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393,
    196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887,
    9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141,
    267914296, 433494437, 701408733
};
With this precomputed array, the final version of the code would look like this:
string NthChar(int row, int index) {
    /* Preprocess the row to make it a small value. */
    if (row >= 46) {
        if (row % 2 == 0)
            row = 44;
        else
            row = 45;
    }

    while (true) {
        if (row == 1 && index == 1) return "a";
        if (row == 2 && index == 1) return "b";
        if (row == 2 && index == 2) return "c";

        /* Bounds-checking. */
        if (row == 1) return "no solution";
        if (row == 2) return "no solution";

        if (index <= kFibonacciNumbers[row - 1]) {
            row -= 2;
        } else {
            index -= kFibonacciNumbers[row - 1];
            row--;
        }
    }
}
I have not yet tested this; to paraphrase Don Knuth, I've merely proved it correct. :-) But I hope this helps answer your question. I really loved this problem!
I guess your general idea should be OK, but I don't see how your code is going to deal with larger values of K, because the numbers get enormous quickly, and even with large-integer libraries it might take virtually forever to compute fibonacci(10^9) exactly. Fortunately, you are only asked about the first 10^9 characters. The string reaches that many characters already on the 44th line (f(44) = 1134903170). And if I'm not mistaken, from there on the first 10^9 characters simply alternate between the prefixes of lines 44 and 45, and therefore, in pseudocode:
def solution(K, P):
    if K > 45:
        if K % 2 == 0:
            return solution(44, P)
        else:
            return solution(45, P)
    # solution for smaller values of K here
I found this. I did not do a pre-check (getting the size of the k-th Fibonacci string to test p against it), because if the check succeeds you have to compute the string anyway. Of course, as soon as k becomes big you may have an overflow issue (the length of the Fibonacci string is an exponential function of the index n...).
#include <iostream>
#include <string>
using namespace std;

string fibo(unsigned int n) {
    if (n == 0)
        return "a";
    else if (n == 1)
        return "bc";
    else
        return fibo(n - 2) + fibo(n - 1);
}

int main() {
    unsigned int k, p;
    cin >> k >> p;
    --k;
    --p;
    string fiboK = fibo(k);
    if (p >= fiboK.size())
        cout << "No solution" << endl;
    else
        cout << fiboK[p] << endl;
    return 0;
}
EDIT: OK, I now see your point, i.e. checking in which part of the k-th string p resides (i.e. in string k-2 or k-1, and updating p if needed). Of course this is the good way to do it, since as I was saying above my naive solution explodes far too quickly. Your way looks correct to me from an algorithmic point of view (it saves memory and complexity).
I would have computed the K-th Fibonacci string and then retrieved the P-th character of it. Something like this:

```cpp
#include <iostream>
#include <string>
#include <vector>

std::string FibonacciString(unsigned int k) {
    std::vector<char> buffer;
    buffer.push_back('a');
    buffer.push_back('b');
    buffer.push_back('c');

    unsigned int curr = 1;
    unsigned int next = 2;

    while (k--) {
        // Append a copy of the buffer to itself. (Inserting a range of a
        // vector into itself is undefined behavior, so copy first.)
        std::vector<char> copy(buffer);
        buffer.insert(buffer.end(), copy.begin(), copy.end());
        buffer.erase(buffer.begin(), buffer.begin() + curr);

        unsigned int prev = curr;
        curr = next;
        next = prev + next;
    }

    return std::string(buffer.begin(), buffer.begin() + curr);
}

int main(int argc, char** argv) {
    unsigned int k, p;
    std::cin >> k >> p;
    --p;
    --k;

    std::string fiboK = FibonacciString(k);
    if (p >= fiboK.size())   // p is 0-based here, so >= rather than >
        std::cout << "No solution";
    else
        std::cout << fiboK[p];
    std::cout << std::endl;

    return 0;
}
```

It does use more memory than your version, since it needs to store both the N-th and the (N+1)-th Fibonacci string at every instant. However, since it stays really close to the definition, it works for every value.

Your algorithm seems to have an issue when k is large while p is small: the test fib_num[k] < p will dereference an item outside the range of the array with k = 30 and p = 1, won't it?
I made another example where each number of the Fibonacci series corresponds to a letter of the alphabet: 1 is a, 2 is b, 3 is c, 5 is e... etc.:

```cpp
#include <iostream>
#include <string>
using namespace std;

int main() {
    string a = "abcdefghijklmnopqrstuvwxyz"; // the alphabet
    string a1 = a.substr(0, 0);
    string a2 = a.substr(1, 1);
    string nexT = a.substr(0, 0);

    nexT = a1 + a2;
    while (nexT.length() <= a.length()) {
        //cout << nexT.length() << ", ";                // show me the Fibonacci numbers
        cout << a.substr(nexT.length() - 1, 1) << ", "; // show me the Fibonacci letters
        a1 = a2;
        a2 = nexT;
        nexT = a1 + a2;
    }
    return 0;
}
```

Output:

```
a, b, c, e, h, m, u,
```
Quote from Wikipedia, Fibonacci_word:

> The nth digit of the word is 2 + [nφ] - [(n + 1)φ] where φ is the golden ratio ...

(The only characters used on the Wikipedia page are 1 and 0.)

But note that the strings on the Wikipedia page, and in Knuth's Fundamental Algorithms, are built up in the opposite order of the strings shown above; there it becomes clear, when the strings are listed with their ever-repeating leading part, that there is only one infinitely long Fibonacci string. It is less clear when they are generated in the order used above, because the ever-repeating part is then the string's trailing part, but it is no less true. Hence the term "the word" in the quotation, and, except for the question "is n too great for this row?", the row is not important.

Unhappily, though, it is too hard to apply this formula to the poster's problem, because in the formula the original strings are of the same length, whereas the poster began with "a" and "bc".

This J(ava)Script script generates the Fibonacci string over the characters the poster chose, but in the opposite order. (It uses the Microsoft object WScript to fetch the command-line argument and write to the standard output.)

```javascript
var u, v /*Fibonacci numbers*/, g, i, k, R;
v = 2; u = 1; k = 0;
g = +WScript.arguments.item(0); /* command-line argument for desired length of string */

/* Two consecutive Fibonacci numbers, with the greater no less than the
   Fibonacci string's length */
while (v < g) {
    v += u;
    u = v - u;
    k = 1 - k;
}
i = u - k;
while (g-- > 0) {
    /* In this operation, i += u with i -= v when i >= v (carry). Since the
       Fibonacci numbers are relatively prime, i takes on every value from 0
       up to v. Furthermore, there are u carries, and, therefore, u instances
       of character 'cb', and v - u instances of 'a' (no-carry).
       The characters are spread as evenly as can be. */
    if ((i += u) < v) {
        R = 'a';
        // WScript.StdOut.write('a'); /* no-carry */
    } else {
        i -= v; /* carry */
        R = 'cb';
        // WScript.StdOut.write('cb');
    }
}
/* result is in R */
// WScript.StdOut.writeLine();
```

I suggest it because actually outputting the string is not required: one can simply stop at the desired length and show the last thing about to be output. (The code for output is commented out with '//'.) Of course, using this to find the character at position n has cost proportional to n; the formula at the top costs much less.
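As a quick sanity check of the quoted formula, here is a small C++ sketch over the {0, 1} alphabet of the Wikipedia page (the function name is mine; for small n the floating-point floors are safe):

```cpp
#include <cassert>
#include <cmath>
#include <string>

// nth digit (1-based) of the infinite Fibonacci word over {0, 1},
// via the closed form 2 + floor(n*phi) - floor((n+1)*phi).
int FibWordDigit(long long n) {
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;  // golden ratio
    return 2 + (long long)std::floor(n * phi)
             - (long long)std::floor((n + 1) * phi);
}
```

The first 13 digits it produces are 0100101001001, which matches the sixth Fibonacci word on the Wikipedia page.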
weighted RNG speed problem in C++
Edit: to clarify, the problem is with the second algorithm.

I have a bit of C++ code that samples cards from a 52-card deck, which works just fine:

```cpp
void sample_allcards(int table[5], int holes[], int players) {
    int temp[5 + 2 * players];
    bool try_again;
    int c, n, i;

    for (i = 0; i < 5 + 2 * players; i++) {
        try_again = true;
        while (try_again == true) {
            try_again = false;
            c = fast_rand52();
            // reject collisions with the i cards already drawn
            for (n = 0; n < i; n++) {
                try_again = (temp[n] == c) || try_again;
            }
            temp[i] = c;
        }
    }

    copy_cards(table, temp, 5);
    copy_cards(holes, temp + 5, 2 * players);
}
```

I am implementing code to sample the hole cards according to a known distribution (stored as a 2-D table). My code for this looks like:

```cpp
void sample_allcards_weighted(double weights[][HOLE_CARDS], int table[5],
                              int holes[], int players) {
    // weights are a distribution over hole cards
    int temp[5 + 2 * players];
    int n, i;

    // table cards
    for (i = 0; i < 5; i++) {
        bool try_again = true;
        while (try_again == true) {
            try_again = false;
            int c = fast_rand52();
            // reject collisions
            for (n = 0; n < i; n++) {
                try_again = (temp[n] == c) || try_again;
            }
            temp[i] = c;
        }
    }

    for (int player = 0; player < players; player++) {
        // hole cards according to distribution
        i = 5 + 2 * player;
        bool try_again = true;
        while (try_again == true) {
            try_again = false;
            // weighted-sample c1 and c2 at once
            // h is a number < 1325
            int h = weighted_randi(&weights[player][0], HOLE_CARDS);
            // i2h uses h and sets temp[i] to the 2 cards implied by h
            i2h(&temp[i], h);
            // reject collisions
            for (n = 0; n < i; n++) {
                try_again = (temp[n] == temp[i]) ||
                            (temp[n] == temp[i + 1]) || try_again;
            }
        }
    }

    copy_cards(table, temp, 5);
    copy_cards(holes, temp + 5, 2 * players);
}
```

My problem? The weighted sampling algorithm is a factor of 10 slower. Speed is very important for my application. Is there a way to improve the speed of my algorithm to something more reasonable? Am I doing something wrong in my implementation? Thanks.
edit: I was asked about this function, which I should have posted, since it is key:

```cpp
inline int weighted_randi(double *w, int num_choices) {
    double r = fast_randd();
    double threshold = 0;
    int n;

    for (n = 0; n < num_choices; n++) {
        threshold += *w;
        if (r <= threshold)
            return n;
        w++;
    }

    // shouldn't get this far
    cerr << n << "\t" << threshold << "\t" << r << endl;
    assert(n < num_choices);
    return -1;
}
```

...and i2h() is basically just an array lookup.
Your collision rejection is turning an O(n) algorithm into (I think) an O(n^2) operation. There are two ways to select cards from a deck: shuffle and pop, or pick sets until the elements of the set are unique; you are doing the latter, which requires a considerable amount of backtracking. I didn't look at the details of the code, just a quick scan.
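The shuffle-and-pop approach can be sketched with a partial Fisher-Yates shuffle; this is a minimal illustration (the function name and the use of <random> are my own, not from the question's code), which deals m distinct cards with no rejection retries at all:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Deal m distinct cards from a 52-card deck with a partial Fisher-Yates
// shuffle: each card costs O(1), and there are no collision retries.
std::vector<int> DealCards(int m, std::mt19937& rng) {
    std::vector<int> deck(52);
    for (int i = 0; i < 52; ++i) deck[i] = i;
    for (int i = 0; i < m; ++i) {
        // pick uniformly from the not-yet-dealt tail deck[i..51]
        std::uniform_int_distribution<int> pick(i, 51);
        std::swap(deck[i], deck[pick(rng)]);
    }
    deck.resize(m);
    return deck;
}
```

Only the first m swaps are performed, so the cost is O(m) rather than O(deck size), and the result is guaranteed collision-free by construction.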
You could gain some speed by replacing all the loops that check whether a card is taken with a bit mask. E.g., for a pool of 52 cards, we prevent collisions like so:

```cpp
DWORD dwMask[2] = {0}; // 64 bits
//...
int nCard;

while (true) {
    nCard = rand_52();
    if (!(dwMask[nCard >> 5] & (1 << (nCard & 31)))) {
        dwMask[nCard >> 5] |= 1 << (nCard & 31);
        break;
    }
}
//...
```
My guess would be the memcpy(1326 * sizeof(double)) within the retry loop. It doesn't seem to change, so should it be copied each time?
Rather than tell you what the problem is, let me suggest how you can find it. Either 1) single-step it in the IDE, or 2) randomly halt it to see what it's doing. That said, sampling by rejection, as you are doing, can take an unreasonably long time if you are rejecting most samples.
Your inner "try_again" for loop should stop as soon as it sets try_again to true; there's no point in doing more work after you know you need to try again.

```cpp
for (n = 0; n < i && !try_again; n++) {
    try_again = (temp[n] == temp[i]) || (temp[n] == temp[i + 1]);
}
```
Answering the second question: picking from a weighted set also has an algorithmic replacement with lower time complexity, based on the principle that what is precomputed does not need to be recomputed.

In an ordinary selection you have an integral number of bins, which makes picking a bin an O(1) operation. Your weighted_randi function has bins of real-valued length, so selection in your current version runs in O(n) time. Since you don't say (but do imply) that the vector of weights w is constant, I'll assume that it is.

You aren't interested in the widths of the bins per se; you are interested in the locations of their edges, which you recompute on every call to weighted_randi using the variable threshold. If the constancy of w holds, precomputing a list of edges (that is, the value of threshold for every *w) is your O(n) step, which need only be done once. If you put the results in a (naturally) ordered list, a binary search on all future calls yields O(log n) time complexity, with an increase in space of only sizeof w / sizeof w[0].
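A minimal sketch of that idea, using std::upper_bound for the binary search (the class name is illustrative, and it assumes the weights are fixed at construction time; unnormalized weights are handled by scaling r by the total):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Precompute cumulative bin edges once; each sample is then an
// O(log n) binary search instead of an O(n) linear scan.
struct WeightedSampler {
    std::vector<double> edges;  // edges[i] = w[0] + ... + w[i]

    explicit WeightedSampler(const std::vector<double>& w) {
        double sum = 0;
        for (double x : w) {
            sum += x;
            edges.push_back(sum);
        }
    }

    // r must be uniform in [0, 1); returns the index of the selected bin,
    // i.e. the first edge strictly greater than r scaled by the total weight.
    int Sample(double r) const {
        double target = r * edges.back();
        return (int)(std::upper_bound(edges.begin(), edges.end(), target)
                     - edges.begin());
    }
};
```

With weights {1, 3, 6} the edges are {1, 4, 10}, so r = 0.2 scales to 2.0 and lands in bin 1, matching the linear-scan behavior of weighted_randi up to boundary ties.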