Edit: to clarify, the problem is with the second algorithm.
I have a bit of C++ code that samples cards from a 52 card deck, which works just fine:
void sample_allcards(int table[5], int holes[], int players) {
    int temp[5 + 2 * players];
    bool try_again;
    int c, n, i;
    for (i = 0; i < 5 + 2 * players; i++) {
        try_again = true;
        while (try_again == true) {
            try_again = false;
            c = fast_rand52();
            // reject collisions
            for (n = 0; n < i + 1; n++) {
                try_again = (temp[n] == c) || try_again;
            }
            temp[i] = c;
        }
    }
    copy_cards(table, temp, 5);
    copy_cards(holes, temp + 5, 2 * players);
}
I am implementing code to sample the hole cards according to a known distribution (stored as a 2d table). My code for this looks like:
void sample_allcards_weighted(double weights[][HOLE_CARDS], int table[5], int holes[], int players) {
    // weights are distribution over hole cards
    int temp[5 + 2 * players];
    int n, i;
    // table cards
    for (i = 0; i < 5; i++) {
        bool try_again = true;
        while (try_again == true) {
            try_again = false;
            int c = fast_rand52();
            // reject collisions
            for (n = 0; n < i + 1; n++) {
                try_again = (temp[n] == c) || try_again;
            }
            temp[i] = c;
        }
    }
    for (int player = 0; player < players; player++) {
        // hole cards according to distribution
        i = 5 + 2 * player;
        bool try_again = true;
        while (try_again == true) {
            try_again = false;
            // weighted-sample c1 and c2 at once
            // h is a number < 1326 (52 choose 2)
            int h = weighted_randi(&weights[player][0], HOLE_CARDS);
            // i2h uses h and sets temp[i] to the 2 cards implied by h
            i2h(&temp[i], h);
            // reject collisions
            for (n = 0; n < i; n++) {
                try_again = (temp[n] == temp[i]) || (temp[n] == temp[i+1]) || try_again;
            }
        }
    }
    copy_cards(table, temp, 5);
    copy_cards(holes, temp + 5, 2 * players);
}
My problem? The weighted sampling algorithm is a factor of 10 slower. Speed is very important for my application.
Is there a way to improve the speed of my algorithm to something more reasonable? Am I doing something wrong in my implementation?
Thanks.
edit: I was asked about this function, which I should have posted, since it is key
inline int weighted_randi(double *w, int num_choices) {
    double r = fast_randd();
    double threshold = 0;
    int n;
    for (n = 0; n < num_choices; n++) {
        threshold += *w;
        if (r <= threshold) return n;
        w++;
    }
    // shouldn't get this far
    cerr << n << "\t" << threshold << "\t" << r << endl;
    assert(n < num_choices);
    return -1;
}
...and i2h() is basically just an array lookup.
Your collision-rejection loops are turning an O(n) algorithm into (I think) an O(n^2) operation.
There are two ways to select cards from a deck: shuffle and pop, or pick sets until the elements of the set are unique; you are doing the latter which requires a considerable amount of backtracking.
I didn't look at the details of the code, just a quick scan.
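To illustrate the first option, here is a minimal sketch of shuffle-and-pop via a partial Fisher-Yates shuffle. The helper fast_randn(n) is a stand-in of my own (not from the post) for a uniform integer in [0, n); rand() % n is slightly biased and only used to keep the sketch self-contained:

#include <cstdlib>

static int fast_randn(int n) { return rand() % n; } // stand-in; swap in your fast RNG

// Deal k distinct cards with no rejection or backtracking.
void deal_without_replacement(int out[], int k) {
    int deck[52];
    for (int i = 0; i < 52; i++) deck[i] = i;
    for (int i = 0; i < k; i++) {
        // swap a uniformly chosen remaining card into position i
        int j = i + fast_randn(52 - i);
        int t = deck[i]; deck[i] = deck[j]; deck[j] = t;
        out[i] = deck[i];
    }
}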
You could gain some speed by replacing all the loops that check whether a card is taken with a bit mask. E.g. for a pool of 52 cards, we prevent collisions like so:
DWORD dwMask[2] = {0}; // 64 bits
//...
int nCard;
while (true)
{
    nCard = rand_52();
    if (!(dwMask[nCard >> 5] & 1 << (nCard & 31)))
    {
        dwMask[nCard >> 5] |= 1 << (nCard & 31);
        break;
    }
}
//...
My guess would be the memcpy(1326*sizeof(double)) within the retry loop. The data doesn't seem to change, so why copy it each time?
Rather than tell you what the problem is, let me suggest how you can find it. Either 1) single-step it in the IDE, or 2) randomly halt it to see what it's doing.
That said, sampling by rejection, as you are doing, can take an unreasonably long time if you are rejecting most samples.
Your inner "try_again" for loop should stop as soon as it sets try_again to true - there's no point in doing more work after you know you need to try again.
for (n = 0; n < i && !try_again; n++) {
    try_again = (temp[n] == temp[i]) || (temp[n] == temp[i+1]);
}
Answering the second question: picking from a weighted set also has an algorithmic replacement with lower time complexity. This is based on the principle that what is pre-computed does not need to be re-computed.
In an ordinary (uniform) selection you have equal-width bins, which makes picking a bin an O(1) operation. Your weighted_randi function has bins of real-valued widths, so selection in your current version runs in O(n) time. Since you don't say (but do imply) that the vector of weights w is constant, I'll assume that it is.
You aren't interested in the widths of the bins per se; you are interested in the locations of their edges, which you re-compute on every call to weighted_randi in the variable threshold. If w is indeed constant, pre-computing the list of edges (that is, the value of threshold for every *w) is your O(n) step, which need only be done once. If you put the results in a (naturally) ordered list, a binary search on all future calls yields O(log n) time complexity, with an increase in space of only sizeof w / sizeof w[0].
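A minimal sketch of that idea, assuming the weights really are fixed between calls. fast_randd() here is a stand-in for the poster's uniform [0,1) generator, defined only to keep the sketch self-contained:

#include <vector>
#include <algorithm>
#include <cstdlib>

static double fast_randd() { return rand() / (RAND_MAX + 1.0); } // stand-in

// Precompute the bin edges once, then sample in O(log n) per call.
struct WeightedSampler {
    std::vector<double> edges; // edges[i] = w[0] + ... + w[i]
    WeightedSampler(const double *w, int n) : edges(n) {
        double acc = 0;
        for (int i = 0; i < n; i++) { acc += w[i]; edges[i] = acc; }
    }
    int sample() const {
        // scale by the total in case the weights don't sum to exactly 1
        double r = fast_randd() * edges.back();
        int i = std::upper_bound(edges.begin(), edges.end(), r) - edges.begin();
        return i < (int)edges.size() ? i : (int)edges.size() - 1;
    }
};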
I ran into a performance problem with set operations (upper_bound, iterator dereferencing, etc.). My solution runs in around 20 seconds. The general problem: I iterate over numbers of the form i*(i-1)/2 while they are less than 2*10^5, and fill in a DP vector. In my algorithm, for each number x I get the upper_bound "up", then iterate from the beginning of the set until I reach "up". The reference solution does the same, but instead of calling upper_bound and dereferencing iterators it directly calculates i*(i-1)/2, which I have precomputed and stored in vset. The number of operations for both algorithms is almost the same, around 80*10^6, which is not a huge number. But my code takes 20 seconds while the reference solution needs 2 seconds.
Please look at my code and let me know if you need more information about this:
1- vset has 600 numbers: all numbers of the form i*(i-1)/2 that are less than 2*10^5
2- vset is already sorted, since it is increasing
3- the final vector "v" is exactly the same in both algorithms
4- cnt, the number of operations, is almost the same for both: 80,000,000
5- you can test the codes with n = 199977
6- On my computer (Core i7, 32 GB RAM) it takes 20 seconds; on the judging server it was accepted at around 200 milliseconds, which is very strange to me
typedef long long int llint;
int n; cin >> n;
vector<llint> v(n+1, INT_MAX);
llint p = 1;
llint node = 2;
llint cnt = 0;
for (int i = 1; i <= n; i++)
{
    if (v[i] == INT_MAX)
    {
        for (int s = 1; (s * (s - 1)) / 2 <= i; ++s)
            v[i] = min(v[i], v[i - (s * (s - 1)) / 2] + s), cnt++;
    }
    else cnt++;
}
cout << cnt << endl; // works in less than 2 seconds
The Second solution takes 20 seconds.
typedef long long int llint;
int n; cin >> n;
vector<llint> v(n+1, INT_MAX);
llint p = 1;
llint node = 2;
vector<int> vset;
while (p <= n) // only 600 numbers
{
    v[p] = node;
    vset.push_back(p);
    node++;
    p = node * (node - 1) / 2;
}
llint cnt = 0;
for (int i = 1; i <= n; i++)
{
    if (v[i] == INT_MAX)
    {
        auto up = upper_bound(vset.begin(), vset.end(), i);
        for (auto it = vset.begin(); it != up; it++) // at most 600 iterations
        {
            cnt++;
            int j = *it;
            v[i] = min(v[j] + v[i - j], v[i]);
        }
    }
    else cnt++;
}
cout << cnt << endl; // cnt for both is around 84,000,000
So the question is about something I cannot figure out: which operation(s) here are expensive?
Advancing the iterator? Dereferencing it? There is no other difference here, but the time is TEN TIMES more. Thanks.
Thanks to all the people who commented and helped me figure out the issue. I realized that the reason for the slow performance was Debug mode. After I changed it to Release mode, it runs in less than 2 seconds. There is a similar question that may help you more. I used Visual Studio C++ on Windows 10.
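For anyone hitting the same wall, a quick way to check which configuration you are actually timing. On MSVC, Debug builds set _ITERATOR_DEBUG_LEVEL to 2, which adds bookkeeping to every iterator operation (including the ones inside upper_bound); this sketch just prints the relevant macros:

#include <iostream>

int main() {
#if defined(_ITERATOR_DEBUG_LEVEL)
    // MSVC-specific: 2 in Debug (checked iterators), 0 in Release
    std::cout << "_ITERATOR_DEBUG_LEVEL = " << _ITERATOR_DEBUG_LEVEL << '\n';
#endif
#ifdef NDEBUG
    std::cout << "NDEBUG is set (typical of Release builds)\n";
#else
    std::cout << "NDEBUG is not set (typical of Debug builds)\n";
#endif
}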
Hexagonal grid is represented by a two-dimensional array with R rows and C columns. First row always comes "before" second in hexagonal grid construction (see image below). Let k be the number of turns. Each turn, an element of the grid is 1 if and only if the number of neighbours of that element that were 1 the turn before is an odd number. Write C++ code that outputs the grid after k turns.
Limitations:
1 <= R <= 10, 1 <= C <= 10, 1 <= k <= 2^(63) - 1
An example with input (in the first row are R, C and k, then comes the starting grid):
4 4 3
0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0
Simulation: see the image; yellow elements represent '1' and blank elements represent '0'.
This problem is easy to solve if I simulate and produce a grid each turn, but with big enough k it becomes too slow. What is the faster solution?
EDIT: code (n and m are used instead of R and C):
#include <cstdio>
#include <cstring>
using namespace std;

int old[11][11];
int _new[11][11];
int n, m;
long long int k;

int main() {
    scanf("%d %d %lld", &n, &m, &k);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) scanf("%d", &old[i][j]);
    }
    printf("\n");
    while (k) {
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                int count = 0;
                if (i % 2 == 0) {
                    if (i) {
                        if (j) count += old[i-1][j-1];
                        count += old[i-1][j];
                    }
                    if (j) count += old[i][j-1];
                    if (j < m-1) count += old[i][j+1];
                    if (i < n-1) {
                        if (j) count += old[i+1][j-1];
                        count += old[i+1][j];
                    }
                }
                else {
                    if (i) {
                        if (j < m-1) count += old[i-1][j+1];
                        count += old[i-1][j];
                    }
                    if (j) count += old[i][j-1];
                    if (j < m-1) count += old[i][j+1];
                    if (i < n-1) {
                        if (j < m-1) count += old[i+1][j+1];
                        count += old[i+1][j];
                    }
                }
                if (count % 2) _new[i][j] = 1;
                else _new[i][j] = 0;
            }
        }
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) old[i][j] = _new[i][j];
        }
        k--;
    }
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            printf("%d", old[i][j]);
        }
        printf("\n");
    }
    return 0;
}
For a given R and C, you have N=R*C cells.
If you represent those cells as a vector of elements in GF(2), i.e, 0s and 1s where arithmetic is performed mod 2 (addition is XOR and multiplication is AND), then the transformation from one turn to the next can be represented by an N*N matrix M, so that:
turn[i+1] = M*turn[i]
You can exponentiate the matrix to determine how the cells transform over k turns:
turn[i+k] = (M^k)*turn[i]
Even if k is very large, like 2^63-1, you can calculate M^k quickly using exponentiation by squaring: https://en.wikipedia.org/wiki/Exponentiation_by_squaring This only takes O(log(k)) matrix multiplications.
Then you can multiply your initial state by the matrix to get the output state.
From the limits on R, C, k, and time given in your question, it's clear that this is the solution you're supposed to come up with.
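A sketch of that approach under mod-2 arithmetic. It assumes the grid is flattened row-major into an N-dimensional vector (the neighbour rule from the question determines which entries of M are 1):

#include <vector>
#include <cstdint>

typedef std::vector<std::vector<uint8_t>> Mat; // 0/1 matrix over GF(2)

// c = a * b (mod 2): addition is XOR, multiplication is AND
Mat multiply(const Mat &a, const Mat &b) {
    size_t n = a.size();
    Mat c(n, std::vector<uint8_t>(n, 0));
    for (size_t i = 0; i < n; i++)
        for (size_t k = 0; k < n; k++)
            if (a[i][k])
                for (size_t j = 0; j < n; j++)
                    c[i][j] ^= b[k][j];
    return c;
}

// M^k by exponentiation by squaring: O(log k) multiplications
Mat matpow(Mat m, unsigned long long k) {
    size_t n = m.size();
    Mat r(n, std::vector<uint8_t>(n, 0));
    for (size_t i = 0; i < n; i++) r[i][i] = 1; // identity matrix
    while (k) {
        if (k & 1) r = multiply(r, m);
        m = multiply(m, m);
        k >>= 1;
    }
    return r;
}

With N <= 100 this is at most a 100x100 matrix, so even 63 squarings are cheap; packing rows into 64-bit words would speed up multiply further.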
There are several ways to speed up your algorithm.
You do the neighbour calculation with the out-of-bounds checking in every turn. Do some preprocessing and calculate the neighbours of each cell once at the beginning. (Aziuth has already proposed that.)
Then you don't need to count the neighbours of all cells. Each cell is on if an odd number of neighbouring cells were on in the last turn and it is off otherwise.
You can think of this differently: start with a clean board. For each active cell of the previous move, toggle the state of all surrounding cells. When an odd number of neighbours cause a toggle, the cell ends up on; with an even number the toggles cancel each other out and it stays off. Look at the first step of your example. It's like playing Lights Out, really.
This method is faster than counting the neighbours if the board has only few active cells. Its worst case is a board whose cells are all on, in which case it is as good as neighbour-counting, because you have to touch each neighbour of each cell.
The next logical step is to represent the board as a sequence of bits, because bits already have a natural way of toggling: the exclusive-or (xor) operator, ^. If you keep the list of neighbours for each cell as a bit mask m, you can then toggle the board b via b ^= m.
These are the improvements that can be made to the algorithm. The big improvement is to notice that the patterns will eventually repeat. (The toggling bears resemblance to Conway's Game of Life, where there are also repeating patterns.) Also, the given maximum number of possible iterations, 2⁶³, is suspiciously large.
The playing board is small. The example in your question will repeat at least after 2¹⁶ turns, because the 4×4 board can have at most 2¹⁶ layouts. In practice, turn 127 reaches the ring pattern of the first move after the original, and it loops with a period of 126 from then on.
The bigger boards may have up to 2¹⁰⁰ layouts, so they may not repeat within 2⁶³ turns. A 10×10 board with a single active cell near the middle has a period of 2,162,622. This may indeed be a topic for a maths study, as Aziuth suggests, but we'll tackle it with profane means: keep a hash map of all previous states and the turns where they occurred, then check in each turn whether the pattern has occurred before.
We now have:
a simple algorithm for toggling the cells' state and
a compact bitwise representation of the board, which allows us to create a hash map of the previous states.
Here's my attempt:
#include <iostream>
#include <map>

/*
 * Bit representation of a playing board, at most 10 x 10
 */
struct Grid {
    unsigned char data[16];

    Grid() : data() {
    }

    void add(size_t i, size_t j) {
        size_t k = 10 * i + j;
        data[k / 8] |= 1u << (k % 8);
    }

    void flip(const Grid &mask) {
        size_t n = 13;
        while (n--) data[n] ^= mask.data[n];
    }

    bool ison(size_t i, size_t j) const {
        size_t k = 10 * i + j;
        return ((data[k / 8] & (1u << (k % 8))) != 0);
    }

    bool operator<(const Grid &other) const {
        size_t n = 13;
        while (n--) {
            if (data[n] > other.data[n]) return true;
            if (data[n] < other.data[n]) return false;
        }
        return false;
    }

    void dump(size_t n, size_t m) const {
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < m; j++) {
                std::cout << (ison(i, j) ? 1 : 0);
            }
            std::cout << '\n';
        }
        std::cout << '\n';
    }
};
int main()
{
    size_t n, m, k;
    std::cin >> n >> m >> k;
    Grid grid;
    Grid mask[10][10];
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) {
            int x;
            std::cin >> x;
            if (x) grid.add(i, j);
        }
    }
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) {
            Grid &mm = mask[i][j];
            if (i % 2 == 0) {
                if (i) {
                    if (j) mm.add(i - 1, j - 1);
                    mm.add(i - 1, j);
                }
                if (j) mm.add(i, j - 1);
                if (j < m - 1) mm.add(i, j + 1);
                if (i < n - 1) {
                    if (j) mm.add(i + 1, j - 1);
                    mm.add(i + 1, j);
                }
            } else {
                if (i) {
                    if (j < m - 1) mm.add(i - 1, j + 1);
                    mm.add(i - 1, j);
                }
                if (j) mm.add(i, j - 1);
                if (j < m - 1) mm.add(i, j + 1);
                if (i < n - 1) {
                    if (j < m - 1) mm.add(i + 1, j + 1);
                    mm.add(i + 1, j);
                }
            }
        }
    }
    std::map<Grid, size_t> prev;
    std::map<size_t, Grid> pattern;
    for (size_t turn = 0; turn < k; turn++) {
        Grid next;
        std::map<Grid, size_t>::const_iterator it = prev.find(grid);
        if (it != prev.end()) {
            size_t start = it->second;
            size_t period = turn - start;
            size_t index = (k - turn) % period;
            grid = pattern[start + index];
            break;
        }
        prev[grid] = turn;
        pattern[turn] = grid;
        for (size_t i = 0; i < n; i++) {
            for (size_t j = 0; j < m; j++) {
                if (grid.ison(i, j)) next.flip(mask[i][j]);
            }
        }
        grid = next;
    }
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < m; j++) {
            std::cout << (grid.ison(i, j) ? 1 : 0);
        }
        std::cout << '\n';
    }
    return 0;
}
There is probably room for improvement. In particular, I'm not so sure how it fares for big boards. (The code above uses an ordered map. We don't need the ordering, so an unordered map would yield faster code. The example above with a single active cell on a 10×10 board took significantly longer than a second with the ordered map.)
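To follow up on the unordered-map remark: Grid (the struct defined above) would need an equality operator and a hash. A sketch, using FNV-1a over the 13 used bytes, which is a common but arbitrary choice of hash:

#include <unordered_map>
#include <cstddef>

bool operator==(const Grid &a, const Grid &b) {
    for (size_t n = 0; n < 13; n++)
        if (a.data[n] != b.data[n]) return false;
    return true;
}

struct GridHash {
    size_t operator()(const Grid &g) const {
        size_t h = 1469598103934665603ull; // FNV-1a offset basis
        for (size_t n = 0; n < 13; n++) {
            h ^= g.data[n];
            h *= 1099511628211ull;         // FNV-1a prime
        }
        return h;
    }
};

// usage: std::unordered_map<Grid, size_t, GridHash> prev;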
Not sure how you did it - and you should really always post code here - but let's try to optimize things.
First of all, there is not really a difference between this and a quadratic grid. The neighbor relationships differ, but that is just a small translation function. If you have a problem there, we should treat it separately, maybe on CodeReview.
Now, the naive solution is:
for all fields
    count neighbors
    if odd: add a marker to update to one, else to zero
for all fields
    update all fields by marker of former step
this is obviously in O(N). Iterating twice roughly doubles the actual run time, but that should not be too bad. Try not to allocate space every time you do this; reuse existing structures.
I'd propose this solution:
at the start:
    create a std::vector or std::list "activated" of pointers to all fields that are activated

each iteration:
    create a vector "new_activated"
    for all items in activated
        count neighbors, if odd add to new_activated
    for all items in activated
        set to inactive
    replace activated by new_activated*
    for all items in activated
        set to active

*this can be done efficiently by putting them in a smart pointer and using move semantics
This code only works on the activated fields. As long as they stay within some smaller area, this is far more efficient (see the sketch below). However, I have no idea when this changes - if there are activated fields all over the place, this might be less efficient. In that case, the naive solution might be the best one.
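A compact C++ sketch of this proposal, assuming the neighbour indices of each cell have been precomputed; the names step, neigh, and the scratch array are mine, not the answer's:

#include <vector>

// One turn, touching only active cells and their neighbours.
// neigh[c] holds the precomputed neighbour indices of cell c.
std::vector<int> step(const std::vector<int> &active,
                      const std::vector<std::vector<int>> &neigh,
                      std::vector<int> &count) { // zeroed scratch, one entry per cell
    std::vector<int> touched;
    for (int c : active)
        for (int nb : neigh[c])
            if (count[nb]++ == 0) touched.push_back(nb); // first touch: remember the cell
    std::vector<int> next;
    for (int c : touched) {
        if (count[c] % 2) next.push_back(c); // odd active-neighbour count: cell turns on
        count[c] = 0;                        // reset scratch for the next turn
    }
    return next;
}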
EDIT: now that you have posted your code... your code is quite procedural. This is C++: use classes and use representations of things. You probably do the search for neighbors right, but you can easily make mistakes there and therefore should isolate that part in a function, or better, a method. Raw arrays are bad, and variables like n or k are bad. But before I start tearing your code apart, I instead repeat my recommendation: put the code on CodeReview and have people tear it apart until it is perfect.
This started off as a comment, but I think it could be helpful as an answer in addition to what has already been stated.
You stated the following limitations:
1 <= R <= 10, 1 <= C <= 10
Given these restrictions, I'll take the liberty of representing the grid/matrix M of R rows and C columns in constant space (i.e. O(1)), and also checking its elements in O(1) instead of O(R*C) time, thus removing this part from our time-complexity analysis.
That is, the grid can simply be declared as bool grid[10][10];.
The key input is the large number of turns k, stated to be in the range:
1 <= k <= 2^(63) - 1
The problem is that, AFAIK, you're required to perform k turns. This makes the algorithm O(k), and thus no proposed solution can do better than O(k)[1].
To improve the speed in a meaningful way, this upper-bound must be lowered in some way[1], but it looks like this cannot be done without altering the problem constraints.
The fact that k can be so large is the main issue. The most anyone can do is improve the rest of the implementation, but this will only improve by a constant factor; you'll have to go through k turns regardless of how you look at it.
Therefore, unless some clever fact and/or detail is found that allows this bound to be lowered, there's no other choice.
[1] For example, it's not like trying to determine whether some number n is prime, where you can check all numbers in range(2, n) to see if they divide n, making it an O(n) process; or notice improvements such as only looking at odd numbers after checking that n is not even (a constant factor; still O(n)), and then checking odd numbers only up to √n, i.e. in range(3, √n, 2), which meaningfully lowers the upper bound to O(√n).
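To make the footnote concrete, here is the standard trial-division test with the √n bound it describes:

// O(sqrt(n)) primality check: test 2, then odd divisors up to sqrt(n).
bool is_prime(unsigned long long n) {
    if (n < 2) return false;
    if (n % 2 == 0) return n == 2;
    for (unsigned long long d = 3; d * d <= n; d += 2)
        if (n % d == 0) return false;
    return true;
}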
I have worked out an O(n²) solution to the problem, and I was wondering about a better one. (This is not a homework/interview problem but something I do out of my own interest, hence sharing it here.)
If a=1, b=2, c=3, ..., z=26: given a string of digits, find all possible codes that string
can generate. Example: "1123" shall give:
aabc //a = 1, a = 1, b = 2, c = 3
kbc // since k is 11, b = 2, c= 3
alc // a = 1, l = 12, c = 3
aaw // a= 1, a =1, w= 23
kw // k = 11, w = 23
Here is my code to the problem:
void alpha(int* a, int sz, vector<vector<int>>& strings) {
    for (int i = sz - 1; i >= 0; i--) {
        if (i == sz - 1) {
            vector<int> t;
            t.push_back(a[i]);
            strings.push_back(t);
        } else {
            int k = strings.size();
            for (int j = 0; j < k; j++) {
                vector<int> t = strings[j];
                strings[j].insert(strings[j].begin(), a[i]);
                if (t[0] < 10) {
                    int n = a[i] * 10 + t[0];
                    if (n <= 26) {
                        t[0] = n;
                        strings.push_back(t);
                    }
                }
            }
        }
    }
}
Essentially the vector strings will hold the sets of numbers.
This runs in O(n²). I am trying to work out at least an O(n log n) solution.
Intuitively a tree should help here, but I am not getting anywhere beyond that.
Generally, your problem complexity is more like 2^n, not n^2, since your k can increase with every iteration.
This is an alternative recursive solution (note: recursion is bad for very long codes). I didn't focus on optimization, since I'm not up to date with C++X, but I think the recursive solution could be optimized with some moves.
Recursion also makes the complexity a bit more obvious compared to the iterative solution.
// Add the front element to each trailing code sequence. Create a new sequence if none exists
void update_helper(int front, std::vector<std::deque<int>>& intermediate)
{
    if (intermediate.empty())
    {
        intermediate.push_back(std::deque<int>());
    }
    for (size_t i = 0; i < intermediate.size(); i++)
    {
        intermediate[i].push_front(front);
    }
}
std::vector<std::deque<int>> decode(int digits[], int count)
{
    if (count <= 0)
    {
        return std::vector<std::deque<int>>();
    }
    std::vector<std::deque<int>> result1 = decode(digits + 1, count - 1);
    update_helper(*digits, result1);
    if (count > 1 && (digits[0] * 10 + digits[1]) <= 26)
    {
        std::vector<std::deque<int>> result2 = decode(digits + 2, count - 2);
        update_helper(digits[0] * 10 + digits[1], result2);
        result1.insert(result1.end(), result2.begin(), result2.end());
    }
    return result1;
}
Call:
std::vector<std::deque<int>> strings = decode(codes, size);
Edit:
Regarding the complexity of the original code, I'll try to show what would happen in the worst case scenario, where the code sequence consists only of 1 and 2 values.
void alpha(int* a, int sz, vector<vector<int>>& strings)
{
    for (int i = sz - 1;
         i >= 0;
         i--)
    {
        if (i == sz - 1)
        {
            vector<int> t;
            t.push_back(a[i]);
            strings.push_back(t); // strings.size+1
        } // if summary: O(1), ignoring capacity change, strings.size+1
        else
        {
            int k = strings.size();
            for (int j = 0; j < k; j++)
            {
                vector<int> t = strings[j]; // O(strings[j].size) vector copy operation
                strings[j].insert(strings[j].begin(), a[i]); // strings[j].size+1
                // note: strings[j].insert treated as O(1) because other containers could do better than vector
                if (t[0] < 10)
                {
                    int n = a[i] * 10 + t[0];
                    if (n <= 26)
                    {
                        t[0] = n;
                        strings.push_back(t); // strings.size+1
                        // O(1), ignoring capacity change and copy operation
                    } // if summary: O(1), strings.size+1
                } // if summary: O(1), ignoring capacity change, strings.size+1
            } // for summary: O(k * strings[j].size), strings.size+k, strings[j].size+1
        } // else summary: O(k * strings[j].size), strings.size+k, strings[j].size+1
    } // for summary: O(sum[i from 1 to sz] of (k * strings[j].size))
      // k (same as string.size) doubles each iteration => k ends near 2^sz
      // string[j].size increases by 1 each iteration
      // k * strings[j].size increases by ?? each iteration (it's getting huge)
}
Maybe I made a mistake somewhere, and if we want to play nice we can treat a vector copy as O(1) instead of O(n) to reduce the complexity, but the hard fact remains: the worst case doubles the outer vector size on each iteration of the inner loop (at least every second iteration, considering the exact structure of the if conditions), and the inner loop depends on that growing vector size, which makes the whole story at least O(2^n).
Edit2:
I figured out the result complexity (the best hypothetical algorithm still needs to create every element of the result, so the result complexity is a lower bound on what any algorithm can achieve).
It's actually following the Fibonacci numbers:
For worst case input (like only 1s) of size N+2 you have:
size N has k(N) elements
size N+1 has k(N+1) elements
size N+2 is the combination of codes starting with a followed by the combinations from size N+1 (a takes one element of the source) and the codes starting with k, followed by the combinations from size N (k takes two elements of the source)
size N+2 has k(N) + k(N+1) elements
Starting with size 1 => 1 (a) and size 2 => 2 (aa or k)
Result: still exponential growth ;)
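As an aside: if you only wanted the count of decodings rather than the decodings themselves, the recurrence above can be evaluated directly in O(n). A sketch, assuming the question's 1-based coding (single digits 1-9, pairs 10-26):

// ways[i] = number of decodings of the first i digits:
// a valid single digit extends ways[i-1], a valid pair extends ways[i-2],
// which for an all-1s input is exactly the Fibonacci recurrence.
long long count_decodings(const int digits[], int n) {
    if (n <= 0) return 0;
    long long prev2 = 1, prev1 = 1; // empty prefix; first digit alone
    for (int i = 1; i < n; i++) {
        long long cur = prev1;                      // take digits[i] alone
        int pair = digits[i - 1] * 10 + digits[i];
        if (pair >= 10 && pair <= 26) cur += prev2; // take the pair as one letter
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}

For "1123" this yields 5, matching the five decodings listed in the question.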
Edit3:
Worked out a dynamic programming solution, somewhat similar to your approach with reverse iteration over the code array, and kind of optimized in its vector usage, based on the properties explained in Edit2.
The inner loop (update_helper) is still dominated by the count of results (worst case Fibonacci), and a few outer loop iterations will have a decent count of sub-results, but at least each sub-result is reduced to a pointer to some intermediate node, so copying should be pretty efficient. As a little bonus, I switched the result from numbers to characters.
Another edit: updated code with range 0 - 25 as 'a' - 'z', fixed some errors that led to wrong results.
struct const_node
{
    const_node(char content, const_node* next)
        : next(next), content(content)
    {
    }
    const_node* const next;
    const char content;
};

// put front in front of each existing sub-result
void update_helper(int front, std::vector<const_node*>& intermediate)
{
    for (size_t i = 0; i < intermediate.size(); i++)
    {
        intermediate[i] = new const_node(front + 'a', intermediate[i]);
    }
    if (intermediate.empty())
    {
        intermediate.push_back(new const_node(front + 'a', NULL));
    }
}
std::vector<const_node*> decode_it(int digits[9], size_t count)
{
    int current = 0;
    std::vector<const_node*> intermediates[3];
    for (size_t i = 0; i < count; i++)
    {
        current = (current + 1) % 3;
        int prev = (current + 2) % 3;     // -1
        int prevprev = (current + 1) % 3; // -2
        size_t index = count - i - 1;     // invert direction
        // copy from prev
        intermediates[current] = intermediates[prev];
        // update current (part 1)
        update_helper(digits[index], intermediates[current]);
        if (index + 1 < count && digits[index] &&
            digits[index] * 10 + digits[index + 1] < 26)
        {
            // update prevprev
            update_helper(digits[index] * 10 + digits[index + 1], intermediates[prevprev]);
            // add to current (part 2)
            intermediates[current].insert(intermediates[current].end(),
                                          intermediates[prevprev].begin(),
                                          intermediates[prevprev].end());
        }
    }
    return intermediates[current];
}
void cleanupDelete(std::vector<const_node*>& nodes);

int main()
{
    int code[] = { 1, 2, 3, 1, 2, 3, 1, 2, 3 };
    int size = sizeof(code) / sizeof(int);
    std::vector<const_node*> result = decode_it(code, size);
    // output
    for (size_t i = 0; i < result.size(); i++)
    {
        std::cout.width(3);
        std::cout.flags(std::ios::right);
        std::cout << i << ": ";
        const_node* item = result[i];
        while (item)
        {
            std::cout << item->content;
            item = item->next;
        }
        std::cout << std::endl;
    }
    cleanupDelete(result);
}
void fillCleanup(const_node* n, std::set<const_node*>& all_nodes)
{
    if (n)
    {
        all_nodes.insert(n);
        fillCleanup(n->next, all_nodes);
    }
}

void cleanupDelete(std::vector<const_node*>& nodes)
{
    // this is like multiple inverse trees, hard to delete correctly,
    // since multiple next pointers refer to the same target
    std::set<const_node*> all_nodes;
    for (auto var : nodes) // standard range-for (the original used MSVC's non-standard "for each")
    {
        fillCleanup(var, all_nodes);
    }
    nodes.clear();
    for (auto var : all_nodes)
    {
        delete var;
    }
    all_nodes.clear();
}
A drawback of the dynamically reused structure is the cleanup, since you want to be careful to delete each node only once.
For smart phones, there is this game called Ruzzle.
It's a word finding game.
Quick Explanation:
The game board is a 4x4 grid of letters.
You start from any cell and try to spell a word by dragging up, down, left, right, or diagonal.
The board doesn't wrap, and you can't reuse letters you've already selected.
On average, my friend and I find about 40 words, and at the end of the round, the game informs you of how many possible words you could have gotten. This number is usually about 250 - 350.
We are wondering what board would yield the highest number of possible words.
How would I go about finding the optimal board?
I've written a program in C that takes 16 characters and outputs all the appropriate words.
Testing over 80,000 words, it takes about a second to process.
The Problem:
The number of game board permutations is 26^16.
That's 43608742899428874059776 (43 sextillion).
I need some kind of heuristic.
Should I skip all boards that contain z, q, x, etc., because they are expected not to have as many words? I wouldn't want to exclude a letter without being certain.
There are also 4 duplicates of every board, because rotating the board still gives the same results.
But even with these restrictions, I don't think I have enough time in my life to find the answer.
Maybe board generation isn't the answer.
Is there a quicker way to find the answer looking at the list of words?
tldr;
S E R O
P I T S
L A N E
S E R G
or any of its reflections.
This board contains 1212 words (and as it turns out, you can exclude 'z', 'q' and 'x').
First things first: it turns out you're using the wrong dictionary. After not getting exact matches with Ruzzle's word counts, I looked into it; Ruzzle seems to use a dictionary called TWL06, which has around 180,000 words. Don't ask me what it stands for, but it's freely available as a txt file.
I also wrote code to find all possible words given a 16 character board, as follows. It builds the dictionary into a tree structure, and then pretty much just goes around recursively while there are words to be found. It prints them in order of length. Uniqueness is maintained by the STL set structure.
#include <cstdlib>
#include <ctime>
#include <map>
#include <string>
#include <set>
#include <algorithm>
#include <fstream>
#include <iostream>
using namespace std;

struct TreeDict {
    bool existing;
    map<char, TreeDict> sub;

    TreeDict() {
        existing = false;
    }

    TreeDict& operator=(TreeDict &a) {
        existing = a.existing;
        sub = a.sub;
        return *this;
    }

    void insert(string s) {
        if (s.size() == 0) {
            existing = true;
            return;
        }
        sub[s[0]].insert(s.substr(1));
    }

    bool exists(string s = "") {
        if (s.size() == 0)
            return existing;
        if (sub.find(s[0]) == sub.end())
            return false;
        return sub[s[0]].exists(s.substr(1));
    }

    TreeDict* operator[](char alpha) {
        if (sub.find(alpha) == sub.end())
            return NULL;
        return &sub[alpha];
    }
};

TreeDict DICTIONARY;
TreeDict DICTIONARY;
set<string> boggle_h(const string board, string word, int index, int mask, TreeDict *dict) {
    if (index < 0 || index >= 16 || (mask & (1 << index)))
        return set<string>();
    word += board[index];
    mask |= 1 << index;
    dict = (*dict)[board[index]];
    if (dict == NULL)
        return set<string>();
    set<string> rt;
    if ((*dict).exists())
        rt.insert(word);
    if ((*dict).sub.empty())
        return rt;
    if (index % 4 != 0) {
        set<string> a = boggle_h(board, word, index - 4 - 1, mask, dict);
        set<string> b = boggle_h(board, word, index - 1, mask, dict);
        set<string> c = boggle_h(board, word, index + 4 - 1, mask, dict);
        rt.insert(a.begin(), a.end());
        rt.insert(b.begin(), b.end());
        rt.insert(c.begin(), c.end());
    }
    if (index % 4 != 3) {
        set<string> a = boggle_h(board, word, index - 4 + 1, mask, dict);
        set<string> b = boggle_h(board, word, index + 1, mask, dict);
        set<string> c = boggle_h(board, word, index + 4 + 1, mask, dict);
        rt.insert(a.begin(), a.end());
        rt.insert(b.begin(), b.end());
        rt.insert(c.begin(), c.end());
    }
    set<string> a = boggle_h(board, word, index + 4, mask, dict);
    set<string> b = boggle_h(board, word, index - 4, mask, dict);
    rt.insert(a.begin(), a.end());
    rt.insert(b.begin(), b.end());
    return rt;
}

set<string> boggle(string board) {
    set<string> words;
    for (int i = 0; i < 16; i++) {
        set<string> a = boggle_h(board, "", i, 0, &DICTIONARY);
        words.insert(a.begin(), a.end());
    }
    return words;
}
void buildDict(string file, TreeDict &dict = DICTIONARY) {
    ifstream fstr(file.c_str());
    string s;
    if (fstr.is_open()) {
        while (fstr.good()) {
            fstr >> s;
            dict.insert(s);
        }
        fstr.close();
    }
}

struct lencmp {
    bool operator()(const string &a, const string &b) {
        if (a.size() != b.size())
            return a.size() > b.size();
        return a < b;
    }
};
int main() {
    srand(time(NULL));
    buildDict("/Users/XXX/Desktop/TWL06.txt");
    set<string> a = boggle("SEROPITSLANESERG");
    set<string, lencmp> words;
    words.insert(a.begin(), a.end());
    set<string, lencmp>::iterator it; // must use the same comparator type as 'words'
    for (it = words.begin(); it != words.end(); it++)
        cout << *it << endl;
    cout << words.size() << " words." << endl;
}
Randomly generating boards and testing against them didn't turn out very effective, as expected; I didn't really bother running that for long, but I'd be surprised if it crossed 200 words. Instead, I changed the board generation to produce boards with letters distributed in proportion to their frequency in TWL06, achieved by a quick cumulative-frequency table (with the frequencies reduced by a factor of 100), below.
string randomBoard() {
    string board = "";
    for (int i = 0; i < 16; i++)
        board += (char)('A' + rand() % 26);
    return board;
}

char distLetter() {
    int x = rand() % 15833;
    if (x < 1209) return 'A';
    if (x < 1510) return 'B';
    if (x < 2151) return 'C';
    if (x < 2699) return 'D';
    if (x < 4526) return 'E';
    if (x < 4726) return 'F';
    if (x < 5161) return 'G';
    if (x < 5528) return 'H';
    if (x < 6931) return 'I';
    if (x < 6957) return 'J';
    if (x < 7101) return 'K';
    if (x < 7947) return 'L';
    if (x < 8395) return 'M';
    if (x < 9462) return 'N';
    if (x < 10496) return 'O';
    if (x < 10962) return 'P';
    if (x < 10987) return 'Q';
    if (x < 12111) return 'R';
    if (x < 13613) return 'S';
    if (x < 14653) return 'T';
    if (x < 15174) return 'U';
    if (x < 15328) return 'V';
    if (x < 15452) return 'W';
    if (x < 15499) return 'X';
    if (x < 15757) return 'Y';
    return 'Z'; // x < 15833 always holds here
}

string distBoard() {
    string board = "";
    for (int i = 0; i < 16; i++)
        board += distLetter();
    return board;
}
This was significantly more effective, very easily achieving 400+ word boards. I left it running (for longer than I intended), and after checking over a million boards, the highest found was around 650 words. This was still essentially random generation, and that has its limits.
Instead, I opted for a greedy maximisation strategy, wherein I'd take a board and make a small change to it, and then commit the change only if it increased the word count.
string changeLetter(string x) {
    int y = rand() % 16;
    x[y] = distLetter();
    return x;
}

string swapLetter(string x) {
    int y = rand() % 16;
    int z = rand() % 16;
    char w = x[y];
    x[y] = x[z];
    x[z] = w;
    return x;
}

string change(string x) {
    if (rand() % 2)
        return changeLetter(x);
    return swapLetter(x);
}
int main() {
    srand(time(NULL));
    buildDict("/Users/XXX/Desktop/TWL06.txt");
    string board = "SEROPITSLANESERG";
    int locmax = boggle(board).size();
    for (int j = 0; j < 5000; j++) {
        int changes = 1;
        string board2 = board;
        for (int k = 0; k < changes; k++)
            board2 = change(board2);
        int loc = boggle(board2).size();
        if (loc >= locmax && board != board2) {
            j = 0;
            board = board2;
            locmax = loc;
        }
    }
}
This very rapidly got me 1000+ word boards, with generally similar letter patterns, despite randomised starting points. What leads me to believe that the board given is the best possible one is how it, or one of its various reflections, turned up repeatedly within the first 100-odd attempts at maximising a random board.
The biggest reason for skepticism is the greediness of this algorithm: that it might somehow lead the search to miss better boards. The small changes made are quite flexible in their outcomes - that is, they have the power to completely transform a grid from its (randomised) start position. The number of possible changes, 26*16 for a fresh letter and 16*15 for a letter swap, is in both cases significantly less than 5000, the number of consecutive discarded changes allowed.
The fact that the program was able to reproduce this board within the first 100-odd runs implies that the number of local maxima is relatively small, and the probability that there is an undiscovered maximum is low.
Although the greedy approach seemed intuitively right - it shouldn't really be any less possible to reach a given grid via delta changes from a random board, and the two possible changes, a swap and a fresh letter, do seem to encapsulate all possible improvements - I changed the program to allow it to make more changes before checking for an increase. This again returned the same board, repeatedly.
int main() {
    srand(time(NULL));
    buildDict("/Users/XXX/Desktop/TWL06.txt");
    int glomax = 0;
    int i = 0;
    while (true) {
        string board = distBoard();
        int locmax = boggle(board).size();
        for (int j = 0; j < 500; j++) {
            string board2 = board;
            for (int k = 0; k < 2; k++)
                board2 = change(board2); // compound both changes (change(board) would discard the first)
            int loc = boggle(board2).size();
            if (loc >= locmax && board != board2) {
                j = 0;
                board = board2;
                locmax = loc;
            }
        }
        if (glomax <= locmax) {
            glomax = locmax;
            cout << board << " " << glomax << " words." << endl;
        }
        if (++i % 10 == 0)
            cout << i << endl;
    }
}
Having iterated over this loop around 1000 times, with this particular board configuration showing up ~10 times, I'm pretty confident that this is, for now, the Ruzzle board with the most unique words, until the English language changes.
Interesting problem. I see (at least, but mainly) two approaches:
one is to try the hard way to pack as many wordable letters (in all directions) as possible, based on a dictionary. As you said, there are many possible combinations, and that route requires a well-elaborated and complex algorithm to reach something tangible;
the other is a "looser" solution based on probabilities, which I like more. You suggested removing some low-frequency letters to maximize the board yield. An extension of this could be to use more of the high-frequency letters from the dictionary.
A further step could be:
based on the 80k dictionary D, you find, for each letter l1 of our ensemble L of 26 letters, the probability that letter l2 precedes or follows l1. This is an L x L probability array, and it is pretty small, so you could even extend it to L x L x L, i.e. considering l1 and l2, what probability l3 has to fit. This is a bit more complex if the algorithm is to estimate accurate probabilities, as the probability sum depends on the relative position of the 3 letters; for instance, in a 'triangle' configuration (e.g. positions (3,3), (3,4) and (3,5)) the result probably yields less than when the letters are aligned [just a supposition]. Why not go up to L x L x L x L, which would require some optimizations...
then you distribute a few high-frequency letters (say 4~6) randomly on the board (each having at least 1 blank cell around it in at least 5 of the 8 possible directions) and then use your L x L [x L] probability arrays to complete the board - meaning that, based on the existing letters, the next cell is filled with a letter whose probability is high given the configuration [again, letters sorted by descending probability, with randomness used if two letters are in a close tie].
For instance, taking only the horizontal configuration, having the following letters in place, and we want to find the best 2 in between ER and TO
...ER??TO...
Using L x L, a loop like the one below (l1 and l2 are our two missing letters) finds the absolute best letters - but bestchoice and bestproba could be arrays instead, keeping the - say - 10 best choices.
Note: there is no need to keep the proba values in the range [0,1] in this case; we can simply sum them up (which doesn't give a probability - but only the relative magnitude matters). A proper mathematical probability could be something like p = (p(l0,l1) + p(l2,l3)) / 2, where l0 and l3 are the R and T in our L x L example.
bestproba = 0
bestchoice = (none, none)
for letter l1 in L
    for letter l2 in L
        p = proba('R',l1) + proba(l2,'T')
        if ( p > bestproba )
            bestproba = p
            bestchoice = (l1, l2)
        fi
    rof
rof
The algorithm can take more factors into account, and needs to consider the vertical and diagonal directions as well. With L x L x L, more letters in more directions are taken into account, like ER?, R??, ??T, ?TO - this requires thinking the algorithm through more; maybe starting with L x L can give an idea of the relevancy of this approach.
Note that a lot of this may be pre-calculated, and the L x L array is of course one of them.
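A sketch of precomputing that L x L array from a word list (raw adjacency counts; normalising them into probabilities is optional since, as noted above, only the relative magnitudes matter). The file path is hypothetical:

#include <fstream>
#include <string>
#include <cctype>

// proba[a][b] counts how often letter b immediately follows letter a across
// the dictionary; divide by the row sum later if real probabilities are wanted.
double proba[26][26] = {{0}};

void build_bigrams(const char *path) { // e.g. "TWL06.txt" (hypothetical path)
    std::ifstream in(path);
    std::string w;
    while (in >> w)
        for (size_t i = 0; i + 1 < w.size(); i++) {
            int a = std::tolower((unsigned char)w[i]) - 'a';
            int b = std::tolower((unsigned char)w[i + 1]) - 'a';
            if (a >= 0 && a < 26 && b >= 0 && b < 26)
                proba[a][b] += 1;
        }
}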
I am attempting to make a maze-solver using a Breadth-first search, and mark the shortest path using a character '*'
The maze is actually just a bunch of text. The maze consists of an n x n grid, consisting of "#" symbols that are walls, and periods "." representing the walkable area/paths. An 'S' denotes start, 'F' is finish. Right now, this function does not seem to be finding the solution (it thinks it has the solution even when one is impossible). I am checking the four neighbors, and if they are 'unfound' (-1) they are added to the queue to be processed.
The solver works on several mazes, but not on this one:
...###.#....
##.#...####.
...#.#.#....
#.####.####.
#F..#..#.##.
###.#....#S.
#.#.####.##.
....#.#...#.
.####.#.#.#.
........#...
What could be missing in my logic?
int mazeSolver(char *maze, int rows, int cols)
{
    int start = 0;
    int finish = 0;
    for (int i = 0; i < rows*cols; i++) {
        if (maze[i] == 'S') { start = i; }
        if (maze[i] == 'F') { finish = i; }
    }
    if (finish == 0 || start == 0) { return -1; }

    char* bfsq;
    bfsq = new char[rows*cols]; // initialize queue array
    int head = 0;
    int tail = 0;
    bool solved = false;

    char* prd;
    prd = new char[rows*cols]; // initialize predecessor array
    for (int i = 0; i < rows*cols; i++) {
        prd[i] = -1;
    }
    prd[start] = -2; // set the start location
    bfsq[tail] = start;
    tail++;

    int delta[] = {-cols, -1, cols, +1}; // North, West, South, East neighbors

    while (tail > head) {
        int front = bfsq[head];
        head++;
        for (int i = 0; i < 4; i++) {
            int neighbor = front + delta[i];
            if (neighbor/cols < 0 || neighbor/cols >= rows || neighbor%cols < 0 || neighbor%cols >= cols) {
                continue;
            }
            if (prd[neighbor] == -1 && maze[neighbor] != '#') {
                prd[neighbor] = front;
                bfsq[tail] = neighbor;
                tail++;
                if (maze[neighbor] == 'F') { solved = true; }
            }
        }
    }
    if (solved == true) {
        int previous = finish;
        while (previous != start) {
            maze[previous] = '*';
            previous = prd[previous];
        }
        maze[finish] = 'F';
        return 1;
    }
    else { return 0; }
    delete [] prd;
    delete [] bfsq;
}
Iterating through neighbours can be significantly simplified (I know this is somewhat similar to what kobra suggests, but it can be improved further). I use a moves array defining the x and y delta of each move, like so:
int moves[4][2] = {{0,1},{1,0},{0,-1},{-1,0}};
Please note that this not only lists all the possible moves from a given cell, but also lists them in clockwise order, which is useful for some problems.
Now to traverse the array I use a std::queue<pair<int,int> >. This way the current position is defined by the pair of coordinates corresponding to it. Here is how I cycle through the neighbours of a given cell c:
pair<int,int> c;
for (int l = 0; l < 4 /* size of moves */; ++l) {
    int ti = c.first + moves[l][0];
    int tj = c.second + moves[l][1];
    if (ti < 0 || ti >= n || tj < 0 || tj >= m) {
        // This move goes out of the field
        continue;
    }
    // Do something.
}
I know this code is not directly related to yours, but as I teach this kind of problem, trust me: a lot of students were really thankful when I showed them this approach.
Now back to your question - you need to start from the end position and use the prd array to find its parent, then its parent's parent, and so on, until you reach a cell with a negative parent. What you do instead considers all the visited cells, and some of them are not on the shortest path from S to F.
You can break once you set solved = true; this will optimize the algorithm a bit.
I personally think you always find a solution because you have no effective checks for falling off the field (the if (ti < 0 || ti >= n || tj < 0 || tj >= m) bit in my code).
Hope this helps you and gives you some tips how to improve your coding.
A few comments:
You can use the queue container in C++; it's much easier to use.
For this task you can write something like:
int delta[] = {-1, cols, 1, -cols};
and then simply iterate through all four sides; you shouldn't copy-paste the same code.
You will have problems at the boundaries of your array, because you are not checking them.
When you have found the finish, you should break out of the cycle.
And in the last cycle you have an error: it will print '*' in every cell you have visited (not only on the optimal path). It should look like:
while (finish != start)
{
    maze[finish] = '*';
    finish = prd[finish];
}
maze[start] = '*';
And of course this cycle should be inside the last if, because at that moment you don't know yet whether you have reached the end or not.
PS: It's also better to free the memory you allocated in the function.
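On that last point, a small sketch of the same buffers as std::vector, which frees itself on every return path (note that the original delete[] calls sit after the returns and never run):

#include <vector>

// RAII replacement for the manually managed arrays in mazeSolver:
// the vectors release their memory automatically, even on early return.
void buffersSketch(int rows, int cols) {
    std::vector<char> bfsq(rows * cols);    // was: new char[rows*cols]
    std::vector<char> prd(rows * cols, -1); // was: new char[...] plus an init loop
    // ... BFS exactly as before, with no delete[] needed ...
}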