Snake game - random number generator for food tiles - c++

I am trying to make a 16x16 LED Snake game using Arduino (C++).
I need to assign a random grid index for the next food tile.
What I have is a list of indices that are occupied by the snake (snakeSquares).
So, my thought is that I need to generate a list of potential foodSquares. Then I can pick a random index from that list and use the value there for my next food square.
I have some ideas for this, but they seem kind of clunky, so I was looking for some feedback. I am using the Arduino LinkedList.h library for my lists in lieu of the standard library (and random() in place of rand()):
1. Generate a list (foodSquares) containing the integers [0, 255], so that the indices correspond to the values in the list (I don't know of a quick way to do this; it will probably need a for loop).
2. While generating the list of snakeSquares, set foodSquares[i] = -1 for each occupied index i. Afterwards, loop through foodSquares and remove all elements that equal -1.
3. Generate a random number randNum in [0, foodSquares.size() - 1] and make the next food square equal to foodSquares[randNum].
So I guess my question is, will this approach work, and is there a better way to do it?
Thanks.

Potential approach that won't require more lists (a sketch follows these steps):
1. Calculate a random integer representing a number of steps.
2. Take the head or tail as the starting tile.
3. For each step, move to a random free adjacent tile.
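A minimal sketch of that walk, assuming a 16x16 grid addressed as indices 0..255 and a hypothetical isFree() helper checked against the snakeSquares list (neither is from the original post); this is one reading of the suggestion, not a definitive implementation:

const uint8_t GRID = 16;

// Hypothetical: returns the index of the tile where food should appear,
// found by random-walking from a starting tile (e.g. the snake's head).
int randomWalkFood(int start) {
  int current = start;
  int steps = random(1, 64);  // random number of steps
  for (int s = 0; s < steps; s++) {
    int candidates[4];
    int count = 0;
    int x = current % GRID;
    int y = current / GRID;
    // Collect free orthogonal neighbours, staying inside the grid.
    if (x > 0        && isFree(current - 1))    candidates[count++] = current - 1;
    if (x < GRID - 1 && isFree(current + 1))    candidates[count++] = current + 1;
    if (y > 0        && isFree(current - GRID)) candidates[count++] = current - GRID;
    if (y < GRID - 1 && isFree(current + GRID)) candidates[count++] = current + GRID;
    if (count == 0) break;  // boxed in: stop the walk early
    current = candidates[random(count)];
  }
  return current;
}

Note that if the start is boxed in, the walk never leaves the (occupied) starting tile, so a fallback is still needed; the walk also does not pick free tiles uniformly.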

I couldn't completely understand your question, as some of those points are quite a waste of processor time (i.e. points 1 and 2). But the first point can be solved quite easily, with complexity proportional to n, as follows:
// uint16_t: a uint8_t counter would never reach 256, giving an infinite loop
for (uint16_t i = 0; i < 256; i++) {
    // assuming there is a list of food_squares
    food_squares[i] = i;
}
Then, for the second point, you would have to set every food_square to -1... for what? Anyway, a way you could implement this would be as VTT has said, and I will describe it further:
1. Take a random number in [0..255].
2. Is it one of the snake_squares? If so, go back to step one; otherwise, go to step three.
3. As in your third point, use this random number to set the position of the food (food_square[random_number] = some_value).
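A minimal sketch of that rejection loop, assuming snakeSquares is a LinkedList<int> as in the question and adding a small contains() helper; an illustration rather than a drop-in implementation:

#include <LinkedList.h>

// Linear scan helper: is `value` already occupied by the snake?
bool contains(LinkedList<int> &list, int value) {
  for (int i = 0; i < list.size(); i++) {
    if (list.get(i) == value) return true;
  }
  return false;
}

// Resample until the candidate tile is not part of the snake.
int pickFoodSquare(LinkedList<int> &snakeSquares) {
  int candidate;
  do {
    candidate = random(256);  // uniform in [0, 255]
  } while (contains(snakeSquares, candidate));
  return candidate;
}

Rejection sampling stays fast while the snake is short; once the snake covers most of the board, the candidate-list approach from the question becomes the safer choice.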

Related

Find coordinates in a vector c++

I'm creating a game in Qt in C++, and I store every coordinate of a specific size in a vector like:
std::vector<std::unique_ptr<Tile>> all_tiles = createWorld(bgTile);
for (auto &tile : all_tiles) {
    tiles.push_back(std::move(tile));
}
Each level also has some healthpacks, which are stored in a vector as well.
std::vector<std::unique_ptr<Enemy>> all_enemies = getEnemies(nrOfEnemies);
for (auto &healthPackUniquePtr : all_healthpacks) {
    std::shared_ptr<Tile> healthPackPtr{std::move(healthPackUniquePtr)};
    int x = healthPackPtr->getXPos();
    int y = healthPackPtr->getYPos();
    int newYpos = checkOverlapPos(x, y);
    newYpos = checkOverlapEnemy(x, newYpos);
    auto healthPack = std::make_shared<HealthPack>(x, newYpos, healthPackPtr->getValue());
    healthPacks.push_back(healthPack);
}
But now I'm searching for the fastest way to check if my player position is at a healthpack position. So I have to search on two values in a vector: x and y position. Anyone have a suggestion how to do this?
Your 'real' question:
"I have to search on two values in a vector: x and y position. Anyone have a suggestion how to do this?"
is a classic XY question, so I'm ignoring it!
"I'm searching for the fastest way to check if my player position is at a healthpack position."
Now we're talking. The approach you are using now won't scale well as the number of items increases, and you'll need to do something similar for every pair of objects you are interested in. Not good.
Thankfully this problem has been solved (and improved upon) for decades: you need to use a spatial partitioning scheme such as a BSP, BVH, quadtree/octree, etc. The beauty of these schemes is that a single data structure can hold the entire world, making arbitrary item intersection queries trivial (and fast).
You can implement a callback system. When the player moves to a tile, fire a callback on the tile the player is now on. Each tile knows its own state and can add health to the player, or do nothing if there is nothing on that tile. Using this technique, you don't need any searching at all.
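A minimal sketch of that idea, with hypothetical Tile and Player types standing in for the Qt classes from the question:

#include <functional>

struct Player { int health = 100; };

struct Tile {
    std::function<void(Player&)> onEnter;  // empty if the tile is plain
};

// Arrival replaces searching: the tile itself knows what happens.
void movePlayerTo(Player &player, Tile &tile) {
    if (tile.onEnter) tile.onEnter(player);
}

// Usage: a healthpack tile registers its effect once, up front:
// tile.onEnter = [](Player &p) { p.health += 25; };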
If all_healthpacks has fewer than ~50 elements, I wouldn't bother to improve. A simple loop is going to be sufficiently fast.
Otherwise you can split the vector into sectors and check only the elements in the same sector as your player (and maybe a few around it if it's close to the edge).
If you need something that's better for memory, you can use a k-d tree to index the healthpacks and search them quickly (O(log N) time).
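A minimal sketch of the sector idea, assuming a hypothetical HealthPack type with getXPos()/getYPos() accessors like the question's code; the sector size and key layout are arbitrary choices:

#include <memory>
#include <unordered_map>
#include <vector>

const int SECTOR = 32;  // sector edge length in world units (assumed)

int sectorKey(int x, int y) {
    return (y / SECTOR) * 1024 + (x / SECTOR);  // assumes < 1024 sectors per row
}

std::unordered_map<int, std::vector<std::shared_ptr<HealthPack>>> sectors;

void addPack(const std::shared_ptr<HealthPack> &p) {
    sectors[sectorKey(p->getXPos(), p->getYPos())].push_back(p);
}

// Only the player's sector is scanned, not the whole vector.
std::shared_ptr<HealthPack> packAt(int x, int y) {
    auto it = sectors.find(sectorKey(x, y));
    if (it == sectors.end()) return nullptr;
    for (const auto &p : it->second)
        if (p->getXPos() == x && p->getYPos() == y) return p;
    return nullptr;
}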

What is a safety wall and how do I use it?

I've Googled and found zero answers for "safety wall", so I'm pretty sure that's not the correct term. I'll explain myself:
As I've read, I'm talking about taking a two-dimensional array and placing it inside a slightly larger array, with one extra cell added on each side, to make sure I stay safe and never step outside the limits I've created.
What is the right term for this technique and how would I use it?
As others have said, the term you want to search for is "sentinel", or "sentinel control".
You can use a sentinel when you don't know the size or limits in your program. For example, you are writing a program which calculates the average grade of a class, but you don't know how many students are in the class. Or you are reading into an array whose limits you don't know. Then you can use a sentinel value for this job.
Let's look at this example:
int grade;
int totalgrade = 0;
int studentCount = 0;
std::cin >> grade;
while (grade != -1)  // loop until the user enters -1 (the sentinel)
{
    totalgrade = totalgrade + grade;
    studentCount++;
    std::cin >> grade;
}
So if you don't know how many values will be entered by the user, you can use a sentinel to control the loop. You can also read more about sentinel values.
These are usually referred to as "ghost cells", and are often used in numerical simulations or image processing where you are applying a kernel (such as a smoothing or difference operator) to an array. They allow you to apply the kernel without special-casing the edges.
For example; suppose you want to smooth out an image - you could use a kernel like:
0.0 0.1 0.0
0.1 0.6 0.1
0.0 0.1 0.0
You apply this by taking the source image and, for every pixel, computing the value of the destination pixel by centering the kernel on the source pixel and adding up the weighted contributions of the nine covered pixels (0.6 times the value of the source pixel, plus 0.1 times the value of each of the pixels above, below, and to the sides). Do this for every pixel and you'll end up with a smoothed version of your original image.
This works well, but the question is "what do you do at the border cells?" Rather than having complicated if/then logic for the border cases (which can be tricky and can degrade performance), you can just add 1 layer of ghost cells to each side.
Of course, you have to pick values for the ghost cells before you run your algorithm, and how you pick them depends on the algorithm. You might choose to set them all to zero, but in the case of the smoothing kernel this will darken your image at its borders, so that's probably not what you want. A better plan is to fill the ghost cells with the value of the nearest non-ghost cell.
You also need to figure out how many ghost cells you need, which depends on the size of your kernel. For a 3x3 kernel like above, you need 1 layer of ghost cells (to take care of the part of the kernel that might "hang off" the edge). More complicated kernels might require more (a 5x5 kernel would require 2 layers, etc).
You can google "ghost cell computation" to find out more (add 'computation' or you'll get a lot of biology results!)
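A minimal sketch of the padding step, assuming a grayscale image stored as a flat float vector; all names and dimensions here are illustrative:

#include <algorithm>
#include <vector>

// Copy a w x h image into a (w+2) x (h+2) buffer whose one-cell ghost
// border replicates the nearest interior value, so a 3x3 kernel can be
// applied everywhere without edge checks.
std::vector<float> addGhostCells(const std::vector<float> &src, int w, int h) {
    int gw = w + 2, gh = h + 2;
    std::vector<float> dst(gw * gh);
    for (int y = 0; y < gh; y++) {
        int sy = std::min(std::max(y - 1, 0), h - 1);      // clamp to interior row
        for (int x = 0; x < gw; x++) {
            int sx = std::min(std::max(x - 1, 0), w - 1);  // clamp to interior column
            dst[y * gw + x] = src[sy * w + sx];
        }
    }
    return dst;
}

With the padded buffer, the smoothing loop can read every neighbour unconditionally and write its results back to the original-sized image.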

Excluding fields with certain state from 2D array; Game of life

I have a 2D array (100 x 100 in this case) with some states limited within borders, as shown in the picture:
http://tinypic.com/view.php?pic=mimiw5&s=5#.UkK8WIamiBI
Each cell has its own id (a color, for example green is id=1) and an isBorder flag (marked as white on the picture if true). What I am trying to do is extract each set of cells of one state that is limited by borders (a grain), so I can work on each grain separately, which means I need to store all the indices belonging to each grain.
Anyone got an idea how to solve it?
Now that I've read your question again... the algorithm is essentially the same as filling a contiguous area with color. The most common way to do it is a BFS (flood-fill) algorithm.
Simply start from some point you are sure lies inside the current area, then gradually move in every direction, marking traversed fields and putting them into a vector; a sketch follows.
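A minimal sketch of that BFS on a 100x100 grid, assuming each cell exposes id and isBorder as described in the question; the Cell struct is an illustrative stand-in:

#include <queue>
#include <vector>

struct Cell { int id; bool isBorder; };

const int W = 100, H = 100;

// Collect the indices of one grain: all connected cells sharing the
// start cell's id, stopping at border cells.
std::vector<int> collectGrain(const std::vector<Cell> &grid, int start,
                              std::vector<bool> &visited) {
    std::vector<int> grain;
    int state = grid[start].id;
    std::queue<int> frontier;
    frontier.push(start);
    visited[start] = true;
    while (!frontier.empty()) {
        int cur = frontier.front();
        frontier.pop();
        grain.push_back(cur);
        int x = cur % W, y = cur / W;
        int neighbours[4] = {cur - 1, cur + 1, cur - W, cur + W};
        bool inGrid[4] = {x > 0, x < W - 1, y > 0, y < H - 1};
        for (int i = 0; i < 4; i++) {
            int n = neighbours[i];
            if (!inGrid[i] || visited[n]) continue;
            if (grid[n].isBorder || grid[n].id != state) continue;
            visited[n] = true;
            frontier.push(n);
        }
    }
    return grain;
}

Running this from every not-yet-visited, non-border cell labels all the grains in one pass over the array.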
// Edit: A bunch of other insights, made before I understood the question.
I can possibly imagine an algorithm working like this:
std::vector<Coord2D> result = data.filter(DataType::Green);
for (const auto &coord : result) {
    // do some operations on data[coord]
}
The implementation of filter in a simple unoptimized way would be to scan the whole array and push_back matching fields to the vector.
If you need more complicated queries, lazily evaluated proxy objects can work miracles:
data.filter(DataType::Green)
    .filter_having_neighbours(DataType::Red)
    .closest(/*first*/ 100, /*from*/ Coord2D(x, y))
    .apply([](DataField &field) {
        // processing here
    });

Algorithm to produce a difference of two collections of intervals

Problem
Suppose I have two collections of intervals, named A and B. How would I find a difference (a relative complement) in a most time- and memory-efficient way?
Picture for illustration:
Interval endpoints are integers (≤ 2^128 − 1) and they are always both 2^n long and aligned on an m×2^n lattice (so you can make a binary tree out of them).
Intervals can overlap in the input, but this does not affect the output (the result, if flattened, would be the same).
The problem is that there are MANY intervals in both collections (up to 100,000,000), so naïve implementations will probably be slow.
The input is read from two files and it is sorted in such a way that smaller sub-intervals (if overlapping) come immediately after their parents in order of size. For example:
[0,7]
[0,3]
[4,7]
[4,5]
[8,15]
...
What have I tried?
So far, I've been working on an implementation that generates a binary search tree and, while doing so, aggregates neighbouring intervals ([0,3],[4,7] => [0,7]) from both collections, then traverses the second tree and "bumps out" the intervals that are present in both (subdividing the larger intervals in the first tree if necessary).
While this appears to work for small collections, it requires more and more RAM to hold the tree itself, not to mention the time it needs to complete the insertions and removals.
I figured that since the intervals come pre-sorted, I could use some dynamic algorithm and finish in one pass. I am not sure if this is possible, however.
So, how would I go about solving this problem in an efficient way?
Disclaimer: this is not homework but a modification/generalization of an actual real-life problem I am facing. I am programming in C++, but I can accept an algorithm in any [imperative] language.
Recall one of the first programming exercises we all had back in school: writing a calculator program that takes an arithmetic expression from the input line, parses it, and evaluates it. Remember keeping track of the parentheses depth? So here we go.
Analogy: interval start points are opening parentheses, end points are closing parentheses. We keep track of the parentheses depth (nesting). A depth of two marks an intersection of intervals; a depth of one marks the difference of the intervals.
Algorithm (a sketch follows the list):
1. No need to distinguish between A and B; just sort all start points and end points in ascending order.
2. Set the parentheses depth counter to zero.
3. Iterate through the points, starting from the smallest one. If it is a start point, increment the depth counter; if it is an end point, decrement it.
4. Keep track of the intervals where the depth is 1; those are the intervals of the difference of A and B. The intervals where the depth is 2 are the A∩B intersections.
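A minimal sketch of that sweep, assuming each input has already been flattened (no overlaps within A or within B), so that depth 1 really means "covered by exactly one set"; the Event struct and the half-open convention are assumptions:

#include <algorithm>
#include <cstdint>
#include <vector>

struct Event { uint64_t pos; int delta; };  // +1 = start point, -1 = end point

// Returns the spans covered by exactly one of the two flattened sets.
std::vector<std::pair<uint64_t, uint64_t>> oneSetOnly(std::vector<Event> events) {
    std::sort(events.begin(), events.end(),
              [](const Event &a, const Event &b) { return a.pos < b.pos; });
    std::vector<std::pair<uint64_t, uint64_t>> out;
    int depth = 0;
    uint64_t spanStart = 0;
    for (const Event &e : events) {
        if (depth == 1 && e.pos > spanStart)  // a depth-1 span ends here
            out.push_back({spanStart, e.pos});
        depth += e.delta;
        if (depth == 1) spanStart = e.pos;    // a depth-1 span starts here
    }
    return out;
}

If the inputs can be streamed in sorted endpoint order, the events can also be merged on the fly instead of materializing and sorting one large array.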
Your intervals are sorted, which is great. You can do this in linear time with almost no memory.
Start by "flattening" your two sets. That is, for set A, start from the lowest interval and combine any overlapping intervals until you have an interval set that has no overlaps. Then do the same for B.
Now take your two sets and start with the first two intervals. We'll call these the interval indices for A and B: Ai and Bi.
Ai indexes the first interval in A, Bi the first interval in B.
While there are intervals to process, do the following:
Consider the start points of both intervals. Are the start points the same? If so, advance the start point of both intervals to the end point of the smaller interval and emit nothing to your output. Advance the index of the smaller interval to the next interval (that is, if Ai ends before Bi, then Ai advances to the next interval). If both intervals end in the same place, advance both Ai and Bi and emit nothing.
Is one start point earlier than the other? If so, emit the interval from the earlier start point to either a) the start of the later interval, or b) the end of the earlier interval, whichever comes first. If you chose option b, advance the index of the earlier interval.
So for example, if the interval at Ai starts first, you emit the interval from the start of Ai to the start of Bi, or to the end of Ai, whichever is smaller. If Ai ended before the start of Bi, you advance Ai.
Repeat until all intervals are consumed.
P.S. I assume you don't have spare memory to flatten the two interval sets into separate buffers. Do this with two functions: a "get next interval" function that advances the interval indices and does the flattening as necessary, and feeds flattened data to the differencing function.
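For the relative complement specifically (A minus B), a single-pass sketch over two flattened, sorted interval lists might look like this; the Interval type and the half-open convention are assumptions:

#include <cstdint>
#include <vector>

struct Interval { uint64_t lo, hi; };  // half-open [lo, hi)

// One pass over both flattened, sorted lists: emit the parts of A
// not covered by any interval of B.
std::vector<Interval> difference(const std::vector<Interval> &A,
                                 const std::vector<Interval> &B) {
    std::vector<Interval> out;
    size_t bi = 0;
    for (Interval a : A) {
        uint64_t lo = a.lo;
        // Drop B intervals that end before this A interval starts.
        while (bi < B.size() && B[bi].hi <= lo) bi++;
        for (size_t bj = bi; bj < B.size() && B[bj].lo < a.hi; bj++) {
            if (B[bj].lo > lo) out.push_back({lo, B[bj].lo});  // uncovered gap
            if (B[bj].hi > lo) lo = B[bj].hi;                  // clip the front
            if (lo >= a.hi) break;
        }
        if (lo < a.hi) out.push_back({lo, a.hi});              // leftover tail
    }
    return out;
}

Because both lists are consumed strictly in order, the same logic can stream from the two files through the "get next interval" functions described above, without holding either collection in memory.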
What you are looking for is a sweep line algorithm.
Some simple logic will tell you when the sweep line is intersecting an interval in both A and B, and where it intersects only one set.
This is very similar to this problem. Just consider that you have a set of vertical lines passing through the end points of B's segments.
This algorithm's complexity is O((m+n) log(m+n)), which is the cost of the initial sort. The sweep line algorithm itself on a sorted set takes O(m+n).
I think you should use Boost.ICL (the Interval Container Library):
http://www.boost.org/doc/libs/1_50_0/libs/icl/doc/html/index.html
#include <iostream>
#include <boost/icl/interval_set.hpp>

using namespace boost::icl;

int main()
{
    typedef interval_set<int> TIntervalSet;
    TIntervalSet intSetA;
    TIntervalSet intSetB;

    intSetA += discrete_interval<int>::closed( 0,  2);
    intSetA += discrete_interval<int>::closed( 9, 15);
    intSetA += discrete_interval<int>::closed(12, 15);

    intSetB += discrete_interval<int>::closed( 1,  2);
    intSetB += discrete_interval<int>::closed( 4,  7);
    intSetB += discrete_interval<int>::closed( 9, 10);
    intSetB += discrete_interval<int>::closed(12, 13);

    std::cout << intSetA << std::endl;
    std::cout << intSetB << std::endl;
    std::cout << intSetA - intSetB << std::endl;
    return 0;
}
this prints
{[0,2][9,15]}
{[1,2][4,7][9,10][12,13]}
{[0,1)(10,12)(13,15]}

Algorithm for sorting a two-dimensional array based on similarities of adjacent objects

I'm writing a program that is supposed to sort a number of square tiles (each side of which is colored in one of five colors: red, orange, blue, green, and yellow) that are lying next to each other (e.g. 8 rows and 12 columns), in a way that connects as many sides of the same color as possible. So, for instance, a tile whose right side is red should have a tile on its right with a red left side. The result is evaluated by counting how many non-matching pairs of sides exist on the board.
I'm pretty much done with the actual program; I just have some trouble with my sorting algorithm. Right now I'm using a bubble-sort-based algorithm that compares every piece on the board with every other piece, and if switching those two reduces the number of non-matching pairs of sides on the board, it switches them. Here is an abstracted version of the sorting function, as it is now:
for (int i = 0; i < DimensionOfBoard.cx * DimensionOfBoard.cy; i++)
    for (int j = 0; j < DimensionOfBoard.cx * DimensionOfBoard.cy; j++)
    {
        // Comparing a piece with itself is useless
        if (i == j)
            continue;
        // v1 is the number of non-matching sides of both pieces
        // (max is 8, since we have 2 pieces with 4 sides each (duh))
        int v1 = Board[i].GetNonmatchingSides() + Board[j].GetNonmatchingSides();
        // Switch the pieces; if this decreases the value of the board
        // (i.e. increases the number of non-matching sides) we'll switch back
        SwitchPieces(Board[i], Board[j]);
        // If switching worsened the situation ...
        if (v1 < Board[i].GetNonmatchingSides() + Board[j].GetNonmatchingSides())
            // ... we switch back to the initial state
            SwitchPieces(Board[i], Board[j]);
    }
As an explanation: Board is a pointer to an array of Piece objects. Each Piece has four Piece pointers that point to the four adjacent pieces (or NULL, if the Piece is a side/corner piece). And switching actually doesn't switch the pieces themselves, but rather switches the colors. (Instead of exchanging the pieces, it scrapes off the colors of both and switches those.)
This algorithm doesn't work too badly; it significantly improves the value of the board, but it doesn't optimize it as it should. I assume that's because side and corner pieces can't have more than three/two wrong adjacent pieces, since one/two of their sides are empty. I tried to compensate for that (by multiplying Board[i].GetMatchingPieces() by Board[i].GetHowManyNonemptySides() before comparing), but that didn't help a bit.
And that's where I need help. I don't know very many sorting algorithms, let alone ones that work with two-dimensional arrays. So does anyone know of an algorithmic concept that might help me improve my work? Or can anyone see a problem that I haven't found yet? Any help is appreciated. Thank you.
If there was a switch, you have to re-evaluate the board, because there might be previous positions where you could now find an improvement.
Note that you are only going to find a local minimum with those swaps. You might not be able to find any further improvements, but that doesn't mean you've reached the best board configuration.
One way to find a better configuration is to shuffle the board and search for a new local minimum, or to use an algorithm that allows bigger jumps in the state space, e.g. simulated annealing.
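A minimal sketch of simulated annealing on top of the existing pieces, assuming a hypothetical boardCost() helper that returns the total number of non-matching side pairs and reusing the question's SwitchPieces(); the schedule constants are illustrative:

#include <cmath>
#include <cstdlib>

void anneal(Piece *Board, int n) {
    double temperature = 10.0;
    const double cooling = 0.999;    // per-step geometric cooling (arbitrary)
    int cost = boardCost(Board, n);  // assumed helper: non-matching pairs
    while (temperature > 0.01) {
        int i = rand() % n;
        int j = rand() % n;
        if (i == j) continue;
        SwitchPieces(Board[i], Board[j]);
        int newCost = boardCost(Board, n);
        int delta = newCost - cost;
        // Always accept improvements; accept worsenings with a probability
        // that shrinks as the temperature drops.
        if (delta <= 0 || std::exp(-delta / temperature) > (double)rand() / RAND_MAX) {
            cost = newCost;
        } else {
            SwitchPieces(Board[i], Board[j]);  // reject: swap back
        }
        temperature *= cooling;
    }
}

Re-scoring the whole board each step is the simple but slow choice; in practice you would only re-evaluate the sides touching the two swapped pieces.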