Setting a frame rate in a game like Tetris (C++)

So this question is a little more abstract than some. Say I have a game that I only want to render once per second, such as a Tetris clone. I only want to process one input per second, then render the new frame accordingly. Tetris is a grid-based game, so I can't just move the game piece by some amount times the timeDelta float that people usually use in frame-rate examples. How do I go about rendering only one frame per second in a grid-based game? Here's the code I have so far, but it's wrong:
void Engine::Go(){
    while(window.isOpen()){
        if(timeElapsed >= 1000){
            timeElapsed = clock.restart().asMilliseconds();
            ProcessInput();
        }
        UpdateCPU();
        Render();
        timeElapsed = clock.getElapsedTime().asMilliseconds();
        time += timeElapsed;
    }
}
void Engine::ProcessInput(){
    while(window.pollEvent(event)){
        if(event.type == (sf::Event::Closed))
            window.close();
    }
    //process movement detection of piece
    int temp = level.GetGamePieces().size();
    if(sf::Keyboard::isKeyPressed(sf::Keyboard::Left)){
        level.GetGamePieces().at(temp - 1).GetPieceSprite().move(-10, 0);
        std::cout << "left";
        moved = true;
    }
    else if(sf::Keyboard::isKeyPressed(sf::Keyboard::Right)){
        level.GetGamePieces().at(temp - 1).GetPieceSprite().move(10, 0);
        std::cout << "right";
        moved = true;
    }
    else{
        level.GetGamePieces().at(temp - 1).GetPieceSprite().move(0, 10);
        std::cout << "down";
        moved = true;
    }
}
I only want to move the game piece one square at a time, once per second, but I just don't know how to do this.
Edit: Here's the code that renders the frames
void Engine::Render(){
    window.clear();
    //draw wall tiles
    for(int i = 0; i < 160; i++){
        if(i < 60){
            level.GetWallTile().setPosition(0, i * 10);
            window.draw(level.GetWallTile());
        }
        if(i >= 60 && i < 100){
            level.GetWallTile().setPosition((i - 60) * 10, 590);
            window.draw(level.GetWallTile());
        }
        if(i >= 100){
            level.GetWallTile().setPosition(390, (i - 100) * 10);
            window.draw(level.GetWallTile());
        }
    }
    //draw BG tiles
    for(int i = 1; i < 39; i++){
        for(int j = 0; j < 59; j++){
            level.GetBGTile().setPosition(i * 10, j * 10);
            window.draw(level.GetBGTile());
        }
    }
    for(int i = 0; i < level.GetGamePieces().size(); i++){
        window.draw(level.GetGamePieces()[i].GetPieceSprite());
    }
    window.display();
}

You are not accounting for the fact that the user may want to move the shape in between its automatic drops.
You should allow the user to move the shape left, right, and down whenever they want to, but
only force the downward movement once the elapsed time has run out.
When I made a block-based game, here's what I did:
void updateSeconds( double deltaTime ) {
    // If the timer until the shape falls runs out, move the shape down.
    timeUntilShapeDrop -= deltaTime;
    if ( timeUntilShapeDrop > 0 || currentFallingShape->isAnimating() ) {
        return;
    }
    // If the shape collides with the map when moved down,
    // undo the move and lock the shape in place.
    currentFallingShape->move(-1, 0);
    if ( currentFallingShape->isCollisionWithMap( *map ) ) {
        currentFallingShape->move(1, 0);
        // Lock shape in place on the map
        currentFallingShape->setBlocksOnMap( *map );
        lastUpdateLinesCleared = clearFullLines();
    }
    timeUntilShapeDrop = calculateShapeDropTimeInterval();
}
I also included gradual shape animation, if you're interested in one way of doing that. I built a graphical game map on top of the logical map and used the logical map to drive the graphical interpolation. I broke the graphics library I was using, but the logic code is still good for reference or use if you'd like.
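Applied to the question's own SFML code, a minimal sketch of this idea might look like the following. It reuses the members named in the question (window, event, level, UpdateCPU, Render) and introduces two local variables, frameClock and dropTimer, purely for illustration: left/right key presses move the piece immediately, while the downward step is forced only once per second.
void Engine::Go(){
    sf::Clock frameClock;      // measures the length of each loop iteration
    float dropTimer = 0.f;     // seconds accumulated since the last forced drop
    while(window.isOpen()){
        float delta = frameClock.restart().asSeconds();
        // React to input immediately: one grid step (10 px here) per key press event.
        while(window.pollEvent(event)){
            if(event.type == sf::Event::Closed)
                window.close();
            if(event.type == sf::Event::KeyPressed){
                int last = level.GetGamePieces().size() - 1;
                if(event.key.code == sf::Keyboard::Left)
                    level.GetGamePieces().at(last).GetPieceSprite().move(-10, 0);
                else if(event.key.code == sf::Keyboard::Right)
                    level.GetGamePieces().at(last).GetPieceSprite().move(10, 0);
            }
        }
        // Force the drop only once per second, independent of the frame rate.
        dropTimer += delta;
        if(dropTimer >= 1.f){
            dropTimer -= 1.f;
            int last = level.GetGamePieces().size() - 1;
            level.GetGamePieces().at(last).GetPieceSprite().move(0, 10);
        }
        UpdateCPU();
        Render();
    }
}
Using KeyPressed events instead of sf::Keyboard::isKeyPressed means each tap moves the piece exactly one square, which is usually what you want in a grid-based game.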

Adjusting the speed of 2 different object on console

I'm making a simple game like Google's Dinosaur Game. As you pass the obstacles, their speed increases, and so does the Dino's. What I want to do is keep the Dino's speed constant while the obstacles' speed increases.
while (1)
{
    if (GetAsyncKeyState(VK_SPACE) < 0 || action) // checking if the user presses SPACE
    {
        if (!action) button = _getch();
        if (button == VK_SPACE) // rest is making the dino move up and down.
        {
            action = 1;
            if (loop < 6)
            {
                std::queue<Position> tempQue = dinoPos;
                for (size_t i = 0; i < DINOSIZE; i++)
                {
                    setCursor(dinoPos.front().x, dinoPos.front().y); dinoPos.pop();
                    std::cout << " ";
                }
                for (size_t i = 0; i < DINOSIZE; i++)
                {
                    setCursor(tempQue.front().x, tempQue.front().y - 1);
                    tempQue.front().y -= 1;
                    dinoPos.push(tempQue.front());
                    tempQue.pop();
                    std::cout << "D";
                }
            }
            loop++;
            if (loop == 12)
            {
                loop = 0;
                action = 0;
            }
        }
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(speed)); // using this for speeding up the game
}
You don't want to use sleep_for. It can sleep for longer than requested and offers no timing guarantees. What you want is a fixed-update game loop. You can look at popular game engines to see how it's implemented.
But it boils down to the following pseudo-code:
// main loop
// compute a frame number from the current time.
// pick a frameDuration shorter than your expected frame time, say 10 ms.
int frame = now() / frameDuration;
while(true)
{
    int frameNow = now() / frameDuration;
    while(frame != frameNow)
    {
        Update();
        frame++;
    }
    render();
}
With the following update function: (pseudo-code again)
// Here you advance your objects by the fixed duration, say 10 ms.
// Just use a different speed for each.
Update()
{
    obstacle.position += obstacleSpeed * frameDuration;
    character.position += characterSpeed * frameDuration;
}
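A concrete, self-contained sketch of that pseudo-code in standard C++ might look like this; the Body struct, the speeds, and the 2-second demo duration are illustrative assumptions, while the 10 ms step matches the suggestion above:
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frameDuration = std::chrono::milliseconds(10);  // fixed update step
    const double dt = 0.010;                                   // the same step, in seconds

    struct Body { double position = 0, speed = 40; };
    Body dino;                                                 // speed stays constant
    Body obstacle;                                             // speed will grow

    auto frame = clock::now().time_since_epoch() / frameDuration;
    const auto start = clock::now();
    while (clock::now() - start < std::chrono::seconds(2)) {   // stand-in for the real game loop
        const auto frameNow = clock::now().time_since_epoch() / frameDuration;
        while (frame != frameNow) {                            // catch up in fixed-size steps
            dino.position     += dino.speed * dt;              // dino speed never changes
            obstacle.position += obstacle.speed * dt;
            obstacle.speed    += 2.0 * dt;                     // only the obstacle accelerates
            ++frame;
        }
        std::printf("dino=%.1f obstacle=%.1f\r", dino.position, obstacle.position);  // "render"
    }
    std::printf("\n");
}
However fast or slow the outer loop runs, each object advances in exact 10 ms steps, so the dino's and obstacle's speeds stay independent of the rendering rate.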

Tic Tac Toe: Evaluating Heuristic Value of a Node

Pardon me if this question already exists; I've searched a lot but haven't found an answer to the question I want to ask. So, basically, I'm trying to implement a Tic-Tac-Toe AI that uses the Minimax algorithm to make moves.
However, one thing I don't get is that when Minimax is used on an empty board, the value returned is always 0 (which makes sense, because the game always ends in a draw if both players play optimally).
So Minimax always chooses the first tile as the best move when the AI is X (since all moves return 0 as their value). The same happens for the second move, where it always chooses the second tile instead. How can I fix this to make my AI pick the move with the higher probability of winning? Here are the evaluation and Minimax functions I use (with alpha-beta pruning):
int evaluate(char board[3][3], char AI)
{
for (int row = 0; row<3; row++)
{
if (board[row][0] != '_' && board[row][0] == board[row][1] && board[row][1] == board[row][2])
{
if (board[row][0]==AI)
{
return +10;
}
else
{
return -10;
}
}
}
for (int col = 0; col<3; col++)
{
if (board[0][col] != '_' && board[0][col] == board[1][col] && board[1][col] == board[2][col])
{
if (board[0][col]==AI)
{
return +10;
}
else
{
return -10;
}
}
}
if (board[1][1] != '_' && ((board[0][0]==board[1][1] && board[1][1]==board[2][2]) || (board[0][2]==board[1][1] && board[1][1]==board[2][0])))
{
if (board[1][1]==AI)
{
return +10;
}
else
{
return -10;
}
}
return 0;
}
int Minimax(char board[3][3], bool AITurn, char AI, char Player, int depth, int alpha, int beta)
{
bool breakout = false;
int score = evaluate(board, AI);
if(score == 10)
{
return score - depth;
}
else if(score == -10)
{
return score + depth;
}
else if(NoTilesEmpty(board))
{
return 0;
}
if(AITurn == true)
{
int bestvalue = -1024;
for(int i = 0; i < 3; i++)
{
for(int j = 0; j<3; j++)
{
if(board[i][j] == '_')
{
board[i][j] = AI;
bestvalue = max(bestvalue, Minimax(board, false, AI, Player, depth+1, alpha, beta));
alpha = max(bestvalue, alpha);
board[i][j] = '_';
if(beta <= alpha)
{
breakout = true;
break;
}
}
}
if(breakout == true)
{
break;
}
}
return bestvalue;
}
else if(AITurn == false)
{
int bestvalue = +1024;
for(int i = 0; i < 3; i++)
{
for(int j = 0; j<3; j++)
{
if(board[i][j] == '_')
{
board[i][j] = Player;
bestvalue = min(bestvalue, Minimax(board, true, AI, Player, depth+1, alpha, beta));
beta = min(bestvalue, beta);
board[i][j] = '_';
if(beta <= alpha)
{
breakout = true;
break;
}
}
}
if(breakout == true)
{
break;
}
}
return bestvalue;
}
}
Minimax assumes optimal play, so maximizing the "probability of winning" is not a meaningful notion: since the other player can force a draw but cannot force a win, they will always force a draw. If you want to play optimally against a player who is not perfectly rational (which, of course, is one of the only two ways to win*), you'll need to assume some probability distribution over the opponent's moves and use something like expectiminimax, where with some probability the opponent's move is overridden by a random mistake. Alternatively, you can deliberately restrict the ply of the minimax search, using a heuristic for the opponent's play beyond a certain depth (but still searching the game tree for your own moves).
* The other one is not to play.
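As a rough illustration of the expectiminimax idea (not the asker's code), here is how an opponent node could be scored if you assume the opponent plays optimally with probability 1 - eps and blunders to a uniformly random move with probability eps; only the combination of already-computed child values is shown:
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// childScores are the minimax values (from our point of view) of the positions
// reachable by each of the opponent's legal replies.
double opponentNodeValue(const std::vector<double>& childScores, double eps)
{
    const double best = *std::min_element(childScores.begin(), childScores.end());
    const double mean = std::accumulate(childScores.begin(), childScores.end(), 0.0)
                        / childScores.size();
    return (1.0 - eps) * best + eps * mean;   // mostly pessimistic, slightly hopeful
}

int main()
{
    // Replies worth -10 (we lose), 0 (draw), 0 (draw): with eps = 0.1 the node is
    // valued slightly above the pure minimax value of -10.
    std::printf("%.2f\n", opponentNodeValue({-10.0, 0.0, 0.0}, 0.1));
}
With such values, two moves that plain minimax scores identically (both 0) can now differ, because one of them leaves the opponent more chances to blunder.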
Organize your code into smaller routines so that it is tidier and easier to debug. Apart from the recursive minimax function, a generator of all possible valid moves and a robust evaluation subroutine are essential (both of which seem to be lacking here).
For example, at the beginning of the game the evaluation algorithm should return a non-zero score: every position should have a relative score (e.g. the middle square may be weighted slightly higher than the corners).
Your minimax boundary condition (returning only when there are no empty cells) is flawed, as it will keep evaluating even when a winning/losing move occurred in the preceding ply. Such conditions become worse in more complex AI games.
If you are new to minimax, you can find plenty of ready-to-compile sample code on Code Review.
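A hedged sketch of the positional-weight idea mentioned above, written to match the question's char board[3][3] / '_' convention; the exact weights are illustrative, not prescribed:
// Tie-break score for non-terminal positions: centre > corners > edges.
// Positive favours the AI, negative favours the opponent.
int positionalScore(char board[3][3], char AI, char Player)
{
    static const int weight[3][3] = { { 2, 1, 2 },
                                      { 1, 3, 1 },
                                      { 2, 1, 2 } };
    int score = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            if (board[i][j] == AI)     score += weight[i][j];
            if (board[i][j] == Player) score -= weight[i][j];
        }
    // One option is to return this, instead of the flat 0, at the end of evaluate()
    // when searching to a limited depth.
    return score;
}
Because these values are much smaller than the +/-10 win scores, they only break ties between otherwise equal moves rather than outweighing a forced win or loss.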

How do I trace back a path using A* in C++? [closed]

I've been trying to implement A* for weeks so an enemy can chase the player in my game, and I can't get it to work. I've been working on it the entire weekend, and I even ended up scrapping most of it and rewriting it. I can draw a path from the starting location to the goal, but I can't trace it back, as in actually record the path. I'm using Vector2f (an ordered pair of floats) and Sprite from SFML, but all the code there is pretty simple, so you won't really need to understand it.
Edit: the problem is with Node.cameFrom. For some reason, it doesn't cout anything but the walls.
Here's Node.h
#ifndef NODE_H
#define NODE_H
#include <SFML/Graphics.hpp>
using namespace sf;

class Node {
public:
    Vector2f pos;
    // Distance traveled already to reach node
    int level;
    // Level + estimated dist to goal
    int priority;
    Node *cameFrom;
    Node(Vector2f npos, int lv, Vector2f dest, Node *cf=nullptr);
    bool operator == (const Node &nhs) const {
        return nhs.priority == priority;
    }
};
#endif // NODE_H
Node.cpp
#include "Node.h"
#include <SFML/Graphics.hpp>
#include <math.h>
#include <iostream>
using namespace std;
using namespace sf;
int estimatedDist(Vector2f pos, Vector2f dest) {
return abs(dest.x - pos.x) + abs(dest.y - pos.y);
}
Node::Node(Vector2f npos, int lv, Vector2f dest, Node *cf) {
cameFrom = cf;
level = lv;
pos = npos;
priority = level + estimatedDist(pos, dest);
}
Enemy.cpp pathfind functions
bool occupies(Vector2f pos, vector<Wall> walls) {
for (unsigned w = 0; w < walls.size(); w++) {
if (walls.at(w).collisionBox.getGlobalBounds().contains(pos.x * 32, pos.y * 32)) {
return true;
}
}
return false;
}
bool nFind(Node n, vector<Node> nodes) {
for (unsigned i = 0; i < nodes.size(); i++) {
if (nodes.at(i).pos == n.pos) {
return true;
}
}
return false;
}
void Enemy::pathFind(Vector2f dest, vector<Wall> walls) {
char fullMap[32][22];
vector<Node> openSet;
vector<Node> closedSet;
int xStart, yStart;
for (unsigned y = 0; y < 22; y++) {
for (unsigned x = 0; x < 32; x++) {
if (sprite.getGlobalBounds().top >= y * 32 && sprite.getGlobalBounds().top <= (y + 1) * 32) {
if (sprite.getGlobalBounds().left >= x * 32 && sprite.getGlobalBounds().left <= (x + 1) * 32) {
xStart = x;
yStart = y;
}
} if (occupies(Vector2f(x, y), walls)) {
fullMap[x][y] = '2';
} else {
fullMap[x][y] = ' ';
}
}
}
fullMap[int(dest.x)][int(dest.y)] = 'D';
Node *current = new Node(Vector2f(xStart, yStart), 0, dest);
fullMap[int(current->pos.x)][int(current->pos.y)] = '2';
openSet.push_back(*current);
while (openSet.size() > 0) {
sort(openSet.begin(), openSet.end(), sortByPriority());
*current = openSet.front();
if (current->pos == dest) {
cout << "We gots it ";
for (unsigned y = 0; y < 22; y++) {
for (unsigned x = 0; x < 32; x++) {
if (occupies(Vector2f(x, y), walls)) {
fullMap[x][y] = '2';
} else {
fullMap[x][y] = ' ';
}
}
}
while (current->cameFrom) {
fullMap[int(current->pos.x)][int(current->pos.y)] = 'P';
current = current->cameFrom;
for (unsigned y = 0; y < 22; y++) {
for (unsigned x = 0; x < 32; x++) {
cout << fullMap[x][y];
}
cout << endl;
}
cout << endl;
} for (unsigned y = 0; y < 22; y++) {
for (unsigned x = 0; x < 32; x++) {
cout << fullMap[x][y];
}
cout << endl;
}
cout << endl;
return;
}
openSet.erase(remove(openSet.begin(), openSet.end(), *current), openSet.end());
closedSet.push_back(*current);
fullMap[int(current->pos.x)][int(current->pos.y)] = '2';
vector<Node> neighbors;
neighbors.push_back(Node(Vector2f(current->pos.x - 1, current->pos.y - 1), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x, current->pos.y - 1), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x + 1, current->pos.y - 1), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x + 1, current->pos.y), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x + 1, current->pos.y + 1), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x, current->pos.y + 1), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x - 1, current->pos.y + 1), current->level + 1, dest));
neighbors.push_back(Node(Vector2f(current->pos.x - 1, current->pos.y), current->level + 1, dest));
for (unsigned i = 0; i < neighbors.size(); i++) {
if (nFind(neighbors.at(i), closedSet) ||
neighbors.at(i).pos.x > 22 ||
neighbors.at(i).pos.y > 32 ||
neighbors.at(i).pos.x < 0 ||
neighbors.at(i).pos.y < 0 ||
occupies(neighbors.at(i).pos, walls)) {
continue;
} if (!nFind(neighbors.at(i), openSet)) {
openSet.push_back(neighbors.at(i));
}
neighbors.at(i).cameFrom = current;
}
}
}
An MCVE would help us try it on our side (see zett42's comment).
From just a quick look I can give you some pointers on where to look while debugging, but no clear answer.
These lines look highly suspicious:
Node *current = new Node(Vector2f(xStart, yStart), 0, dest);
// ^ no delete in source, will leak memory
*current = openSet.front();
// overwrites the heap object via copy assignment,
// but the pointer itself stays the same,
// so all of your nodes will end up with "cameFrom"
// pointing at this same piece of memory.
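One hedged way around that (a sketch, not a drop-in fix for the whole function) is to give every expanded node its own storage and let cameFrom point into that storage instead of reusing a single heap object; here a std::vector of std::unique_ptr owns the nodes, using the Node class from the question:
#include <memory>
#include <vector>
#include "Node.h"

// Each call returns a stable pointer that stays valid for the lifetime of
// 'storage', so following Node::cameFrom chains during path reconstruction is safe.
Node* makeNode(std::vector<std::unique_ptr<Node>>& storage,
               Vector2f pos, int level, Vector2f dest, Node* cameFrom)
{
    storage.push_back(std::make_unique<Node>(pos, level, dest, cameFrom));
    return storage.back().get();
}
The openSet/closedSet can then hold Node* instead of Node copies, so the cameFrom pointers recorded during the search still refer to live, distinct nodes when you trace the path back.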
Overall this code looks a bit complicated. Is your game a fixed 32x22 grid of square tiles? Why the "walls" vector, then?
I would maintain only a single global tile map as the level (the A* search then shouldn't damage it, but rather create its own copy for the search, or better yet a new map of to-reach costs; that would probably simplify the code a lot).
xStart, yStart can be computed directly, no need to iterate it every loop:
xStart = int(sprite.getGlobalBounds().left)>>5; // left/32
yStart = int(sprite.getGlobalBounds().top)>>5; // top/32
The bool operator == (const Node &nhs) const looks unhealthy, but it's not even used anywhere.
And to see whether a neighbour is in a wall, you don't need the O(N) occupies; just test the map for == '2'. (That is, if the code is designed that way; I didn't verify it will work as expected if you change it right away in your code.)
Overall I don't like that code; you can streamline it into a shorter version if you focus on what data you want to process and how, and stop moving objects back and forth through several lists. For A*, IIRC, you should only need a single sorted queue (with insert-at-position to keep it sorted) plus a map field marking which squares were already processed.
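For illustration only, a self-contained sketch of that structure on a plain tile grid (no SFML, a hypothetical isWall callback, 4-way movement, unit step cost) might look like this; it is meant to show the open queue plus came-from layout, not to replace the asker's function:
#include <cmath>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Cell { int x, y; };

// Returns the path from start to goal (inclusive), or an empty vector if unreachable.
std::vector<Cell> aStar(Cell start, Cell goal, int width, int height,
                        const std::function<bool(int, int)>& isWall)
{
    const int n = width * height;
    auto index = [width](int x, int y) { return y * width + x; };
    auto heuristic = [&](int x, int y) { return std::abs(goal.x - x) + std::abs(goal.y - y); };

    std::vector<int> cost(n, -1);       // best known cost to reach each tile (-1 = unvisited)
    std::vector<int> cameFrom(n, -1);   // index of the tile we reached this one from

    using Entry = std::pair<int, int>;  // (priority = cost + heuristic, tile index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

    cost[index(start.x, start.y)] = 0;
    open.push({heuristic(start.x, start.y), index(start.x, start.y)});

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!open.empty()) {
        const int cur = open.top().second;
        open.pop();
        const int cx = cur % width, cy = cur / width;
        if (cx == goal.x && cy == goal.y) {
            std::vector<Cell> path;                    // walk the came-from chain back
            for (int i = cur; i != -1; i = cameFrom[i])
                path.insert(path.begin(), Cell{i % width, i / width});
            return path;
        }
        for (int d = 0; d < 4; ++d) {
            const int nx = cx + dx[d], ny = cy + dy[d];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height || isWall(nx, ny))
                continue;
            const int next = index(nx, ny);
            const int newCost = cost[cur] + 1;
            if (cost[next] == -1 || newCost < cost[next]) {   // first visit or cheaper route
                cost[next] = newCost;
                cameFrom[next] = cur;
                open.push({newCost + heuristic(nx, ny), next});
            }
        }
    }
    return {};                                         // no path found
}
Called with a lambda that checks the question's fullMap for '2', the returned vector is exactly the sequence of tiles you would mark with 'P'.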
Are those Vector2f positions important, for example:
...
P#D
...
If player "P" stands in lower part of square ("#" is wall, "D" is destination), should the A* find the bottom path, or you need only "tile" accuracy and the upper path would be good too?
It's not clear to me from you question, whether you work with sub-tile accuracy or not, if not, then you can drop most of that Vector2f stuff and work only in the tile coordinates.
With sub-tile accuracy you can probably still drop most of it, but if actually tile has "32" size, and player is for example only "3" wide, so he can use the tile as some kind of "area" and move across it by different lines, avoiding in example above to go to full centre of middle tile, saving distance... Then you need to calculate those sub-tile positions somehow in to get at least roughly accurate "shortest" path.
When I was working on one game, we had a linked list of nodes (a classic math graph) instead of tiles; each node had its "area radius", and after the shortest node-to-node path was found, another iterative algorithm did a few passes moving each node position to a shadow-node position that was still within the radius but closer to the two neighbouring shadow-nodes. After hitting the maximum number of iterations, or once the shadow positions stopped changing much (usually 3-5 iterations at most), it stopped "smoothing" the path and returned it. This way soldiers ran across the desert in almost straight lines, while the waypoint nodes were actually a sparse grid with a 20 m area radius, so a soldier would typically travel only 2-3 nodes, starting and ending far from the node centres, even though the raw path through the node graph itself was almost zig-zag.
For every tile, you need its cost (the cost of getting there plus the heuristic) and the identity of the neighbouring tile from which you reached it.
The algorithm keeps a "balloon" of points around the start point, and the best point is analysed first. So if the path is simple, the balloon is very elongated. If it is convoluted, the balloon is roundish, and many paths get abandoned because they are hemmed in by walls and already-visited tiles.

Minimax with alpha-beta pruning problems

I'm making a C++ program for the game chopsticks.
It's a really simple game with only 625 total game states (and it's even fewer if you account for symmetry and unreachable states). I have read up on minimax and alpha-beta pruning, mostly for tic-tac-toe, but the problem I was having is that in tic-tac-toe it's impossible to loop back to a previous state, while that can easily happen in chopsticks. So when running the code it would end up with a stack overflow.
I fixed this by adding flags for previously visited states so that they can be avoided (I don't know if that's the right way to do it), but now the problem I have is that the output is not symmetric as expected.
For example, in the start state of the game each player has one finger on each hand, so everything is symmetric. The program tells me that the best move is to hit my right hand with my left, but not the opposite.
My source code is -
#include <iostream>
#include <array>
#include <vector>
#include <limits>
std::array<int, 625> t; //Flags for visited states.
std::array<int, 625> f; //Flags for visited states.
int no = 0; //Unused. For debugging.
class gamestate
{
public:
gamestate(int x, bool t) : turn(t) //Constructor.
{
for (int i = 0; i < 2; i++)
for (int j = 0; j < 2; j++) {
val[i][j] = x % 5;
x /= 5;
}
init();
}
void print() //Unused. For debugging.
{
for (int i = 0; i < 2; i++) {
for (int j = 0; j < 2; j++)
std::cout << val[i][j] << "\t";
std::cout << "\n";
}
std::cout << "\n";
}
std::array<int, 6> canmove = {{ 1, 1, 1, 1, 1, 1 }}; //List of available moves.
bool isover() //Is the game over.
{
return ended;
}
bool won() //Who won the game.
{
return winner;
}
bool isturn() //Whose turn it is.
{
return turn;
}
std::vector<int> choosemoves() //Choose the best possible moves in the current state.
{
std::vector<int> bestmoves;
if(ended)
return bestmoves;
std::array<int, 6> scores;
int bestscore;
if(turn)
bestscore = std::numeric_limits<int>::min();
else
bestscore = std::numeric_limits<int>::max();
scores.fill(bestscore);
for (int i = 0; i < 6; i++)
if (canmove[i]) {
t.fill(0);
f.fill(0);
gamestate *play = new gamestate(this->playmove(i),!turn);
scores[i] = minimax(play, 0, std::numeric_limits<int>::min(), std::numeric_limits<int>::max());
std::cout<<i<<": "<<scores[i]<<std::endl;
delete play;
if (turn) if (scores[i] > bestscore) bestscore = scores[i];
if (!turn) if (scores[i] < bestscore) bestscore = scores[i];
}
for (int i = 0; i < 6; i++)
if (scores[i] == bestscore)
bestmoves.push_back(i);
return bestmoves;
}
private:
std::array<std::array<int, 2>, 2 > val; //The values of the fingers.
bool turn; //Whose turn it is.
bool ended = false; //Has the game ended.
bool winner; //Who won the game.
void init() //Check if the game has ended and find the available moves.
{
if (!(val[turn][0]) && !(val[turn][1])) {
ended = true;
winner = !turn;
canmove.fill(0);
return;
}
if (!(val[!turn][0]) && !(val[!turn][1])) {
ended = true;
winner = turn;
canmove.fill(0);
return;
}
if (!val[turn][0]) {
canmove[0] = 0;
canmove[1] = 0;
canmove[2] = 0;
if (val[turn][1] % 2)
canmove[5] = 0;
}
if (!val[turn][1]) {
if (val[turn][0] % 2)
canmove[2] = 0;
canmove[3] = 0;
canmove[4] = 0;
canmove[5] = 0;
}
if (!val[!turn][0]) {
canmove[0] = 0;
canmove[3] = 0;
}
if (!val[!turn][1]) {
canmove[1] = 0;
canmove[4] = 0;
}
}
int playmove(int mov) //Play a move to get the next game state.
{
auto newval = val;
switch (mov) {
case 0:
newval[!turn][0] = (newval[turn][0] + newval[!turn][0]);
newval[!turn][0] = (5 > newval[!turn][0]) ? newval[!turn][0] : 0;
break;
case 1:
newval[!turn][1] = (newval[turn][0] + newval[!turn][1]);
newval[!turn][1] = (5 > newval[!turn][1]) ? newval[!turn][1] : 0;
break;
case 2:
if (newval[turn][1]) {
newval[turn][1] = (newval[turn][0] + newval[turn][1]);
newval[turn][1] = (5 > newval[turn][1]) ? newval[turn][1] : 0;
} else {
newval[turn][0] /= 2;
newval[turn][1] = newval[turn][0];
}
break;
case 3:
newval[!turn][0] = (newval[turn][1] + newval[!turn][0]);
newval[!turn][0] = (5 > newval[!turn][0]) ? newval[!turn][0] : 0;
break;
case 4:
newval[!turn][1] = (newval[turn][1] + newval[!turn][1]);
newval[!turn][1] = (5 > newval[!turn][1]) ? newval[!turn][1] : 0;
break;
case 5:
if (newval[turn][0]) {
newval[turn][0] = (newval[turn][1] + newval[turn][0]);
newval[turn][0] = (5 > newval[turn][0]) ? newval[turn][0] : 0;
} else {
newval[turn][1] /= 2;
newval[turn][0] = newval[turn][1];
}
break;
default:
std::cout << "\nInvalid move!\n";
}
int ret = 0;
for (int i = 1; i > -1; i--)
for (int j = 1; j > -1; j--) {
ret+=newval[i][j];
ret*=5;
}
ret/=5;
return ret;
}
static int minimax(gamestate *game, int depth, int alpha, int beta) //Minimax searching function with alpha beta pruning.
{
if (game->isover()) {
if (game->won())
return 1000 - depth;
else
return depth - 1000;
}
if (game->isturn()) {
for (int i = 0; i < 6; i++)
if (game->canmove[i]&&t[game->playmove(i)]!=-1) {
int score;
if(!t[game->playmove(i)]){
t[game->playmove(i)] = -1;
gamestate *play = new gamestate(game->playmove(i),!game->isturn());
score = minimax(play, depth + 1, alpha, beta);
delete play;
t[game->playmove(i)] = score;
}
else
score = t[game->playmove(i)];
if (score > alpha) alpha = score;
if (alpha >= beta) break;
}
return alpha;
} else {
for (int i = 0; i < 6; i++)
if (game->canmove[i]&&f[game->playmove(i)]!=-1) {
int score;
if(!f[game->playmove(i)]){
f[game->playmove(i)] = -1;
gamestate *play = new gamestate(game->playmove(i),!game->isturn());
score = minimax(play, depth + 1, alpha, beta);
delete play;
f[game->playmove(i)] = score;
}
else
score = f[game->playmove(i)];
if (score < beta) beta = score;
if (alpha >= beta) break;
}
return beta;
}
}
};
int main(void)
{
gamestate test(243, true);
auto movelist = test.choosemoves();
for(auto i: movelist)
std::cout<<i<<std::endl;
return 0;
}
I'm passing the moves around in a sort of base-5-to-decimal encoding, as each hand can have values from 0 to 4.
In the code I have input the state:
3 3
4 1
The output says I should hit my right hand (1) against the opponent's right (3), but it does not say I should hit it against my opponent's left (also 3).
I think the problem is because of the way I handled infinite looping.
What would be the right way to do it? Or if that is the right way, then how do I fix the problem?
Also please let me know how I can improve my code.
Thanks a lot.
Edit:
I have changed my minimax function as follows to ensure that infinite loops are scored above losing, but I'm still not getting symmetry. I also made a function to add depth to the score:
static float minimax(gamestate *game, int depth, float alpha, float beta) //Minimax searching function with alpha beta pruning.
{
if (game->isover()) {
if (game->won())
return 1000 - std::atan(depth) * 2000 / std::acos(-1);
else
return std::atan(depth) * 2000 / std::acos(-1) - 1000;
}
if (game->isturn()) {
for (int i = 0; i < 6; i++)
if (game->canmove[i]) {
float score;
if(!t[game->playmove(i)]) {
t[game->playmove(i)] = -1001;
gamestate *play = new gamestate(game->playmove(i), !game->isturn());
score = minimax(play, depth + 1, alpha, beta);
delete play;
t[game->playmove(i)] = score;
} else if(t[game->playmove(i)] == -1001)
score = 0;
else
score = adddepth(t[game->playmove(i)], depth);
if (score > alpha) alpha = score;
if (alpha >= beta) break;
}
return alpha;
} else {
for (int i = 0; i < 6; i++)
if (game->canmove[i]) {
float score;
if(!f[game->playmove(i)]) {
f[game->playmove(i)] = -1001;
gamestate *play = new gamestate(game->playmove(i), !game->isturn());
score = minimax(play, depth + 1, alpha, beta);
delete play;
f[game->playmove(i)] = score;
} else if(f[game->playmove(i)] == -1001)
score = 0;
else
score = adddepth(f[game->playmove(i)], depth);
if (score < beta) beta = score;
if (alpha >= beta) break;
}
return beta;
}
}
This is the function to add depth -
float adddepth(float score, int depth) //Add depth to pre-calculated score.
{
int olddepth;
float newscore;
if(score > 0) {
olddepth = std::tan((1000 - score) * std::acos(-1) / 2000);
depth += olddepth;
newscore = 1000 - std::atan(depth) * 2000 / std::acos(-1);
} else {
olddepth = std::tan((1000 + score) * std::acos(-1) / 2000);
depth += olddepth;
newscore = std::atan(depth) * 2000 / std::acos(-1) - 1000;
}
return newscore;
}
Disclaimer: I don't know C++, and I frankly haven't bothered to read the game rules. I have now read the rules, and still stand by what I said...but I still don't know C++. Still, I can present some general knowledge of the algorithm which should set you off in the right direction.
Asymmetry is not in itself a bad thing. If two moves are exactly equivalent, it should choose one of them and not stand helpless like Buridan's ass. You should, in fact, be sure that any agent you write has some method of choosing arbitrarily between policies which it cannot distinguish.
You should think more carefully about the utility scheme implied by refusing to visit previous states. Pursuing an infinite loop is a valid policy, even if your current representation of it will crash the program; maybe the bug is the overflow, not the policy that caused it. If given the choice between losing the game, and refusing to let the game end, which do you want your agent to prefer?
Playing ad infinitum
If you want your agent to avoid losing at all costs -- that is, you want it to prefer indefinite play over loss -- then I would suggest treating any repeated state as a terminal state and assigning it a value somewhere between winning and losing. After all, in a sense it is terminal -- this is the loop the game will enter forever and ever and ever, and the definite result of it is that there is no winner. However, remember that if you are using simple minimax (one utility function, not two), then this implies that your opponent also regards eternal play as a middling result.
It may sound ridiculous, but maybe playing unto infinity is actually a reasonable policy. Remember that minimax assumes the worst case -- a perfectly rational foe whose interests are the exact opposite of yours. But if, for example, you're writing an agent to play against a human, then the human will either err logically, or will eventually decide they would rather end the game by losing -- so your agent will benefit from patiently staying in this Nash equilibrium loop!
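A self-contained sketch of "repetition counts as a middling terminal" on a toy state graph (deliberately not the asker's gamestate class; the +/-1000 scores only echo the question's win/loss values):
#include <algorithm>
#include <iostream>
#include <vector>

// Toy game: states 0..N-1, each with a list of successor states.
// terminal[s] is +1000 (win for the player to move), -1000 (loss), or 0 (not terminal).
struct Game {
    std::vector<std::vector<int>> next;
    std::vector<int> terminal;
};

// Negamax where revisiting a state on the current path scores 0: strictly better
// than losing, strictly worse than winning, for both players alike.
int negamaxDraw(const Game& g, int s, std::vector<bool>& onPath)
{
    if (g.terminal[s] != 0) return g.terminal[s];
    if (onPath[s]) return 0;                   // repeated state: treat the loop as a draw
    onPath[s] = true;
    int best = -2000;                          // assumes every non-terminal state has a move
    for (int t : g.next[s])
        best = std::max(best, -negamaxDraw(g, t, onPath));
    onPath[s] = false;                         // unmark when backtracking
    return best;
}

int main()
{
    // 0 -> 1, 1 -> {0, 2}; state 2 is a loss for whoever must move there.
    Game g{ { {1}, {0, 2}, {} }, { 0, 0, -1000 } };
    std::vector<bool> onPath(3, false);
    std::cout << negamaxDraw(g, 0, onPath) << "\n";   // prints -1000
}
Note how the side to move at state 0 still loses: the opponent, who is the one choosing at state 1, prefers the outright win over looping forever, which is exactly the caveat above about the opponent also regarding eternal play as a middling result.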
Alright, let's end the game already
If you want your agent to prefer that the game end eventually, then I would suggest implementing a living penalty -- a modifier added to your utility which decreases as a function of time (be it asymptotic or without bound). Implemented carefully, this can guarantee that, eventually, any end is preferable to another turn. With this solution as well, you need to be careful about considering what preferences this implies for your opponent.
A third way
Another common solution is to depth-limit your search and implement an evaluation function. This takes the game state as its input and just spits out a utility value which is its best guess at the end result. Is this provably optimal? No, not unless your evaluation function is just completing the minimax, but it means your algorithm will finish within a reasonable time. By burying this rough estimate deep enough in the tree, you wind up with a pretty reasonable model. However, this produces an incomplete policy, which means that it is more useful for a replanning agent than for a standard planning agent. Minimax replanning is the usual approach for complex games (it is, if I'm not mistaken, the basic algorithm followed by Deep Blue), but since this is a very simple game you probably don't need to take this approach.
A side note on abstraction
Note that all of these solutions are conceptualized as either numeric changes to or estimations of the utility function. This is, in general, preferable to arbitrarily throwing away possible policies. After all, that's what your utility function is for -- any time you make a policy decision on the basis of anything except the numeric value of your utility, you are breaking your abstraction and making your code less robust.

MiniMax Algorithm for Tic Tac Toe failure

I'm trying to implement a minimax algorithm for tic-tac-toe with alpha-beta pruning. Right now I have the program running, but it does not seem to be working. Whenever I run it, it seems to put garbage in all the squares. I've implemented it so that my minimax function takes in a board state and modifies that state so that, when it is finished, the board state contains the next best move. Then I set 'this' equal to the modified board. Here are my functions for the minimax algorithm:
void board::getBestMove() {
    board returnBoard;
    miniMax(INT_MIN + 1, INT_MAX - 1, returnBoard);
    *this = returnBoard;
}

int board::miniMax(int alpha, int beta, board childWithMaximum) {
    if (checkDone())
        return boardScore();
    vector<board> children = getChildren();
    for (int i = 0; i < 9; ++i) {
        if (children.empty()) break;
        board curr = children.back();
        if (curr.firstMoveMade) { // not an empty board
            board dummyBoard;
            int score = curr.miniMax(alpha, beta, dummyBoard);
            if (computerTurn && (beta > score)) {
                beta = score;
                childWithMaximum = *this;
                if (alpha >= beta) break;
            } else if (alpha < score) {
                alpha = score;
                childWithMaximum = *this;
                if (alpha >= beta) break;
            }
        }
    }
    return computerTurn ? alpha : beta;
}

vector<board> board::getChildren() {
    vector<board> children;
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            if (getPosition(i, j) == '*') { // move not made here
                board moveMade(*this);
                moveMade.setPosition(i, j);
                children.push_back(moveMade);
            }
        }
    }
    return children;
}
And here are my full files if someone wants to try running it:
.cpp : http://pastebin.com/ydG7RFRX
.h : http://pastebin.com/94mDdy7x
There may be many issues with your code... you sure posted a lot of it. Because you are the one asking the question, it is incumbent on you to try everything you can on your own first and then reduce your question to the smallest amount of code necessary to clarify what is going on. As it is, I don't feel that you've put much effort into asking this question.
But maybe I can still provide some help:
void board::getBestMove() {
    board returnBoard;
    miniMax(INT_MIN + 1, INT_MAX - 1, returnBoard);
    *this = returnBoard;
}
See how you are saying *this = returnBoard.
That must mean that you want to get a board back from miniMax.
But look at how miniMax is defined!
int board::miniMax(int alpha, int beta, board childWithMaximum)
It accepts childWithMaximum by value, so it cannot return a board that way.
What you probably wanted to say was:
int board::miniMax(int alpha, int beta, board & childWithMaximum)
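To see the difference in isolation, here is a standalone toy (not the asker's board class) showing why a by-value parameter can never hand a result back to the caller, while a reference parameter can:
#include <iostream>

void fillByValue(int out)  { out = 42; }   // modifies a local copy; the caller sees nothing
void fillByRef(int& out)   { out = 42; }   // modifies the caller's object

int main()
{
    int a = 0, b = 0;
    fillByValue(a);
    fillByRef(b);
    std::cout << a << " " << b << "\n";    // prints "0 42"
}
The same applies to childWithMaximum: with the reference version, the assignments inside miniMax actually reach the board that getBestMove passed in.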