I am writing a 3D tic-tac-toe game using the minimax algorithm with alpha-beta pruning, but the algorithm doesn't give an optimal solution: it just picks the next available move from the winning states without taking into account what is on the board, meaning it won't block my moves.
This is the code:
int Game::miniMax(char marker, int depth, int alpha, int beta){
    // Initialize best move
    bestMove = std::make_tuple(-1, -1, -1);
    // If we hit a terminal state (leaf node), return the best score and move
    if (isBoardFull() || getBoardState('O') != 0 || depth > 5)
        return getBoardState('O');
    auto allowedMoves = getAllowedMoves();
    for (int i = 0; i < allowedMoves.size(); i++){
        auto move = allowedMoves[i];
        board[std::get<0>(move)][std::get<1>(move)][std::get<2>(move)] = marker;
        // Maximizing player's turn
        if (marker == 'O'){
            int bestScore = INT32_MIN;
            int score = miniMax('X', depth + 1, alpha, beta);
            // Get the best scoring move
            if (bestScore <= score){
                bestScore = score - depth * 10;
                bestMove = move;
                // Check if this branch's best move is worse than the best
                // option of a previously searched branch. If it is, skip it
                alpha = std::max(alpha, bestScore);
                board[std::get<0>(move)][std::get<1>(move)][std::get<2>(move)] = '-';
                if (beta <= alpha){
                    break;
                }
            }
        } // Minimizing opponent's turn
        else{
            int bestScore = INT32_MAX;
            int score = miniMax('O', depth + 1, alpha, beta);
            if (bestScore >= score){
                bestScore = score + depth * 10;
                bestMove = move;
                // Check if this branch's best move is worse than the best
                // option of a previously searched branch. If it is, skip it
                beta = std::min(beta, bestScore);
                board[std::get<0>(move)][std::get<1>(move)][std::get<2>(move)] = '-';
                if (beta <= alpha){
                    break;
                }
            }
        }
        board[std::get<0>(move)][std::get<1>(move)][std::get<2>(move)] = '-'; // Undo move
    }
    if (marker == 'O')
        return INT32_MIN;
    else
        return INT32_MAX;
}
What do I need to change to make it work?
I tried other ways to implement minimax, but they don't give an optimal solution, or any solution at all.
The limit on depth is there because the search is too slow at greater depths, but even then it doesn't find a solution. Increasing the constant that multiplies the depth in the score only slows the program down further.
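For reference, a minimal sketch of the usual minimax/alpha-beta structure, reusing the same board, bestMove, getAllowedMoves() and getBoardState() members as above and assuming the root call is made with depth 0. It is not a drop-in fix and the depth-based score weighting is left out; the key differences are that the move is undone unconditionally, one bestScore is tracked across the whole loop instead of being reset per move, and the score itself is returned rather than INT32_MIN/INT32_MAX:

int Game::miniMax(char marker, int depth, int alpha, int beta) {
    // Terminal state: someone won, the board is full, or the depth limit is hit
    if (isBoardFull() || getBoardState('O') != 0 || depth > 5)
        return getBoardState('O');

    int bestScore = (marker == 'O') ? INT32_MIN : INT32_MAX; // tracked across ALL moves
    for (auto& move : getAllowedMoves()) {
        board[std::get<0>(move)][std::get<1>(move)][std::get<2>(move)] = marker;
        int score = miniMax(marker == 'O' ? 'X' : 'O', depth + 1, alpha, beta);
        board[std::get<0>(move)][std::get<1>(move)][std::get<2>(move)] = '-'; // always undo

        if (marker == 'O') {                     // maximizing player
            if (score > bestScore) {
                bestScore = score;
                if (depth == 0) bestMove = move; // only record the move at the root
            }
            alpha = std::max(alpha, bestScore);
        } else {                                 // minimizing player
            if (score < bestScore)
                bestScore = score;
            beta = std::min(beta, bestScore);
        }
        if (beta <= alpha)
            break;                               // prune the remaining siblings
    }
    return bestScore;                            // return the score, not INT32_MIN/INT32_MAX
}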
Related
I'm implementing my engine's search function using Negamax with alpha-beta pruning. However, it often misses forced checkmates.
(Note: "Mate in X" counts whole turns, while "depth" and "move(s)" rely on half moves.)
Example
The position with the following FEN: 1k1r4/pp1b1R2/3q2pp/4p3/2B5/4Q3/PPP2B2/2K5 b - - 0 1 has a Mate in 3 (depth of 5 to the algorithm).
It goes Qd1+, Kxd1, Bg4+, Kc1/Ke1 (Doesn't matter), Rd1#.
It can spot the checkmate from 1 move away, but fails at higher depths.
Possible Causes
It could be a typo, a misused type, or even a complete misunderstanding of the method, as all of those have happened before.
Simplified Code
I've made some parts of the code easier to read (e.g. removed std::, turned multiple lines into a function call).
This shouldn't change the functionality, though.
Root Call
pieceMove searchBestMove (gameState currentState, int depth) {
    //Calls the Negamax search
    pieceColor sideToMove = whoseTurnIsIt();
    vector<pieceMove> moveList = generateLegalMoves(currentState, sideToMove);
    pieceMove bestMove;
    signed int bestEval = numeric_limits<signed int>::max();
    for (const auto move : moveList) {
        signed int evaluation = negaMax(applyMove(currentState, move), numeric_limits<signed int>::min(), numeric_limits<signed int>::max(), depth - 1, 1);
        if (evaluation < bestEval) {
            bestMove = move;
            bestEval = evaluation;
        }
    }
    return bestMove;
}
Search Function
signed int negaMax (gameState currentState, signed int alpha, signed int beta, int depth, int rootDepth) {
    //Main Negamax search
    //Terminal node
    if (depth == 0) {
        return evaluates(currentState); //Replace this line with the one below to enable the extended search
        //return quiescenceSearch(currentState, alpha, beta);
    }
    //Mate distance pruning
    signed int mateDistScore = numeric_limits<signed int>::max() - rootDepth;
    alpha = max(alpha, -mateDistScore);
    beta = min(beta, mateDistScore - 1);
    if (alpha >= beta) return alpha;
    vector<pieceMove> moveList = generateLegalMoves(currentState);
    //If no moves are allowed, then it's either checkmate or stalemate
    if (moveList.size() == 0) return evaluates(currentState);
    orderMoves(currentState, moveList);
    for (const auto move : moveList) {
        signed int score = -negaMax(applyMove(currentState, move), -beta, -alpha, depth - 1, rootDepth + 1);
        if (score >= beta) return beta; //Beta cutoff
        alpha = max(score, alpha);
    }
    return alpha;
}
Extended Search
signed int quiescenceSearch (gameState currentState, signed int alpha, signed int beta) {
    //Searches only captures
    //Terminal node
    int evaluation = evaluates(currentState);
    if (evaluation >= beta) return beta;
    alpha = max(alpha, evaluation);
    vector<pieceMove> moveList = generateCaptureMoves(currentState);
    //If no moves are allowed, then it's either checkmate or stalemate
    if (moveList.size() == 0) return evaluates(currentState);
    orderMoves(currentState, moveList);
    for (const auto move : moveList) {
        signed int score = -quiescenceSearch(applyMove(currentState, move), -beta, -alpha);
        if (score >= beta) return beta; //Beta cutoff
        alpha = max(score, alpha);
    }
    return alpha;
}
I think you need to call quiescenceSearch when the depth is 0 in negaMax. You also need to include checks in quiescenceSearch along with captures, since they are not quiet moves. Also, mate distance pruning only works when positions are properly scored (https://www.chessprogramming.org/Mate_Distance_Pruning#Mating_Value). Checking whether your evaluation function is scoring positions correctly could also help.
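As a rough illustration of the first two points, and not a drop-in fix: the depth == 0 branch of negaMax would return quiescenceSearch(currentState, alpha, beta) instead of a bare evaluates(currentState), and the quiescence move generation could be widened so that check evasions are searched as well. This reuses the helpers from the question plus a hypothetical isInCheck(); the detail that the stand-pat cutoff is usually skipped while in check is glossed over here:

signed int quiescenceSearch (gameState currentState, signed int alpha, signed int beta) {
    //Stand-pat score (normally skipped while in check, glossed over in this sketch)
    int evaluation = evaluates(currentState);
    if (evaluation >= beta) return beta;
    alpha = max(alpha, evaluation);

    //When the side to move is in check, every legal evasion must be searched,
    //not just captures (isInCheck is a hypothetical helper here)
    vector<pieceMove> moveList = isInCheck(currentState)
        ? generateLegalMoves(currentState)
        : generateCaptureMoves(currentState);
    if (moveList.size() == 0) return evaluation;

    orderMoves(currentState, moveList);
    for (const auto move : moveList) {
        signed int score = -quiescenceSearch(applyMove(currentState, move), -beta, -alpha);
        if (score >= beta) return beta; //Beta cutoff
        alpha = max(score, alpha);
    }
    return alpha;
}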
I am kinda stuck with my basic voxel physics right now. It's very, very choppy and I am pretty sure my maths is broken somewhere, but let's see what you have to say:
// SOMEWHERE AT CLASS LEVEL (so not being reinstantiated every frame, but persisted instead!)
glm::vec3 oldPos;

// ACTUAL IMPL
glm::vec3 distanceToGravityCenter =
    this->entity->getPosition() -
    ((this->entity->getPosition() - gravityCenter) * 0.005f); // TODO multiply by time

if (!entity->grounded) {
    glm::vec3 entityPosition = entity->getPosition();
    if (getBlock(floorf(entityPosition.x), floorf(entityPosition.y), floorf(entityPosition.z))) {
        glm::vec3 dir = entityPosition - oldPos; // Actually no need to normalize as we check for lesser, bigger or equal to 0
        std::cout << "falling dir: " << glm::to_string(dir) << std::endl;

        // Calculate offset (where to put after hit)
        int x = dir.x;
        int y = dir.y;
        int z = dir.z;

        if (dir.x >= 0) {
            x = -1;
        } else if (dir.x < 0) {
            x = 1;
        }
        if (dir.y >= 0) {
            y = -1;
        } else if (dir.y < 0) {
            y = 1;
        }
        if (dir.z >= 0) {
            z = -1;
        } else if (dir.z < 0) {
            z = 1;
        }

        glm::vec3 newPos = oldPos + glm::vec3(x, y, z);
        this->entity->setPosition(newPos);
        entity->grounded = true; // If some update happens, grounded needs to be changed
    } else {
        oldPos = entity->getPosition();
        this->entity->setPosition(distanceToGravityCenter);
    }
}
The basic idea was to determine from which direction the entity would hit the surface and then just position it one "unit" back in that direction. But obviously I am doing something wrong, as that will always move the entity back to the point it came from, effectively holding it at the spawn point.
Also this could probably be much easier and I am overthinking it.
As #CompuChip already pointed out, your ifs could be further simplified.
But what is more important is a logical issue that would explain the "choppiness" you describe. (Sadly you did not provide any footage, so this is my best guess.)
From the code you posted:
First you check whether the entity is grounded. If it is not, you check for a collision, and only if there is none do you set the new position.
You have to invert that a bit.
1. Save the old position
2. Check if grounded
3. Set the position to the new one already!
4. Do the collision detection
5. Reset to the old position IF you registered a collision!
So basically:
glm::vec3 distanceToGravityCenter =
    this->entity->getPosition() -
    ((this->entity->getPosition() - gravityCenter) * 0.005f); // TODO multiply by time

oldPos = entity->getPosition(); // 1.
if (!entity->grounded) { // 2.
    this->entity->setPosition(distanceToGravityCenter); // 3.
    glm::vec3 entityPosition = entity->getPosition();
    if (getBlock(floorf(entityPosition.x), floorf(entityPosition.y), floorf(entityPosition.z))) { // 4., 5.
        this->entity->setPosition(oldPos);
        entity->grounded = true; // If some update happens, grounded needs to be changed
    }
}
This should get you started :)
I want to elaborate a bit more:
If you check for the collision first and then set the position, you create an "infinite loop" upon the first collision/hit: you collide, and since there is a collision (which there is) you set the entity back to the old position. Basically only mathematical inaccuracy will make you move at all, as on every check you are set back to the old position.
Consider the if-statements for one of your coordinates:
if (dir.x >= 0) {
    x = -1;
}
if (dir.x < 0) {
    x = 1;
}
Suppose that dir.x < 0. Then you will skip the first if, enter the second, and x will be set to 1.
If dir.x >= 0, you will enter the first if and x will be set to -1. Now x < 0 is true, so you will enter the second if as well, and x gets set to 1 again.
Probably what you want is to either set x to 1 or to -1, depending on dir.x. You should only execute the second if when the first one was not entered, so you need an else if:
if (dir.x >= 0) {
    x = -1;
} else if (dir.x < 0) {
    x = 1;
}
which can be condensed, if you so please, into
x = (dir.x >= 0) ? -1 : 1;
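Applied to all three axes at once, the same idea condenses to something like the following (just an illustration, reusing dir and oldPos from the question):

// Push the entity back one unit opposite to its direction of travel on each axis
glm::vec3 offset((dir.x >= 0) ? -1.0f : 1.0f,
                 (dir.y >= 0) ? -1.0f : 1.0f,
                 (dir.z >= 0) ? -1.0f : 1.0f);
glm::vec3 newPos = oldPos + offset;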
I'm currently implementing a MiniMax Algorithm with Alpha Beta Pruning for a Tic Tac Toe game.
My algorithm takes in an empty board and, at the end, has that board contain the same state as the current board along with the next move made. Then I simply make *this (the current board) equal to the returned board.
However, for some reason, my algorithm is getting stuck in an infinite loop. Here's my miniMax function:
int board::miniMax(int alpha, int beta, board & childWithMaximum)
{
    if (checkDone())
        return boardScore();

    vector<board> children = getChildren();
    while (!children.empty())
    {
        board curr = children.back();
        board dummyBoard;
        int score = curr.miniMax(alpha, beta, dummyBoard);

        if (computerTurn && (beta > score)) {
            beta = score;
            childWithMaximum = *this;
            if (alpha >= beta)
                break;
        } else if (alpha < score) {
            alpha = score;
            childWithMaximum = *this;
            if (alpha >= beta)
                break;
        }
    }
    return computerTurn ? alpha : beta;
}
I've done some print-statement debugging, and it appears that this getChildren() helper function is working. I had it print out a couple of children and there were other board states in the tree:
vector<board> board::getChildren()
{
    vector<board> children;
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            if (getPosition(i, j) == '*') {
                //move not made here
                board moveMade(*this);
                moveMade.setPosition(i, j);
                children.push_back(moveMade);
            }
        }
    }
    return children;
}
However, my miniMax() function is not making the return board equal to the next move.
The instructions in your while-loop never modify children; the loop only stops when children.empty() is true. Therefore the loop body is either never executed or executed infinitely.
Also here:
int score = curr.miniMax(alpha, beta, dummyBoard);
you call the function recursively with the same parameters (except the third one, which is unused up to that point). Since the state of this, alpha, and beta seems to be unchanged up to this point (except maybe if checkDone() or getPosition() changes it), this will also result in an infinite recursion.
Then, I simply make *this (the current board) equal to the returned board.
No, you only make other boards equal to *this. I do not see *this = anywhere in your code.
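To illustrate both points, the loop could iterate over the children instead of looping on an unchanged vector, and record the child (not *this) as the chosen board. This is only a sketch reusing the question's members; it assumes the copy made in getChildren() flips computerTurn for the child, and that the computer is the minimizing side as in the original condition:

int board::miniMax(int alpha, int beta, board & childWithMaximum)
{
    if (checkDone())
        return boardScore();

    for (board & curr : getChildren())   // iterate the children; don't spin on an unchanged vector
    {
        board dummyBoard;
        int score = curr.miniMax(alpha, beta, dummyBoard);

        if (computerTurn) {              // minimizing side in this formulation
            if (score < beta) {
                beta = score;
                childWithMaximum = curr; // record the child, not *this
            }
        } else {                         // maximizing side
            if (score > alpha) {
                alpha = score;
                childWithMaximum = curr;
            }
        }
        if (alpha >= beta)
            break;                       // alpha-beta cutoff
    }
    return computerTurn ? beta : alpha;
}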
I'm trying to add Alpha Beta pruning into my minimax, but I can't understand where I'm going wrong.
At the moment I'm going through 5,000 iterations, whereas I should be going through approximately 16,000 according to a friend. When choosing the first position it returns -1 (a loss), whereas it should definitely be able to return 0 (a draw) at this point, since it should be able to force a draw from an empty board. However, I can't see where I'm going wrong; as I follow my code it seems fine.
Strangely, if I switch returning alpha and beta inside my checks (to achieve returning 0), the computer will attempt to draw but never initiate any winning moves, only blocks.
My logical flow
If we are looking for alpha:
If the score > alpha, change alpha. If alpha and beta are overlapping, return alpha.
If we are looking for beta:
If the score < beta, change beta. If alpha and beta are overlapping, return beta.
Here is my code.
Recursive call
int MinimaxAB(TGameBoard* GameBoard, int iPlayer, bool _bFindAlpha, int _iAlpha, int _iBeta)
{
    //How is the position like for player (their turn) on iGameBoard?
    int iWinner = CheckForWin(GameBoard);
    bool bFull = CheckForFullBoard(GameBoard);

    //If the board is full or there is a winner on this board, return the winner
    if(iWinner != NONE || bFull == true)
    {
        //Will return 1 or -1 depending on winner
        return iWinner*iPlayer;
    }

    //Initial invalid move (just follows i in for loop)
    int iMove = -1;
    //Set the score to be instantly beaten
    int iScore = INVALID_SCORE;

    for(int i = 0; i < 9; ++i)
    {
        //Check if the move is possible
        if(GameBoard->iBoard[i] == 0)
        {
            //Put the move in
            GameBoard->iBoard[i] = iPlayer;

            //Recall function
            int iBestPositionSoFar = -MinimaxAB(GameBoard, Switch(iPlayer), !_bFindAlpha, _iAlpha, _iBeta);

            //Replace Alpha and Beta variables if they fit the conditions - stops checking for situations that will never happen
            if (_bFindAlpha == false)
            {
                if (iBestPositionSoFar < _iBeta)
                {
                    //If the beta is larger, make the beta smaller
                    _iBeta = iBestPositionSoFar;
                    iMove = i;

                    if (_iAlpha >= _iBeta)
                    {
                        GameBoard->iBoard[i] = EMPTY;
                        //If alpha and beta are overlapping, exit the loop
                        ++g_iIterations;
                        return _iBeta;
                    }
                }
            }
            else
            {
                if (iBestPositionSoFar > _iAlpha)
                {
                    //If the alpha is smaller, make the alpha bigger
                    _iAlpha = iBestPositionSoFar;
                    iMove = i;

                    if (_iAlpha >= _iBeta)
                    {
                        GameBoard->iBoard[i] = EMPTY;
                        //If alpha and beta are overlapping, exit the loop
                        ++g_iIterations;
                        return _iAlpha;
                    }
                }
            }

            //Remove the move you just placed
            GameBoard->iBoard[i] = EMPTY;
        }
    }

    ++g_iIterations;
    if (_bFindAlpha == true)
    {
        return _iAlpha;
    }
    else
    {
        return _iBeta;
    }
}
Initial call (when computer should choose a position)
int iMove = -1; //Invalid
int iScore = INVALID_SCORE;

for(int i = 0; i < 9; ++i)
{
    if(GameBoard->iBoard[i] == EMPTY)
    {
        GameBoard->iBoard[i] = CROSS;
        int tempScore = -MinimaxAB(GameBoard, NAUGHT, true, -1000000, 1000000);
        GameBoard->iBoard[i] = EMPTY;

        //Choosing best value here
        if (tempScore > iScore)
        {
            iScore = tempScore;
            iMove = i;
        }
    }
}

//returns a score based on Minimax tree at a given node.
GameBoard->iBoard[iMove] = CROSS;
Any help regarding my logical flow that would make the computer return the correct results and make intelligent moves would be appreciated.
Does your algorithm work perfectly without alpha-beta pruning? Your initial call should be given with false for _bFindAlpha as the root node behaves like an alpha node, but it doesn't look like this will make a difference:
int tempScore = -MinimaxAB(GameBoard, NAUGHT, false, -1000000, 1000000);
Thus I would recommend that you abandon this _bFindAlpha nonsense and convert your algorithm to negamax. It behaves identically to minimax but makes your code shorter and clearer. Instead of checking whether to maximize alpha or minimize beta, you can just swap and negate when recursively invoking (this is the same reason you can return the negated value of the function right now). Here's a slightly edited version of the Wikipedia pseudocode:
function negamax(node, α, β, player)
    if node is a terminal node
        return player * the heuristic value of node
    else
        foreach child of node
            val := -negamax(child, -β, -α, -player)
            if val ≥ β
                return val
            if val > α
                α := val
        return α
Unless you love stepping through search trees, I think that you will find it easier to just write a clean, correct version of negamax than debug your current implementation.
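For the tic-tac-toe setting above, a direct translation might look roughly like this. It is only a sketch reusing the question's helpers and terminal scoring (CheckForWin, CheckForFullBoard, Switch, NONE, EMPTY), with the score always taken from the point of view of iPlayer, the side to move:

int Negamax(TGameBoard* GameBoard, int iPlayer, int iAlpha, int iBeta)
{
    //Terminal node: someone won or the board is full
    int iWinner = CheckForWin(GameBoard);
    if (iWinner != NONE || CheckForFullBoard(GameBoard))
        return iWinner * iPlayer; //score from the side to move's point of view

    for (int i = 0; i < 9; ++i)
    {
        if (GameBoard->iBoard[i] == EMPTY)
        {
            GameBoard->iBoard[i] = iPlayer;
            int iScore = -Negamax(GameBoard, Switch(iPlayer), -iBeta, -iAlpha);
            GameBoard->iBoard[i] = EMPTY;

            if (iScore >= iBeta)
                return iScore;    //beta cutoff (fail-soft)
            if (iScore > iAlpha)
                iAlpha = iScore;
        }
    }
    return iAlpha;
}

The existing root loop can stay as it is, calling int tempScore = -Negamax(GameBoard, NAUGHT, -1000000, 1000000); after placing the CROSS.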
I'm using Particle Deposition to try and create some volcano-like mountains procedurally, but all I'm getting out of it is pyramid-like structures. Is anyone familiar with the algorithm who might be able to shed some light on what I might be doing wrong? I'm dropping each particle in the same place at the moment. If I don't, they spread out in a very thin layer rather than forming any sort of mountain.
void TerrainClass::ParticalDeposition(int loops){
    float height = 0.0;

    //for(int k= 0; k <10; k++){
    int dropX = mCurrentX = rand()%(m_terrainWidth-80) + 40;
    int dropY = mCurrentZ = rand()%(m_terrainHeight-80) + 40;
    int radius = 15;
    float angle = 0;
    int tempthing = 0;
    loops = 360;

    for(int i = 0; i < loops; i++){
        mCurrentX = dropX + radius * cos(angle);
        mCurrentZ = dropY + radius * sin(angle);

        /*if(i%loops/5 == 0){
            dropX -= radius * cos(angle);
            dropY += radius * sin(angle);
            angle += 0.005;
            mCurrentX = dropX;
            mCurrentZ = dropY;
        }*/

        angle += 360/loops;
        //dropX += rand()%5;
        //dropY += rand()%5;

        //for(int j = 0; j < loops; j++){
        float newY = 0;
        newY = (1 - (2.0f/loops)*i);
        if(newY < 0.0f){
            newY = 0.0f;
        }
        DepositParticle(newY);
        //}
    }
    //}
}
void TerrainClass::DepositParticle(float heightIncrease){
    bool posFound = false;
    m_lowerList.clear();

    while(posFound == false){
        int offset = 10;
        int jitter;

        if(Stable(0.5f)){
            m_heightMap[(m_terrainHeight*mCurrentZ)+mCurrentX].y += heightIncrease;
            posFound = true;
        }else{
            if(!m_lowerList.empty()){
                int element = rand()%m_lowerList.size();
                int lowerIndex = m_lowerList.at(element);
                MoveTo(lowerIndex);
            }
        }
    }
}
bool TerrainClass::Stable(float deltaHeight){
    int index[9];
    float height[9];

    index[0] = ((m_terrainHeight*mCurrentZ)+mCurrentX); //the current index
    index[1] = ValidIndex((m_terrainHeight*mCurrentZ)+mCurrentX+1) ? (m_terrainHeight*mCurrentZ)+mCurrentX+1 : -1; //if the index to the right is valid, set index[] to it, else set index[] to -1
    index[2] = ValidIndex((m_terrainHeight*mCurrentZ)+mCurrentX-1) ? (m_terrainHeight*mCurrentZ)+mCurrentX-1 : -1; //to the left
    index[3] = ValidIndex((m_terrainHeight*(mCurrentZ+1))+mCurrentX) ? (m_terrainHeight*(mCurrentZ+1))+mCurrentX : -1; //above
    index[4] = ValidIndex((m_terrainHeight*(mCurrentZ-1))+mCurrentX) ? (m_terrainHeight*(mCurrentZ-1))+mCurrentX : -1; //below
    index[5] = ValidIndex((m_terrainHeight*(mCurrentZ+1))+mCurrentX+1) ? (m_terrainHeight*(mCurrentZ+1))+mCurrentX+1 : -1; //above to the right
    index[6] = ValidIndex((m_terrainHeight*(mCurrentZ-1))+mCurrentX+1) ? (m_terrainHeight*(mCurrentZ-1))+mCurrentX+1 : -1; //below to the right
    index[7] = ValidIndex((m_terrainHeight*(mCurrentZ+1))+mCurrentX-1) ? (m_terrainHeight*(mCurrentZ+1))+mCurrentX-1 : -1; //above to the left
    index[8] = ValidIndex((m_terrainHeight*(mCurrentZ-1))+mCurrentX-1) ? (m_terrainHeight*(mCurrentZ-1))+mCurrentX-1 : -1; //below to the left

    for (int i = 0; i < 9; i++){
        height[i] = (index[i] != -1) ? m_heightMap[index[i]].y : -1;
    }

    m_lowerList.clear();
    for(int i = 1; i < 9; i++){
        if(height[i] != -1){
            if(height[i] < height[0] - deltaHeight){
                m_lowerList.push_back(index[i]);
            }
        }
    }

    return m_lowerList.empty();
}
bool TerrainClass::ValidIndex(int index){
    return (index > 0 && index < m_terrainWidth*m_terrainHeight) ? true : false;
}

void TerrainClass::MoveTo(int index){
    mCurrentX = index%m_terrainWidth;
    mCurrentZ = index/m_terrainHeight;
}
That's all the code that's used.
You should have a look at these two papers:
Fast Hydraulic Erosion Simulation and Visualization on GPU
Fast Hydraulic and Thermal Erosion on the GPU (read the first one first, the second one expands on it)
Don't be scared off by the "on GPU": the algorithms work just fine on the CPU (albeit slower). The algorithms don't do particle sedimentation per se (but you don't either ;) ); they instead aggregate the particles into several layers of vector fields.
One important thing about this algorithm is that it erodes already existing heightmaps - for example generated with perlin noise. It fails miserably if the initial height field is completely flat (or even if it has insufficient height variation).
I have implemented this algorithm myself and mostly had success with it (there is still more work to do; the algorithms are very hard to balance to give universally great results).
Note that Perlin noise with the thermal weathering component from the second paper may well be enough for you (and might save you a lot of trouble).
You can also find a C++ CPU-based implementation of this algorithm in my project (specifically this file, mind the GPL license!) and a simplified description of it on pages 24-29 of my thesis.
Your particles will need to have some surface friction and/or stickiness (or similar) in their physics model if you want them not to spread out into a single layer. This is handled in the collision detection and collision response parts of your code when updating your particle simulation.
A simple approach is to make the particles stick to (attract) each other. Particles also need to have a size so that they don't simply converge onto perfectly overlapping positions. If you want to make them attract each other, then you need to test the distance between particles.
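For illustration only (the names attractRadius, particleSize and the stiffness constants are made up for the example), a pairwise "sticky" force along the line between two particles might look like this:

#include <glm/glm.hpp>

struct Particle { glm::vec3 pos; glm::vec3 force; };

// Pairwise interaction: push apart when overlapping, pull together when merely close
void applyStickyForce(Particle& p1, Particle& p2,
                      float particleSize, float attractRadius,
                      float repelStiffness, float attractStiffness) {
    glm::vec3 delta = p2.pos - p1.pos;
    float dist = glm::length(delta);
    if (dist <= 0.0f || dist >= attractRadius) return; // too far (or identical): no force
    glm::vec3 dir = delta / dist;                      // unit vector from p1 to p2
    float strength = (dist < particleSize)
        ? -repelStiffness * (particleSize - dist)      // overlapping: repel
        :  attractStiffness * (dist - particleSize);   // close: attract (stickiness)
    p1.force += dir * strength;
    p2.force -= dir * strength;
}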
You might benefit from looking through some of the DirectX SDK examples that use particles, and in particular (pun arf!) there is a great demo (by Simon Green?) in the NVidia GPU Computing SDK that implements sticky particles in CUDA. It includes a ReadMe document describing what they've done. You can see how the particles interact and ignore all the CUDA/GPU stuff if you aren't going for massive particle counts.
Also note that as soon as you use inter-particle forces, you are checking approximately 0.5*n^2 combinations (pairs) of particles... so you may need a simple spatial partitioning scheme or similar to limit force calculations to nearby groups of particles only.
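As a rough illustration of such a partitioning scheme (not tied to the terrain code above; the names and cell size are made up for the example), a uniform grid keyed by cell coordinates lets you restrict pair tests to the 27 neighbouring cells:

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct GridParticle { float x, y, z; };

// Hash a 3D cell coordinate into one key (cellSize chosen around the interaction radius)
static std::int64_t cellKey(float x, float y, float z, float cellSize) {
    std::int64_t cx = static_cast<std::int64_t>(std::floor(x / cellSize));
    std::int64_t cy = static_cast<std::int64_t>(std::floor(y / cellSize));
    std::int64_t cz = static_cast<std::int64_t>(std::floor(z / cellSize));
    return (cx * 73856093) ^ (cy * 19349663) ^ (cz * 83492791); // classic spatial hash
}

// Bucket particle indices by cell; neighbour queries then only touch adjacent cells
std::unordered_map<std::int64_t, std::vector<std::size_t>>
buildGrid(const std::vector<GridParticle>& particles, float cellSize) {
    std::unordered_map<std::int64_t, std::vector<std::size_t>> grid;
    for (std::size_t i = 0; i < particles.size(); ++i)
        grid[cellKey(particles[i].x, particles[i].y, particles[i].z, cellSize)].push_back(i);
    return grid;
}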
Good luck!