If(), else if() alternative in C++ (Is this AI?) - c++

First off, I am a noob. I am also a janitor who has never made a dime writing code. This is just something that I love doing; it is for fun :) That being said, I wrote this console-based tic-tac-toe game that has enough AI to not lose every game (I guess AI is what it should be called). It has something like 70 if/else if statements for the computer's turn. I used 3 int arrays like so:
int L[2], M[2], R[2];
0 = blank;
1 = X;
2 = O;
The board then 'looks' like:
L[0] | M[0] | R[0]
L[1] | M[1] | R[1]
L[2] | M[2] | R[2]
So I basically wrote out every possible scenario I could think of, something like:
if (M[0]==1 && M[1]==1 && M[2]==0) { M[2] = 2; }      // here the computer prevents a win
else if (L[0]==2 && M[1]==2 && R[2]==0) { R[2] = 2; }  // here the computer wins
// ...and so on, 68 more times!
I guess my question(s) is (are): Is there a better way? Is there a way to achieve the same result with fewer lines of code? Is this considered artificial intelligence?

The standard algorithm for this is called Minimax. It basically builds a tree, where the beginning of the game is the root, and then the children represent every possible move X can make on the first turn, then the children of each of those nodes are all the moves O can make in response, etc. Once the entire tree is filled (which is possible for Tic-Tac-Toe, but for games like Chess computers still don't have enough memory), you work your way back up, assuming both players are smart enough to make the best move, and arrive at the optimal move. Here is another explanation of Minimax specifically using Tic Tac Toe as an example.
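For Tic-Tac-Toe the whole tree is tiny, so a plain recursive minimax fits in a few dozen lines. A minimal sketch, assuming the 0/1/2 cell encoding from the question stored in one flat nine-element array (the function names here are just illustrative):

#include <array>
#include <algorithm>

// Cells: 0 = blank, 1 = X (the human), 2 = O (the computer), as in the question.
using Board = std::array<int, 9>;

// Returns 1 or 2 if that player has three in a row, 0 otherwise.
int winner(const Board& b) {
    static const int lines[8][3] = {
        {0,1,2},{3,4,5},{6,7,8},   // rows
        {0,3,6},{1,4,7},{2,5,8},   // columns
        {0,4,8},{2,4,6}            // diagonals
    };
    for (const auto& l : lines)
        if (b[l[0]] != 0 && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
            return b[l[0]];
    return 0;
}

// +1 if O can force a win from here, -1 if X can, 0 if best play is a draw.
int minimax(Board& b, bool oToMove) {
    int w = winner(b);
    if (w == 2) return +1;
    if (w == 1) return -1;
    if (std::find(b.begin(), b.end(), 0) == b.end()) return 0;   // board full: draw

    int best = oToMove ? -2 : +2;
    for (int i = 0; i < 9; ++i) {
        if (b[i] != 0) continue;
        b[i] = oToMove ? 2 : 1;                    // try the move
        int score = minimax(b, !oToMove);
        b[i] = 0;                                  // undo it
        best = oToMove ? std::max(best, score) : std::min(best, score);
    }
    return best;
}

// The computer's turn: pick the empty cell with the best minimax score for O.
int bestMove(Board b) {
    int best = -2, move = -1;
    for (int i = 0; i < 9; ++i) {
        if (b[i] != 0) continue;
        b[i] = 2;
        int score = minimax(b, /*oToMove=*/false);
        b[i] = 0;
        if (score > best) { best = score; move = i; }
    }
    return move;   // index 0..8 of the cell O should take
}

This replaces the 70 hand-written cases with one uniform search, and it never loses.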

The Wikipedia page on Tic-Tac-Toe has a very good algorithm outline for winning (or tying) every game:
http://en.wikipedia.org/wiki/Tic-tac-toe which is what I used to make a Tic-Tac-Toe game several years ago.
After you understand the algorithm, one of the cleverest ways to implement a Tic-Tac-Toe computer player is with a magic square. The method is discussed here. As far as size goes, I've seen this implemented in about 50 lines of code; I'll post the code if I find it :)
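In the meantime, the trick roughly works like this (a hedged sketch; the particular square and the helper name are just for illustration): label the nine cells with a 3x3 magic square, and then three cells form a row, column, or diagonal exactly when their labels sum to 15. Finding a winning or blocking move becomes simple arithmetic instead of pattern matching:

#include <vector>

// A 3x3 magic square: every row, column and diagonal sums to 15, and every
// set of three distinct values in 1..9 that sums to 15 is one of those lines.
static const int magic[9] = { 8, 3, 4,
                              1, 5, 9,
                              6, 7, 2 };

// Given the indices (0..8) of the cells a player already owns and a map of
// which cells are still empty, return an empty cell that completes one of
// that player's lines, or -1 if there is none.
int completingCell(const std::vector<int>& owned, const bool isFree[9]) {
    for (size_t i = 0; i < owned.size(); ++i)
        for (size_t j = i + 1; j < owned.size(); ++j) {
            int need = 15 - magic[owned[i]] - magic[owned[j]];
            if (need < 1 || need > 9) continue;
            for (int k = 0; k < 9; ++k)
                if (isFree[k] && magic[k] == need)
                    return k;   // playing here completes a line
        }
    return -1;
}

Call it once with the computer's cells to look for a win, then with the opponent's cells to find the move that must be blocked.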
This isn't technically artificial intelligence, as AI usually refers to artificial neurons, neuron layers, gradient descent, support vector machines, solving complex polynomials, and the like. Solving Tic-Tac-Toe is really just an exhaustive search over a small game.

Yes, there are better ways.
The most obvious would be to consider how mirrored views of the board reduce the number of cases you have to handle.
Also, consider pre-storing "interesting" patterns in arrays and then comparing the game state against that data. For example, one set of patterns would be all the ways a player can win on the next move.
Also, note that with the declaration int L[2], there are only two entries in array L, namely L[0] and L[1]. The references you have to L[2], M[2], etc. are errors that should have been caught by the compiler. Consider turning up the warning level. How this is done depends on the compiler. For gcc it is -Wall.
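Putting those two suggestions together, here is a hedged sketch of what the pattern-driven version could look like, using a single nine-element board (0/1/2 as in the question) instead of the three undersized arrays; the table and function name are illustrative:

// One entry per line on the board: the three cell indices that form it.
static const int kLines[8][3] = {
    {0,1,2},{3,4,5},{6,7,8},   // rows
    {0,3,6},{1,4,7},{2,5,8},   // columns
    {0,4,8},{2,4,6}            // diagonals
};

// Returns the index of an empty cell that completes a line for 'player'
// (call with player == 2 to find a winning move for O, then with
// player == 1 to find the move that must be blocked), or -1 if none exists.
int findCompletingMove(const int board[9], int player) {
    for (const auto& line : kLines) {
        int count = 0, empty = -1;
        for (int k = 0; k < 3; ++k) {
            if (board[line[k]] == player) ++count;
            else if (board[line[k]] == 0) empty = line[k];
        }
        if (count == 2 && empty != -1)
            return empty;   // two of the player's marks plus one blank
    }
    return -1;
}

Two calls to this one function cover most of the roughly 70 hand-written win/block cases; the remaining strategy (take the centre, take a corner, ...) can be layered on top.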
This counts as a form of artificial intelligence. The series of if statements is accumulated knowledge: how to recognize a situation and the appropriate best reaction to it.

The closest thing to "real" AI for solving such a game would be to code an artificial neural network and train it on all combinations of the tic-tac-toe game.
In that case the code would not use so many if/else statements to solve the problem, but would instead make the most reasonable choice based on the patterns it was trained on.
But coding a neural network is not a trivial thing :)

When you need to code a rule-based system (like the AI you are building), you can use a rule engine, for example CLIPS (a tool developed at NASA for creating expert systems, written in C).
http://en.wikipedia.org/wiki/CLIPS
Perhaps it's overkill for playing Tic-Tac-Toe, but if you are in the mood for learning cool AI stuff, expert systems are a very interesting area, and different from (and perhaps less tricky than) neural networks.
Have fun!

Related

Which STL collection would match 2D collision detection best?

I'm working on a game in C++. Basically, the game is going to be a simulation of a world with some animals, plants, and of course the player itself (animals move in random directions). I want to do it to practice OOP, the algorithm library, and working with collections from the STL. I want to mention that I'm rewriting this project, and there is only one concern that keeps bothering me: checking collisions. The game is going to be two-dimensional (only x, y coordinates).
If I keep my organisms in a vector, it takes linear time to find whether there are other organisms at a given coordinate. When there are more organisms, this search takes a lot of time (seriously, for 400 animals their turn could take 20-30 seconds). So my question is: which STL collection would fit my problem best? Thank you in advance for your help.
Animals may die, their strength may be modified, and they can also create offspring.
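The structure usually suggested for this kind of point lookup is a hash map keyed by the cell an organism occupies. A rough sketch under that assumption (the names Organism, cellKey, and place are made up for the example):

#include <unordered_map>
#include <vector>
#include <cstdint>

struct Organism { int x, y, strength; /* ... */ };

// Pack an (x, y) pair into a single 64-bit key so it can index a hash map.
inline std::uint64_t cellKey(int x, int y) {
    return (std::uint64_t(std::uint32_t(x)) << 32) | std::uint32_t(y);
}

// Organisms bucketed by the cell they occupy: answering "who is at (x, y)?"
// becomes an expected O(1) lookup instead of a scan over the whole vector.
std::unordered_map<std::uint64_t, std::vector<Organism*>> world;

void place(Organism* o) { world[cellKey(o->x, o->y)].push_back(o); }

const std::vector<Organism*>* organismsAt(int x, int y) {
    auto it = world.find(cellKey(x, y));
    return it == world.end() ? nullptr : &it->second;
}

The one bookkeeping cost is that an organism has to be removed from its old bucket and re-inserted whenever it moves or dies.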

Is it feasible to have an algorithm in C++ which calls a Fortran program for the computationally heavy parts?

I am developing an algorithm that has a large numerical computation part. My project supervisor recommended that I use Fortran because of this, so for the last few weeks I've been working on it (so far so good). It would be a new version of an algorithm of his, which is basically a lot of numerical computing.
Mine, however, would have more "logic" to it. Without going into much detail, the brute-force approach is done using just Fortran because it's 95% reading from a file and doing the operations. However, since the aim of the project is to provide an efficient algorithm for this, I had been thinking about methods and wanted to start with a greedy approach (something like hill climbing), and that got me thinking that for this part in particular it might be better to write the algorithm in C++ instead of Fortran.
So basically: how hard do you think it would be to develop the algorithm "logic" in C++ and then call Fortran whenever the bulk of the numerical computation has to be performed? Would it be worth it? Or should I just stick with one of the two languages?
Sorry if it is a very ignorant question, but I can't get an idea of whether writing an algorithm such as hill climbing would be more difficult in Fortran than in C++, and whether the benefits of Fortran in this case would be worth it.
Thanks for your time and have a nice day!
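For what it's worth, the mechanics of the mixed-language call are not hard. A minimal sketch of the C++ side, assuming the Fortran routine is exposed with a C binding (e.g. subroutine heavy_compute(x, n, result) bind(C) using iso_c_binding); the routine name and signature here are made up for the example:

#include <vector>
#include <iostream>

// Declared exactly like a C function; the Fortran side provides the symbol.
extern "C" void heavy_compute(const double* x, int n, double* result);

int main() {
    std::vector<double> data(1000, 1.0);
    double result = 0.0;

    // The search "logic" (hill climbing, bookkeeping, decisions) lives in C++;
    // the numerical kernel is delegated to the Fortran routine.
    heavy_compute(data.data(), static_cast<int>(data.size()), &result);

    std::cout << result << '\n';
}

With gfortran the usual build is to compile the Fortran file to an object and link it together with the C++ code (adding -lgfortran); the details vary by compiler, but the approach itself is routine.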

What is the difference between state evaluation and heuristics in game-AI?

I am trying to implement a minimax algorithm for an AI player within a simple card game. However, from doing research I am confused about the key differences between state evaluation and heuristics.
From what I understand, heuristics are calculated from the information currently available to the player (e.g. in chess, the pieces and their locations). With this information, the player comes to a conclusion based on a heuristic function which essentially provides a "rule of thumb".
A state evaluation is the exact value of the current state.
However, I am unsure why both things co-exist, as I cannot see how they are much different from one another. Can someone please elaborate and clear up my confusion? Thanks.
Assuming a zero-sum game, you can implement a state evaluation for end states (game ended with a win, draw, or loss from the perspective of player X) which returns 1, 0, or -1. A full tree search will then get you perfect play.
But in practice the tree is huge and can't be searched completely. Therefore you have to stop the search at some point which is not an end state; there is no determined winner or loser there. Now it's hard to mark that state with 1, 0, or -1, as the game might be too complex to easily determine the winner from a state far away from the end. But you still need to evaluate these positions, so you use some assumptions about the game, which amounts to heuristic information. One example is piece material in chess (a queen is more valuable than a pawn). This heuristic information is incorporated into the imperfect evaluation function (an approximation of the real one). The better your assumptions/heuristics, the better the approximation of the real evaluation!
But there are other places where heuristic information can be incorporated. One very important one is controlling the tree search: which move gets evaluated first and which last. Selecting good moves first allows algorithms like alpha-beta to prune huge parts of the tree. But of course you need some assumptions/heuristic information to order your moves (e.g. a queen move being more promising than a pawn move; this is a made-up example, I'm not sure about the effect of this particular heuristic in chess AIs).
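To make the distinction concrete, here is a hedged sketch of where the two kinds of evaluation sit inside a depth-limited search; the State members and the material-balance heuristic are placeholders, not a real game:

// A stripped-down, hypothetical game state, just to show the two call sites.
struct State {
    bool gameOver  = false;
    int  winner    = 0;      // +1 = player X won, -1 = opponent won, 0 = draw
    int  materialX = 0;      // e.g. summed piece values for X
    int  materialO = 0;      // e.g. summed piece values for the opponent
};

// Exact state evaluation: only meaningful for finished games.
int terminalValue(const State& s) { return s.winner; }

// Heuristic evaluation: a cheap guess for unfinished games, here a simple
// material balance squashed into (-1, 1).
double heuristicEval(const State& s) {
    return (s.materialX - s.materialO) / 100.0;
}

// Inside a depth-limited minimax: exact values at real end states, heuristic
// values wherever the search has to be cut off early.
double evaluate(const State& s, int depthLeft) {
    if (s.gameOver)     return terminalValue(s);   // state evaluation
    if (depthLeft == 0) return heuristicEval(s);   // heuristic approximation
    // ... otherwise expand the node and recurse over the legal moves ...
    return 0.0;   // placeholder for the recursive case in this sketch
}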

Efficient Collision Detection With Numerous Objects at Once

I am developing a 2D game with very large levels in which two teams (around 200 objects per team) fight each other with planes, tanks, turrets, etc. With every entity shooting bullets at its enemies, a very large number of objects can exist at any one instant. What collision detection algorithm could I use to support collision for a massive number of entities? The objects are simple figures (rectangles and circles). Would a brute-force approach suffice, or should I break the level up into a grid?
Do not use the brute-force approach; you will very soon get into trouble with performance. There are plenty of papers and articles about this topic.
But unless you really want to implement your own solution, pick an existing collision/physics engine that can resolve this for you. You are making a 2D game, so the obvious choice is Box2D, which is ported to many platforms and used in many game engines and games (e.g. Angry Birds and its clones). Also, this question is probably better suited to the Game Development site, as you are not really solving a specific programming problem.
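If you do want to roll your own broad phase along the lines of the grid idea in the question, a hedged sketch (the cell size, the names, and the duplicate-pair behaviour are all simplifications):

#include <unordered_map>
#include <vector>
#include <utility>
#include <cstdint>

// Axis-aligned bounding box; both rectangles and circles can be wrapped in one.
struct AABB { float minX, minY, maxX, maxY; };

static const float CELL = 64.0f;   // grid cell size, tuned to typical object size

inline std::uint64_t cellKey(int cx, int cy) {
    return (std::uint64_t(std::uint32_t(cx)) << 32) | std::uint32_t(cy);
}

// Broad phase: bucket every object into the cells its box overlaps; only
// objects sharing a cell are candidates for the exact (narrow-phase) test.
std::vector<std::pair<int,int>> candidatePairs(const std::vector<AABB>& objs) {
    std::unordered_map<std::uint64_t, std::vector<int>> cells;
    for (int i = 0; i < (int)objs.size(); ++i) {
        const AABB& b = objs[i];
        // Truncation instead of floor keeps the sketch short; it never misses
        // an overlapping pair, it just bins a little unevenly near zero.
        for (int cx = int(b.minX / CELL); cx <= int(b.maxX / CELL); ++cx)
            for (int cy = int(b.minY / CELL); cy <= int(b.maxY / CELL); ++cy)
                cells[cellKey(cx, cy)].push_back(i);
    }
    std::vector<std::pair<int,int>> pairs;
    for (auto& kv : cells)
        for (size_t a = 0; a < kv.second.size(); ++a)
            for (size_t b2 = a + 1; b2 < kv.second.size(); ++b2)
                pairs.emplace_back(kv.second[a], kv.second[b2]);
    return pairs;   // may contain duplicates when a pair shares several cells
}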

Board game AI design: choosing STL data container

I'm coding an AI engine for a simple board game. My simple implementation for now is to iterate over all candidate board states, weight each one according to the game rules and my simple algorithm, and select the best move according to that score.
As the scoring algorithm is totally stateless, I want to save computation time by creating a hash table of some (all?) board configurations and get the score from there instead of calculating it on the fly.
My questions are:
1. Is my approach logical? (and if not, can you give me some tips to better it? :))
2. What is the most suitable thread-safe STL container for my needs? I'm thinking of using a char array (the board configuration) as the key and the score as the value.
3. Can you give some tips for making my AI a killer one? :)
edit: more info:
The board is 10x10 and there are two players, each with 10 pawns. The rules are much like checkers.
Yes, it's common to store evaluated boards in a hash table; this is called a transposition table. An STL container for it could be std::vector. In general you have to create a hash function (e.g. Zobrist hashing). The hash function calculates a hash value for a particular board, and the result of hash_value modulo HASH_TABLE_SIZE is the index into the std::vector.
A transposition table entry can hold more information than just the board score and best move; you can also store the depth to which the board was evaluated and, if you are doing an alpha-beta search, whether the stored score is exact, an upper bound, or a lower bound.
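A hedged sketch of what that can look like for the 10x10 game described in the question (the piece-kind count, table size, and seed are assumptions made for the example):

#include <cstdint>
#include <random>
#include <vector>

// Zobrist hashing for a 10x10 board. Assuming a checkers-like game, each
// square can hold one of 4 piece kinds (man/king for each player).
constexpr int SQUARES = 100, KINDS = 4;

std::uint64_t zobrist[SQUARES][KINDS];

void initZobrist() {
    std::mt19937_64 rng(12345);   // fixed seed so the keys are reproducible
    for (int s = 0; s < SQUARES; ++s)
        for (int k = 0; k < KINDS; ++k)
            zobrist[s][k] = rng();
}

// Hash of a whole position: XOR one random number per occupied square.
// The real payoff is incremental updating: making or unmaking a move only
// changes the running key by two or three XORs.
std::uint64_t hashBoard(const int board[SQUARES]) {   // 0 = empty, 1..KINDS = piece
    std::uint64_t h = 0;
    for (int s = 0; s < SQUARES; ++s)
        if (board[s] != 0) h ^= zobrist[s][board[s] - 1];
    return h;
}

enum class Bound { Exact, Lower, Upper };

struct TTEntry {
    std::uint64_t key      = 0;    // full hash, to detect index collisions
    int           score    = 0;
    int           depth    = -1;   // search depth the score is valid for
    int           bestMove = -1;
    Bound         bound    = Bound::Exact;
};

// The transposition table itself: a plain vector indexed by key % size.
std::vector<TTEntry> table(1 << 20);

TTEntry& probe(std::uint64_t key) { return table[key % table.size()]; }

Note that neither std::vector nor std::unordered_map is thread-safe by itself; a table shared between search threads needs external synchronization (or one table per thread).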
I can recommend the chessprogramming site, where I have learned a lot. Look for the terms alpha-beta, transposition table, zobrist hashing, iterative deepening. There are also good papers for further reading:
TA Marsland - Computer Chess and Search
TA Marsland - A Review of Game Tree Pruning
AL Zobrist - A New Hashing Method with Application for Game Playing
J Schaeffer - The Games Computers (and People) Play
Your approach is logical; you should read about, and maybe try to use, the Minimax algorithm:
http://en.wikipedia.org/wiki/Minimax
I think that, except for tic-tac-toe, the number of states would be much too big, so you should work on making the scoring fast.
Chess and checkers can be done with this approach, but it's not one I'd recommend.
If you go this route then I would use some form of tree. If you think about it, every move reduces the total possibilities that existed before the move was made. Plus, this allows levels of difficulty: don't pick the best move all the time, sometimes pick the second best.
The reason I wouldn't go this route is that it's not generally fun. People pick up on this intuitively and they feel it's unfair. I wrote a connect 4 game that was unbeatable, but was rule based rather than game board state based. It was dull. Every move was met with the same response. I think this is what happens in this approach as well. Also, it depends on why you are doing this. If it's to learn AI, very little AI is done like this. If it's to have a fun game, it usually isn't. If it's for the reasons Deep Blue was made, to stretch the limits of what a computer can do, then sure.
I would either use a piece-based individual AI and then select the piece with the most compelling argument, or I would use a variation of hill climbing and put a kind of strategy height map onto the board. It depends on how much support pieces give one another. For the individual AI I would use neural nets.
A strategy height system would be good for an FPS where soldiers want to know which path has the most cover. Neural nets give each entity more personality. You can even use cascading neural nets, where one handles strategy and the second handles personality.