I am making a bitboard-based chess engine and I would like to ask: assuming I have made a bitboard for every piece, what do I do with it? I read a little bit about some techniques, for example that if you shift the pawn bitboard to the left by 7 and by 9 you get a bitboard representing the squares the pawns attack, but how do I use it?
And how do I use the rook bitboard or bishop bitboard? What are their targets, and once I find them, how do I connect them with the other pieces' bitboards?
I have been searching for days now but have not found a sufficient answer...
thanks
Bitboards are another type of board representation, as opposed to, for example, a 2D array board or a 1D array. The main advantage is that they can help you generate valid moves for a position more quickly, and that you can use them more easily to derive certain evaluation structures and parameters.
Usually you have one bitboard for each piece type and each side (12 total), one for each color (2 total), one for all pieces, one for castling rights, and one for the side to move. With bit operators and bit manipulation you can calculate the valid moves for a position with the help of precomputed tables and only a few bit operations.
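For example, here is a minimal sketch (my own illustration, not from any particular engine) of the pawn-attack shifts the question mentions, assuming white pawns on a little-endian board where bit 0 is a1; the file masks keep the shifts from wrapping around the board edge:

#include <cstdint>

const uint64_t FILE_A = 0x0101010101010101ULL;  // all squares on the a-file
const uint64_t FILE_H = 0x8080808080808080ULL;  // all squares on the h-file

// Squares attacked by white pawns: shift by 7 (captures toward the a-file)
// and by 9 (captures toward the h-file), masking off the wrap-around files.
uint64_t whitePawnAttacks(uint64_t whitePawns) {
    return ((whitePawns << 7) & ~FILE_H) | ((whitePawns << 9) & ~FILE_A);
}

ANDing the result with the bitboard of black pieces then gives you the squares where a pawn capture is actually available, which is one way the per-piece bitboards get connected to each other.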
I suggest looking at this YouTube series which goes through the entire process of writing a bitboard chess engine from scratch.
Another good source to get how the concepts work is to look at the Chessprogramming site.
I hope it helps! It is not easy to wrap your head around, but the gain from using them is great.
Are there any algorithms out there that can assist in and accelerate the construction of a jigsaw puzzle where the edges are already identified and each edge is guaranteed to fit exactly one other edge (or no edge, if that piece is a corner or border piece)?
I've got a data set here that is roughly represented by the following structure:
struct tile {
    int a, b, c, d;
};

tile tiles[SOME_LARGE_NUMBER] = ...;
Each side (a, b, c, and d) is uniquely indexed within the puzzle so that only one other tile will match an edge (if that edge has a match, since corner and border tiles might not).
Unfortunately there are no guarantees past that. The order of the tiles within the array is random, the only guarantee is that they're indexed from 0 to SOME_LARGE_NUMBER. Likewise, the side UIDs are randomized as well. They all fall within a contiguous range (where the max of that range depends on the number of tiles and the dimensions of the completed puzzle), but that's about it.
I'm trying to assemble the puzzle in the most efficient way possible, so that I can ultimately address the completed puzzle using rows and columns through a two dimensional array. How should I go about doing this?
The tile[] data defines an undirected graph where each node links with 2, 3 or 4 other nodes. Choose a node with just 2 links and set that as your origin. The two links from this node define your X and Y axes. If you follow, say, the X axis link, you will arrive at a node with 3 links — one pointing back to the origin, and two others corresponding to the positive X and Y directions. You can easily identify the link in the X direction, because it will take you to another node with 3 links (not 4).
In this way you can easily find all the pieces along one side until you reach the far corner, which only has two links. Of all the pieces found so far, the only untested links are pointing in the Y direction. This makes it easy to place the next row of pieces. Simply continue until all the pieces have been placed.
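To make the link structure concrete, here is a minimal sketch (my own illustration, assuming the tile struct from the question and a hypothetical tile_count) of the edge-ID index that turns tiles into graph nodes; a side whose edge ID is shared by two tiles is a link, and a tile with only two links is a corner:

#include <unordered_map>
#include <vector>

struct tile { int a, b, c, d; };  // as in the question

// Map every edge ID to the tiles (at most two) that carry it.
std::unordered_map<int, std::vector<int>> buildEdgeIndex(const tile* tiles, int tile_count) {
    std::unordered_map<int, std::vector<int>> edgeToTiles;
    for (int i = 0; i < tile_count; ++i) {
        const int sides[4] = { tiles[i].a, tiles[i].b, tiles[i].c, tiles[i].d };
        for (int s = 0; s < 4; ++s)
            edgeToTiles[sides[s]].push_back(i);
    }
    return edgeToTiles;
}

// Number of neighbours of a tile: 2 for a corner, 3 for a border piece, 4 otherwise.
int countLinks(const tile& t, const std::unordered_map<int, std::vector<int>>& edgeToTiles) {
    const int sides[4] = { t.a, t.b, t.c, t.d };
    int links = 0;
    for (int s = 0; s < 4; ++s)
        if (edgeToTiles.at(sides[s]).size() == 2) ++links;
    return links;
}

With that index in place, the walk described above is just repeated lookups: follow the shared edge ID from the current tile to the one other tile that carries it.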
This might not be what you are looking for, but because you asked for the "most efficient way possible", here is a relatively recent scientific solution.
Puzzles are a complex combinatorial problem (NP-complete) and require some help from academia to solve efficiently. State-of-the-art algorithms were recently beaten by genetic algorithms.
Depending on your puzzle sizes (and desire to study scientific stuff ;)) you might be interested in this paper: A Genetic Algorithm-Based Solver for Very Large Jigsaw Puzzles. GAs work around, in surprising ways, some of the problems you encounter with classic algorithms.
Note that genetic algorithms are embarrassingly parallel, so there is a straightforward way to do the calculations on parallel machines, such as multi-core CPUs, GPUs (CUDA/OpenCL) and even distributed/cloud frameworks, which makes them hundreds to thousands of times faster. GPU-accelerated GAs unlock puzzle sizes unavailable to conventional algorithms.
My chess engine, which uses bitboards to represent the board's state, generates a chunk of pseudo-legal moves in one go, with a bitboard as the result. For example:
Pawns (bitboard diagram omitted):
A little bitboard magic later (resulting move bitboard, diagram omitted):
The bitboard at the end is simply a chunk of possible moves. How do engines usually take this bitboard and generate individual moves from them? Do I have to iterate over every single bit to check if it's set? Iterating over a bitboard seems to defy the very purpose of using bitboards though, which is why I'm a bit skeptical.
Is there a better way?
Then, typically you apply some variant of the minimax algorithm to evaluate how good the moves are, so you can pick (what you estimate to be) the best move. A simple variant is, for example, alpha-beta.
The variants mainly deal with attempting to guide the search towards "probably useful moves" and away from useless areas of the search space, because the search tree is very wide and your ability to explore it deeply is extremely important for a good chess AI - exploring it shallowly makes the AI easy to "trap" because it will make choices that look good short-term even though they work out badly later on.
So yes, you will iterate over the bitboards. That doesn't really defy their purpose - you've still (probably) computed the moves much faster than if you hadn't used bitboards. For the simplest AI you could just take "the first" move using standard bitboard techniques, but an AI that plays like that will be below novice level, having no regard for winning or losing at all.
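As an illustration of that iteration, here is a minimal sketch of the usual pop-the-lowest-bit loop, assuming GCC/Clang's __builtin_ctzll (on MSVC you would use _BitScanForward64 instead):

#include <cstdint>
#include <vector>

// Turn a move bitboard into the list of target square indices (0-63).
std::vector<int> serializeMoves(uint64_t moves) {
    std::vector<int> squares;
    while (moves) {
        int to = __builtin_ctzll(moves);  // index of the least significant set bit
        squares.push_back(to);
        moves &= moves - 1;               // clear that bit and keep going
    }
    return squares;
}

The loop runs once per set bit, not 64 times, so extracting moves this way stays cheap.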
You don't have to iterate over 64 single bits. You can, for example, prepare/pre-define a 256-entry lookup array with all possible move lists, where 8-bit indices represent the attack sets of a piece on a single rank. Then you only iterate 8 times with a bitwise shift (bitboard >> 8), passing each successive rank attack set as an index into the array and extracting the move list. It will be roughly 8 times faster than a one-bit stepping loop. You may actually want to enhance this array to [8][256], so you can also pass the rank number itself and extract a final move list (with x,y coordinates), depending on your needs. The memory cost is still insignificant.
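For illustration, here is a minimal sketch of that [8][256] idea, with a hypothetical rankTable and square indices 0-63 (the exact board mapping is my assumption, not part of the answer above):

#include <cstdint>
#include <vector>

// rankTable[rank][mask] holds the target squares for an 8-bit rank mask.
static std::vector<int> rankTable[8][256];

void initRankTable() {  // call once at startup
    for (int rank = 0; rank < 8; ++rank)
        for (int mask = 0; mask < 256; ++mask)
            for (int file = 0; file < 8; ++file)
                if (mask & (1 << file))
                    rankTable[rank][mask].push_back(rank * 8 + file);
}

// Extract target squares from a move bitboard, one rank (8 bits) at a time.
void extractMoves(uint64_t moves, std::vector<int>& out) {
    for (int rank = 0; rank < 8; ++rank) {
        const std::vector<int>& list = rankTable[rank][(moves >> (rank * 8)) & 0xFF];
        out.insert(out.end(), list.begin(), list.end());
    }
}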
Suppose we have to write a simple program that transforms a matrix. Each element should be the sum of its neighbor elements.
What's the "correct" (i.e. most common, has best readability, most effective) way to do this, considering the edges of a matrix?
Two obvious ways of achieving this that I can think of:
Handle corners first (4 separate lines), use 4 loops to do the remaining edges, then use the standard loop for the rest
Use one loop for the whole matrix with if's to check if we're in the middle or it's an edge-case.
The first one is faster (I guess), but it kinda looks off to me to have 4 lines plus 5 loops for this.
Is there a more elegant way? I tagged this as C++ because I'm coding in C++ currently and I have the feeling that the ternary operator ?: is gonna come in handy to write a cute solution.
Bonus points if your solution can be tweaked for a more complex rule (not just looking one up/right/left/down cell, but if you're doing a certain kind of recursion). Not sure if it would change things much, though.
One elegant way of going about it is to use a larger matrix. If your matrix has NxM elements, make a temporary (N+2)x(M+2) matrix, fill it with zeros and then copy your values like so:
temp(i+1,j+1) <- original(i,j)
Now you actually have your original matrix with zeroed-out edges around it. You can now safely calculate the sum of all neighbors of all the non-edge cells in the temporary matrix. The result will be the matrix you were originally looking for.
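Here is a minimal sketch of that idea in C++, assuming the matrix is a vector of vectors of int and using the four-neighbor rule from the question:

#include <vector>

std::vector<std::vector<int>> neighborSums(const std::vector<std::vector<int>>& m) {
    const int rows = (int)m.size();
    const int cols = (int)m[0].size();

    // (rows+2) x (cols+2) temporary with a zero-filled border.
    std::vector<std::vector<int>> temp(rows + 2, std::vector<int>(cols + 2, 0));
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            temp[i + 1][j + 1] = m[i][j];

    // Every original cell is now an interior cell of temp, so the four
    // neighbors can be read with no bounds checks or special cases.
    std::vector<std::vector<int>> result(rows, std::vector<int>(cols));
    for (int i = 1; i <= rows; ++i)
        for (int j = 1; j <= cols; ++j)
            result[i - 1][j - 1] = temp[i - 1][j] + temp[i + 1][j]
                                 + temp[i][j - 1] + temp[i][j + 1];
    return result;
}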
Note - this will be less efficient than the straightforward five-loop solution you proposed.
So, I was thinking about making a simple random world generator. This generator would create a starting "cell" that would have between one and four random exits (in the cardinal directions, something like a maze). After deciding those exits, I would generate a new random "cell" at each of those exits, and repeat whenever a player got near a part of the world that had not yet been generated. This concept would allow an "infinite" world of sorts, all randomly generated; however, I am unsure of how best to represent this internally.
I am using C++ (which doesn't really matter, I could implement any sort of data structure necessary). At first I thought of using a sort of directed graph in which each node would have directed edges to each cell surrounding it, but this probably won't work well if a user finds a spot in the world, backtracks, and comes back to that spot from another direction. The world might do some weird things, such as generate two cells at one location.
Any ideas on what kind of data structure might be the most effective for such a situation? Or am I doing something really dumb with my random world generation?
Any help would be greatly appreciated.
Thanks,
Chris
I recommend you read about graphs. This is exactly an application of random graph generation. Instead of 'cell' and 'exit' you are describing 'node' and 'edge'.
Plus, then you can do things like shortest-path analysis, cycle detection and all sorts of other useful graph theory applications.
This will help you understand the nodes and edges:
and here is a finished application of these concepts. I implemented this in an OOP way - each node knew about its edges to other nodes. A popular alternative is to implement this using an adjacency list. I think the adjacency list concept is basically what user470379 described in his answer. However, his map solution allows for infinite graphs, while a traditional adjacency list does not. I love graph theory, and this is a perfect application of it.
Good luck!
-Brian J. Stianr-
A map< pair<int,int>, cell> would probably work well; the pair would represent the x,y coordinates. If there's not a cell in the map at those coordinates, create a new cell. If you wanted to make it truly infinite, you could replace the ints with an arbitrary length integer class that you would have to provide (such as a bigint)
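A minimal sketch of that get-or-create lookup, assuming a hypothetical cell type whose exits you fill in when it is first generated:

#include <map>
#include <utility>

struct cell { /* exits, contents, ... */ };

std::map<std::pair<int, int>, cell> world;

// Returns the cell at (x, y), creating it on first visit so the same
// coordinates can never end up with two different cells.
cell& cellAt(int x, int y) {
    auto key = std::make_pair(x, y);
    auto it = world.find(key);
    if (it == world.end())
        it = world.emplace(key, cell{}).first;  // generate random exits here,
                                                // consistent with existing neighbors
    return it->second;
}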
If the world's cells are arranged in a grid, you can easily give them cartesian coordinates. If you keep a big list of existing cells, then before determining exits from a given cell, you can check that list to see if any of its neighbors already exist. If they do, and you don't want to have 1-way doors (directed graph?) then you'll have to take their exits into account. If you don't mind having chutes in your game, you can still choose exits randomly, just make sure that you link to existing cells if they're there.
Optimization note: checking a hash table to see if it contains a particular key is O(1) on average.
Couldn't you have a hash (or STL set) that stored a collection of all grid coordinates that contain occupied cells?
Then when you are looking at creating a new cell, you can quickly check to see if the candidate cell location is already occupied.
(if you had finite space, you could use a 2d array - I think I saw this in a Byte magazine article back in ~1980-ish, but if I understand correctly, you want a world that could extend indefinitely)
I'm looking for a data structure that would allow me to store an M-by-N 2D matrix of values contiguously in memory, such that the distance in memory between any two points approximates the Euclidean distance between those points in the matrix. That is, in a typical row-major representation as a one-dimensional array of M * N elements, the memory distance differs between adjacent cells in the same row (1) and adjacent cells in neighbouring rows (N).
I'd like a data structure that reduces or removes this difference. Really, the name of such a structure is sufficient—I can implement it myself. If answers happen to refer to libraries for this sort of thing, that's also acceptable, but they should be usable with C++.
I have an application that needs to perform fast image convolutions without hardware acceleration, and though I'm aware of the usual optimisation techniques for this sort of thing, I feel a specialised data structure or data ordering could improve performance.
Given the requirement that you want to store the values contiguously in memory, I'd strongly suggest you research space-filling curves, especially Hilbert curves.
To give a bit of context, such curves are sometimes used in database indexes to improve the locality of multidimensional range queries (e.g., "find all items with x/y coordinates in this rectangle"), thereby aiming to reduce the number of distinct pages accessed. A bit similar to the R-trees that have been suggested here already.
Either way, it looks like you're bound to an M*N array of values in memory, so the whole question is about how to arrange the values in that array, I figure. (Unless I misunderstood the question.)
So in fact, such orderings would probably still only change the characteristics of the distance distribution: the average distance for any two randomly chosen points in the matrix should not change, so I have to agree with Oli there. The potential benefit depends largely on your specific use case, I suppose.
I would guess "no"! And if the answer happens to be "yes", then it's almost certainly so irregular that it'll be way slower for a convolution-type operation.
EDIT
To qualify my guess, take an example. Let's say we store a[0][0] first. We want a[k][0] and a[0][k] to be similar distances, and proportional to k, so we might choose to interleave the storage of first row and first column (i.e. a[0][0], a[1][0], a[0][1], a[2][0], a[0][2], etc.) But how do we now do the same for e.g. a[1][0]? All the locations near it in memory are now taken up by stuff that's near a[0][0].
Whilst there are other possibilities than my example, I'd wager that you always end up with this kind of problem.
EDIT
If your data is sparse, then there may be scope to do something clever (re Cubbi's suggestion of R-trees). However, it'll still require irregular access and pointer chasing, so will be significantly slower than straightforward convolution for any given number of points.
You might look at space-filling curves, in particular the Z-order curve, which (mostly) preserves spatial locality. It might be computationally expensive to look up indices, however.
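For reference, here is a minimal sketch of the index lookup for a Z-order layout, assuming the matrix dimensions are powers of two; it interleaves the bits of the row and column so that nearby (r, c) pairs mostly end up near each other in the 1D array:

#include <cstdint>

uint64_t mortonIndex(uint32_t r, uint32_t c) {
    uint64_t index = 0;
    for (int bit = 0; bit < 32; ++bit) {
        index |= (uint64_t)((c >> bit) & 1) << (2 * bit);      // column bits go to even positions
        index |= (uint64_t)((r >> bit) & 1) << (2 * bit + 1);  // row bits go to odd positions
    }
    return index;
}

A plain bit-by-bit loop like this is the slow-but-clear version; table-driven or bit-trick variants of the same interleaving are commonly used when the lookup cost matters.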
If you are using this to try and improve cache performance, you might try a technique called "bricking", which is a little bit like one or two levels of the space-filling curve. Essentially, you subdivide your matrix into n x n tiles (where an n x n tile fits neatly in your L1 cache). You can also store another level of tiles to fit into a higher-level cache. The advantage this has over a space-filling curve is that indices can be fairly quick to compute. One reference is included in the paper here: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.8959
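A minimal sketch of the bricked index computation, assuming a cols-wide matrix whose dimensions divide evenly into TILE x TILE blocks (the tile size is an example value, not a recommendation):

const int TILE = 32;  // pick so a TILE x TILE block of your element type fits in L1

// Linear index of element (r, c): tiles are stored one after another,
// and elements are row-major inside each tile.
int brickedIndex(int r, int c, int cols) {
    int tilesPerRow = cols / TILE;
    int tileIndex   = (r / TILE) * tilesPerRow + (c / TILE);
    int withinTile  = (r % TILE) * TILE + (c % TILE);
    return tileIndex * TILE * TILE + withinTile;
}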
This sounds like something that could be helped by an R-tree, or one of its variants. There is nothing like that in the C++ Standard Library, but it looks like there is an R-tree in the boost candidate library Boost.Geometry (not a part of boost yet). I'd take a look at that before writing my own.
It is not possible to "linearize" a 2D structure into a 1D structure and keep the relation of proximity unchanged in both directions. This is one of the fundamental topological properties of the world.
Having said that, it is true that the standard row-wise or column-wise storage order normally used for 2D array representation is not the best one when you need to preserve proximity (as much as possible). You can get better results by using various discrete approximations of fractal curves (space-filling curves).
Z-order curve is a popular one for this application: http://en.wikipedia.org/wiki/Z-order_(curve)
Keep in mind though that regardless of which approach you use, there will always be elements that violate your distance requirement.
You could think of your 2D matrix as a big spiral, starting at the center and progressing to the outside. Unwind the spiral, and store the data in that order, and distance between addresses at least vaguely approximates Euclidean distance between the points they represent. While it won't be very exact, I'm pretty sure you can't do a whole lot better either. At the same time, I think even at very best, it's going to be of minimal help to your convolution code.
The answer is no. Think about it - memory is 1D. Your matrix is 2D. You want to squash that extra dimension in - with no loss? It's not going to happen.
What's more important is that once you get a certain distance away, it takes the same time to load into cache. If you have a cache miss, it doesn't matter if it's 100 away or 100000. Fundamentally, you cannot get more contiguous/better performance than a simple array, unless you want to get an LRU for your array.
I think you're forgetting that memory is not accessed by a computer CPU travelling on foot :) so the distance is pretty much irrelevant.
It's random access memory, so really you have to figure out what operations you need to do, and optimize the accesses for that.
You need to reconvert the addresses from memory space to the original array space to accomplish this. Also, you've stressed distance only, which may still cause you some problems (no direction).
If I have an array of R x C, and two cells at locations [r,c] and [c,r], the distance from some arbitrary point, say [0,0] is identical. And there's no way you're going to make one memory address hold two things, unless you've got one of those fancy new qubit machines.
However, you can take into account that in a row major array of R x C that each row is C * sizeof(yourdata) bytes long. Conversely, you can say that the original coordinates of any memory address within the bounds of the array are
r = (address / C)
c = (address % C)
so
r1 = (address1 / C)
r2 = (address2 / C)
c1 = (address1 % C)
c2 = (address2 % C)
dx = r1 - r2
dy = c1 - c2
dist = sqrt(dx^2 + dy^2)
(this is assuming you're using zero based arrays)
(crush all this together to make it run more optimally)
For a lot more ideas here, go look at any 2D image manipulation code that uses a calculated value called 'stride', which is basically an indicator that it's jumping back and forth between memory addresses and array addresses.
This is not exactly related to closeness but might help. It certainly helps with minimization of disk accesses.
One way to get better "closeness" is to tile the image. If your convolution kernel is smaller than a tile, you typically touch at most 4 tiles in the worst case. You can recursively tile in bigger sections so that localization improves. A Stokes-like argument (at least I think it's Stokes), or some calculus of variations, can show that for rectangles the best shape (meaning for examination of arbitrary sub-rectangles) is a smaller rectangle of the same aspect ratio.
Quick intuition - think about a square: if you tile the larger square with smaller squares, the fact that a square encloses maximal area for a given perimeter means that square tiles have minimal border length. When you transform the large square, I think you can show you should transform the tiles the same way. (You might also be able to do a simple multivariate differentiation.)
The classic example is zooming in on spy satellite images and convolving them for enhancement. The extra computation to tile is really worth it if you keep the data around and go back to it.
It's also really worth it for different compression schemes such as cosine transforms. (That's why, when you download an image, it frequently comes up as it does, in smaller and smaller squares, until the final resolution is reached.)
There are a lot of books on this area and they are helpful.