I am developing a 2D game with very large levels in which two teams (around 200 objects per team) fight each other with planes, tanks, turrets, etc. With every entity shooting bullets at its enemies, a huge number of objects can exist at any instant. What collision detection algorithm could I use to support collisions between this many entities? The objects are simple shapes (rectangles and circles). Would a brute-force approach suffice, or should I break the level up into a grid?
Do not use a brute-force approach; you will very quickly run into performance trouble. There are plenty of papers and articles on this topic.
But unless you really want to implement your own solution, pick an existing collision/physics engine that can solve this for you. Since you are making a 2D game, the obvious choice is Box2D, which has been ported to many platforms and is used in many game engines and games (e.g. Angry Birds and its clones). Also, this question is probably better suited to the Game Development site, as you are not really solving a specific programming problem.
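That said, the grid idea from the question is not much code if you do want to roll it yourself. Here is a minimal C++ sketch of a uniform-grid broad phase (the Entity fields and the single-cell insertion are simplifying assumptions; a full version would also test the eight neighbouring cells for objects straddling a cell border):

```cpp
#include <vector>

struct Entity { float x, y; /* plus shape data for the narrow phase */ };

// A minimal uniform-grid broad phase: only entities that share a cell
// become candidates for the (more expensive) narrow-phase test.
// Assumes all coordinates lie within [0, worldW) x [0, worldH).
class Grid {
public:
    Grid(float worldW, float worldH, float cellSize)
        : cell_(cellSize),
          cols_(static_cast<int>(worldW / cellSize) + 1),
          rows_(static_cast<int>(worldH / cellSize) + 1),
          cells_(cols_ * rows_) {}

    void clear() { for (auto& c : cells_) c.clear(); }

    void insert(Entity* e) { cells_[index(e->x, e->y)].push_back(e); }

    // Visit every candidate pair within each cell.
    template <typename F>
    void forEachPair(F f) {
        for (auto& c : cells_)
            for (size_t i = 0; i < c.size(); ++i)
                for (size_t j = i + 1; j < c.size(); ++j)
                    f(*c[i], *c[j]);
    }

private:
    int index(float x, float y) const {
        return static_cast<int>(y / cell_) * cols_ + static_cast<int>(x / cell_);
    }

    float cell_;
    int cols_, rows_;
    std::vector<std::vector<Entity*>> cells_;
};
```

Each frame you clear(), re-insert every entity and bullet, and run your rectangle/circle overlap test inside forEachPair; with a sensible cell size this cuts the pair count from O(n^2) down to roughly O(n).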
I'm working on a game in C++. Basically, the game is going to be a simulation of a world with some animals, plants, and of course the player (animals move in random directions). I want to do it to practice OOP, the algorithm library, and working with collections from the STL. I should mention that I'm rewriting this project, and there is one concern that keeps bothering me: checking collisions. The game is going to be two-dimensional (only x, y coordinates).
If I keep my organisms in a vector, it takes linear time to find whether there are other organisms at given coordinates. When there are more organisms, this search takes a lot of time (seriously, for 400 animals a single turn could take 20-30 seconds). So my question is: which STL collection would fit my problem best? Thank you in advance for your help.
Animals may die, their strength may be modified, and they can also create offspring.
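One concrete option is to key a std::unordered_map by the grid coordinate, so that "what is at (x, y)?" becomes an average O(1) hash lookup instead of a linear scan of a vector. A sketch under assumed types (Organism, cellKey, and World are made-up names):

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Organism { int x, y; int strength; /* ... */ };

// Pack an (x, y) cell into a single 64-bit key for the hash map.
inline std::uint64_t cellKey(int x, int y) {
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(x)) << 32)
         | static_cast<std::uint32_t>(y);
}

// world[key] holds every organism on that tile.
using World = std::unordered_map<std::uint64_t, std::vector<Organism*>>;

void move(World& world, Organism& o, int nx, int ny) {
    auto& from = world[cellKey(o.x, o.y)];
    // Take the organism out of its old cell (erase-remove idiom)...
    from.erase(std::remove(from.begin(), from.end(), &o), from.end());
    o.x = nx; o.y = ny;
    // ...and register it in its new cell.
    world[cellKey(nx, ny)].push_back(&o);
}
```

The map has to be kept in sync whenever an organism moves, dies, or creates offspring, but each of those updates is also O(1) on average.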
I'm trying to make a game or 3D application using OpenGL. The game/program will have many objects (around 7,000 of them) drawn to the screen. When I render them, I need to calculate the distance between the camera and each object and sort them in order to render the scene correctly. Knowing this, what is the best way to sort them? I really want the sorting to be fast, but I've heard there are trade-offs, so which algorithm should I use to get the best performance?
Any help would be greatly appreciated.
Edit: a lot of people are talking about the z-buffer/depth buffer. As a few people have pointed out, it doesn't work in some cases, which is why I asked this question.
Sorting by distance doesn't solve the transparency problem perfectly. Consider the situation where two transparent surfaces intersect and each has a part which is closer to you. Perhaps rare in games, but still something to consider if you don't want an occasional glitched look to your renderer.
The better solution is order-independent transparency. With the latest graphics hardware supporting atomic operations, you can use an A-buffer to do this with little memory overhead and in a single pass so it is pretty efficient. See for example this article.
The issue of sorting your scene is still a valid one, though, even if it isn't for transparency: it is still useful to sort opaque objects front to back to allow depth testing to discard unseen fragments. For this, Vaughn provided the great solution of BSP trees; these have been used for this purpose for as long as 3D games have been around.
Use insertion sort (http://en.wikipedia.org/wiki/Insertion_sort), which has O(n) complexity for nearly sorted arrays.
In your case, by exploiting temporal coherence, insertion sort gives the fastest results.
It is the sort used in sweep and prune (http://en.wikipedia.org/wiki/Sweep_and_prune). From the link above:
In many applications, the configuration of physical bodies from one time step to the next changes very little. Many of the objects may not move at all. Algorithms have been designed so that the calculations done in a preceding time step can be reused in the current time step, resulting in faster completion of the calculation.
So in such cases insertion sort is best (or similar sorts with an O(n) best case).
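A minimal sketch of that idea in C++ (Renderable and its distance field are placeholder names): keep the array from the previous frame, update the distances, and run one insertion-sort pass, which stays close to O(n) while the ordering barely changes between frames:

```cpp
#include <vector>

struct Renderable { float distToCamera; /* mesh, transform, ... */ };

// One insertion-sort pass over the array kept from the previous frame.
// Because objects move only slightly between frames, the array is
// nearly sorted already, so this usually runs in close to O(n).
void resortByDistance(std::vector<Renderable>& objs) {
    for (size_t i = 1; i < objs.size(); ++i) {
        Renderable key = objs[i];
        size_t j = i;
        while (j > 0 && objs[j - 1].distToCamera > key.distToCamera) {
            objs[j] = objs[j - 1];
            --j;
        }
        objs[j] = key;
    }
}
```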
I want to write a platform game; everything I've done before was puzzle games with no need for physics.
All I need is simple collision detection. I will supply the physics engine with the positions of all objects, and it should output:
All objects of a specific type (bullets) that collide with any object (and which object), as a list of pairs.
For every object of a specific type (players and NPCs), whether it is on the ground or in mid-air.
All the simulation of movement/speed/gravity/hits/reflections will be done with custom code, because what I want to implement is a world with strange physics.
Should I roll my own engine? Can I use existing ones like Chipmunk/Box2D? If I need to implement my own, how do I make collision detection not a costly operation (as opposed to a naive implementation that just checks everything in O(n^2))?
I can use Objective-C or C++; I would prefer C++ (it should have better performance).
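One note on the on-ground requirement: a common trick (sketched here with made-up types, and assuming a y-axis that grows downward) is to probe a thin box just below the object's feet against the solid geometry:

```cpp
#include <vector>

struct Box { float minX, minY, maxX, maxY; };

bool boxesOverlap(const Box& a, const Box& b) {
    return a.minX < b.maxX && b.minX < a.maxX &&
           a.minY < b.maxY && b.minY < a.maxY;
}

// Grounded test: overlap a 1-unit-thick probe under the feet with
// every solid. (A broad phase would narrow down the candidate solids.)
bool onGround(const Box& player, const std::vector<Box>& solids) {
    Box probe{player.minX, player.maxY, player.maxX, player.maxY + 1.0f};
    for (const Box& s : solids)
        if (boxesOverlap(probe, s)) return true;
    return false;
}
```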
If you are writing your own physics, you probably want to include your own collision detection. There are also some publicly available free physics engines that you might try, like Bullet (http://www.bulletphysics.org).
But you might want to just do a Google search for collision detection algorithms that apply to the kind of game you are making and the kinds of intersections you need to test for.
Here is an article I found at random: http://www.gamespp.com/algorithms/collisionDetection.html
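For the narrow phase itself, the individual tests are short. As an illustration (with made-up Circle and AABB types), the standard circle-versus-axis-aligned-rectangle overlap test looks like this:

```cpp
#include <algorithm>

struct Circle { float x, y, r; };
struct AABB   { float minX, minY, maxX, maxY; };

// Clamp the circle's centre onto the rectangle, then compare the
// squared distance against the squared radius (no sqrt needed).
bool overlaps(const Circle& c, const AABB& b) {
    float cx = std::max(b.minX, std::min(c.x, b.maxX));
    float cy = std::max(b.minY, std::min(c.y, b.maxY));
    float dx = c.x - cx;
    float dy = c.y - cy;
    return dx * dx + dy * dy <= c.r * c.r;
}
```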
For this kind of problem, I'd recommend you write your own library.
Yes, you can find this kind of feature in existing libraries, however you'll learn a lot more from writing your own.
I recommend looking into graph and tree data structures.
I'm coding an AI engine for a simple board game. My simple implementation for now is to iterate over all possible board states, weight each one according to the game rules and my simple algorithm, and select the best move according to that score.
As the scoring algorithm is totally stateless, I want to save computation time by creating a hash table of some (all?) board configurations and getting the score from there instead of calculating it on the fly.
My questions are:
1. Is my approach logical? (and if not, can you give me some tips to better it? :))
2. What is the most suitable thread-safe STL container for my needs? I'm thinking of using a char array (the board configuration) as the key and the score as the value.
3. Can you give some tips for making my AI a killer one? :)
edit: more info:
The board is 10x10 and there are two players, each with 10 pawns. The rules are much like checkers.
Yes, it's common to store evaluated boards in a hash table; it's called a transposition table. An STL container you could use is std::vector. In general you have to create a hash function (e.g. Zobrist hashing). The hash function calculates a hash value for a particular board; the result of hash_value modulo HASH_TABLE_SIZE is the index into the std::vector.
A transposition table entry can hold more information than just the board score and best move: you can also store the depth to which the board was evaluated and whether the evaluated score (in case you are doing alpha-beta search) is exact, an upper bound, or a lower bound.
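As a rough illustration of those pieces (the table size, the piece encoding, and the commented-out Move type are all placeholder assumptions), a Zobrist hash plus a transposition table might look like this:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

constexpr int SQUARES = 100;          // 10x10 board
constexpr int PIECE_KINDS = 4;        // e.g. pawn/king for each player
constexpr std::size_t TABLE_SIZE = 1 << 20;

enum class Bound : std::uint8_t { Exact, Upper, Lower };

struct Entry {
    std::uint64_t key = 0;   // full hash, to detect index collisions
    int score = 0;
    int depth = -1;          // search depth this score is valid for
    Bound bound = Bound::Exact;
    // Move bestMove;        // your own Move type would go here
};

// One random 64-bit number per (square, piece kind) pair.
std::array<std::array<std::uint64_t, PIECE_KINDS>, SQUARES> zobrist;

void initZobrist() {
    std::mt19937_64 rng(12345);
    for (auto& square : zobrist)
        for (auto& value : square) value = rng();
}

// XOR together the numbers for every piece on the board (-1 = empty).
// Because XOR is its own inverse, a move updates the hash incrementally:
// hash ^= zobrist[from][p]; hash ^= zobrist[to][p];
std::uint64_t hashBoard(const std::array<int, SQUARES>& board) {
    std::uint64_t h = 0;
    for (int sq = 0; sq < SQUARES; ++sq)
        if (board[sq] >= 0) h ^= zobrist[sq][board[sq]];
    return h;
}

std::vector<Entry> table(TABLE_SIZE);

Entry& probe(std::uint64_t hash) { return table[hash % TABLE_SIZE]; }
```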
I can recommend the chessprogramming site, where I have learned a lot. Look for the terms alpha-beta, transposition table, Zobrist hashing, and iterative deepening. There are also good papers for further reading:
T. A. Marsland - Computer Chess and Search
T. A. Marsland - A Review of Game-Tree Pruning
A. L. Zobrist - A New Hashing Method with Application for Game Playing
J. Schaeffer - The Games Computers (and People) Play
Your approach is logical. You should read about, and maybe try to use, the Minimax algorithm:
http://en.wikipedia.org/wiki/Minimax
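For reference, the core of minimax is only a few lines. Here is a sketch against a hypothetical game interface (Board, Move, generateMoves, applyMove, gameOver, and evaluate are all stand-ins for your own code):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Hypothetical game interface; substitute your own implementations.
struct Board { /* ... */ };
struct Move  { /* ... */ };
std::vector<Move> generateMoves(const Board&);
Board applyMove(const Board&, const Move&);
bool gameOver(const Board&);
int evaluate(const Board&);  // stateless score, from the maximizer's view

// Plain minimax to a fixed depth; alpha-beta pruning and a
// transposition table (see above) are the usual next steps.
int minimax(const Board& b, int depth, bool maximizing) {
    if (depth == 0 || gameOver(b))
        return evaluate(b);
    int best = maximizing ? INT_MIN : INT_MAX;
    for (const Move& m : generateMoves(b)) {
        int score = minimax(applyMove(b, m), depth - 1, !maximizing);
        best = maximizing ? std::max(best, score) : std::min(best, score);
    }
    return best;
}
```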
I think that, except for a game like tic-tac-toe, the number of states would be much too big, so you should work on making the scoring fast.
Chess and checkers can be done with this approach, but it's not one I'd recommend.
If you go this route, then I would use some form of tree. If you think about it, every move reduces the total possibilities that existed before the move was made. Plus, this allows levels of difficulty: don't pick the best move all the time; sometimes pick the second best.
The reason I wouldn't go this route is that it's not generally fun. People pick up on this intuitively, and they feel it's unfair. I wrote a Connect Four game that was unbeatable, but it was rule-based rather than based on board state. It was dull. Every move was met with the same response. I think that is what happens with this approach as well. Also, it depends on why you are doing this: if it's to learn AI, very little AI is done like this; if it's to have a fun game, it usually isn't done like this either; if it's for the reasons Deep Blue was made, to stretch the limits of what a computer can do, then sure.
I would either use piece-based individual AIs and then select the one with the most compelling argument, or I would use a variation of hill climbing and put a kind of strategy height onto the board. It depends on how much support pieces give one another. For the individual AIs I would use neural nets.
A strategy-height system would be good for an FPS where soldiers want to know which path has the most cover. Neural nets give each entity more personality. You can even use cascading neural nets, where one is the strategy and the second is the personality.
Intuitively, it would seem that given a dozen or so 2d images from different angles of almost any object, it should be easy to construct a 3d representation of that object. Subsequently a library of 3d representations attained in this way could be used to identify new 2d images.
What literature is there along these lines, and why has it not yet produced strong object recognition?
It is your word "intuitively" that is causing you trouble here. Your brain is not designed to be very good at certain tasks, like multiplying thousands of numbers in an instant. However, for raw computational power your brain makes the fastest computer look like mere tiddlywinks (a neural response time of only about 10 milliseconds, but all those 10^14 or so synapses working in parallel beat any modern machine). It's just that your brain is designed to solve problems that are immensely more computationally complex: recognizing objects in a picture, parsing sound data and picking out individual speakers amidst background noise, learning to classify and deal with tens of thousands of types of objects.
The incredibly computationally intense things your brain is designed to do really well are the things that, to a person, seem "intuitive". The things it isn't designed to do really well seem "unintuitive" or difficult. But the raw computation needed for strong object recognition (because there are just so MANY kinds of objects, many of which have subobjects, multiple classifications, and non-rigid forms, e.g. "trousers", "water", "dog") is WAY more than what is needed to accomplish the things one considers possible only for a computer. Things like using "common sense" to solve an everyday problem are similarly trivial for a person but computationally incredibly complex.
What you want to do is indeed possible, but there are quite a few buts. For the 3D reconstruction:
For anything but the simplest shapes you need more than just a few dozen images.
The shape you are reconstructing needs to have a lot of recognizable features that look similar enough from different angles so that you can match them.
Lighting needs to be fairly constant across your entire set of images; otherwise shadows will throw you off (or you need even more images).
Even with very feature-rich objects (i.e. a lot of variation in colour and shape), 3D reconstruction accuracy from any matched pair of features is going to be terrible if you do not have full knowledge of the parameters (position, view direction, and opening angle) of the camera used to take each picture.
These are all problems that can be solved, so suppose you did solve them, and now you have a new picture of the object that you want to match to your 3D shape.
You could of course try to find a 2D projection of your shape that fits the new picture, but the search space there is enormous. It would probably be a lot easier and faster to use the feature-finding and matching system you built for the initial 3D reconstruction to match the new picture directly against the existing set, and to find where it fits on the object that way.
So once you've solved the problem of creating the initial 3D reconstruction your second step is basically done as well.
Photosynth is a brilliant example of these two steps. Browse the site, try to find some of the references they have there.
As for your final step, strong object recognition, just imagine the search space! What you need for strong object recognition, apart from a good representation of the objects you want to recognize, is a good way to search the space of objects you know, and a good way to represent your new object (the image of an object in this case) in that space. This is something I know nearly nothing about.
For just matching the same object in different 2D images there are SIFT features. But I don't think this translates well to 3D.
Note that what you're describing is instance recognition. Computers can indeed do a good job of instance recognition these days. For example, Google Goggles is very good at recognizing landmarks like the Golden Gate Bridge and the Eiffel Tower.
However, computers are much less good at category recognition and classification. Creating dozens of 2D snapshots of all possible objects, under all types of lighting conditions, and so on, becomes intractable very quickly; the fact that certain objects, such as a dog, can move around makes the space of possibilities even bigger.
Also, from a biological standpoint, our visual field contains around 100 million pixels. Graphics cards have only now started to become capable of rendering that much data in real time. Making sense of that much data is even more computationally intensive.
One often talks about having a machine reach a 5-year-old's ability to process information. But let's think about how much data that is: 100 million pixels with 3 color channels and 1 byte per channel is 300 MB per frame. Multiply that by 30 frames per second, 31,556,926 seconds per year, and 5 years, and you end up with roughly 1.4 exabytes (1.4x10^18 bytes).