How would you store a game level for a Dizzy-like adventure game? How would you specify walkable areas and graphics? Is it tile-based or pixel-based, or can walkable surfaces be described by vectors?
Very old adventure games (like Sierra's quests from the 80s) used to actually maintain a separate bitmap of the entire screen that represented z-depth and materials to determine where your character could go and where it would be hidden. They would use pixel sampling to check where their small sprites would go.
Though current machines are faster, long side-scrolling levels make this sort of approach impractical, IMHO. You have to go with sparser representations.
One option is to "reduce" your game into invisible tiles, which are easier to represent. The main problem with this is that it can constrain your design (e.g., it's difficult to do diagonal platforms), and it can make the animations very shoddy (e.g., your characters' feet not actually touching the platform). This option can work, IMHO, for Zelda-like adventure games, but not for action games.
A second option is to represent the game world via some vector representation and implement some collision detection. I would personally go with the second solution, especially if you can be smart about how you organize your data structures to minimize access time (e.g., have faster access to the subset of world elements close to your character's current position).
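To make that concrete, here is a minimal sketch of that idea, assuming walkable surfaces stored as line segments and bucketed into a coarse uniform grid so collision tests only touch geometry near the character (the SegmentGrid/Segment names and the cell size are my own, not from any particular engine):

// Sketch: walkable surfaces as line segments, bucketed into a coarse grid
// so that collision queries only look at segments near the player.
#include <algorithm>
#include <cmath>
#include <unordered_map>
#include <vector>

struct Segment { float x1, y1, x2, y2; };

class SegmentGrid {
public:
    explicit SegmentGrid(float cellSize) : cell(cellSize) {}

    void add(const Segment& s) {
        int index = static_cast<int>(segments.size());
        segments.push_back(s);
        // Register the segment in every cell its bounding box touches.
        for (int cx = cellOf(std::min(s.x1, s.x2)); cx <= cellOf(std::max(s.x1, s.x2)); ++cx)
            for (int cy = cellOf(std::min(s.y1, s.y2)); cy <= cellOf(std::max(s.y1, s.y2)); ++cy)
                buckets[key(cx, cy)].push_back(index);
    }

    // Indices of segments in the cell containing (x, y) and its 8 neighbours --
    // the only candidates worth testing against the player this frame.
    // (A segment spanning several cells may appear more than once; fine for a sketch.)
    std::vector<int> nearby(float x, float y) const {
        std::vector<int> result;
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                auto it = buckets.find(key(cellOf(x) + dx, cellOf(y) + dy));
                if (it != buckets.end())
                    result.insert(result.end(), it->second.begin(), it->second.end());
            }
        return result;
    }

    std::vector<Segment> segments;

private:
    int cellOf(float v) const { return static_cast<int>(std::floor(v / cell)); }
    static long long key(int cx, int cy) {
        return (static_cast<long long>(cx) << 32) ^ static_cast<unsigned int>(cy);
    }

    float cell;
    std::unordered_map<long long, std::vector<int>> buckets;
};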
I wouldn't be surprised if there are 2D game engines available that provide this sort of capability, as there are definitely 3D engines that do. In fact, you may find it easier to take an existing 3D game engine and use it to render 2D.
The Dizzy game is probably using a tile-based system. The game world is made up of a palette of tiles that are repeated throughout the level. Each tile would have three elements - the image drawn to the screen, the z-buffer to allow the main character to walk behind parts of the image and a collision map. The latter two would be implemented as monochrome images such that:
colour | z map               | collision
-------|---------------------|---------------
black  | draw Dizzy in front | collide
white  | draw Dizzy behind   | don't collide
Storing these as monochrome images saves a lot of RAM too.
You would have a level editor that displays a grid onto which tiles can be dragged and dropped.
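As a rough illustration of that layout (the field names and the 16x16 tile size are my assumptions, with the two masks packed one bit per pixel):

// Sketch of a tile record: screen image plus 1-bit z and collision masks.
#include <array>
#include <cstdint>

constexpr int TILE_SIZE = 16;

struct Tile {
    std::array<uint8_t, TILE_SIZE * TILE_SIZE> pixels; // indices into the tile's image data
    std::array<uint16_t, TILE_SIZE> zMask;             // bit set = draw Dizzy in front
    std::array<uint16_t, TILE_SIZE> collisionMask;     // bit set = solid, collide
};

// One bit per pixel: a 16x16 mask is 16 uint16_t rows (32 bytes),
// versus 256 bytes for a byte-per-pixel map.
inline bool isSolid(const Tile& t, int x, int y) {
    return (t.collisionMask[y] >> x) & 1u;
}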
That specific game is tile-based, with pixel-perfect collision. The collision was controlled by a single bit in the tile's colour byte (the brightness bit), and you could also mirror the tile by setting the flashing bit of the colour.
The tiles, however, could only be placed at even x,y coordinates (I suspect this was done to help the collision system a bit.)
The collision system involved a persistent check around the hero. The rules were roughly:
- If it finds a non-collision pixel row below the hero, it drops the hero by 1 pixel.
- If there is a collision intersection with the hero, it raises the hero by 1 pixel.
- When moving left or right, it checks in that direction:
  - if it finds a wall (collision height of more than 4 pixels), it denies movement in that direction;
  - if it finds a climbable box (collision height of up to 4 pixels), it allows movement in that direction.
- If there is enough headroom it allows jumping; otherwise, it stops at the last possible free position.
Combining these simple rules gives a very smooth collision negotiation, able to walk even arbitrary slopes without any extra processing cost.
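A minimal sketch of those rules against a 1-bit collision map. Everything here (solidAt, the hero rectangle, the 4-pixel step limit as a literal) is my own illustrative naming, not the original game's code, and the jump/headroom rule is omitted for brevity:

// Sketch: per-frame collision negotiation for the hero on a 1-bit collision map.
bool solidAt(int x, int y);   // assumed to query the collision map, defined elsewhere

// Count consecutive solid pixels going up from footY in column x (capped by heroHeight).
int wallHeight(int x, int footY, int heroHeight) {
    int h = 0;
    for (int y = footY; y > footY - heroHeight; --y) {
        if (!solidAt(x, y)) break;
        ++h;
    }
    return h;
}

void negotiate(int& heroX, int& heroY, int heroWidth, int heroHeight, int moveDir) {
    const int footY = heroY + heroHeight;   // the pixel row just below the feet

    // Rule 1: nothing solid under the feet -> drop the hero by one pixel.
    bool supported = false;
    for (int x = heroX; x < heroX + heroWidth; ++x)
        if (solidAt(x, footY)) { supported = true; break; }
    if (!supported) ++heroY;

    // Rule 2: solid pixels intersecting the hero's bottom row -> raise by one pixel.
    for (int x = heroX; x < heroX + heroWidth; ++x)
        if (solidAt(x, footY - 1)) { --heroY; break; }

    // Rule 3: moving sideways, a step of up to 4 solid pixels is climbable
    // (rule 2 will lift the hero over it on following frames); anything taller is a wall.
    if (moveDir != 0) {
        int aheadX = (moveDir > 0) ? heroX + heroWidth : heroX - 1;
        if (wallHeight(aheadX, footY - 1, heroHeight) <= 4)
            heroX += moveDir;
    }
}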
In creating such a game, I would use individual tiles, assemble them on layers (background, foreground, etc.) and render them. I would build a separate collision map from the tile attributes that mark a tile as solid, and use that separate map to negotiate the hero's collision and movement.
Similar to this demo: http://games.catevaclickuri.ro/
I am building a 2d platformer game that might have a good hundred or so rotated sprites (characters, rockets, bullets, etc) that I want to let collide with a wallmask.
I'm currently using Allegro 5, which does not support 1-bit bitmaps, which would be natural to use for this.
Is it better for me to attempt creating my own bitmap implementation and do some hack for rotation (like caching rotated sprites), or to use one of the pixel formats from https://www.allegro.cc/manual/5/allegro_pixel_format and Allegro's get_pixel()?
And for the collision testing itself, should I blit the character masks into the alpha channel of the wallmask so I only test for a single value, or is it better to just do
if (wallmask[x][y] && character_mask[x+o_x][y+o_y]) { collide(); }
for all relevant x and y?
Thank you.
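For reference, this is roughly what the second idea looks like with masks packed one bit per pixel (the Mask struct and collides() are illustrative names of mine, not Allegro API):

// Sketch: 1-bit masks packed into 32-bit words, tested pixel by pixel.
#include <cstdint>
#include <vector>

struct Mask {
    int width = 0, height = 0;
    std::vector<uint32_t> words;          // ceil(width/32) words per row
    int rowWords() const { return (width + 31) / 32; }
    bool bit(int x, int y) const { return (words[y * rowWords() + x / 32] >> (x % 32)) & 1u; }
};

// True if any set pixel of 'sprite', placed at (ox, oy), overlaps a set pixel of 'wall'.
// For simplicity this tests bit by bit; a faster version would shift and AND whole words.
bool collides(const Mask& wall, const Mask& sprite, int ox, int oy) {
    for (int y = 0; y < sprite.height; ++y) {
        int wy = y + oy;
        if (wy < 0 || wy >= wall.height) continue;
        for (int x = 0; x < sprite.width; ++x) {
            int wx = x + ox;
            if (wx < 0 || wx >= wall.width) continue;
            if (sprite.bit(x, y) && wall.bit(wx, wy)) return true;
        }
    }
    return false;
}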
I have decided to rewrite an old Zatacka clone of mine. The old one, running under Allegro 4, uses a logic bitmap: a bitmap kept for non-display purposes that directly mirrors the visible "what's on the screen and doesn't move" bitmap, except that the integers stored in it represent the logical meaning of the things on screen, because the game got quite colorful. So the things that players see may be of any color possible, but the game just remembers what kind of object each pixel represents.
The new clone is not supposed to use Allegro, so I could write the logic bitmap code myself. That said, I would appreciate if someone suggested some more efficient and precise alternatives.
The structure must be kept in sync with the bitmap/texture visible to players. Think of a Worms-style game, but with ground-type variations invisible to the player. In addition, the following methods must be implemented:
- Checking whether all pixels in a circle belong to a small (~6) set of "colors" given as a parameter.
- Painting all pixels in a circle with a single "color".
- Painting all pixels in a circle with a single "color", except (or only) those whose "color" is in a small set given as a parameter.
- Painting the silhouette of a rotated (preprocessed, if you wish) bitmap with a single "color". (That's the tricky one: would interpreting the bitmap as a dumb polygon with loads of right angles do the job?)
This is the minimum. If your structure supports shapes other than circles, that's great.
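A minimal sketch of such a structure, a flat array of small integer "colors" with circle queries and fills (the class and method names are mine):

// Sketch: a "logic bitmap" storing one small integer per pixel.
#include <cstdint>
#include <initializer_list>
#include <vector>

class LogicBitmap {
public:
    LogicBitmap(int w, int h) : width(w), height(h), cells(w * h, 0) {}

    uint8_t at(int x, int y) const { return cells[y * width + x]; }

    // True if every pixel inside the circle belongs to the given set of "colors".
    bool circleIsOnly(int cx, int cy, int r, std::initializer_list<uint8_t> allowed) const {
        bool ok = true;
        forEachInCircle(cx, cy, r, [&](int x, int y) {
            uint8_t c = at(x, y);
            bool found = false;
            for (uint8_t a : allowed) if (a == c) { found = true; break; }
            if (!found) ok = false;
        });
        return ok;
    }

    // Paint every pixel inside the circle with a single "color".
    void paintCircle(int cx, int cy, int r, uint8_t color) {
        forEachInCircle(cx, cy, r, [&](int x, int y) { cells[y * width + x] = color; });
    }

private:
    // Visit every in-bounds pixel inside the circle of radius r around (cx, cy).
    template <typename F>
    void forEachInCircle(int cx, int cy, int r, F f) const {
        for (int y = cy - r; y <= cy + r; ++y) {
            if (y < 0 || y >= height) continue;
            for (int x = cx - r; x <= cx + r; ++x) {
                if (x < 0 || x >= width) continue;
                int dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= r * r) f(x, y);
            }
        }
    }

    int width, height;
    std::vector<uint8_t> cells;
};

The masked and silhouette variants follow the same pattern: test the current value before writing.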
How does one have a scrolling map and detect collision on it?
I have tried a tile map but don't want to use this technique.
I would like to learn how a game's background can be scrolled and how objects are detected for collision.
One example would be this game.
If one is not using tilemaps, how can the world be scrolled while physical objects still detect collisions with the character sprite?
Please point me in the right direction on this topic.
Simple collision detection is done by checking to see if bounding boxes (or sometimes, bounding circles) overlap. That is, given object A and object B, if their bounding boxes overlap, then a collision has occurred.
If you want to get fancy, you create polygons for your objects and determine if the bounding polygons overlap. That's more complicated and harder to get right, and also involves considerably more processing.
It's not really much different from using a tilemap. With a tilemap, you're doing collision detection against the tiles. Without a tilemap you're doing collision detection against the bounding boxes of individual objects. The logic is almost identical.
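For illustration, a minimal axis-aligned bounding box (AABB) overlap test (the struct and field names are mine; y is assumed to grow downward):

// Sketch: two objects collide if their axis-aligned bounding boxes overlap.
struct Box {
    float x, y;          // top-left corner in world coordinates
    float width, height;
};

bool overlaps(const Box& a, const Box& b) {
    return a.x < b.x + b.width  && a.x + a.width  > b.x &&   // overlap on x
           a.y < b.y + b.height && a.y + a.height > b.y;     // overlap on y
}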
Additional info:
I think the problem you're struggling with is that you're thinking in terms of screens. It's important when writing a game to separate the concept of the game world (the model) from the presentation (the screen).
For example, a simple side scroller game might consist of many "screens" full of information. Whereas the screen might be 1000 units wide, the game's world could be 10,000 or 100,000 units wide. Note that I say "units" rather than "pixels". A game's unit of measure could be inches, centimeters, miles, or whatever. All of your game logic should work in terms of world coordinates. How you project that onto the view (the screen) is irrelevant to what goes on in the world.
Every object in the world is assigned a position in world coordinates. That includes the player sprite, which is moving around in the world. All collision detection and other interaction of objects in the world is done in terms of world coordinates.
To simplify things, assume that there is a one-to-one mapping between world units and screen units. That is, your world is 10,000 pixels wide, and your screen is 1,000 pixels wide. The world, then, is "ten screens" in size. More correctly, the screen can view 1/10 of the world. If that's the case, then your view can be defined by the left-most world coordinate that's visible. So if the world coordinate 2,000 is at the left of your screen, the rightmost column of pixels on your screen will be world coordinate 2,999.
This implies that you need world-to-screen and screen-to-world transformation functions. For this case, those functions are very simple. If we assume that there's no vertical scrolling, then only the x coordinate needs to be transformed, by subtracting the view origin's x coordinate from the world x coordinate: world.x - viewOrigin.x. So in the case above, the screen coordinate of an object that's at world x coordinate 2,315 would be (2,315 - 2,000), or 315. That's the x coordinate of the object's position on the screen, in the current view.
The screen-to-world is the opposite: add the object's screen coordinate to the view's origin: 315 + 2,000 = 2,315.
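As code, those two transforms are just a subtraction and an addition (viewOrigin is my name for the world coordinate at the left edge of the screen):

// Sketch: world <-> screen transforms for a horizontally scrolling view.
struct Vec2 { float x, y; };

Vec2 viewOrigin = { 2000.0f, 0.0f };   // world coordinate at the screen's top-left

Vec2 worldToScreen(Vec2 world) {
    return { world.x - viewOrigin.x, world.y - viewOrigin.y };
}

Vec2 screenToWorld(Vec2 screen) {
    return { screen.x + viewOrigin.x, screen.y + viewOrigin.y };
}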
I can't stress enough how important it is to maintain this separation between world and view coordinates. All of your game logic should operate in terms of world coordinates. You update the view to show what's happening in the world. In a side scroller, you typically move the view left and right based on the player's actions. If the player moves left, you move the view origin left. If the player moves right, you move the view origin to the right.
Once you internalize this idea, things become much simpler. Everything done in the game world happens in the world. You make sure that the world state is consistent, and then you worry about displaying a view of the world.
There's no easier way (that I know of) to build a game that's larger than the screen. No matter how you build it, you have to have some concept of showing just part of the world on the view. By formalizing it, you separate the presentation from the game logic and things get much, much simpler.
You are going to want to set your camera dimensions to the dimensions of the part of your level that you want rendered. You can get the level coordinates according to whatever your camera is supposed to be fixed on, say your hero.
Sprite collision is relatively easy. Use the height/width of the two sprites you're testing against (I don't know about cocos2d, but in most APIs a sprite's position is its upper-left corner):
// Overlap on both axes is required for a collision (note && rather than ||).
if (player.Pos.x <= (object.Pos.x + object.width) && (player.Pos.x + player.width) >= object.Pos.x)
{
    // With the position at the upper-left corner and y growing upward,
    // a sprite spans from Pos.y down to Pos.y - height.
    if (player.Pos.y >= (object.Pos.y - object.height) && (player.Pos.y - player.height) <= object.Pos.y)
    {
        // collision detected
    }
}
As seen in the image, I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse to delete, move, etc., in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by casting a pick ray and checking whether the ray falls inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach to selection (a code sketch follows the list):
- enable and set the scissor test to a 1x1 window at the cursor position;
- draw the scene with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour becomes the object ID for picking;
- call glReadPixels and retrieve the colour, which then serves to identify the picked object;
- clear the buffers, reset the scissor to the normal size and draw the scene normally.
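A minimal sketch of that picking pass in legacy/compatibility OpenGL. drawEntityGeometry() is a placeholder of mine for however you draw each entity, and the ID-to-colour packing is just one possible choice:

// Sketch of the colour-picking pass described above (legacy/compatibility GL).
#include <GL/gl.h>

void drawEntityGeometry(int index);   // assumed: draws entity i's geometry, defined elsewhere

int pickObjectAt(int mouseX, int mouseY, int windowHeight, int entityCount) {
    // Restrict rasterization to the single pixel under the cursor.
    glEnable(GL_SCISSOR_TEST);
    glScissor(mouseX, windowHeight - mouseY - 1, 1, 1);   // GL's window origin is bottom-left
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);

    // Draw every entity in a unique flat colour; the colour encodes its ID (colour 0 = background).
    for (int i = 0; i < entityCount; ++i) {
        unsigned id = static_cast<unsigned>(i) + 1;
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        drawEntityGeometry(i);
    }

    // Read back the one pixel and decode the ID.
    unsigned char rgb[3] = { 0, 0, 0 };
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);

    glDisable(GL_SCISSOR_TEST);
    // The caller should now clear and render the frame normally.
    return int(rgb[0] | (rgb[1] << 8) | (rgb[2] << 16)) - 1;   // -1 means nothing was picked
}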
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnproject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests down and then do a bounding box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the backbuffer as a different color. I then used glReadPixels to get the color under the mouse, and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, I can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algorithm - I described the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ... n-0) in it and make a vec out of them as before. I.e. walk the contour.
Now take the cross product of those two (the contour edge and the mouse vector), instead of between edge pairs as I said before; do that for all the pairs and add all the resulting vectors up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking rounding errors into account) then you're outside the shape, regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed. If you're inside then they all add up. It's actually Gauss's law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note that "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels, not the space the lines bound, which is what I read as what you wish to do.
You could use Kos's answer, but in order to render the interior you would need to solid-fill it, which would involve converting all of your contours to convex shapes, which is painful. So I think that approach would work sometimes and give the wrong answer in other cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coord, generate the view to mouse pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Then take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take the dot product with the mouse pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are hit by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work, but it's not too bad and it will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones away from the mouse vector by doing the same math as for the full contour but only on the box's 4 sides, and if the mouse isn't inside the box then it can't be inside the contour either.
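If you prefer a more conventional formulation of that CPU test than the cross-product summation above, you can project each contour's vertices to window coordinates with gluProject and run a standard crossing-number point-in-polygon test against the mouse position (converted to GL's bottom-up window y). A sketch, with my own container and function names:

// Sketch: pick a contour by projecting its vertices to window space and
// running a crossing-number point-in-polygon test against the mouse position.
#include <GL/glu.h>
#include <vector>

struct Vec3 { double x, y, z; };

// Standard even-odd (crossing number) test in 2D window coordinates.
bool pointInPolygon(const std::vector<double>& xs, const std::vector<double>& ys,
                    double px, double py) {
    if (xs.size() < 3) return false;
    bool inside = false;
    for (size_t i = 0, j = xs.size() - 1; i < xs.size(); j = i++) {
        if (((ys[i] > py) != (ys[j] > py)) &&
            (px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]))
            inside = !inside;
    }
    return inside;
}

bool mouseOverContour(const std::vector<Vec3>& contour, double mouseX, double mouseYGl,
                      const GLdouble model[16], const GLdouble proj[16], const GLint view[4]) {
    std::vector<double> xs, ys;
    for (const Vec3& v : contour) {
        GLdouble wx, wy, wz;
        gluProject(v.x, v.y, v.z, model, proj, view, &wx, &wy, &wz);
        xs.push_back(wx);
        ys.push_back(wy);   // note: GL window y runs bottom-up
    }
    return pointInPolygon(xs, ys, mouseX, mouseYGl);
}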
The first is easy to implement and widely used.
I'm working on a little 2D platformer/fighting game with C++ and SDL, and I'm having quite a bit of trouble with the collision detection.
The levels are made up of an array of tiles, and I use a for loop to go through each one (I know it may not be the best way to do it, and I may need help with that too). For each side of the character, I move it one pixel in that direction and check for a collision (I also check to see if the character is moving in that direction). If there is a collision, I set the velocity to 0 and move the player to the edge of the tile.
My problem is that if I check for horizontal collisions first, and the player moves vertically at more than one pixel per frame, it handles the horizontal collision and moves the character to the side of the tile even if the tile is below (or above) the character. If I handle vertical collision first, it does the same, except it does it for the horizontal axis.
How can I handle collisions on both axes without having those problems? Is there any better way to handle collision than how I'm doing it?
XNA's 2D platformer example uses tile-based collision as well. The way they handle it there is pretty simple and may be useful for you. Here's a stripped-down explanation of what's in there (leaving out the demo-specific stuff):
- After applying movement, it checks for collisions.
- It determines the tiles the player overlaps based on the player's bounding box.
- It iterates through all of those tiles, and if the tile being checked isn't passable:
  - It determines how far on the X and Y axes the player is overlapping the non-passable tile.
  - Collision is resolved only on the shallow axis: if Y is the shallow axis (abs(overlap.y) < abs(overlap.x)), then position.y += overlap.y; likewise if X is the shallow axis.
  - The bounding box is updated based on the position change.
- Move on to the next tile...
It's in player.cs in the HandleCollisions() function if you grab the code and want to see what they specifically do there.
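A stripped-down sketch of that shallow-axis resolution, written in C++ rather than the sample's C#; the Rect struct and helper names are mine, not the XNA code:

// Sketch: resolve tile collisions by pushing the player out along the
// shallower of the two overlap axes, one tile at a time.
#include <cmath>

struct Rect { float x, y, width, height; };   // top-left corner + size

// Signed penetration depth of 'a' into 'b' on each axis; zero means no overlap on that axis.
void overlapDepth(const Rect& a, const Rect& b, float& dx, float& dy) {
    float distX = (a.x + a.width / 2)  - (b.x + b.width / 2);
    float distY = (a.y + a.height / 2) - (b.y + b.height / 2);
    float minX  = (a.width + b.width) / 2;
    float minY  = (a.height + b.height) / 2;
    dx = (std::fabs(distX) >= minX) ? 0.0f : (distX > 0 ? minX - distX : -minX - distX);
    dy = (std::fabs(distY) >= minY) ? 0.0f : (distY > 0 ? minY - distY : -minY - distY);
}

// Called for each non-passable tile that the player's bounds overlap.
void resolveAgainstTile(Rect& player, const Rect& tile) {
    float dx, dy;
    overlapDepth(player, tile, dx, dy);
    if (dx == 0.0f || dy == 0.0f) return;      // not actually overlapping on both axes
    if (std::fabs(dy) < std::fabs(dx))
        player.y += dy;                        // Y is the shallow axis
    else
        player.x += dx;                        // X is the shallow axis
}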
Yes. Vector-based collision will be much better than tile-based. Define each edge of a tile as a line (there are shortcuts, but ignore them for now). Now, to see if a collision has occurred, find the closest horizontal and vertical lines. If you take the sign of lastPos.x * LineVector.y - lastPos.y * LineVector.x and compare it with the sign of thisTurnsPos.x * LineVector.y - thisTurnsPos.y * LineVector.x, and the signs of those two values differ, you have crossed that line this tick. This doesn't check whether you've crossed within the ends of the line segment, though. You can form a dot product between the same LineVector and your current position (a little error here, but probably good enough), and if it is either negative or greater than the line's magnitude squared, you aren't within that line segment and no collision has occurred.
Now this is fairly complex, and you could probably get away with a simple grid check to see if you've crossed into another square's area. But! The advantage of doing it with vectors is that it solves the moving-faster-than-the-size-of-the-collision-box problem and (more importantly) you can use non-axis-aligned lines for your collisions. This system works for any 2D vectors (and, with a little massaging, 3D as well). It also allows you to slide your character along the edge of the collision box rather easily, because you've already done 99% of the math needed to find where you are supposed to be after a collision.
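A sketch of that sign test, taking positions relative to the segment's start point (which the shorthand formulas above leave out) so the side-of-line and segment-extent checks come out right; the names are my own:

// Sketch of the side-of-line crossing test: the segment is a start point plus
// a direction vector, and positions are taken relative to that start point.
struct V2 { float x, y; };

static float cross(V2 a, V2 b) { return a.x * b.y - a.y * b.x; }
static float dot(V2 a, V2 b)   { return a.x * b.x + a.y * b.y; }
static V2 sub(V2 a, V2 b)      { return { a.x - b.x, a.y - b.y }; }

struct Edge { V2 start; V2 dir; };   // dir = end - start

// True if the movement from lastPos to thisPos crossed the edge's segment.
bool crossedEdge(const Edge& e, V2 lastPos, V2 thisPos) {
    // Which side of the (infinite) line was each position on?
    float sideBefore = cross(sub(lastPos, e.start), e.dir);
    float sideAfter  = cross(sub(thisPos, e.start), e.dir);
    if ((sideBefore > 0) == (sideAfter > 0))
        return false;                          // same side: the line wasn't crossed

    // Check that the crossing happened within the segment, not past its ends:
    // project the current position onto the edge direction.
    float t = dot(sub(thisPos, e.start), e.dir);
    return t >= 0 && t <= dot(e.dir, e.dir);   // 0..|dir|^2 means within the segment
}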
I've glossed over a couple of implementation details, but I can tell you that I've used the above method in countless commercial video games and it has worked like a charm. Good luck!