I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw it pixel by pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea. Any time there is an explosion or other terrain-deforming event, you draw a circle or other shape on the terrain layer, using the terrain layer itself as a drawing mask (so only the part of the circle that overlaps existing terrain is drawn), to wipe out part of the terrain. Use a transparent/mask-color brush for the fill and some color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur or you could even render them in memory each frame if there aren't too many deformations to render.
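If the terrain layer lives in an RGBA texture with a CPU-side copy of its pixels, the "wipe out part of the terrain" step could look roughly like this (just a sketch; the names terrainPixels, TERRAIN_W/TERRAIN_H and carveHole are made up for illustration, and the colored rim from the "thick pen" is left out):

    // Sketch: carve a circular hole into a CPU-side RGBA terrain layer and
    // upload only the changed rectangle of the texture.
    #include <GL/gl.h>
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    void carveHole(std::vector<uint8_t>& terrainPixels, GLuint terrainTex,
                   int TERRAIN_W, int TERRAIN_H, int cx, int cy, int radius)
    {
        int x0 = std::max(cx - radius, 0), x1 = std::min(cx + radius, TERRAIN_W - 1);
        int y0 = std::max(cy - radius, 0), y1 = std::min(cy + radius, TERRAIN_H - 1);
        if (x0 > x1 || y0 > y1) return;   // explosion entirely off the layer

        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius)
                    terrainPixels[4 * (y * TERRAIN_W + x) + 3] = 0;  // alpha = 0 -> sky shows through

        // Re-upload just the affected sub-rectangle from the full-size CPU image.
        glBindTexture(GL_TEXTURE_2D, terrainTex);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, TERRAIN_W);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, x0);
        glPixelStorei(GL_UNPACK_SKIP_ROWS, y0);
        glTexSubImage2D(GL_TEXTURE_2D, 0, x0, y0, x1 - x0 + 1, y1 - y0 + 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, terrainPixels.data());
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
        glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    }

You would then draw the sky quad first and the terrain quad on top with blending enabled, so the zero-alpha pixels let the sky show through.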
I guess you'd be better off using texture-filled polygons with the correct mapping (a linear mapping that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshaping them as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give you an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a boolean operation (rectangle minus circle) on the texture (revealing the background) and on the 'walkable' path.
What I'm trying to say is you do a geometric boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
Split things up instead of relying only on GL draw methods.
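For the path half of that boolean update, here is a very simplified sketch assuming the walkable area is stored as one surface height per column, so no overhangs (subtractCircle and ground are illustrative names):

    // Sketch: subtract an explosion circle from a one-height-per-column
    // "walkable path". ground[x] is the terrain surface height at column x,
    // with y growing upwards. Caves/overhangs are ignored in this simple model.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    void subtractCircle(std::vector<float>& ground, float cx, float cy, float r)
    {
        int x0 = std::max(0, (int)std::floor(cx - r));
        int x1 = std::min((int)ground.size() - 1, (int)std::ceil(cx + r));
        for (int x = x0; x <= x1; ++x) {
            float dx = x - cx;
            float dy = std::sqrt(std::max(0.0f, r * r - dx * dx));
            float craterBottom = cy - dy;   // lowest point of the circle at this column
            if (ground[x] > craterBottom && ground[x] < cy + dy)
                ground[x] = craterBottom;   // surface drops to the crater floor
        }
    }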
I think I would start by drawing the foreground into the stencil buffer so the stencil buffer is set to 1 bits anywhere there's foreground, and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the alternative that seems obvious to me would be to enable blending, and just manipulate the alpha channel of your foreground texture -- set the alpha to 0 (transparent) where it's been affected by an explosion. IMO, the stencil buffer is a bit cleaner approach, but manipulating the alpha channel is pretty simple as well.
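In case it helps, here is a rough fixed-function sketch of that stencil approach (the window must be created with a stencil buffer; drawExplosionCircle, drawSky and drawForeground stand in for your own drawing code):

    #include <GL/gl.h>

    void drawExplosionCircle(float x, float y, float radius);  // your circle fan
    void drawSky();                                            // your background quad
    void drawForeground();                                     // your terrain quad

    void initStencil()                       // once, at startup
    {
        glClearStencil(1);                   // 1 = "foreground may be drawn here"
        glClear(GL_STENCIL_BUFFER_BIT);      // do NOT clear it again each frame
    }

    void carveExplosion(float x, float y, float radius)
    {
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 0, 0xFF);   // write 0 into the stencil...
        glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // ...without touching color
        drawExplosionCircle(x, y, radius);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDisable(GL_STENCIL_TEST);
    }

    void drawFrame()
    {
        drawSky();                           // sky everywhere
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, 1, 0xFF);    // foreground only where it wasn't blown away
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawForeground();
        glDisable(GL_STENCIL_TEST);
    }

The key point is that the stencil buffer is cleared to 1 only once and never per frame, so every explosion stays carved out.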
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments: for each column of the screen you have 0..n vertical lines that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of "0"s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since it's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
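A quick sketch of what the drawing part could look like, assuming one vertical span per column (columnTop and columnBottom are made-up names):

    // Sketch: ground stored as one vertical span per screen column, drawn as GL_LINES.
    #include <GL/gl.h>
    #include <vector>

    void drawGroundLines(const std::vector<float>& columnTop,
                         const std::vector<float>& columnBottom)
    {
        glBegin(GL_LINES);
        for (size_t x = 0; x < columnTop.size(); ++x) {
            glVertex2f((float)x, columnBottom[x]);  // bottom of the ground span
            glVertex2f((float)x, columnTop[x]);     // current ground surface
        }
        glEnd();
    }

Texture coordinates could be added per vertex in the same loop if you want the lines textured.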
Typically the way I have seen it done is to have each entity be a textured quad, then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles; then you only have to update the ones that have changed. Don't use glDrawPixels; it is probably the slowest approach possible (outside of reloading textures from disk every frame, though that would be close).
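As a rough illustration of the tile idea, assuming each tile keeps a CPU-side pixel copy and only dirty tiles get re-uploaded (Tile, TILE_SIZE and uploadDirtyTiles are invented names):

    #include <GL/gl.h>
    #include <cstdint>
    #include <vector>

    const int TILE_SIZE = 64;

    struct Tile {
        GLuint tex = 0;                // TILE_SIZE x TILE_SIZE RGBA texture
        bool dirty = false;            // set whenever this tile's pixels changed
        std::vector<uint8_t> pixels;   // TILE_SIZE * TILE_SIZE * 4 bytes
    };

    void uploadDirtyTiles(std::vector<Tile>& tiles)
    {
        for (Tile& t : tiles) {
            if (!t.dirty) continue;
            glBindTexture(GL_TEXTURE_2D, t.tex);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TILE_SIZE, TILE_SIZE,
                            GL_RGBA, GL_UNSIGNED_BYTE, t.pixels.data());
            t.dirty = false;
        }
    }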
I've written my own 3D Game Engine in the past few years and wanted to actually use it for a game.
I stumbled across the following problem:
I have multiple planes in my game, but let's talk about one single plane.
Naturally, planes are not able to dive into the ground and fly under the terrain.
Therefore, I need to implement something that detects the collision between a plane/jet and my ground.
The information given is the following:
Grid of terrain [2-dimensional array; stores the height at the corresponding x,z coordinate]
Hitbox of my plane (it moves with my plane, so the bounds etc. are all already calculated and given)
So about the hitboxes:
I thought about which method to use. The best one in terms of performance seems to be simple spheres with different radii.
About the ground: Graphically, the ground is subdivided into triangles:
So what I need now is the optimal type of hitbox (sphere, AABB, ...) and the corresponding most efficient calculations.
My attempt was to get every surrounding triangle and calculate the distance from each one to the center of each of my hitbox spheres. If the distance is less than the radius, a collision has been detected. But when I have up to 10/20 spheres on my plane and about 100 triangles to check, it will take too much time.
Another attempt was to get the vertical distance to the ground from each hitbox sphere. This one needs way less calculations but fails when getting near steep surfaces.
I would be very happy if someone could help me implementing an efficient version of plane/terrain collision detection :)
render terrain
Maybe you could try a linear depth buffer to improve accuracy.
read depth texture
You can use glReadPixels with GL_DEPTH_COMPONENT and GL_FLOAT. That will copy the depth buffer into CPU-side memory. Now you can also do collision on the CPU side, or any other computation related to the ground in view...
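For example, something along these lines (width/height being your viewport size; the function name is made up):

    // Sketch: pull the terrain depth buffer to the CPU after the terrain pass.
    #include <GL/gl.h>
    #include <vector>

    std::vector<float> readTerrainDepth(int width, int height)
    {
        std::vector<float> depth(width * height);
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        return depth;   // depth[y * width + x] = normalized terrain depth at that pixel
    }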
use the depth buffer as texture
So copy it back to the GPU with glTexImage2D. I know this is slow (but most likely much faster than your current collision computation). If you are not using Intel HD Graphics, you can, instead of #2 and #3, use an FBO for depth, which will render the depth buffer directly to a texture. But on Intel this does not work reliably (or at all).
now render your objects (off screen) with GLSL
Inside the fragment shader, just compare the rendered position with the depth (attached as a texture). If it is below the terrain, output the collision somewhere. If done in compute shaders, then you can store the results in some texture. Or you could use some attachment or FBO for this.
In case you cannot use an FBO, you could render to the "screen" with specially color-encoded collisions. Then read it back with glReadPixels and scan it to handle whatever collision logic you have on the CPU side...
Do not write to the depth buffer in this pass!!! Also do not use GL_CULL_FACE, because that could miss collisions on the back side of your object.
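A minimal sketch of that compare, written as a GLSL fragment shader stored in a C++ string; the terrain depth is assumed to be bound as a texture and the uniform names are illustrative:

    // Compare each object fragment against the terrain depth from the earlier pass.
    // With the top-down collision camera from the notes below, a fragment that is
    // farther from the camera than the terrain (larger depth) lies below the ground.
    const char* collisionFrag = R"(
        #version 120
        uniform sampler2D terrainDepth;   // depth buffer of the terrain pass, as a texture
        uniform vec2 viewport;            // viewport size in pixels
        void main()
        {
            float ground = texture2D(terrainDepth, gl_FragCoord.xy / viewport).r;
            if (gl_FragCoord.z > ground)
                gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // encode "collision" as red
            else
                discard;                                   // no collision at this pixel
        }
    )";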
now render the objects normally
In case you did not render in #4, or you encoded collisions into the screen buffer, you need to overwrite/render the objects now. Otherwise this step is not needed. But rendering after collision detection is good anyway, because in case of a collision you will most likely change the object's position/orientation/mesh, and the already rendered object would no longer match the altered one.
[Notes]
Copying image between CPU and GPU is slow so use FBO and render to texture if you can instead.
If you are not familiar with multiple pass rendering see some QAs for inspiration:
OpenGL Scale Single Pixel Line
Render filled complex polygons with large number of vertices with OpenGL
This works only in view... but you can do just a collision rendering pass (per object). Render with the camera set to look straight down (bird's eye view), covering only the area around your object... You also do not need a very big resolution for this, so it should be relatively fast. You can divide your screen into square areas (using glViewport), testing more objects in a single frame to lower the sync-time slowdowns as much as possible (use fewer glReadPixels calls). Also, you do not need any vertex colors or textures for this.
I'm making a voxel game, and I have designed the water as cubes with 0.5 alpha. It works great if all the water is at the same height, like in the image below:
But, if the water is not at the same height, alpha overlapping happens:
How can I prevent this overlapping from occurring? (For example, by only drawing the nearest water body for each pixel and discarding the rest.) Do I need to use framebuffers and draw the scene in multiple passes, or would it be enough to use an alternate blend function, or to take another less GPU-expensive approach?
I found an answer that does not require drawing the scene in multiple passes. I hope it helps somebody:
We are going to draw the nearest water body for each pixel and discard the rest, thus avoiding the overlapping.
First, you draw the solid blocks normally.
Then, you draw the water after disabling writing to the color buffer with glColorMask(false, false, false, false). The Z-buffer will be updated as desired, but no water will be drawn yet.
Finally, you enable writing to the color buffer (glColorMask(true, true, true, true)) and set the depth function to LEQUAL (glDepthFunc(GL_LEQUAL)). Only the nearest water pixels will pass the depth test (setting it to LEQUAL instead of EQUAL deals with some rare but possible floating-point approximation errors). Enabling blending and drawing the water again will produce the effect we wanted:
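Put together, the passes could look something like this (just a sketch; drawSolidBlocks and drawWater stand in for your own drawing code):

    #include <GL/gl.h>

    void drawSolidBlocks();   // your opaque geometry
    void drawWater();         // all water cubes

    void renderScene()
    {
        drawSolidBlocks();

        // Pass 1: depth-only water pass; the nearest water surface wins the Z-buffer.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawWater();

        // Pass 2: color pass; only fragments matching the stored depth survive.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthFunc(GL_LEQUAL);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawWater();

        glDepthFunc(GL_LESS);   // restore the default depth test
        glDisable(GL_BLEND);
    }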
Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using a lot of them could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows multiple kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. OpenGL by default has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (the Z-sort implementation) and a stencil buffer used to limit rendering to selected, cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well. However, those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User-created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data which can be sourced in drawing operations.
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture on a single quad and overdraw it with your selection rubber-band rectangle.
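As a rough sketch of that, assuming a texture named snapshot was created beforehand with the same size as the window:

    #include <GL/gl.h>

    // Grab what has been rasterized so far (the user's polygons) into the texture.
    void snapshotPolygons(GLuint snapshot, int width, int height)
    {
        glBindTexture(GL_TEXTURE_2D, snapshot);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    }

Each frame afterwards you draw snapshot on a full-screen quad and then draw the rubber-band rectangle over it; pressing escape just means you stop drawing the rectangle, and the polygons are still intact in the texture.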
I've been working on a new game, and finally reached the point where I started to code the motion of my main character, but I have a doubt about how to do that.
Previously, I made two games in Allegro, where spritesheets are kind of easy to implement, because I establish the frame and position on the image and save every frame in a different bitmap, but I know that doing that with OpenGL is not necessary and costs a little bit more.
So, I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea.
I load the image and turn it into a texture, and in my animation function I simply grab a portion of the texture to draw instead of storing every single frame as a separate texture.
Is this the best way to do that?
Thanks beforehand for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to just stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around trying to get the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that with OpenGL is not necessary and costs a little bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
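For example, drawing one frame out of a regular-grid sprite sheet could look roughly like this (the frame-layout parameters are illustrative):

    #include <GL/gl.h>

    // Sheet laid out as a grid of FRAMES_X by FRAMES_Y equally sized frames.
    void drawSpriteFrame(GLuint sheetTex, int frame, int FRAMES_X, int FRAMES_Y,
                         float x, float y, float w, float h)
    {
        float fw = 1.0f / FRAMES_X, fh = 1.0f / FRAMES_Y;
        float u0 = (frame % FRAMES_X) * fw;   // texture coords of this frame's corner
        float v0 = (frame / FRAMES_X) * fh;

        glBindTexture(GL_TEXTURE_2D, sheetTex);
        glBegin(GL_QUADS);
        glTexCoord2f(u0,      v0);      glVertex2f(x,     y);
        glTexCoord2f(u0 + fw, v0);      glVertex2f(x + w, y);
        glTexCoord2f(u0 + fw, v0 + fh); glVertex2f(x + w, y + h);
        glTexCoord2f(u0,      v0 + fh); glVertex2f(x,     y + h);
        glEnd();
    }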
In each frame (as in frames per second) I render, I make a smaller version of it with just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user has mouseX and mouseY, I then look into that buffer what color corresponds with that position, and find the corresponding objects.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor... I know, not the most efficient, but performance is OK for now.
Now I have the problem that this buffer with "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variances.
Note that here I put all the color information in an unsigned byte in GL_RED (assuming for now I have at most 255 selectable objects).
Are these artifacts caused by rescaling the texture? (I could replace this by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
I could replace this by looking up scaled coordinates in the small texture.
You should. Rescaling is more expensive than converting the coordinates for sure.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like upscale 2x), with no fancy filtering. It looks blurry on the polygon edges, so I'm assuming that's not what you use.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the un-scaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now some suggestion: In case your rendering doesn't depend on stencil buffer operations, you could put the object ID into the stencil buffer in the render pass to the window itself, don't use the detour over a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want to use a index buffer approach, is using multiple render targets and render the object ID into an index buffer together with color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt