Using depth buffer for layering 2D sprites - c++

I'm making a 2D game using OpenGL. I want to do the drawing like this: first I copy the vertex data of all objects I want to draw into VBOs (one VBO per texture/shader), then draw each VBO in a separate draw call. It seemed like a good idea, until I realized it will mess up the drawing order - the draw calls won't necessarily be in the order the objects were loaded into the VBOs. I thought of using the depth buffer to sort items - every new object to draw would get a slightly higher Z position. The question is, how much should I increment it by to not run into any problems? AFAIK, there can be two kinds of problems: if I make the increment too large, I will have a limited number of objects I can draw in a single frame, and if I make it too small, the precision loss of the depth buffer might make overlapping images be drawn in the wrong order. To summarize:
1) What should be front and back values of my orthographic projection? 0 to 1? -1 to 1? 1 to 2? Does it matter?
2) If I use <cmath>'s nextafter() for incrementing the Z position, what kind of trouble can I run into? How do OpenGL and the depth buffer react to subnormal floats? If I started with std::numeric_limits<float>::min() and ended at 1, is there anything else I should worry about?

First and foremost, you need to know the bit-depth of your depth buffer. Generally the depth buffer is fixed-point, either 16-, 24- or 32-bit.
Given a fixed-point depth buffer and the default depth range [0,1] you can make every integer value represent a uniquely distinguishable depth by using an orthographic projection matrix with 0.0 for nearVal and:
16-bit: farVal = 65535.0
24-bit: farVal = 16777215.0 // Most Common Configuration
32-bit: farVal = 4294967295.0
Then, you can assign your layered sprites up to farVal + 1 different depths (always use an integer value for the sprite depth and begin with 0) and not worry about the depth buffer being unable to distinguish between the layers. In other words, the precision of your depth buffer dictates the maximum number of layers you can have.
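For example, a minimal sketch assuming the common 24-bit case, the default glDepthRange of [0, 1] and the legacy matrix stack (screenWidth, screenHeight and spriteIndex are placeholders; if you build your own matrices, the same nearVal/farVal apply):

const double farVal = 16777215.0;  // 2^24 - 1: one depth-buffer step per integer
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screenWidth, 0.0, screenHeight, 0.0, farVal);  // nearVal = 0.0

// When filling the VBO, give the k-th sprite an eye-space Z of -k
// (glOrtho looks down the negative Z axis); sprite k then lands exactly
// on the k-th representable fixed-point depth value.
float z = -(float)spriteIndex;  // 0, -1, -2, ... per sprite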

Related

How to write integers alongside pixels in the framebuffer, and then use the written integer to ignore the depth buffer

What I want to do
I want to have one set of triangles bleed through, or rather ignore the depth buffer of, another set of triangles, but only if they have the same number.
Problem (optional reading)
I do not know how to do this without introducing a ton of bubbles into the pipeline. Right now I have very high throughput because I can throw my geometry onto the GPU, tell it to render, and forget about it. However, if I have to keep toggling state while drawing, I'm worried I'm going to tank my performance. Other people who have done what I've just described (doing a ton of draw calls and state changes) have much worse performance than me. This performance hit is also significantly worse on older hardware, where we are talking on the order of a 50-100+ times performance loss by doing it the state-change way.
Unfortunately this triangle-bleeding scenario happens many thousands of times, so the state machine would get flooded with "draw triangles, depth off, draw triangles that bleed through, depth on, ...", repeated N times, where N can get large (N >= 1000).
A good way of imagining this is having a set of triangles T_i, and a set of triangles that bleed through B_i where B_i only bleeds through T_i, and i ranges from 0...1000+. Note that if we are drawing B_100, then it should only bleed through T_100, not T_99 or T_101.
My next thought is to draw all the triangles into one framebuffer (along with their integer), then draw the bleed-through triangles into another framebuffer (also with their integer), and then merge these framebuffers together. I figure each will have the color, depth, and the integer, so I can hopefully merge them in a fragment shader.
Problem is, I have no idea how to write an integer alongside the out vec4 fragColor in the fragment shader.
Questions (and in short)
This leaves me with two questions:
How do I write an integer into a framebuffer? Do I need to write to 4 separate texture framebuffers? (like one color/depth framebuffer texture, another integer framebuffer texture, and then double this so I can merge the pairs of framebuffers together at some point?)
To make this more clear, the algorithm would look like:
1) Render all the 'could be bled from' triangles, described above as set T_i; write colors and depth info into FB1, and write integers into FB2.
2) Render all the 'bleeding' triangles, described above as set B_i; write colors and depth into FB3, and write integers into FB4.
3) Bind the textures for FB1, FB2, FB3, and FB4.
4) Render each pixel by sampling the RGBA, depth, and integers from the appropriate texture, and write those out into the final framebuffer.
I would need to access the color and depth from the textures in the shader. I would also need to access the integer from the other texture. Then I can do the comparison and choose which pixel to write to the default framebuffer.
Is this idea possible? I assume if (1) is, then the answer is yes. Maybe another question could be whether there's a better way. I tried thinking of doing this with the stencil buffer but had no luck.
What you want is theoretically possible, but I can't speak as to its performance. You'll be reading and writing a whole lot of texels in a lot of textures for every program iteration.
Anyway, to answer your questions:
A framebuffer can have multiple color attachments by using glFramebufferTexture2D with GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, etc. Each texture can then have its own internal format; in your example you probably want a regular RGB(A) texture for your color output, and a second single-channel integer texture.
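A rough sketch of that setup under GL 3.x (fbo, colorTex, idTex, width and height are placeholder names):

GLuint fbo, colorTex, idTex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// regular color output
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// single-channel integer output
glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // integer textures cannot be filtered
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, width, height, 0, GL_RED_INTEGER, GL_INT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, idTex, 0);

// route fragment shader outputs 0 and 1 to the two attachments
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

In the fragment shader this pairs with two outputs, e.g. layout(location = 0) out vec4 fragColor; and layout(location = 1) out int fragId;.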
Your depth buffer is more complicated, because you don't want to let OpenGL handle it as normal. If you want to take over the depth buffer, you probably want to attach it as yet another texture, a float one, that you can check your screen-space fragments against (or not).
If you have doubts about your shader, remember that you can bind any number of textures as input samplers, and each color attachment gets its own output value (your shader runs per-fragment, so you output one value at a time). Make sure the format of each output is correct, i.e. vec3/vec4 for the color buffer, int for your integer buffer, and float for the float buffer.
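For instance, here is a hedged sketch of the composite pass over a fullscreen quad, written as a GLSL string in C++ (the sampler names, the uv input and the exact merge rule are placeholders for your own logic):

const char* mergeFS = R"GLSL(
#version 330 core
uniform sampler2D  baseColor;   // FB1 color
uniform sampler2D  baseDepth;   // FB1 depth, sampled as a float texture
uniform isampler2D baseId;      // FB2 integer
uniform sampler2D  bleedColor;  // FB3 color
uniform sampler2D  bleedDepth;  // FB3 depth
uniform isampler2D bleedId;     // FB4 integer

in  vec2 uv;
out vec4 fragColor;

void main() {
    float dBase  = texture(baseDepth,  uv).r;
    float dBleed = texture(bleedDepth, uv).r;
    int   iBase  = texture(baseId,  uv).r;
    int   iBleed = texture(bleedId, uv).r;

    // same id: the bleed triangle ignores the depth comparison and wins;
    // different id: an ordinary depth comparison decides
    bool bleedWins = (iBleed == iBase) || (dBleed < dBase);
    fragColor = bleedWins ? texture(bleedColor, uv) : texture(baseColor, uv);
}
)GLSL";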
And stencil buffers won't help you turn depth checking on or off within a single (possibly indirect) draw call. I can't quite visualize what your bleeding scenario involves, so a stencil buffer may help with parts of it, but definitely not with conditional depth checking.

How can I apply a depth test to vertices (not fragments)?

TL;DR: I'm computing a depth map in a fragment shader and then trying to use that map in a vertex shader to see if vertices are 'in view' or not, but the vertices don't line up with the fragment texel coordinates. The imprecision causes rendering artifacts, and I'm seeking alternatives for filtering vertices based on depth.
Background. I am very loosely attempting to implement a scheme outlined in this paper (http://dash.harvard.edu/handle/1/4138746). The idea is to represent arbitrary virtual objects as lots of tangent discs. While they wanted to replace triangles on some future graphics hardware, I'm implementing this on conventional cards; my discs are just fans of triangles ("Discs") around center points ("Points").
This is targeting WebGL.
The strategy I intend to use, similar to what's done in the paper, is:
Render the Discs in a Depth-Only pass.
In a second (or more) pass, compute what's visible based solely on which Points are "visible" - ie their depth is <= the depth from the Depth-Only pass at that x and y.
I believe the authors of the paper used a gaussian blur on top of the equivalent of a GL_POINTS render applied to the Points (i.e. re-using the depth buffer from the Depth-Only pass, not clearing it) to actually render their object. It's hard to say: the process is unfortunately described in a one-line comment, and I'm unsure of how to duplicate it in WebGL anyway (a naive gaussian blur will just blur in the background pixels that weren't touched by the GL_POINTS call).
Instead, I'm hoping to do something slightly different: re-render the discs in the second pass as cones (the center of each disc becomes the apex of a cone, think "close the umbrella") and effectively compute a voronoi diagram on the surface of the object (ala the red book, http://www.glprogramming.com/red/chapter14.html#name19). The idea is that an output pixel gets the color of the first disc to reach it as the radii grow from 0 to their natural size.
The crux of the problem is that only discs whose centers pass the depth test in the first pass should be allowed to carry on (as cones) to the 2nd pass. Because what's true at the disc center applies to the whole disc/cone, I believe this requires evaluating a depth test at a vertex or object level, and not at a fragment level.
Since WebGL support for accessing depth buffers is still poor, in my first pass I am packing depth info into an RGBA framebuffer in a fragment shader. I then intended to use this in the vertex shader of the second pass via a sampler2D; any disc center that was closer than the corresponding texture2D() lookup would be allowed on to the second pass; otherwise I would hack around "discarding" the vertex (its alpha would be set to 0, or some flag would be set that causes the fragments associated with the disc/cone to be discarded, etc.).
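(For reference, this packing is usually done with a GLSL pair along these lines, shown here as a string in C++ for consistency with the rest of this post; the exact constants vary a little between implementations:)

const char* depthPackGLSL = R"GLSL(
// pack a depth value in [0, 1) into an 8-bit-per-channel RGBA color, and back
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}
float unpackDepth(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
)GLSL";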
This actually kind of worked but it caused horrendous z-fighting between discs that were close together (very small perturbations wildly changed which discs were visible). I believe there is some floating point error between depth->rgba->depth. More importantly, though, the depth texture is being set by fragment texel coords, but I'm looking up vertices, which almost certainly don't line up exactly on top of relevant texel coordinates; so I get depth +/- noise, essentially, and the noise is the issue. Adding or subtracting .000001 or something isn't sufficient: you trade Type I errors for Type II. My render became more accurate when I switched from NEAREST to LINEAR for the depth texture interpolation, but it still wasn't good enough.
How else can I determine which disc's centers would be visible in a given render, so that I can do a second vertex/fragment (or more) pass focused on objects associated with those points? Or: is there a better way to go about this in general?

Optimize OpenGL 2D rendering by using depth buffer to discard overlapping pixels?

Is it possible to take advantage of the depth buffer in a way such that it would only draw in those areas where no pixels have been drawn yet?
I am rendering simple single-colored triangles: a lot of them may overlap, which reduces rendering speed significantly, because more pixels are rendered than are actually visible on the screen.
This is easily possible in 3D render mode: just enable depth testing and set the triangles at different Z positions. But that does not work in 2D mode: I can't set every triangle at a higher position than the previous one, since that would result in bad rendering quality after a certain height, when the depth buffer limits get in the way.
How can I do this with shaders? Or, if no shaders are needed, how do I do it without them?
Assign a polygon offset (by means of glPolygonOffset) to each triangle, and enable depth testing.
I can't set every triangle at a higher position than the previous one, since that would result in bad rendering quality after a certain height, when the depth buffer limits get in the way.
That would only happen if you do it wrong.
A 24-bit depth buffer offers 16 million different depth values for you to choose from. It's simply a matter of computing a value properly. Granted, the exact mechanics are hardware-specific, but not so specific that you would be unable to get at least 4 million separate layers.
It's a matter of simple math. You're building a function that maps from the integer range [0, N] to the floating-point range [0, 1], where N is the number of triangles. Say, 4 million just to give you room.
Thus, the Z-value for any particular triangle is k/N, where k is the integer index of that triangle. You should easily be able to do this in your shader.
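A minimal sketch of that mapping, done on the CPU while filling the vertex buffer (N and the triangle index k are placeholders for your own counts):

const float N = 4194304.0f;       // ~4 million layers, plenty for a 24-bit buffer
float z = (float)k / N;           // k = submission order of this triangle
// Store z as the vertex Z. With a projection that maps increasing Z to
// increasing window depth and the default glDepthFunc(GL_LESS), triangles
// submitted later fail the depth test wherever an earlier one already drew.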
Worst comes to worst, you can make a 32-bit floating-point depth buffer.

Deformable terrain in OpenGL 2D [Worms like]

I've searched for a while and I've heard of different ways to do this, so I thought I'd come here and see what I should do.
From what I've gathered, I should use glBitmap and 0 and 0xFF values in the array to make the terrain. Any input on this?
I tried switching it to quads, but I'm not sure that is efficient or the way it's meant to be done.
I want the terrain to be able to have tunnels, like in Worms. Two-dimensional.
Here is what I've tried so far.
I've tried to make a glBitmap, so:
pixels = pow(2 * radius, 2);
ras = new GLubyte[pixels];
and then set them all to 0xFF, and drew it using glBitmap(2 * radius, 2 * radius, 0, 0, 0, 0, ras);
This could then be checked for explosions and whatnot, and the affected pixels could be set to zero. Is this a plausible approach? I'm not too good with OpenGL; can I put a texture on a glBitmap? From what I've seen, I don't think you can.
I would suggest you use the stencil buffer. Mark destroyed parts of the terrain in the stencil buffer, then draw your terrain as a simple quad with stencil testing enabled - no need to manually test each pixel.
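A rough sketch of that idea (drawCraterShape() and drawTerrainQuad() are placeholders; the stencil contents persist between frames as long as you don't clear the stencil buffer):

// 1) when something explodes, mark the crater in the stencil buffer only
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // don't touch color
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);            // write 1 where the crater is
drawCraterShape();                                    // e.g. a triangle-fan circle

// 2) each frame, draw the terrain quad only where the stencil is still 0
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);                     // pass only on undestroyed pixels
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawTerrainQuad();
glDisable(GL_STENCIL_TEST);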
OK, this is a high-level overview, and I'm assuming you're familiar with OpenGL basics like buffer objects already. Let me know if something doesn't make sense or if you'd like more details.
The most common way to represent terrain in computer graphics is a heightfield: a grid of points that are spaced regularly on the X and Y axes, but whose Z (height) can vary. A heightfield can only have one Z value per (X,Y) grid point, so you can't have "overhangs" in the terrain, but it's usually sufficient anyway.
A simple way to draw a heightfield terrain is with a triangle strip (or quads, but they're deprecated). For simplicity, start in one corner and issue vertices in a zig-zag order down the column, then go back to the top and do the next column, and so on. There are optimizations that can be done for better performance, and more sophisticated ways of constructing the geometry for better appearance, but that'll get you started.
(I'm assuming a rectangular terrain here since that's how it's commonly done; if you really want a circle, you can substitute 𝑟 and 𝛩 for X and Y so you have a polar grid.)
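For example, a sketch of the index generation for the column-by-column strip layout described above (gridW x gridH points stored row-major, drawn with glDrawElements; a real implementation would restart the strip between columns with primitive restart or one draw call per column):

std::vector<GLuint> indices;
for (int x = 0; x + 1 < gridW; ++x) {       // one strip per pair of columns
    for (int y = 0; y < gridH; ++y) {
        indices.push_back(y * gridW + x);       // current column
        indices.push_back(y * gridW + x + 1);   // next column
    }
}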
The coordinates for each vertex will need to be stored in a buffer object, as usual. When you call glBufferData() to load the vertex data into the GPU, specify a usage parameter of either GL_STREAM_DRAW if the terrain will usually change from one frame to the next, or GL_DYNAMIC_DRAW if it will change often but not (close to) every frame. To change the terrain, call glBufferData() again to copy a different set of vertex data to the GPU.
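A minimal sketch of the update path, assuming heights is a std::vector<float> of the Z values and terrainVbo is the buffer object name:

glBindBuffer(GL_ARRAY_BUFFER, terrainVbo);
glBufferData(GL_ARRAY_BUFFER,
             heights.size() * sizeof(float),
             heights.data(),
             GL_DYNAMIC_DRAW);   // or GL_STREAM_DRAW if it changes nearly every frame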
For the vertex data itself, you can specify all three coordinates (X, Y, and Z) for each vertex; that's the simplest thing to do. Or, if you're using a recent enough GL version and you want to be sophisticated, you should be able to calculate the X and Y coordinates in the vertex shader using gl_VertexID and the dimensions of the grid (passed to the shader as a uniform value). That way, you only have to store the Z values in the buffer, which means less GPU memory and bandwidth consumed.
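A sketch of that vertex shader (GL 3.3+, written as a string in C++; gridWidth, cellSize and mvp are assumed uniforms, and it assumes indexed drawing so that gl_VertexID is the row-major grid index):

const char* terrainVS = R"GLSL(
#version 330 core
layout(location = 0) in float height;   // only Z is stored in the VBO

uniform int   gridWidth;                // points per row
uniform float cellSize;                 // spacing between grid points
uniform mat4  mvp;

void main() {
    int ix = gl_VertexID % gridWidth;   // column
    int iy = gl_VertexID / gridWidth;   // row
    vec3 pos = vec3(float(ix) * cellSize, float(iy) * cellSize, height);
    gl_Position = mvp * vec4(pos, 1.0);
}
)GLSL";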

OpenGL depth buffer maximum distance

Since the depth buffer pixels can only have values from 0 to 255 (am I right?), the maximum draw distance would be limited by that range as well.
Is that true?
How do modern games work around this?
What about values in between, like 125.5?
No, it's not true. It's usually not even possible to use an 8-bit depth buffer, due to the limited range it would provide. The minimum is usually 16 bits, with 24 bits (leaving the top 8 bits of a 32-bit word for a stencil buffer) being the most common. It's also possible to use floating-point depth buffers and 32-bit integer buffers.
By using a greater bit depth.
In the case of a value like 125.5, it would actually get rounded or truncated to 126 or 125. However, in general you pass OpenGL a depth value between -1 and 1 (post-projection and w-divide). This value is then converted by the OpenGL runtime to an actual depth-buffer value. This way you can change the bit depth of the depth buffer and everything continues to work.
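In other words, a sketch of the conversion the pipeline applies (assuming the default glDepthRange(0, 1) and a 24-bit fixed-point buffer; clipZ and clipW are placeholders for a fragment's clip-space values):

float zNdc    = clipZ / clipW;                         // [-1, 1] after the w-divide
float zWindow = 0.5f * zNdc + 0.5f;                    // [0, 1] via glDepthRange
GLuint stored = (GLuint)(zWindow * 16777215.0 + 0.5);  // quantized to 24 bits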
Games that want to show a huge landscape usually use a skybox / skysphere, i.e. a flat image which gives the impression of vast distance.
I remember Guild Wars' main menu. It looks huge, but if you look closely, it's really a round texture.
Depth buffer pixels are there to make sure objects which are rendered farther away than an existing object are not drawn over it. If two objects have the same depth, the implementation can render either one; either way is fine - you don't need that much precision there.