OpenGL depth buffer maximum distance - opengl

Since depth buffer pixels can only hold values from 0 to 255 (am I right?), the maximum draw distance would be limited by that range as well.
Is that true?
How do modern games work around this?
What about values in between, like 125.5?

No, it's not true. It's usually not even possible to use an 8-bit depth buffer, due to the limited range it would provide. The minimum is usually 16-bit, with 24-bit (leaving the top 8 bits of a 32-bit word for a stencil buffer) the most common. It's also possible to use floating-point depth buffers and 32-bit integer buffers.
By using a depth buffer with a greater bit depth.
A value like 125.5 would get rounded or truncated to 126 or 125. In general, though, you pass OpenGL a depth value between -1 and 1 (post-projection and w-divide). The OpenGL runtime then converts that value to an actual depth-buffer value, so you can change the bit depth of the depth buffer and everything continues to work.
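For illustration, here is a minimal sketch (in C++, assuming an OpenGL 3.x+ context with a loader such as GLAD already initialized; the function and variable names are mine) of explicitly requesting the common 24-bit depth / 8-bit stencil layout for an offscreen framebuffer:

    #include <glad/glad.h>

    // Creates an FBO whose depth/stencil attachment uses the common
    // 24-bit depth + 8-bit stencil packing (GL_DEPTH24_STENCIL8).
    GLuint makeDepthStencilFbo(int width, int height)
    {
        GLuint fbo = 0, depthStencilRbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        glGenRenderbuffers(1, &depthStencilRbo);
        glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                                  GL_RENDERBUFFER, depthStencilRbo);

        // A real FBO also needs a color attachment and a completeness check;
        // both are omitted to keep the sketch focused on the depth format.
        return fbo;
    }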

Games that want to show a huge landscape usually use a skybox/skysphere, i.e. a flat image that gives the impression of vast distance.
I remember Guild Wars' main menu. It looks huge, but if you look closely, it's really a round texture.
The depth buffer is there to make sure objects rendered farther away than an already-drawn object are not drawn over it. If two objects have the same depth, the implementation can draw the second one anyway, or not; either way is fine, you don't need that much precision.
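For reference, the depth test described above amounts to a couple of lines of state setup (a sketch; the choice between GL_LESS and GL_LEQUAL decides what happens to equal depths):

    // Enable depth testing; with GL_LESS a fragment at the same depth as the
    // stored value is rejected, with GL_LEQUAL it would overwrite instead.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth each frame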

Related

Changing the size of a pixel depending on its color with GLSL

I have an application that will encode data for bearing and intensity using 32 bits. My fragment shader already decodes the values and then sets the color depending on bearing and intensity.
I'm wondering if it's also possible, via shader, to change the size (and possibly shape) of the drawn pixel.
As an example, let's say we have 4 possible values for intensity, then 0 would cause a single pixel to be drawn, 1 would draw a 2x2 square, 2 a 4x4 square and 3 a circle with a radius of 6 pixels.
In the past, we had to do all this on the CPU side and I was hoping to offload this job to the GPU.
No, fragment shaders cannot affect the "size" of the data they write. Once something has been rasterized into fragments, it doesn't really have a "size" anymore.
If you're rendering GL_POINTS primitives, you can change their size from the vertex shader. Even then, it's rather difficult to ensure that a particular point covers an exact number of fragments.
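A minimal sketch of the GL_POINTS approach mentioned above (the attribute layout and the size mapping are illustrative, not part of the original question):

    // Vertex shader (as a C++ string literal) that sets the point size from
    // the encoded intensity; requires GL_PROGRAM_POINT_SIZE in core profile.
    const char* pointSizeVertexSrc = R"(
        #version 330 core
        layout(location = 0) in vec2 position;
        layout(location = 1) in float intensity;   // 0..3 encoded intensity
        void main()
        {
            gl_Position  = vec4(position, 0.0, 1.0);
            // Rough size mapping; exact fragment coverage is not guaranteed.
            gl_PointSize = 1.0 + intensity * 2.0;
        }
    )";

    // On the C++ side, shader-controlled point size must be enabled once:
    glEnable(GL_PROGRAM_POINT_SIZE);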
The first thing that came to my mind is doing something similar to a blur technique, but instead of blurring the texture, use it to look at neighbouring texels within a range and check whether any of them has an intensity above 1.0. If yes, set the current texel's color to, for example, red.
If you're using an FBO that is 1:1 with the window size, use 1/width and 1/height as texture-coordinate offsets to step approximately one pixel (well, not exactly a pixel but a texel, just nearly).
Although this works just fine, the downside is that it is quite expensive, as it has n^2 complexity per fragment and probably some branching.
Edit: after thinking a while, this might not work for sizes with an even number of pixels.
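A sketch of that neighbourhood lookup as a fragment shader (shown as a C++ string literal; the uniform names and the range value are assumptions for illustration):

    const char* neighbourCheckSrc = R"(
        #version 330 core
        uniform sampler2D intensityTex;   // the 1:1 FBO texture
        uniform vec2 texelSize;           // (1.0/width, 1.0/height)
        in vec2 uv;
        out vec4 fragColor;
        void main()
        {
            const int range = 2;          // half-width of the neighbourhood
            for (int y = -range; y <= range; ++y)
                for (int x = -range; x <= range; ++x)
                {
                    float i = texture(intensityTex, uv + vec2(x, y) * texelSize).r;
                    if (i > 1.0)
                    {
                        fragColor = vec4(1.0, 0.0, 0.0, 1.0); // mark in red
                        return;
                    }
                }
            fragColor = texture(intensityTex, uv);            // pass through
        }
    )";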

OpenGL: how to draw using unsigned int vertex data

I'm writing a simple OpenGL program that involves rendering to a depth texture offscreen. However, I'm dealing with large depths that exceed what can be represented by a float's precision, so I need to use unsigned int for drawing my points. I run into two issues when I try to implement this.
1) Whenever I attempt to draw a VBO that uses unsigned int (screen coordinates), the values don't fall within the -1 to 1 range, so none of them are drawn to the screen. The only fix I can find is to use an orthographic projection matrix to map them back into screen coordinates.
Is this understanding correct, or is there an easier way to do it?
If it is correct, how do you properly implement this for what I want?
2) Secondly, when drawing this way, is there any way to preserve the initial values (not converting them to floats when drawing) so they are unchanged when read back? This is necessary because my objective is to create a depth buffer of random points with random depths up to 2^32. If the values get converted to floats, precision is lost, so the data read out is not the same as what was put in.
This is the wrong solution to the problem. To answer your question itself, gl_Position is a vec4. And therefore, the depth that OpenGL sees is a float. There's nothing you can do to change that, short of ignoring the depth buffer entirely and doing "depth tests" yourself in the fragment shader.
The preferred solution to the problem is to use a floating-point depth buffer, such as GL_DEPTH_COMPONENT32F. But that alone is insufficient, due to an unfortunate legacy issue with how OpenGL defines its coordinate transforms. Floats put a lot of precision into the range [0, 1], but it is biased toward zero, and because of the way OpenGL defines its transforms, that precision gets lost along the way; effectively, the exponent part of the float never gets used. This makes a 32-bit float behave like a 24-bit fixed-point value.
OpenGL has fixed that problem with ARB_clip_control, which restores the ability to use full 32-bit floats effectively. You should attempt to employ that if possible.
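A sketch of that setup, combining a GL_DEPTH_COMPONENT32F attachment with the "reversed-Z" arrangement that ARB_clip_control allows (assumes an OpenGL 4.5 context or the extension, a loader such as GLAD, and a framebuffer object already bound; names are illustrative):

    #include <glad/glad.h>

    void setupReversedZ(int width, int height)
    {
        // 1. A 32-bit floating-point depth attachment.
        GLuint depthTex = 0;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_2D, depthTex, 0);

        // 2. Map clip-space Z straight to [0,1] instead of [-1,1], so the
        //    float's exponent is no longer wasted by the depth-range remap.
        glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

        // 3. Reverse the depth direction: clear to 0, test with GREATER, and
        //    build the projection so near maps to 1 and far to 0. The dense
        //    float values near zero then cover the distant geometry.
        glClearDepth(0.0);
        glDepthFunc(GL_GREATER);
    }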

Using depth buffer for layering 2D sprites

I'm making a 2D game using OpenGL. I want to do the drawing like this: first I copy the vertex data of all the objects I want to draw into VBOs (one VBO per texture/shader), then draw each VBO in a separate draw call. It seemed like a good idea until I realized it would mess up the drawing order: the draw calls won't necessarily happen in the order the objects were loaded into the VBOs. I thought of using the depth buffer to sort items, giving every new object a slightly higher Z position. The question is, how much should I increment it to avoid problems? AFAIK there can be two kinds of problems: if I make the increment too large, I will be limited in how many objects I can draw in a single frame, and if I make it too small, the precision loss of the depth buffer might make overlapping images be drawn in the wrong order. To summarize:
1) What should the front and back values of my orthographic projection be? 0 to 1? -1 to 1? 1 to 2? Does it matter?
2) If I use <cmath>'s nextafter() for incrementing the Z position, what kind of trouble can I run into? How do OpenGL and the depth buffer react to subnormal floats? If I started with std::numeric_limits<float>::min() and ended at 1, is there anything else I should worry about?
First and foremost, you need to know the bit-depth of your depth buffer. Generally the depth buffer is fixed-point, either 16-, 24- or 32-bit.
Given a fixed-point depth buffer and the default depth range [0,1] you can make every integer value represent a uniquely distinguishable depth by using an orthographic projection matrix with 0.0 for nearVal and:
16-bit: farVal = 65535.0
24-bit: farVal = 16777215.0 // Most Common Configuration
32-bit: farVal = 4294967295.0
Then you can assign your layered sprites up to farVal+1 distinct depths (always use an integer value for the sprite depth, beginning at 0) and not worry about the depth buffer being unable to distinguish between the layers. In other words, the precision of your depth buffer dictates the maximum number of layers you can have.
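A sketch of that projection setup using GLM (the function name and the way the layer index is passed are illustrative):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Orthographic projection whose depth range gives every integer layer in
    // [0, 2^24 - 1] its own distinguishable 24-bit depth value.
    glm::mat4 spriteProjection(float screenWidth, float screenHeight)
    {
        const float kMaxLayer = 16777215.0f; // 2^24 - 1, matching a 24-bit buffer
        return glm::ortho(0.0f, screenWidth, screenHeight, 0.0f, 0.0f, kMaxLayer);
    }

    // glm::ortho follows the classic glOrtho convention (eye-space -zNear..-zFar
    // maps onto the depth range), so a sprite on integer layer k is placed at
    // z = -k, e.g.:
    //     float spriteZ = -static_cast<float>(layerIndex);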

Optimize OpenGL 2D rendering by using depth buffer to discard overlapping pixels?

Is it possible to take advantage of the depth buffer in such a way that it only draws to areas where no pixels have been drawn yet?
I am rendering simple single-colored triangles: a lot of them may overlap, which reduces rendering speed significantly, because more pixels are rendered than are visible on the screen.
This is easy in 3D rendering: just enable depth testing and put the triangles at different z-positions. But that does not work in 2D mode: I can't put every triangle at a higher position than the previous one, since that would result in bad rendering quality after a certain height, when the depth buffer limits get in the way.
How can I do this with shaders? Or, if no shaders are needed, how can I do it without them?
Assign a polygon offset (by means of glPolygonOffset) to each triangle, and enable depth testing.
I can't put every triangle at a higher position than the previous one, since that would result in bad rendering quality after a certain height, when the depth buffer limits get in the way.
That would only happen if you do it wrong.
A 24-bit depth buffer offers 16 million different depth values for you to choose from. It's simply a matter of computing a value properly. Granted, the exact mechanics are hardware-specific, but not so specific that you would be unable to get at least 4 million separate layers.
It's a matter of simple math. You're building a function that maps from the integer range [0, N] to the floating-point range [0, 1], where N is the number of triangles. Say, 4 million just to give you room.
Thus, the Z-value for any particular triangle is k/N, where k is the integer index of that triangle. You should easily be able to do this in your shader, as sketched below.
Worst comes to worst, you can make a 32-bit floating-point depth buffer.
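A sketch of the k/N mapping in a vertex shader (shown as a C++ string literal; passing the triangle index as a per-vertex attribute and the uniform name are assumptions):

    const char* layeredVertexSrc = R"(
        #version 330 core
        layout(location = 0) in vec2  position;       // 2D triangle vertex
        layout(location = 1) in float triangleIndex;  // k, same for all 3 vertices
        uniform float triangleCount;                  // N, e.g. 4000000.0
        void main()
        {
            float depth01 = triangleIndex / triangleCount;   // k/N in [0,1]
            // Default conventions expect NDC Z in [-1,1]; the viewport
            // transform maps that back to a window depth of k/N.
            gl_Position = vec4(position, depth01 * 2.0 - 1.0, 1.0);
        }
    )";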

My own z-buffer

How can I make my own z-buffer for correct alpha blending? I'm using GLSL.
I have only one idea: use two "buffers", one storing the depth component and the other the color (with alpha channel). I don't need access to the buffers from my program. I can't use a uniform array because GLSL has a restriction on the number of uniform variables. I can't use an FBO because the behaviour when writing to and reading from the same framebuffer is undefined (and doesn't work on any cards).
How can I resolve this problem?
Or how can I read the actual, real-time z-buffer from GLSL? (I mean the z-buffer must be updated for each fragment shader invocation.)
How can I make my own z-buffer for correct alpha blending?
That's not possible. For perfect order-independent transparency you must get rid of the z-buffer and replace it with another mechanism for hidden surface removal.
With a z-buffer there are two possible ways to tackle the problem.
Multi-layered z-buffer (impractical with hardware acceleration): basically it stores several layers of "depth" values and uses them for blending transparent surfaces. It hogs a lot of memory, and there is a maximum number of overlapping transparent surfaces; once you're over the limit, there will be artifacts.
Depth peeling (google it): order-independent transparency, but with a limit on the maximum number of overlapping transparent polygons per pixel. It can actually be implemented on hardware.
Both approaches have a limit (a maximum number of overlapping transparent polygons per pixel); once you go over it, the scene no longer renders properly, which makes the whole thing rather useless.
What you could actually do (to get a perfect solution) is remove the z-buffer completely and build a rendering pipeline that gathers all polygons to be rendered, clips them, splits them (where two polygons intersect), sorts them, and then paints them on screen in the correct order to ensure a correct result. However, this is hard, and doing it with hardware acceleration is harder. I think (I'm not completely certain it happened) that 5 or 6 years ago some ATI GPU-related document mentioned that some of their cards could render a correct scene with the Z-buffer disabled by enabling some kind of extension; however, they didn't say a thing about alpha blending. I haven't heard about this feature since; perhaps it didn't become popular and shared the fate of TruForm (forgotten). Also, such a rendering pipeline will not be able to do some things that are possible with a z-buffer.
If it's order-independent transparency you're after, then the fundamental problem is that a depth buffer stores one depth per pixel, but if you're composing a view of partially transparent geometry then more than one fragment contributes to each pixel.
If you were to solve the problem robustly you'd need an ordered list of depths per pixel, going back to the closest opaque fragment. You'd then walk the list in reverse order. In practice OpenGL doesn't do things like variably sized arrays so people achieve pretty much that by drawing their geometry in back-to-front order.
An alternative, embodied by GL_SAMPLE_ALPHA_TO_COVERAGE, is to switch to screen-door transparency, which is indistinguishable from real transparency either at a really high resolution or with multisampling. Ideally you'd do that stochastically, but that would void the OpenGL rule of repeatability. Nevertheless, since you're in GLSL you can do it yourself: your fragment shader simply takes the input alpha and uses it as the probability that it will output the final pixel. So grab a random value in the range 0.0 to 1.0 from somewhere, and if it's greater than the alpha then discard the fragment. Always output with an alpha of 1.0 and just use the normal depth buffer. Answers like this say a bit more on what you can do to get random-ish numbers in GLSL, and obviously you want to turn multisampling up as high as possible.
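A sketch of that stochastic discard as a fragment shader (shown as a C++ string literal; the hash used as a stand-in random source and the uniform names are assumptions):

    const char* stochasticAlphaSrc = R"(
        #version 330 core
        uniform sampler2D colorTex;
        in vec2 uv;
        out vec4 fragColor;

        // Cheap screen-position hash standing in for a proper random source.
        float rand(vec2 p)
        {
            return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
        }

        void main()
        {
            vec4 c = texture(colorTex, uv);
            // Keep the fragment with probability equal to its alpha...
            if (rand(gl_FragCoord.xy) > c.a)
                discard;
            // ...and write it fully opaque so the normal depth buffer applies.
            fragColor = vec4(c.rgb, 1.0);
        }
    )";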
Eric Enderton has written a decent paper (which has a slide version) on stochastic order-independent transparency that goes alongside a DirectX implementation that's worth checking out.