OpenGL Depth buffer in orthographic projection - opengl

If I use an orthographic projection in OpenGL but still give my objects different z-values, will those differences still show up in the depth buffer?
I mean, in the color buffer everything looks flat, as if at a single distance. But will the objects still be "colorized" in different shades in the depth buffer? Does the depth buffer "understand" depth under an orthographic projection?

A depth buffer has nothing to do with the projection matrix. Simply put, a z-buffer keeps track of the closest Z-value at each point. As things are drawn, each incoming fragment's depth is compared to the value already stored. If the new value is less than the existing value, the fragment is accepted and the z-buffer is updated. If the new value is greater than the existing value, i.e. behind it, the fragment is discarded. The depth buffer has nothing to do with color; I think you might be confusing blending with depth testing.
For example, say you have two quads A & B.
A.z = -1.0f;
B.z = -2.0f;
If you assume that both quads have the same dimensions apart from their Z value, you can see how drawing both would be a waste: since quad A is in front of quad B, drawing quad B is wasted work. The depth buffer checks the z-coordinates for you. In this example, with depth testing enabled and a depth buffer attached, quad B's fragments would never be written, because the depth test would show they are occluded by quad A.
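A minimal sketch of that setup (drawQuadA and drawQuadB are hypothetical placeholders for your own drawing code):
/* Depth testing has to be enabled, and the depth buffer cleared every frame. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                               /* keep the smaller (closer) depth value */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* depth clears to 1.0 by default */

drawQuadA(); /* hypothetical; z = -1.0f, its depth is written to the buffer */
drawQuadB(); /* hypothetical; z = -2.0f, its fragments fail the test wherever A already drew */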

Related

OpenGL optimizing skybox rendering

I'm learning how to draw a skybox using cubemaps from the following resource.
I've gotten to the part where he talks about optimizing the rendering of the skybox. I get that instead of rendering the skybox first, which causes width*height of your viewport's fragments to be shaded only to be overdrawn by other objects, it's better to draw it last and force its depth value to 1.0f by setting the skybox vertex shader's output to gl_Position = pos.xyww, which makes every gl_FragCoord.z equal to 1.0f after the perspective division.
Now that the skybox's fragments all have the maximum depth value of 1.0f, he changes the depth function from GL_LESS to GL_LEQUAL.
Here's where I got a little bit confused.
If we render the skybox last and its depth values are all 1.0f, why do we need to change the depth function to GL_LEQUAL? Wouldn't GL_LESS be sufficient? If we render every other object in the scene first, each will probably write a depth value less than 1.0f into the z-buffer. With GL_LESS the skybox would then only pass fragments whose depth is less than what is already in the z-buffer, which should only be the fragments that other objects are not covering. So why do we need GL_LEQUAL?
When you initially cleared the framebuffer at the start of the frame, you probably did so to a value of 1.0f. So if you want to draw the skybox at all, you'll need to allow the skybox to draw in areas with a cleared depth value.
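In code the frame then looks roughly like this (a sketch; drawScene and drawSkybox are hypothetical placeholders):
/* The depth buffer is cleared to 1.0, so a skybox fragment at exactly 1.0
   can only pass if equal depths are allowed. */
glClearDepth(1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glDepthFunc(GL_LESS);
drawScene();            /* hypothetical: everything except the skybox */

glDepthFunc(GL_LEQUAL); /* let depth 1.0 tie with the cleared value */
drawSkybox();           /* hypothetical: only fills pixels nothing else covered */
glDepthFunc(GL_LESS);   /* restore for the next frame */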

GLSL, change gl_Position.z to create a flat change in depth buffer?

I am drawing a stack of decals on a quad: same geometry, different textures. Z-fighting is the obvious result. I cannot control the rendering order or use glPolygonOffset due to batched rendering, so I adjust the depth values inside the vertex shader.
gl_Position = uMVPMatrix * pos;
gl_Position.z += aDepthLayer * uMinStep * gl_Position.w;
gl_Position holds clip coordinates. That means a change in z will move a vertex along its view ray and bring it to the front or push it to the back. To get normalized device coordinates, the clip coords are divided by gl_Position.w (which, for a standard perspective projection, equals -z in eye space). As a result the depth buffer does not have a linear distribution and has higher resolution towards the near plane. By pre-multiplying by gl_Position.w that should be cancelled out, and I should be able to apply a flat amount (uMinStep) in NDC.
That minimum step should be something like 1/(2^GL_DEPTH_BITS - 1). Or, since NDC space goes from -1.0 to 1.0, it might have to be twice that amount. However, it does not work with these values. The minStep is roughly 0.00000006, but it does not bring a texture to the front, nor does doubling that value. If I drop a zero (scale by 10), it works. (Yay, that's something!)
But it does not work evenly along the frustum. A value that brings a texture in front of another while the quad is close to the near plane does not necessarily do the same when the quad is close to the far plane. The same effect happens when I make the frustum deeper. I would expect that behaviour if I were changing eye coordinates, because of the nonlinear z-buffer distribution, but it seems that pre-multiplying by gl_Position.w is not enough to counter that.
Am I missing some part of the transformations that happen to clip coords? Do I need to use a different formula in general? Do I have to include the depth range [0,1] somehow?
Could the different behaviour along the frustum be a result of nonlinear floating point precision instead of nonlinear z-Buffer distribution? So maybe the calculation is correct, but the minStep just cannot be handled correctly by floats at some point in the pipeline?
The general question: How do I calculate a z-Shift for gl_Position (clip coordinates) that will create a fixed change in the depth buffer later? How can I make sure that the z-Shift will bring one texture in front of another no matter where in the frustum the quad is placed?
Some material:
OpenGL depth buffer faq
https://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
The same, with more readable formulas (but there are some typos, so be careful):
https://www.opengl.org/wiki/Depth_Buffer_Precision
Calculation from eye coords to the z-buffer. Most of that already happens when I multiply by the projection matrix.
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
Explanation about the elements in the projection matrix that turn into the A and B parts in most depth buffer calculation formulas.
http://www.songho.ca/opengl/gl_projectionmatrix.html
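For concreteness, here is a sketch of the standard mapping the question is reasoning about, assuming an integer depth buffer (24 bits here) and the default glDepthRange(0.0, 1.0):
/* window depth:  d     = z_ndc * 0.5 + 0.5   (range [0, 1] with the default depth range)
   NDC depth:     z_ndc = z_clip / w_clip     (range [-1, 1]) */
float oneBufferStep = 1.0f / (float)((1 << 24) - 1); /* smallest change a 24-bit integer buffer can store, ~0.00000006 */
float ndcStep       = 2.0f * oneBufferStep;          /* d spans 1.0 while z_ndc spans 2.0 */
/* in clip space the step scales with w, which is what the "* gl_Position.w"
   in the shader above compensates for:  clipStep = ndcStep * gl_Position.w */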

Why does OpenGL allow/use fractional values as the location of vertices?

As far as I understand, the location of a point/pixel cannot be a fraction, at least on a raster graphics system where the hardware uses pixels to display images.
Then, why and how does OpenGL use fractional values for plotting pixels?
For example, how is this possible: glVertex2f(0.15f, 0.51f); ?
This command does not plot any pixels. It merely defines the location of a point in 3D space (a vertex has 3 coordinates, with z defaulting to 0 for glVertex2f, while for a pixel on the screen you'd only need 2). This is the starting point of the OpenGL pipeline; the point then goes through a lot of transformations before it ends up on the screen.
Also, the coordinates are unitless. For example, if you say that your viewport spans 0.0f to 1.0f, then these coordinates make a lot of sense. Basically you have to think of these points in terms of mathematics, not pixels.
I would suggest some reading on how OpenGL transformations work, for example here, here or the tutorial here.
The vectors you pass into OpenGL are not viewport positions but arbitrary numbers in some vector space. Only after a chain of transformations are these numbers mapped to viewport pixel positions. With the old fixed-function pipeline this mapping could be anything that can be represented by a vector-matrix multiplication.
These days, where everything is programmable (shaders), the mapping can be any kind of function you can think of. For example, the values you pass into glVertex (an immediate mode call, but its values are available to shaders as gl_Vertex in OpenGL-2.1) may be interpreted as polar coordinates in the vertex shader.
The following is a perfectly valid OpenGL-2.1 vertex shader that interprets the vertex position as polar coordinates. Note that because triangles and lines have straight edges while polar coordinates are curvilinear, this gives good visual results only for points or highly tessellated primitives.
#version 110
void main() {
    gl_Position = gl_ModelViewProjectionMatrix
                * vec4(gl_Vertex.y * vec2(sin(gl_Vertex.x), cos(gl_Vertex.x)), 0.0, 1.0);
}
As you can see here, the values passed to glVertex are really just arbitrary, unitless components of vectors in some vector space. Only by applying some transformation to viewport space do these vectors gain meaning. Hence it makes no sense to impose a certain value range onto the values that go into a vertex attribute.
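For illustration, with the polar-coordinate shader above bound, immediate mode calls could feed it angle/radius pairs instead of x/y (a sketch, legacy OpenGL-2.1 style):
/* Draw a circle of points: the shader above treats gl_Vertex.x as an angle
   (radians) and gl_Vertex.y as a radius. */
glBegin(GL_POINTS);
for (int i = 0; i < 64; ++i) {
    float angle  = (float)i / 64.0f * 6.2831853f; /* 0 .. 2*pi */
    float radius = 0.5f;
    glVertex2f(angle, radius);
}
glEnd();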
Vertex and pixel are very different things.
It's quite possible to have all your vertices within one pixel (although in this case you probably need help with LODing).
You might want to start here...
http://www.glprogramming.com/blue/ch01.html
Specifically...
Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way.
And...
Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer.
For your example, before glVertex2f(0.15f, 0.51f) ends up on the screen there are many transforms to be done. To crudely simplify a complex thing: after moving your vertex to view space (applying the camera position and direction), the magic is (1) the projection matrix and (2) the viewport settings.
Internally, OpenGL's "screen coordinates" are normalized device coordinates inside a cube from (-1, -1, -1) to (1, 1, 1):
http://www.matrix44.net/cms/wp-content/uploads/2011/03/ogl_coord_object_space_cube.png
The projection matrix 'squeezes' the view frustum into this cube (which you do in the vertex shader), assuming a perspective transform; if the projection is orthographic, the viewing volume is just a box, limited by the near and far values (and, in both cases, by scaling factors):
http://www.songho.ca/opengl/files/gl_projectionmatrix01.png
EDIT: Maybe better example here:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Projection_matrix
(EDIT: the Z-coordinate is used as the depth value.) When fragments are finally mapped to pixels on the texture/framebuffer/screen, the coordinates are scaled and offset by the viewport settings:
https://www3.ntu.edu.sg/home/ehchua/programming/opengl/images/GL_2DViewportAspectRatio.png
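That last step is just the fixed viewport transform; as a sketch (variable names are illustrative, and the default glDepthRange(0.0, 1.0) is assumed):
/* With glViewport(x0, y0, width, height), a point in NDC becomes: */
float xw = (x_ndc * 0.5f + 0.5f) * width  + x0; /* window x in pixels */
float yw = (y_ndc * 0.5f + 0.5f) * height + y0; /* window y in pixels */
float dw =  z_ndc * 0.5f + 0.5f;                /* value stored in the depth buffer */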
Hope this helps!

Accessing the Depth Buffer from a fragment shader

I had an idea for fog that I would like to implement in OpenGL: after the scene is rendered, a quad is rendered over the entire viewport. In the fragment shader, this quad samples the depth buffer at that location and changes its color/alpha in order to make that pixel as foggy as it needs to be.
Now I know I can render the scene with the depth buffer linked to a texture, render the scene normally and then render the fog, passing it that texture, but that is one render too many. I wish to be able to either
Directly access the current depth buffer from the fragment shader
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
Is this possible?
What you're thinking of (accessing the target framebuffer for input) would result in a feedback loop which is forbidden.
(…), but this is one rendering too many.
Why do you think that? You don't have to render the whole scene anew, just the fog overlay on top of it.
I wish to be able to either
Directly access the current depth buffer from the fragment shader
If you want to access only the depth of the newly rendered fragment, just use gl_FragCoord.z. This variable (which should only be read, to keep performance) holds the depth buffer value the new fragment will have.
See the GLSL Specification:
The variable gl_FragCoord is available as an input variable from within fragment shaders
and it holds the window relative coordinates (x, y, z, 1/w) values for the fragment.
If multi-sampling, this value can be for any location within the pixel, or one of the
fragment samples. The use of centroid in does not further restrict this value to be
inside the current primitive. This value is the result of the fixed functionality that
interpolates primitives after vertex processing to generate fragments. The z component
is the depth value that would be used for the fragment’s depth if no shader contained
any writes to gl_FragDepth. This is useful for invariance if a shader conditionally
computes gl_FragDepth but otherwise wants the fixed functionality fragment depth.
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
What's so wrong with first rendering the scene normally, with depth going into a separate depth texture attachment, then rendering the fog, and finally compositing them? The computational complexity does not increase. Just because there are more steps, it's not doing more work than your imagined solution, since the individual steps become simpler.
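A minimal sketch of the depth-texture attachment that answer refers to (width, height and colorTex are assumed to exist; error checking omitted):
GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0); /* colorTex assumed */
/* render the scene once; depth lands in depthTex, which the fog pass can sample */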
Camera-to-pixel distance:
float z = gl_FragCoord.z / gl_FragCoord.w;
The solution you're thinking of is a common one, but there is no need for a supplementary sampling pass with a quad; everything is already there to compute the fog in one pass if the depth buffer is enabled.
Here is an implementation (using the fixed-function fog state via gl_Fog; finalColor is the fragment's lit color):
const float LOG2 = 1.442695;
float z = gl_FragCoord.z / gl_FragCoord.w;
float fogFactor = exp2(-gl_Fog.density * gl_Fog.density * z * z * LOG2);
fogFactor = clamp(fogFactor, 0.0, 1.0);
gl_FragColor = mix(gl_Fog.color, finalColor, fogFactor);

OpenGL using depth buffer for shadows - any reason not to just render z to tex rather than use depth buffer?

So I'm working on implementing shadow mapping. So far, I've rendered the geometry (depth, normals, colors) to a framebuffer from the camera's point of view, and rendered the depth of the geometry from the light's point of view. Now I'm rendering the lighting from the camera's point of view, and for each fragment I compare its distance to the light against the depth texture value from the render-from-the-light's-POV pass. If the distance is greater, it's in shadow. (Just recapping here to make sure there isn't anything I don't realize I don't understand.)
So, to do this last step, I need to convert the depth value [0-1] back to its eye-space value [0.1-100] (my near/far planes). (Explanation here: Getting the true z value from the depth buffer.)
Is there any reason not to have the render-from-the-light's-POV pass write the fragment's distance to the camera (the z component) directly to a texture? Then we wouldn't have to deal with the ridiculous conversion. Or am I missing something?
You can certainly write your own depth value to a texture, and many people do just that. The advantage of doing that is that you can choose whatever representation and mapping you like.
The downside is that you have to either a) still have a "real" depth buffer attached to your FBO (and therefore double up the bandwidth you're using for depth writing), or b) use GL_MIN/GL_MAX blending mode (depending on how you are mapping depth) and possibly miss out on early-z out optimizations.
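For reference, the conversion the question calls ridiculous usually boils down to a few lines; a sketch assuming a standard perspective projection, the default glDepthRange, near plane n and far plane f:
/* Recover a positive eye-space distance from a [0,1] depth buffer value. */
float linearizeDepth(float d, float n, float f)
{
    float z_ndc = d * 2.0f - 1.0f;                      /* [0,1] -> [-1,1] */
    return (2.0f * n * f) / (f + n - z_ndc * (f - n));  /* n at the near plane, f at the far plane */
}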