GLSL, manual depth test - opengl

I turned off depth buffer reading and writing. I'm rendering something on top of a previous render. Is there a way to query the current depth buffer (from the previous render) and test it against the current fragment being drawn?
My goal is to change the color when something is behind a geometry. gl_FragDepth and gl_FragCoord.z seem to give me the depth of the fragment currently being rendered, but not the value of the depth buffer at that fragment's location.
Is there a way to achieve this?

Related

How can I write a different value to the depth buffer than the value used for depth comparison?

In OpenGL, is it possible to render a polygon with the regular depth test enabled, but write a custom value to the depth buffer instead of the value used for the comparison?
(The reason is I'm rendering a particle system, which should be depth-tested against the geometry in the scene, but I want to write a very large depth value where the particle system is located, thus utilizing the depth-blur post-processing step to further blur the particle system)
Update
To further refine the question, is it possible without rendering in multiple passes?
You don't.
OpenGL does not permit you to lie. The depth test tests the value in the depth buffer against the incoming value to be written in the depth buffer. Once that test passes, the tested depth value will be written to the depth buffer. Period.
You cannot (ab)use the depth test to do something other than testing depth values.
ARB_conservative_depth/GL 4.2 does allow you a very limited form of this. Basically, you promise the implementation that you will only change the depth in one direction, only ever making it greater (or only ever making it less) than the fixed-function value. But even then, it would only work in specific cases, and only so long as you stay within the limits you specified.
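For illustration, a minimal sketch of what that promise looks like in GLSL 4.20 (the offset added below is purely illustrative):
#version 420
layout (depth_greater) out float gl_FragDepth; // promise: written depth is never less than gl_FragCoord.z
out vec4 color;

void main(){
    color = vec4(1.0);
    gl_FragDepth = gl_FragCoord.z + 0.01; // only ever pushes the fragment further away
}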
Enforcing early fragment tests will similarly not allow you to do this. The specification explicitly states that the depth will be written before your shader executes. So anything you write to gl_FragDepth will be ignored.
One way to do it in a single pass is by doing the depth-test "manually".
Set glDepthFunc to GL_ALWAYS
Then, in the fragment shader, sample the current value of the depth buffer and, depending on it, discard the fragment using discard;
To sample the current value of the depth buffer, you either need ARM_shader_framebuffer_fetch_depth_stencil (usually available on mobile platforms) or NV_texture_barrier. The latter, however, will yield undefined results if multiple particles of the same draw call render on top of each other, while the former will in that case use the depth value written by the last particle rendered at the same location.
You can also do it without any extension by copying the current depth buffer into a depth texture before you render the particles, and then using that depth texture for your manual depth test. That also prevents particles that render on top of each other from interfering, since they will all use the old depth value for the manual test.
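A minimal sketch of that extension-free approach (the sceneDepth sampler name is an assumption; the depth buffer is assumed to have been copied into it before the particle pass, and glDepthFunc is GL_ALWAYS):
#version 330 core
uniform sampler2D sceneDepth; // copy of the depth buffer, made before this pass
out vec4 FragColor;

void main(){
    // Fetch the stored depth at this fragment's window position
    float stored = texelFetch(sceneDepth, ivec2(gl_FragCoord.xy), 0).r;
    // Manual depth test, emulating GL_LESS
    if (gl_FragCoord.z >= stored)
        discard;
    FragColor = vec4(1.0); // particle shading goes here
}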
You can use gl_FragDepth in the fragment shader to write your own custom values.
gl_FragDepth = 0.3;
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_FragDepth.xhtml

OpenGL implementing skybox in a deferred renderer

I am trying to figure out how to render a skybox in a deferred renderer so that it can be included in post-processing effects. However, my geometry stage is in view space, and unfortunately the skybox in this stage is affected by its position relative to the light, as any object would be (it behaves like a large box located very far from the light source and shows up very dark).
My setup, without trying to incorporate the skybox into post-processing, is as follows:
1: (bind FBO) Render geometry to the color, normal, and position FBO texture attachments. (unbind FBO)
2: (bind FBO) Render the scene and calculate lighting in screen space. (unbind FBO)
3: (bind FBO) Apply post-processing effects. (unbind FBO)
4: Blit the geometry FBO's depth buffer to the default framebuffer.
5: Render the skybox.
I've tried swapping steps 5 and 3, like this:
2: (bind FBO) Render the scene and calculate lighting in screen space.
5: Render the skybox.
(unbind FBO)
3: (bind FBO) Apply post-processing effects. (unbind FBO)
4: Blit the geometry FBO's depth buffer to the default framebuffer.
But obviously the skybox has no depth information about the scene, so it renders on top of the lighting stage. And if I try to do any depth blitting between steps 2 and 5, I believe I am making invalid GL calls, because I'm already bound to an FBO while calling:
GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, DeferredFBO.fbo_handle);
GL30.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, 0); // write to the default framebuffer
                                                     // or a skybox framebuffer
GL30.glBlitFramebuffer(0, 0, DisplayManager.Width, DisplayManager.Height,
                       0, 0, DisplayManager.Width, DisplayManager.Height,
                       GL11.GL_DEPTH_BUFFER_BIT, GL11.GL_NEAREST);
So I came up with a really easy, hacky solution to this problem, without having to incorporate any texture barriers or mess with the depth or color buffers.
I render the skybox geometry in the geometry pass of the deferred rendering process, setting a flag in the fragment shader to color the skybox, and remembering to remove the translation from the view matrix via another uniform flag in the vertex shader. In the fragment shader I set the skybox color as follows. Here is a basic summary, without pasting all of the code:
layout (binding = 4) uniform samplerCube cubeMap;
uniform sampler2D DiffuseTexture;
uniform sampler2D SpecularTexture;
uniform float SkyRender;

in vec3 skyTexCoords;
in vec2 TexCoords;

layout (location = 0) out vec4 gAlbedoSpec;

void main(){
    if (SkyRender > 0.5) { // GLSL has no implicit float-to-bool conversion
        vec4 SkyColor = texture(cubeMap, skyTexCoords);
        gAlbedoSpec.rgb = SkyColor.rgb;
        gAlbedoSpec.a = -1.0; // flag this texel as sky for the lighting pass
    } else {
        gAlbedoSpec.rgb = texture(DiffuseTexture, TexCoords).rgb;
        gAlbedoSpec.a = texture(SpecularTexture, TexCoords).r;
    }
}
I set the alpha component of my skybox in the color buffer as a flag for my lighting pass; here I set it to -1.
In my lighting pass I simply choose to color the box with diffuse only, instead of adding lighting calculations, whenever the gAlbedoSpec alpha value is -1:
if (Diffuse.a > -1.0) {
    FragColor = SphereNormal * vec4(Dlighting, 1.0) + vec4(Slighting, 1.0);
} else {
    FragColor = Diffuse;
}
It's fairly simple and doesn't require much code and gets the job done.
Then give it the depth information it lacks.
When you rendered your scene in step 1, you used a depth buffer. So when you draw your skybox, you need an FBO that uses that same depth buffer. But this FBO also needs to use the color image that you rendered to in step 2.
Now, this FBO cannot be the same FBO you used in step 2. Why?
Because that would be undefined behavior. Presumably, step 2 reads from your depth buffer to reconstruct the position (if this is not the case, then you can just attach the depth buffer to the FBO from step 2. But then again, you're also wasting tons of performance). But that depth buffer is also attached to the FBO. And that makes it undefined behavior. Even if you're not writing to the depth, it is still undefined under OpenGL.
So you will need another FBO, which has the depth buffer from step 1 with the color buffer from step 2.
Unless you have access to OpenGL 4.5/ARB_texture_barrier/NV_texture_barrier. With that feature, it becomes defined behavior if you use write masks to turn off writes to the depth buffer. All you need to do is issue a glTextureBarrier before performing step 2. So you don't need another FBO if you have that.
In either case, keep the depth test enabled when rendering your skybox, but turn off depth writes. This allows skybox fragments that lie behind your actual world to be culled, while the skybox itself sits at the maximum depth, behind everything else.
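A common shader-side companion to this setup (a sketch; the viewRot and proj uniform names are assumptions) is to force the skybox onto the far plane in the vertex shader, so the depth test rejects it wherever scene geometry was already drawn:
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 viewRot; // view matrix with its translation stripped
uniform mat4 proj;
out vec3 skyTexCoords;

void main(){
    skyTexCoords = aPos;
    vec4 pos = proj * viewRot * vec4(aPos, 1.0);
    gl_Position = pos.xyww; // z/w becomes 1.0: the skybox sits exactly on the far plane
}
With this, glDepthFunc(GL_LEQUAL) is needed so the skybox still passes where the depth buffer holds the cleared value of 1.0.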

OpenGL - How to access depth buffer values? - Or: gl_FragCoord.z vs. Rendering depth to texture

I want to access the depth buffer value at the currently processed pixel in a pixel shader.
How can we achieve this goal? Basically, there seem to be two options:
Render depth to texture. How can we do this and what is the tradeoff?
Use the value provided by gl_FragCoord.z - But: Is this the correct value?
On question 1: You can't directly read from the depth buffer in the fragment shader (unless there are recent extensions I'm not familiar with). You need to render to a Frame Buffer Object (FBO). Typical steps:
Create and bind an FBO. Look up calls like glGenFramebuffers and glBindFramebuffer if you have not used FBOs before.
Create a texture or renderbuffer to be used as your color buffer, and attach it to the GL_COLOR_ATTACHMENT0 attachment point of your FBO with glFramebufferTexture2D or glFramebufferRenderbuffer. If you only care about the depth from this rendering pass, you can skip this and render without a color buffer.
Create a depth texture, and attach it to the GL_DEPTH_ATTACHMENT attachment point of the FBO.
Do your rendering that creates the depth you want to use.
Use glBindFramebuffer to switch back to the default framebuffer.
Bind your depth texture to a sampler used by your fragment shader.
Your fragment shader can now sample from the depth texture.
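For example, a minimal sketch of that last step, visualizing linearized depth (the depthTex, near, and far uniform names are assumptions, and the linearization assumes a standard perspective projection):
#version 330 core
uniform sampler2D depthTex; // depth texture from the FBO pass above
uniform float near;         // camera near plane
uniform float far;          // camera far plane
in vec2 TexCoords;
out vec4 FragColor;

void main(){
    float d = texture(depthTex, TexCoords).r; // window-space depth in [0, 1]
    // Undo the perspective projection to recover eye-space distance
    float ndcZ = 2.0 * d - 1.0;
    float eyeZ = (2.0 * near * far) / (far + near - ndcZ * (far - near));
    FragColor = vec4(vec3(eyeZ / far), 1.0);  // grayscale visualization
}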
On question 2: gl_FragCoord.z is the depth value of the fragment that your shader is operating on, not the current value of the depth buffer at the fragment position.
gl_FragCoord.z is the window-space depth value of the current fragment. It has nothing to do with the value stored in the depth buffer. The value may later be written to the depth buffer, if the fragment is not discarded and it passes a stencil/depth test.
Technically there are some hardware optimizations that will write/test the depth early, but for all intents and purposes gl_FragCoord.z is not the value stored in the depth buffer.
Unless you render in multiple passes, you cannot read and write the depth buffer in a fragment shader. That is to say, you cannot use a depth texture to read the depth and then turn around and write a new depth. This is akin to trying to implement blending in the fragment shader: unless you do something exotic with DX11-class hardware and image load/store, it just is not going to work.
If you only need the depth of the final drawn scene for something like shadow mapping, then you can do a depth-only pre-pass to fill the depth buffer. In the second pass, you would read the depth buffer but not write to it.
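For illustration, the fragment shader of such a depth-only pre-pass can be empty, since the fixed-function stage writes gl_FragCoord.z to the depth buffer on its own (color writes would be disabled on the API side):
#version 330 core
// Depth-only pre-pass: no color output; the rasterizer fills the depth buffer.
void main(){
}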

Accessing the Depth Buffer from a fragment shader

I had an idea for fog that I would like to implement in OpenGL: after the scene is rendered, a quad is rendered over the entire viewport. In the fragment shader, this quad samples the depth buffer at that location and changes its color/alpha in order to make that pixel as foggy as it needs to be.
Now I know I can render the scene with the depth buffer linked to a texture, render the scene normally and then render the fog, passing it that texture, but this is one rendering too many. I wish to be able to either
Directly access the current depth buffer from the fragment shader
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
Is this possible?
What you're thinking of (accessing the target framebuffer for input) would result in a feedback loop which is forbidden.
(…), but this is one rendering too many.
Why do you think that? You don't have to render the whole scene anew, just the fog overlay on top of it.
I wish to be able to either
Directly access the current depth buffer from the fragment shader
If you want to access only the depth of the newly rendered fragment, just use gl_FragCoord.z. This variable, which is read-only (good for performance), holds the depth value the new fragment will have.
See the GLSL Specification:
The variable gl_FragCoord is available as an input variable from within fragment shaders
and it holds the window relative coordinates (x, y, z, 1/w) values for the fragment.
If multi-sampling, this value can be for any location within the pixel, or one of the
fragment samples. The use of centroid in does not further restrict this value to be
inside the current primitive. This value is the result of the fixed functionality that
interpolates primitives after vertex processing to generate fragments. The z component
is the depth value that would be used for the fragment’s depth if no shader contained
any writes to gl_FragDepth. This is useful for invariance if a shader conditionally
computes gl_FragDepth but otherwise wants the fixed functionality fragment depth.
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
What's so wrong with first rendering the scene normally, with depth going into a separate depth texture attachment, then rendering the fog, and finally compositing them? The computational complexity does not increase. Just because there are more steps, it's not doing more work than your imagined solution, since the individual steps become simpler.
Distance from camera to pixel:
float z = gl_FragCoord.z / gl_FragCoord.w;
The solution you're thinking of is a common one, but there is no need for an extra sampling pass with a quad; everything is already there to compute the fog in one pass, as long as the depth buffer is enabled.
Here is an implementation:
const float LOG2 = 1.442695;
float z = gl_FragCoord.z / gl_FragCoord.w;
float fogFactor = exp2(-gl_Fog.density * gl_Fog.density * z * z * LOG2);
fogFactor = clamp(fogFactor, 0.0, 1.0);
gl_FragColor = mix(gl_Fog.color, finalColor, fogFactor);
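Note that gl_Fog and gl_FragColor are compatibility-profile built-ins; in core-profile GLSL the same exponential-squared fog might look like this sketch (the fogDensity and fogColor uniforms and the litColor placeholder are assumptions):
#version 330 core
uniform float fogDensity;
uniform vec3 fogColor;
out vec4 FragColor;

void main(){
    const float LOG2 = 1.442695;
    vec3 litColor = vec3(1.0); // placeholder for the shaded surface color
    float z = gl_FragCoord.z / gl_FragCoord.w; // same camera-distance approximation as above
    float fogFactor = clamp(exp2(-fogDensity * fogDensity * z * z * LOG2), 0.0, 1.0);
    FragColor = vec4(mix(fogColor, litColor, fogFactor), 1.0);
}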

how to handle depth in glsl

I have a problem with FBOs and depth in OpenGL. I am passing projection, view, and model matrices to a shader that writes to the G-buffer. When I unbind the FBO and write to gl_FragColor, the scene displays as it ought to. But when I write to gl_FragData[0] and then draw the accompanying texture to a screen-aligned quad, objects are drawn in the inverse of the order they were processed rather than by depth: I can see through objects processed first to objects processed after. Has anyone had the same problem, and do they know a fix? Or could someone provide syntax for reading depth values in the shader, querying the current depth, and then writing to the depth buffer depending on a comparison, i.e., handling the operation manually in the fragment shader?
Your main framebuffer most likely has a depth buffer, while your manually created FBO might not. So when drawing to the screen you get depth-sorted geometry, while your FBO cannot provide that: with no depth storage attached, it internally works with depth testing effectively disabled.