Comparing multiple depth buffers - C++

I am trying to use OpenGL for a project, and I was wondering if there is a way to use the built-in depth testing with multiple depth buffers. Basically, each pass produces a new depth buffer, and I have to compare it with the current depth buffer and write all the values that pass to it. Is there any built-in functionality that can do this, or will I have to do it manually?
Thanks

It's not mandatory to clear the depth buffer between passes.
If you don't, then in any pass, a value that passes the depth test is written to the depth buffer.
You can control the depth function, but the possibilities are limited, and the result is always the same: the passing value is written to the depth buffer (no more elaborate calculation is possible).
For more complex behaviour, you should take a look at GLSL.
Finally, if you want more useful help, you should be more specific about your goal.
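A minimal sketch of that idea, assuming an already initialized context and hypothetical drawFirstPass()/drawSecondPass() helpers (neither appears in the original question):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                                 // the standard comparison
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear once, before the first pass

drawFirstPass();    // fragments that pass GL_LESS write their depth to the buffer

// No glClear(GL_DEPTH_BUFFER_BIT) here: the second pass is depth-tested
// against everything the first pass already wrote.
drawSecondPass();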

Related

Depth-fighting solution using custom depth testing

The core of my problem is that I am having trouble with depth-fighting in pure OpenGL.
I have two identical geometries, but one is simpler than the other.
Together they form a set of perfectly coplanar polygons, and I want to display the complex geometry on top of the simpler one.
Unsurprisingly, this leads to depth-fighting when I draw the two sets of triangles sequentially using the OpenGL depth buffer. At the moment I've patched it using glPolygonOffset, but this solution is not suitable for me (I want the polygons to be exactly coplanar).
My idea is to temporarily use a custom depth test when drawing the second set of triangles. I would like to save the depth of the fragments while rendering the first set. Next, I would use glDepthFunc(GL_ALWAYS) to disable the depth test (while still writing to the depth buffer). When rendering the second set, I would discard fragments that have a greater z than the saved depth, with a certain tolerance (at least the precision of the Z-buffer at that z, I guess). Then I would reset the depth function to GL_LEQUAL.
In short, I just want to allow a certain tolerance in the depth test.
Is this a possible approach?
The problem is that I have no idea how to pass this information (the custom depth buffer) from one program to another.
Thanks
PS: I also looked into Framebuffer Objects and deferred rendering, because apparently they allow passing information via a 'G-buffer', but as soon as I write:
unsigned int gBuffer;
glGenFramebuffers(1, &gBuffer);              // create a framebuffer object
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);  // make it the current render target
My window goes black... Sorry if this is obvious; I'm not yet familiar with OpenGL.
As Rabbid76 said, I can simply disable depth writing using glDepthMask(GL_FALSE).
Now, I can draw several layers of coplanar polygons using the same offset.
Solved.
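For reference, a rough sketch of how this ends up looking, assuming hypothetical drawScene()/drawLayerA()/drawLayerB() helpers and an arbitrary offset value, none of which comes from the original question:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

drawScene();                        // the rest of the scene writes depth normally

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);      // the same offset for every coplanar layer
glDepthMask(GL_FALSE);              // the layers no longer write depth, so they cannot fight each other
drawLayerA();
drawLayerB();                       // drawn last, so it ends up on top
glDepthMask(GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);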

Why are depth buffers faster than depth textures?

This tutorial on shadow mapping in OpenGL briefly mentions the difference between using a depth buffer and a depth texture (edit: to store per-pixel depth information for depth testing or other purposes, such as shadow mapping) by stating:
Depth texture. Slower than a depth buffer, but you can sample it later in your shader
However, this got me wondering why this is so. After all, both seem to be nothing more than a two-dimensional array containing some data, and Microsoft's graphics documentation defines them in very similar terms (as pointed out in a comment, these notes are not about OpenGL but about another graphics API; still, the purpose of depth buffers and depth textures seems to be quite similar, and I have not found an equivalent description of the two for OpenGL, which is why I have kept these articles. If someone has a link to an article describing depth buffers and depth textures in OpenGL, you are welcome to post it in the comments):
A depth buffer contains per-pixel floating-point data for the z depth of each pixel rendered.
and
A depth texture, also known as a shadow map, is a texture that contains the data from the depth buffer for a particular scene
Of course, there are a few differences between the two methods -- notably, the depth texture can be sampled later, unlike the depth buffer.
Despite these differences, I cannot see why a depth buffer should be faster to use than a depth texture. My question is therefore: why can't these two methods of storing the same data be equally fast (edit: when used for storing depth data for depth testing)?
By "depth buffer", I will assume you mean "renderbuffer with a depth format".
Possible reasons why a depth renderbuffer might be faster to render to than a depth texture include:
1. A depth renderbuffer can live within specialized memory that is not shader-accessible, since the implementation knows that you can't access it from the shader.
2. A depth renderbuffer might be able to have a special format or layout that a depth texture cannot have, since the texture has to be shader-accessible. This could include things like Hi-Z/Hierarchical-Z and so forth.
#1 tends to crop up on tile-based architectures. If you do things right, you can keep your depth renderbuffer entirely within tile memory. That means that, after a rendering operation, there is no need to copy it out to main memory. By contrast, with a depth texture, the implementation can't be sure you don't need to copy it out, so it has to do so just to be safe.
Note that this list is purely speculative. Unless you've actually profiled it, or have some specific knowledge of hardware (as in the TBR case), there's no reason to assume that there is any substantial difference in performance.
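For context, this is roughly what the two kinds of depth attachment look like at the API level; width/height and the GL_DEPTH_COMPONENT24 format below are arbitrary choices, not something from the question:

// Option A: depth renderbuffer -- can be rendered to, but never sampled in a shader.
GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

// Option B: depth texture -- attachable as the depth buffer and also samplable later.
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

Both serve as the depth attachment; any performance difference comes entirely from the freedoms each form gives the implementation, as described above.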

How can I write a different value to the depth buffer than the value used for depth comparison?

In OpenGL, is it possible to render a polygon with the regular depth test enabled, but write a custom value when the depth is actually stored in the depth buffer?
(The reason is that I'm rendering a particle system which should be depth-tested against the geometry in the scene, but I want to write a very large depth value where the particle system is located, thereby letting the depth-blur post-processing step further blur the particle system.)
Update
To further refine the question, is it possible without rendering in multiple passes?
You don't.
OpenGL does not permit you to lie. The depth test compares the incoming value against the value already in the depth buffer; once that test passes, the tested depth value is written to the depth buffer. Period.
You cannot (ab)use the depth test to do something other than testing depth values.
ARB_conservative_depth / GL 4.2 does allow a very limited form of this. Basically, you promise the implementation that you will only change the depth in a way that makes it smaller, or only in a way that makes it larger, than the original value. But even then it only works in specific cases, and only as long as you stay within the limits you specified.
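For illustration only (this shader is not from the question), the promise is made by redeclaring gl_FragDepth with a layout qualifier; the GLSL 4.20 source is shown here embedded in a C++ string:

const char* conservativeDepthFrag = R"(
    #version 420
    layout (depth_greater) out float gl_FragDepth;  // promise: the shader only ever increases depth
    out vec4 fragColor;
    void main()
    {
        gl_FragDepth = gl_FragCoord.z + 0.01;       // must keep the promise made above
        fragColor = vec4(1.0);
    }
)";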
Enforcing early fragment tests will similarly not allow you to do this. The specification explicitly states that the depth will be written before your shader executes. So anything you write to gl_FragDepth will be ignored.
One way to do it in a single pass is to perform the depth test "manually".
Set glDepthFunc to GL_ALWAYS.
Then, in the fragment shader, sample the current value of the depth buffer and, depending on it, discard the fragment using discard;.
To sample the current value of the depth buffer you need either ARM_shader_framebuffer_fetch_depth_stencil (usually available on mobile platforms) or NV_texture_barrier. The latter, however, yields undefined results if multiple particles of the same draw call render on top of each other, while the former will in that case use the depth value written by the last particle rendered at the same location.
You can also do it without any extension by copying the current depth buffer into a depth texture before you render the particles, and then using that depth texture for your manual depth test. That also prevents particles that render on top of each other from interfering, as they will all use the old depth values for the manual test.
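A sketch of that extension-free variant, with placeholder names (sceneDepthCopy, screenSize) that are not part of the original answer; glDepthFunc(GL_ALWAYS) is assumed on the C++ side, and the fragment shader source is shown as a C++ string:

const char* particleFrag = R"(
    #version 330
    uniform sampler2D sceneDepthCopy;  // copy of the depth buffer taken before this pass
    uniform vec2 screenSize;
    out vec4 fragColor;
    void main()
    {
        float sceneDepth = texture(sceneDepthCopy, gl_FragCoord.xy / screenSize).r;
        if (gl_FragCoord.z > sceneDepth)   // manual equivalent of the usual GL_LESS test
            discard;
        gl_FragDepth = 0.999;              // the custom (large) depth value the question asks for
        fragColor = vec4(1.0);             // particle shading would go here
    }
)";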
You can use gl_FragDepth in the fragment shader to write your custom values:
gl_FragDepth = 0.3;
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/gl_FragDepth.xhtml

In OpenGL, (how) can I depth test between two depth buffers?

Say I have two depth buffers ("DA" and "DB"). I'd like to attach DA to a framebuffer and render to it in the normal/"traditional" way (every fragment is depth-tested with GL_LESS; if it passes, it is drawn and its depth is written to the depth buffer).
Then I'd like to attach DB. Here I'd want to reference/use DB in the traditional way, but also depth test against DA with GL_GREATER. If a fragment passes both tests, I'd still just like to write its depth to DB.
What this would accomplish is that anything drawn in the second pass would only be drawn if it is behind the contents of the first pass, but in front of any other contents of the second pass.
Does OpenGL 2.1 (or OpenGL ES 2) offer any of this functionality? What could I do to work around it not formally offering it?
If it doesn't offer it, here's an idea for how I'd do it, that I would like to be sanity checked...
You attach a renderbuffer that acts as a depth buffer, then manually populate it with depths, and manually perform the "is the depth greater than the texel in DA" test (discarding the fragment when it fails). I'm curious what problems I'd run into. Is there an easy way to emulate the depth precision of formally specified depth buffers? Would the performance be awful? Any other concerns?
I don't believe there is a way to have multiple depth buffers. I thought there might at least be vendor-specific extensions, but a search through the extension registry came up empty even for that.
What you already started considering in the last paragraph looks like the only reasonably straightforward option that comes to mind: use a depth texture as your second depth buffer, and discard fragments in your fragment shader based on a comparison with the value from the depth texture.
You don't have to manually populate the depth texture. If the depth values are generated as the result of an earlier rendering pass, you can copy them from the primary depth buffer to a depth texture using glCopyTexImage2D(). This function is available in OpenGL 2.1. In newer versions, you can also use glBlitFramebuffer() to perform the copy.
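A sketch of that copy with glCopyTexImage2D, using placeholder names and dimensions; the resulting texture is what the fragment shader would then sample for the manual test:

GLuint depthCopyTex;
glGenTextures(1, &depthCopyTex);
glBindTexture(GL_TEXTURE_2D, depthCopyTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// ... render the first pass so the primary depth buffer is filled ...

// Copy the current depth buffer contents into the texture.
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, width, height, 0);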
On your concerns:
Precision is a very valid concern. I had some fairly severe precision issues when I played with something similar in the past. They can probably be solved, but you definitely need to watch out for it, and carefully think about what your precision requirements are, and how you can fudge the values if necessary.
I would certainly expect performance to suffer somewhat. The fixed function depth buffer has the advantage of heavy optimizations, like early depth testing, that you won't get with your "home made" depth buffer. I wouldn't necessarily expect it to be horrible, though. And if this is what you need, it's worth trying.
I think this is pretty much a no go in ES 2.0, at least without relying on extensions. You could potentially consider writing depth values into color textures, but I think it would get very ugly and inefficient. ES 2.0 is great for its simplicity. But for what you're planning to do, you're exceeding its intended feature set. ES 3.0 adds depth textures, and some other features that could come into play.

Reading FBO depth attachment whilst depth testing

I'm working with a deferred rendering engine using OpenGL 3.3. I have an FBO set up as my G-buffer with a texture attached as the depth component.
In my lighting pass I need to depth test (with writes disabled) to cull unnecessary pixels. However, I'm currently writing code that reconstructs world-space positions, and this will also need access to the depth buffer.
Is it legal in OpenGL 3.3 to bind a depth attachment to a texture unit and sample it whilst also using it for depth testing in the same pass?
I can't find anything specifically about it in the docs but my gut tells me that using the same buffer/texture for two different purposes will produce undefined behaviour. Does anybody know for sure? I have a limited set of hardware to test on and don't want to make false assumptions about what works.
At the very least, this creates a situation where memory coherency cannot be guaranteed (coherency is something you essentially take for granted at all stages of the traditional pre-GL4 pipeline, and something you have no standardized control over either).
The driver might well cache this memory in an undesirable way, since the behaviour is undefined. You would like to think that an appropriate combination of write mask and sampling would be a strong hint not to do that, but that is all up to whoever designed the driver, and your results will tend to vary by hardware vendor, platform and hardware generation.
This scenario is a use-case for things like NV's texture barrier extension, but that is vendor specific and still does not tackle the problem entirely. If you want to do this sort of thing portably, your best bet is to promote the engine to GL4 and use standardized features for early fragment tests, barriers, etc.
Does your composite pass really need a depth buffer in the first place though? It sounds like you want to re-construct per-pixel position during lighting from the stored depth buffer. That's entirely possible in a framebuffer with no depth attachment at all.
Your G-Buffers will already be filled at this point, and after that you no longer need to do any fragment tests. The one fragment that passed all previous tests is what's eventually written to the G-Buffer and there's no reason to apply any additional tests to it when it comes time to do lighting.
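A sketch of what that lighting-pass shader could look like, with placeholder uniform names (gDepth, invViewProj, screenSize) rather than anything from the actual engine; the framebuffer used for the lighting pass simply has no depth attachment at all:

const char* lightingFrag = R"(
    #version 330 core
    uniform sampler2D gDepth;      // the G-buffer depth attachment, now bound as a plain texture
    uniform mat4 invViewProj;      // inverse view-projection matrix
    uniform vec2 screenSize;
    out vec4 fragColor;
    void main()
    {
        vec2  uv    = gl_FragCoord.xy / screenSize;
        float depth = texture(gDepth, uv).r;
        // Reconstruct the world-space position from the stored depth value.
        vec4 ndc   = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
        vec4 world = invViewProj * ndc;
        world /= world.w;
        fragColor = vec4(world.xyz, 1.0);  // real lighting would use this position instead
    }
)";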