Say I have two depth buffers ("DA" and "DB"). I'd like to attach DA to a framebuffer and render to it in the normal/"traditional" way (every fragment is depth tested with GL_LESS; if it passes, it is drawn and its depth is written to the depth buffer).
Then, I'd like to attach DB. Here I'd want to use DB in the traditional way, but also depth test against DA with GL_GREATER. If the fragment passes both tests, I'd still just write its depth to DB.
The effect would be that anything drawn in the second pass only appears if it is behind the contents of the first pass, but in front of everything else in the second pass.
Does OpenGL 2.1 (or OpenGL ES 2) offer any of this functionality? What could I do to work around it not formally offering it?
If it doesn't offer it, here's an idea for how I'd do it that I'd like sanity-checked...
Attach a renderbuffer or texture that acts as a depth buffer, then manually populate it with depths, and manually perform the "is this fragment's depth greater than the texel in DA?" test in the shader (discarding the fragment on failure). I'm curious what problems I'd run into. Is there an easy way to emulate the precision of a formally specified depth buffer? Would the performance be awful? Any other concerns?
I don't believe there is a way to have multiple depth buffers. I thought there might at least be vendor-specific extensions, but a search through the extension registry came up empty even for that.
What you already started considering in the last paragraph looks like the only reasonably straightforward option that comes to mind. You use a depth texture as your second depth buffer, and discard fragments based on their comparison with the value from the depth texture in your fragment shader.
You don't have to manually populate the depth texture. If the depth values are generated as the result of an earlier rendering pass, you can copy them from the primary depth buffer to a depth texture using glCopyTexImage2D(). This function is available in OpenGL 2.1. In newer versions, you can also use glBlitFramebuffer() to perform the copy.
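To make that concrete, here's a rough sketch of both halves (untested; depthTexA, width, height, and screenSize are assumed names). After the first pass, copy the depth buffer into a texture:

// After pass 1: copy the current depth buffer into texture "DA".
// Assumes depthTexA was created with a GL_DEPTH_COMPONENT format.
glBindTexture(GL_TEXTURE_2D, depthTexA);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, width, height, 0);

Then test against it in the second pass's fragment shader (GLSL 1.20 style):

uniform sampler2D depthA;   // DA from the first pass
uniform vec2 screenSize;
void main()
{
    float da = texture2D(depthA, gl_FragCoord.xy / screenSize).r;
    // Emulate GL_GREATER against DA; the normal GL_LESS test
    // against DB still runs in fixed function as usual.
    if (gl_FragCoord.z <= da)
        discard;
    gl_FragColor = vec4(1.0); // your shading goes here
}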
On your concerns:
Precision is a very valid concern. I had some fairly severe precision issues when I played with something similar in the past. They can probably be solved, but you definitely need to watch out for it, and carefully think about what your precision requirements are, and how you can fudge the values if necessary.
I would certainly expect performance to suffer somewhat. The fixed function depth buffer has the advantage of heavy optimizations, like early depth testing, that you won't get with your "home made" depth buffer. I wouldn't necessarily expect it to be horrible, though. And if this is what you need, it's worth trying.
I think this is pretty much a no-go in ES 2.0, at least without relying on extensions (OES_depth_texture, for example). You could potentially consider writing depth values into color textures, but I think it would get very ugly and inefficient. ES 2.0 is great for its simplicity, but for what you're planning to do, you're exceeding its intended feature set. ES 3.0 adds depth textures and some other features that could come into play.
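For completeness, the usual ES 2.0 workaround for the lack of depth textures is to pack the depth value into an RGBA8 color texture in GLSL. A common sketch (assumes depth in [0, 1); precision is limited to what fits across the four channels):

// Pack a [0,1) depth value into the 4 channels of an RGBA8 target.
vec4 packDepth(float depth)
{
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// Recover the depth value in a later pass.
float unpackDepth(vec4 enc)
{
    return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}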
The core of my problem is that I'm having trouble with depth fighting (z-fighting) in pure OpenGL.
I have two versions of the same geometry, but one is simpler than the other.
Together they form sets of perfectly coplanar polygons, and I want to display the complex geometry on top of the simpler one.
Unsurprisingly, this leads to depth fighting when I draw the two sets of triangles sequentially using the OpenGL depth buffer. At the moment I've patched it using glPolygonOffset, but this solution is not suitable for me (I want the polygons to be exactly coplanar).
My idea is to temporarily use a custom depth test when drawing the second set of triangles. I would save the depths of the fragments while rendering the first set. Then I would use glDepthFunc(GL_ALWAYS) to disable the depth test (while still writing to the depth buffer). When rendering the second set, I would discard fragments whose z is greater than the saved depth, plus a certain tolerance (at least the precision of the Z-buffer at that z, I guess). Then I would reset the depth function to GL_LEQUAL.
In short, I just want to force a certain tolerance into the depth test.
Is this a feasible approach?
The problem is that I have no idea how to pass this information (a custom depth buffer) from one shader program to another.
Thanks
PS: I also looked into framebuffer objects and deferred rendering, because apparently they allow passing information around via a "G-buffer", but as soon as I write:
unsigned int gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
My window goes black... Sorry if this is obvious; I'm not familiar with OpenGL yet.
As Rabbid76 said, I can simply disable depth writing using glDepthMask(GL_FALSE).
Now, I can draw several layers of coplanar polygons using the same offset.
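For anyone finding this later, the final draw sequence looks roughly like this (a minimal sketch of my reading of the fix; the draw calls are placeholders):

// Pass 1: draw the base geometry normally, writing depth.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
drawBaseGeometry();

// Passes 2..n: draw the coplanar layers with one shared offset and
// depth writes disabled, so the layers cannot fight each other;
// later layers simply paint over earlier ones in draw order.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);   // the same offset for every layer
glDepthMask(GL_FALSE);
drawLayer1();
drawLayer2();

// Restore state.
glDepthMask(GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);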
Solved.
A friend and I have been having an ongoing argument about the stencil buffer. In short, I haven't been able to find a situation where the stencil buffer would provide any advantage over the programmable pipeline tools in OpenGL 3.2+. Are there any uses for the stencil buffer in modern OpenGL?
[EDIT]
Thanks everyone for all the input on the subject.
It is more useful than ever, since you can now sample stencil index textures from fragment shaders; at this point it can't even be argued that the stencil buffer is not part of the programmable pipeline.
The depth buffer is used for simple pass/fail fragment rejection, which the stencil buffer can also do as suggested in comments. However, the stencil buffer can also accumulate information about test results over multiple passes. All sorts of logic and counting applications exist such as measuring a scene's depth complexity, constructive solid geometry, etc.
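As a tiny sketch of the accumulation idea, measuring depth complexity (overdraw) per pixel only takes a couple of state calls (drawScene() is a placeholder):

// Count how many fragments are rasterized at each pixel.
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);        // the stencil test itself always passes
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);   // increment whether depth passes or fails
drawScene();
// The stencil buffer now holds the per-pixel depth complexity
// (saturating at 255 with GL_INCR).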
To add a recent example to Andon's answer, GTA V uses the stencil buffer kinda like an ID buffer to mark the player character, cars, vegetation etc.
It subsequently uses the stencil buffer to e.g. apply subsurface scattering only to the character or exclude him from motion blur.
See the GTA V Graphics Study (highly recommended, it's a great read!)
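The pattern is roughly this (a sketch with made-up ID constants, not GTA's actual code):

// Geometry pass: tag each object category in the stencil buffer.
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilFunc(GL_ALWAYS, ID_CHARACTER, 0xFF);   // hypothetical constant
drawCharacter();
glStencilFunc(GL_ALWAYS, ID_VEGETATION, 0xFF);  // hypothetical constant
drawVegetation();

// Post-processing pass: run the effect only where the character was drawn.
glStencilFunc(GL_EQUAL, ID_CHARACTER, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawSubsurfaceScatteringPass();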
Edit: sure, you can do this in software. But you can do rasterization or tessellation in software just as well... In the end it's about performance, I guess. With GL_DEPTH24_STENCIL8 you have a nice hardware-supported format, and the stencil test is most likely faster than doing discards in the fragment shader.
Just to provide one other use case, shadow volumes (aka "stencil shadows") are still very relevant: https://en.wikipedia.org/wiki/Shadow_volume
They're useful for indoor scenes where shadows are supposed to be pixel perfect, and you're less likely to have alpha-tested foliage messing up the extruded shadow volumes.
It's true that shadow maps are more common, but I suspect that stencil shadows will have a comeback once the brain-dead Creative/3DLabs patent on the z-fail method expires.
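For reference, the classic depth-fail ("Carmack's reverse") stencil setup looks roughly like this (a sketch; the draw calls are placeholders and state restoration is omitted):

// Pass 1: fill the depth buffer with the scene.
glEnable(GL_DEPTH_TEST);
drawScene();

// Pass 2: rasterize the shadow volumes into the stencil buffer (z-fail).
glDepthMask(GL_FALSE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glDisable(GL_CULL_FACE);  // process front and back faces in one pass
glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);  // increment on depth fail
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);  // decrement on depth fail
drawShadowVolumes();

// Pass 3: light only the pixels whose stencil count ended up zero.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawLitScene();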
I am trying to use OpenGL for a project, and I was wondering if there is a way to use the built-in depth testing with multiple depth buffers. Basically, each pass I get a new depth buffer, and I have to compare it with the current depth buffer and write all the values that pass to it. Is there any built-in functionality that can do this, or will I have to do it manually?
Thanks
It's not mandatory to clear the depth buffer between passes.
If you don't, then in every pass, any value that passes the depth test is written to the depth buffer.
You can control the depth function with glDepthFunc(), but the possibilities are limited, and the result is always the same: the passing value is written to the depth buffer (no complicated calculation).
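In other words, something like this (a sketch; the draw calls are placeholders):

// Frame start: clear color and depth once.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
drawPass1();

// Later passes: don't clear the depth buffer, so new fragments are
// automatically tested against (and merged into) the earlier results.
drawPass2();
drawPass3();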
For more complex stuff, you should take a look at GLSL.
Finally, if you want more concrete help, you should be more specific about your goal.
I'm working with a deferred rendering engine using OpenGL 3.3. I have an FBO set up as my G-buffer with a texture attached as the depth component.
In my lighting pass I need to depth test (with writes disabled) to cull unnecessary pixels. However, I'm also writing code that reconstructs world-space positions, and this too needs access to the depth buffer.
Is it legal in OpenGL 3.3 to bind a depth attachment to a texture unit and sample it whilst also using it for depth testing in the same pass?
I can't find anything specifically about it in the docs but my gut tells me that using the same buffer/texture for two different purposes will produce undefined behaviour. Does anybody know for sure? I have a limited set of hardware to test on and don't want to make false assumptions about what works.
At the very least this creates a situation where memory coherency cannot be guaranteed (pre-GL4, coherency is something you simply assume at every stage of the traditional pipeline, with no standardized way to control it).
The driver just might cache this memory in an undesirable way since this behavior is undefined. You would like to think that an appropriate combination of writemask and sampling would be a strong hint not to do that, but that is all up to whoever designed the driver and your results will tend to vary by hardware vendor, platform and hardware generation.
This scenario is a use-case for things like NV's texture barrier extension, but that is vendor specific and still does not tackle the problem entirely. If you want to do this sort of thing portably, your best bet is to promote the engine to GL4 and use standardized features for early fragment tests, barriers, etc.
Does your composite pass really need a depth buffer in the first place though? It sounds like you want to re-construct per-pixel position during lighting from the stored depth buffer. That's entirely possible in a framebuffer with no depth attachment at all.
Your G-Buffers will already be filled at this point, and after that you no longer need to do any fragment tests. The one fragment that passed all previous tests is what's eventually written to the G-Buffer and there's no reason to apply any additional tests to it when it comes time to do lighting.
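As a sketch of that suggestion (hypothetical names; framebuffer-completeness checks omitted), the lighting FBO gets color attachments only, and the G-buffer's depth texture is bound as an ordinary sampler input:

// Lighting FBO: a color attachment and no depth attachment at all.
GLuint lightFBO;
glGenFramebuffers(1, &lightFBO);
glBindFramebuffer(GL_FRAMEBUFFER, lightFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, lightTex, 0);

// The G-buffer depth texture is now just input data for the shader.
glDisable(GL_DEPTH_TEST);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gDepthTex);  // sampled to reconstruct position
drawFullScreenLightingQuad();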
How can I make my own z-buffer for correct blending of alpha channels? I'm using GLSL.
I have only one idea, which is to use two "buffers", one storing the depth component and the other the color (with alpha channel). I don't need access to the buffers from my application. I can't use a uniform array because GLSL restricts the number of uniform variables. I can't use an FBO because the behaviour of reading from and writing to the same framebuffer is undefined (and doesn't work on any cards).
How can I resolve this problem?
Or, how can I read the actual, up-to-date z-buffer from GLSL? (I mean the z-buffer would have to be updated for every fragment shader invocation.)
How can I make my own z-buffer for correct blending of alpha channels?
That's not possible. For perfect order-independent transparency you must get rid of the z-buffer and replace it with another mechanism for hidden surface removal.
With a z-buffer, there are two possible ways to tackle the problem:
1. A multi-layered z-buffer (impractical with hardware acceleration): it stores several layers of depth values and uses them to blend transparent surfaces. It hogs a lot of memory, and there is a maximum number of overlapping transparent surfaces; once you're over the limit, there will be artifacts.
2. Depth peeling (google it): order-independent transparency, but again with a limit on the maximum number of overlapping transparent polygons per pixel. It can actually be implemented on hardware.
Both approaches have a limit (a maximum number of overlapping transparent polygons per pixel); once you go over it, the scene no longer renders properly, which makes the whole thing rather useless.
What you could actually do (to get a perfect solution) is remove the z-buffer completely and build a rendering pipeline that gathers all polygons to be rendered, clips them, splits them (where two polygons intersect), sorts them, and then paints them on screen in the correct order to ensure a correct result. However, this is hard, and doing it with hardware acceleration is harder. I think (I'm not completely certain it happened) that 5 or 6 years ago some ATI GPU-related document mentioned that some of their cards could render a correct scene with the Z-buffer disabled by enabling some kind of extension. However, they didn't say a thing about alpha blending, and I haven't heard about the feature since. Perhaps it never became popular and shared the fate of TruForm (forgotten). Also, such a rendering pipeline would not be able to do some things that are possible with a z-buffer.
If it's order-independent transparency you're after, then the fundamental problem is that a depth buffer stores one depth per pixel, but when you're composing a view of partially transparent geometry, more than one fragment contributes to each pixel.
To solve the problem robustly you'd need an ordered list of depths per pixel, going back to the closest opaque fragment, and you'd walk that list in reverse order. In practice OpenGL doesn't do things like variably sized arrays, so people approximate this by drawing their geometry in back-to-front order.
An alternative, embodied by GL_SAMPLE_ALPHA_TO_COVERAGE, is to switch to screen-door transparency, which is indistinguishable from real transparency either at a really high resolution or with multisampling. Ideally you'd do it stochastically, but that would violate the OpenGL rule of repeatability. Nevertheless, since you're in GLSL you can do it yourself: your shader simply takes the input alpha and uses it as the probability of keeping the fragment. So grab a random value in the range 0.0 to 1.0 from somewhere, and if it's greater than the alpha, discard the fragment. Always output with an alpha of 1.0 and just use the normal depth buffer. Answers like this say a bit more about how to get random-ish numbers in GLSL, and obviously you want to turn multisampling up as high as possible.
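A minimal GLSL sketch of that stochastic discard (the hash below is a common ad-hoc pseudo-random trick, not anything OpenGL specifies):

uniform sampler2D tex;
varying vec2 uv;

// Cheap screen-space pseudo-random number in [0, 1).
float rand(vec2 co)
{
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    vec4 color = texture2D(tex, uv);
    // Treat alpha as the probability that this fragment survives.
    if (rand(gl_FragCoord.xy) > color.a)
        discard;
    gl_FragColor = vec4(color.rgb, 1.0);  // opaque; depth buffer as normal
}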
Eric Enderton has written a decent paper (which has a slide version) on stochastic order-independent transparency that goes alongside a DirectX implementation that's worth checking out.