I'm trying to implement the depth peeling algorithm to draw transparent objects in the correct order in a scene already full of opaque objects (a terrain that occludes the transparent objects most of the time).
I have already separated the draw calls for opaque and transparent objects, drawing the opaque ones first. The problem is that each of the "transparent drawing passes" the algorithm needs would use its own (empty) z-buffer, but I need to check whether an opaque object occludes the view before drawing. Hence I need to reuse the z-buffer obtained after drawing the opaque objects in every pass, rather than starting each pass from an empty one.
I'm aware I could dump the z-buffer state to a texture after drawing the opaque objects and perform the depth comparisons against it in the fragment shader, but I'm afraid that doing the comparisons in the fragment shader would be extremely slow. Is there another, faster way to solve this problem?
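For concreteness, the fragment-shader comparison I'm worried about would look roughly like this in the peeling passes (the uniform names are mine, and viewportSize is assumed to be set by the application):

#version 330

uniform sampler2D opaqueDepth;  // z-buffer of the opaque pass, dumped to a texture
uniform vec2 viewportSize;

out vec4 fragColor;

void main() {
    // window-space depth of the nearest opaque surface at this pixel
    float zOpaque = texture(opaqueDepth, gl_FragCoord.xy / viewportSize).r;
    if (gl_FragCoord.z >= zOpaque)
        discard;                // occluded by opaque geometry
    fragColor = vec4(1.0);      // placeholder: real shading goes here
}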
Edit: My target is OpenGL 3.x, so using the A-buffer is not an option in this particular case.
The core of my problem is that I'm having trouble with depth fighting in pure OpenGL.
I have two versions of the same geometry, one simpler than the other.
Together they form a set of perfectly coplanar polygons, and I want to display the complex geometry on top of the simpler one.
Unsurprisingly, this leads to depth fighting when I draw the two sets of triangles sequentially using the OpenGL depth buffer. At the moment I've patched it using glPolygonOffset, but this solution is not suitable for me (I want the polygons to be exactly coplanar).
My idea is to temporarily use a custom depth test when drawing the second set of triangles. I would like to save the depth of the fragments during the rendering of the first set. Next, I would use glDepthFunc(GL_ALWAYS) to disable the depth test (while still writing to the depth buffer). When rendering the second set, I would discard fragments that have a greater z than the saved depth, but with a certain allowance (at least one unit of z-buffer precision at that depth, I guess). Then I would reset the depth function to GL_LEQUAL.
Actually, I just want to force a certain allowance for the depth test.
Is this a possible approach?
The problem is that I have no idea how to pass information (a custom depth buffer) from one shader program to another.
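In other words, I imagine something like the following, though I don't know if it's the right way (identifiers and sizes are my own):

// texture that will hold the depth of the first set
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// ... render the first set of triangles ...

// copy the current depth buffer into the texture,
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

// then expose it to the second program as a sampler for the custom test
glUseProgram(secondProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);
glUniform1i(glGetUniformLocation(secondProgram, "firstPassDepth"), 0);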
Thanks
PS: I also looked into framebuffer objects and deferred rendering, because apparently they allow passing information via a 'G-buffer', but once I write:
unsigned int gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer); // all subsequent draws now target gBuffer, not the window
my window goes black... Sorry if this is obvious; I'm not familiar with OpenGL yet.
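(For reference: with gBuffer bound, every subsequent draw call renders into it instead of the window, and a framebuffer with no attachments is incomplete, so nothing shows up on screen. A minimal complete setup would look roughly like this; the texture/renderbuffer names and the width/height are my own:)

GLuint colorTex, depthRbo;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the error

glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer (the window)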
As Rabbid76 said, I can simply disable depth writing using glDepthMask(GL_FALSE).
Now, I can draw several layers of coplanar polygons using the same offset.
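In code, the pattern looks roughly like this (the draw* calls stand for my own geometry submission):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);   // equal depths pass, which coplanar polygons rely on
glDepthMask(GL_TRUE);
drawSimpleGeometry();     // the first set writes depth as usual

glDepthMask(GL_FALSE);    // keep testing against it, but stop writing
drawComplexGeometry();    // coplanar layers no longer fight
glDepthMask(GL_TRUE);     // restore for the rest of the frame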
Solved.
I am trying to implement order-independent transparency (OIT) using the following naive technique:
Sort opaque and transparent objects.
Render opaque with depth write on.
Disable depth write, enable alpha blending, and render the transparent objects.
It works OK if I have only fully opaque and fully transparent objects. But what about the case where some objects have transparent alpha textures (all my meshes are planar) and need alpha blending on, while others are transparent? What is the routine in this case?
Currently, if I render the transparent objects first with depth write on and alpha blending on, and then render the object with the alpha-channel texture (depth write off, blending on), the part of the last rendered plane that intersects the first plane gets culled. Here is a picture to depict what I am after:
Both planes have some amount of transparency and still maintain depth sort.
I know I can use more sophisticated approaches for OIT like this. But is it possible to do it without getting into fragment linked lists etc?
A rasterization-based renderer, by its very nature, cannot handle transparency well. Rasterizers work by rendering one object, then rendering another. They have no idea of what has been rendered before or what will be rendered afterwards. Their job is to take a 3D shape and turn it into a field of colors.
If there was a simple technique for doing order-independent transparency, then you'd have heard about it by now because everyone would be using it. There isn't. Every general OIT technique is complicated and has some performance downsides associated with it.
Is it possible to find a way for the very specific case of two planes intersecting with nothing else on the screen? Yes. Could you generalize that method for arbitrary scenes with arbitrary transparency? No.
I'm working on rendering a scene that potentially has multiple intersecting transparent objects. This makes the standard method of sorting and drawing back to front problematic (even sorting triangles wouldn't work if the triangles intersect). So I've implemented depth peeling, using a GLSL fragment shader to do the second depth test. It works great.
Now I want to be able to apply certain effects using shaders. One of the objects in the scene is a syringe, and I would like to apply a glass effect. If I was drawing back to front, this would be easy - just start the shader when I draw the syringe, since everything behind it is already in the frame buffer. However, when using depth peeling this approach won't work.
So my questions are:
How do I apply shader effects to a single object in a scene when using depth peeling?
How do I combine effect shaders with my depth peeling shader (assuming they need to run at the same time)?
I should note that I'm pretty new at using shaders, so code examples are appreciated!
I'd be surprised if that's possible without ray tracing. As far as I know, the way to use refraction shaders is to do texture lookups in an environment map. This map can either be precomputed, or computed on the fly in a separate rendering pass. For the latter option you would need one separate environment map and one extra pass for each object that uses the shader. I kinda doubt that's possible if the objects intersect each other. Even if it were, each of these passes would also need another couple of passes for the depth peeling. And if you also wanted the depth peeling shader passes to factor in refractions from the surrounding objects, this would quickly get out of hand.
Criteria: I’m using OpenGL with shaders (GLSL) and trying to stick with modern techniques (e.g., staying away from deprecated concepts).
My questions, in a very general sense (see below for more detail), are as follows:
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
Background: My application draws points connected with lines in an ortho projection (vertices have varying depth in the projection). I’ve only recently started using shaders in the project (trying to get away from deprecated concepts). I understand that standard blending has ordering issues with alpha testing and depth testing: basically, if a “translucent” pixel at a higher z level is drawn first (thus blending with whatever colors were already drawn to that pixel at a lower z level), and an opaque object is then drawn at that pixel but at a lower z level, depth testing prevents changing the pixel that was already drawn for the “higher” z level, thus causing blending issues. To overcome this, you need to draw opaque items first, then translucent items in ascending z order. My gut feeling is that shaders wouldn’t provide an (efficient) way to change this behavior—am I wrong?
Further, for speed and convenience, I pass information for each vertex (along with a couple of uniform variables) to the shaders, and they use the information to find a subset of the vertices that need special attention. Without duplicating that logic in the app itself (and slowing things down), I can't know a priori what subset of vertices that is, so I send all vertices to the shader. However, when I draw "points" I'd like the shader to ignore all the vertices that aren't in the subset it determines. I think I can get the effect by setting alpha to zero and using an alpha function in the GL context that prevents drawing anything with alpha less than, say, 0.01. However, is there a better or more "correct" GLSL way for a shader to say "just ignore this vertex"?
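For illustration, the alpha trick I mean would amount to something like this in the fragment shader (vAlpha is a name of my choosing; since the fixed-function alpha test is deprecated, discard stands in for it):

#version 330

in float vAlpha;   // per-vertex flag, zeroed by the vertex shader for ignored vertices
out vec4 fragColor;

void main() {
    if (vAlpha < 0.01)
        discard;   // core-profile replacement for the old alpha test
    fragColor = vec4(1.0, 1.0, 1.0, vAlpha);  // placeholder color
}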
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Sort of. If you have access to GL 4.x-class hardware (Radeon HD 5xxx or better, or GeForce 4xx or better), then you can perform order-independent transparency. Earlier versions have techniques like depth peeling, but they're quite expensive.
The GL 4.x-class version essentially builds per-pixel "linked lists" of transparent samples, which a full-screen pass then resolves into the final sample color. It's not free of course, but it isn't as expensive as other OIT methods. How expensive it would be in your case is uncertain; it is proportional to how many overlapping transparent pixels you have.
You still have to draw opaque stuff first, and you have to draw transparent stuff using special shader code.
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
No.
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
No in general, but yes for points. A geometry shader can conditionally emit vertices, thus allowing you to discard any vertex for arbitrary reasons.
Discarding a vertex in non-point primitives is possible, but it will also affect the interpretation of that primitive. The reason it's simple for points is because a vertex is a primitive, while a vertex in a triangle isn't a whole primitive. You can discard lines, but discarding a vertex within a line is... of dubious value.
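A sketch of the point case, assuming the vertex shader passes a per-vertex flag (vKeep is a name of my choosing):

#version 150

layout(points) in;
layout(points, max_vertices = 1) out;

in float vKeep[];  // flag computed by the vertex shader

void main() {
    if (vKeep[0] > 0.5) {  // only vertices in the subset are emitted
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();
        EndPrimitive();
    }
    // emitting nothing discards the point entirely
}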
That being said, your explanation for why you want to do this is of dubious merit. You want to update vertex data with essentially a boolean value that says "do stuff with me" or not. That means that, every frame, you have to modify your data to say which points should be rendered and which shouldn't.
The simplest and most efficient way to do this is to simply not render with them. That is, arrange your data so that the only things on the GPU are the points you want to render. Thus, there's no need to do anything special at all. If you're going to be constantly updating your vertex data, then you're already condemned to dealing with streaming vertex data. So you may as well stream it in a way that makes rendering efficient.
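For example, a common streaming pattern (buffer and identifier names are mine) is to orphan the buffer and upload only the selected points each frame:

// app-side filtering picks the subset to render this frame
std::vector<Point> visible = selectVisiblePoints();

glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
// orphan the old storage so the driver doesn't stall on in-flight draws,
glBufferData(GL_ARRAY_BUFFER, capacityBytes, nullptr, GL_STREAM_DRAW);
// then upload just the selected points and draw them
glBufferSubData(GL_ARRAY_BUFFER, 0, visible.size() * sizeof(Point), visible.data());
glDrawArrays(GL_POINTS, 0, (GLsizei)visible.size());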
All geometry is stored in one VBO (transparent + non-transparent). I cannot sort the geometry. How can I disable writing to the depth buffer from GLSL without losing the color data?
If I understand right, you want to disable depth writes because you draw both opaque and transparent objects. Apart from the fact that it doesn't work that way from within GLSL, it would not produce what you want, if it did.
If you just disabled depth writes ad hoc, the opaque objects coming after a transparent object would overwrite it, regardless of the z order.
What you really want to do is this (a code sketch follows the list):
Enable depth writes and depth test
Draw all opaque geometry. If you can, in a roughly sorted (roughly is good enough!) order, closest objects first.
Disable depth writes, keep depth test enabled
Enable blending
Draw transparent objects, sorted in the opposite direction, that is farthest away first. This occludes transparent objects with opaque geometry and makes blending work correctly.
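In code, that sequence looks roughly like this (the draw* calls stand in for your own geometry submission):

glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawOpaque();                  // ideally roughly front to back

glDepthMask(GL_FALSE);         // depth test stays on, depth writes stop
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparentBackToFront();  // farthest first

glDepthMask(GL_TRUE);          // restore state for the next frame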
If, for some reason, you can't sort the opaque geometry (though there is really no reason why you can't do that?), never mind -- it will be slightly slower because it does not cull fragments, but it will produce the same image.
If, for some reason, you can't sort the transparent geometry, you will have to expect incorrect results where several transparent objects overlap. This may or may not be noticeable (especially if the order is "random", i.e. changes frame by frame, it will be very noticeable -- otherwise you might in fact get away with it although it's incorrect).
Note that as datenwolf has pointed out already, the fact that several objects are in one VBO does not mean you can't draw a subset of them, or several subsets in any order you want. After all, a VBO only holds some vertices, it is up to you which groups of them you draw in which order.
You can't.
I cannot sort the geometry.
Why? You think because it's all in one VBO? Then I've got good news: It's perfectly possible to draw from just a subset of a buffer object.
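For example (offsets and counts are placeholders of mine), two subsets of the same buffer can be drawn in any order with separate calls:

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, opaqueFirst, opaqueCount);           // opaque subset
// ... change blending/depth state ...
glDrawArrays(GL_TRIANGLES, transparentFirst, transparentCount); // transparent subset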