OpenGL: Specify what value gets written to the depth buffer? - c++

As I understand the depth buffer, it calculates a fragment's depth from its position relative to the near/far clipping planes and writes that value. However, this isn't what I want, as I don't use the clipping planes, or the third dimension at all. Depth testing would still be immensely helpful to me, though.
My question: is there any way to manually specify what value gets written to the depth buffer for all geometry rendered after you set it (and that passes the alpha test), regardless of its true depth in the scene? The stencil buffer works this way, with the value specified as the second argument of glStencilFunc(), so I thought glDepthFunc() might behave similarly, but I was mistaken.
The main reason I need depth testing in a 2D game is that my lighting model uses stencils a great deal. Objects closer to the camera than the light must be rendered first so the shadow stencils are laid out properly, with the lights drawn after that. It's a pretty tricky draw order, but basically it just means the lights have to be drawn after the scene has finished drawing.
The OpenGL version I'm using is 2.0, though I'm trying to avoid using a fragment shader if possible.

It sounds like you are describing a technique called parallax scrolling. You don't need to write to the depth buffer manually; just enable it, then use a layer approach and specify the Z manually for each object. Then render the scene front to back (sorted).
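A minimal sketch of that layer approach in legacy fixed-function OpenGL (the projection values, screenWidth/screenHeight, and the quad coordinates are placeholders):
// One-time setup; the window must have been created with a depth buffer.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glEnable(GL_ALPHA_TEST);                 // so fully transparent texels don't occlude
glAlphaFunc(GL_GREATER, 0.0f);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, screenWidth, screenHeight, 0, -1.0, 1.0);   // depth range available for layers
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Per frame: clear both buffers, then give each quad a z that encodes its layer.
// With this projection a larger z is closer to the viewer, so drawing front to
// back lets the depth test reject fragments hidden by nearer layers.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_QUADS);
    glVertex3f(x,     y,     layerZ);
    glVertex3f(x + w, y,     layerZ);
    glVertex3f(x + w, y + h, layerZ);
    glVertex3f(x,     y + h, layerZ);
glEnd();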

Related

Depth-fighting solution using custom depth testing

The core of my problem is that I'm having trouble with depth fighting in pure OpenGL.
I have two versions of the same geometry, one simpler than the other.
Together they form sets of perfectly coplanar polygons, and I want to display the complex geometry on top of the simpler one.
Unsurprisingly, this leads to depth fighting when I draw the two sets of triangles sequentially using the OpenGL depth buffer. At the moment I've patched it with glPolygonOffset, but that solution is not suitable for me (I want the polygons to be exactly coplanar).
My idea is to temporarily use a custom depth test when drawing the second set of triangles. I would like to save the depth of the fragments while rendering the first set. Next, I would use glDepthFunc(GL_ALWAYS) to disable the depth test (while still writing to the depth buffer). When rendering the second set, I would discard fragments whose z is greater than the depths I just saved, but with a certain tolerance (at least the precision of the Z-buffer at that particular z, I guess). Then I would reset the depth function to GL_LEQUAL.
Basically, I just want to force a certain tolerance onto the depth test.
Is this a feasible approach?
The problem is that I have no idea how to pass this information (a custom depth buffer) from one program to another.
Thanks
PS: I also looked into framebuffer objects and deferred rendering, because apparently they allow passing information via a 'G-buffer', but as soon as I write:
unsigned int gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
My window goes black... Sorry if this is obvious; I'm not yet familiar with OpenGL.
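For what it's worth, a framebuffer object with no attachments is incomplete, so while it is bound nothing you draw reaches the window, which is most likely why the screen goes black. A minimal sketch of a complete setup, attaching one color texture and switching back to the default framebuffer before drawing to the screen (width and height are placeholders):
// Create a color texture to render into.
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Create the FBO and attach the texture so the framebuffer is complete.
GLuint gBuffer;
glGenFramebuffers(1, &gBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    { /* report the error */ }

// ... render the first pass into gBuffer here ...

// Bind the default framebuffer again before drawing to the window.
glBindFramebuffer(GL_FRAMEBUFFER, 0);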
As Rabbid76 said, I can simply disable depth writing using glDepthMask(GL_FALSE).
Now, I can draw several layers of coplanar polygons using the same offset.
Solved.
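For reference, my reading of that fix as a rough sketch (the draw calls are placeholders):
// Pass 1: draw the simple geometry normally, with depth writes enabled.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
drawSimpleGeometry();            // placeholder for the first set of triangles

// Pass 2+: draw the coplanar overlay layers with depth writes disabled, all
// sharing one small polygon offset. They are still tested against the depth
// from pass 1, but they no longer fight each other, because none of them
// updates the depth buffer.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f);
glDepthMask(GL_FALSE);
drawComplexGeometry();           // placeholder for the second set of triangles
drawFurtherCoplanarLayers();     // placeholder for any additional layers

// Restore state for the rest of the scene.
glDepthMask(GL_TRUE);
glDisable(GL_POLYGON_OFFSET_FILL);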

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but it is well known for not playing nicely with MSAA.
I clearly see why, but I suddenly came up with an idea to solve that.
It's simple: do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not anti-aliased.
Then comes the further processing: draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It's silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, we are effectively re-rendering the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU will be able to perform MSAA with aliased color but super-sampled depth from the geometry. (It's similar to picking only the 'center' of a fragment and evaluating that, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense or not. I tested it in OpenGL, but it made no difference compared with plain deferred rendering.
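Roughly, the second-pass fragment shader I have in mind looks like this (just a sketch, kept here as a C++ string literal; sceneTex is a made-up name for the sampler holding the deferred result):
// GLSL for the second geometry pass: it only copies the already-shaded color.
const char* secondPassFragmentSrc = R"GLSL(
    #version 330 core
    uniform sampler2D sceneTex;   // non-multisampled deferred output
    out vec4 fragColor;
    void main()
    {
        // Fetch the texel under this fragment's window position.
        fragColor = texelFetch(sceneTex, ivec2(gl_FragCoord.xy), 0);
    }
)GLSL";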
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).

Multi-pass shading using render-to-texture

I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthographic projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = ModelViewProjectionMatrix * Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screen shot of the scene).
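As a sketch, the mapping I have in mind would look something like this in GLSL (kept here as C++ string literals; clipPos and sceneTex are made-up names, and it assumes the pre-render pass used exactly the same perspective view as the final render):
// Vertex shader: pass the clip-space position along.
const char* vertSrc = R"GLSL(
    varying vec4 clipPos;
    void main()
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        clipPos = gl_Position;
    }
)GLSL";
// Fragment shader: divide by w per fragment (doing the divide per vertex and
// interpolating the result would not be correct), then remap [-1,1] to [0,1]
// to sample the screen-sized pre-rendered texture.
const char* fragSrc = R"GLSL(
    uniform sampler2D sceneTex;
    varying vec4 clipPos;
    void main()
    {
        vec2 screenUV = (clipPos.xy / clipPos.w) * 0.5 + 0.5;
        gl_FragColor = texture2D(sceneTex, screenUV);
    }
)GLSL";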
If I am correct, then I must be able to pre-render the scene with a perspective view identical to the view used in the final render, rather than an orthographic view. This is where I'm having trouble. I can make an orthographic view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
edit: I should also point out that in my successful attempts, I used only a fragment shader. The perspective projection worked, but, of course, the screen-aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.

Depth buffer in 2D

I'm working on a 2D game with realistic deferred lighting. Since I'm rendering the lights after the scene is rendered, I need a way to cull lighting calculations when an object, such as a tree, obstructs the area being lit. After doing some reading, it seems my best bet for deferred rendering is to use a depth buffer. I've been searching the internet for ways to get a depth buffer working with 2D graphics, but I really haven't found anything helpful. I found glPolygonOffset, but I have no clue whether that is what I want, or whether there is another way to set a z value for the polygons. Thanks for any help.
Your misconception lies in thinking "3D == perspective". To generate a depth buffer, your scene needs, well, depth. But that's no problem. What you refer to as "2D" probably just means a lack of perspective. So by using an orthographic projection and placing your objects on layers at various depths, you can generate a depth buffer with data useful for deferred lighting.

Can "See through" objects in openGL

I'm not sure why this is happening; I'm only rendering a few simple primitive QUADs.
The red quad is meant to be in front of the yellow one.
The yellow always appears in front of the red, even when it's behind it.
Is this a bug, or am I simply seeing the cube wrongly?
Turn the depth buffer and depth test on, or OpenGL will simply draw whatever is rendered later on top.
Your application needs to do at least the following to get depth buffering to work:
Ask for a depth buffer when you create your window.
Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.
Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.
Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT.
See here http://www.opengl.org/resources/faq/technical/depthbuffer.htm
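In code, those steps look roughly like this (GLUT is used here only as one example of a windowing API that can request a depth buffer; aspect is a placeholder):
// 1. Ask for a depth buffer when creating the window.
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
glutCreateWindow("depth test");

// 2. Enable depth testing once a context is created and made current.
glEnable(GL_DEPTH_TEST);

// 3. Pick zNear/zFar that bracket the scene tightly; a tiny zNear wastes precision.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, aspect, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);

// 4. Clear the depth buffer together with the color buffer every frame.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);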
I had the same problem, but it was unrelated to the depth buffer, although I did see some improvement when I enabled it. It had to do with the blend function being used, which combined pixel intensities at the last step of rendering. So I had to turn blending off (glDisable(GL_BLEND)).