How to handle depth in GLSL - OpenGL

I have a problem with an FBO and depth in OpenGL. I am passing projection, view and model matrices to a shader that writes to the g-buffer. When I unbind the FBO and write to gl_FragColor, the scene displays as it ought to. But when I write to gl_FragData[0] and then draw the accompanying texture onto a screen-aligned quad, objects are drawn in the inverse of the order they were processed rather than by depth: I can see through objects processed first to objects processed after. Has anyone had the same problem and found a fix? Alternatively, could someone show the syntax for handling the depth test manually in the fragment shader, i.e. reading the current depth value, comparing it against the incoming fragment's depth, and writing to the depth buffer depending on the comparison?

Your main framebuffer most likely has a depth buffer, while your manually created FBO probably does not. When drawing to the screen you therefore get depth-tested geometry, but your FBO cannot provide that: with no depth attachment there is no storage for depth values, so the FBO effectively works with depth testing disabled.
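For reference, a minimal sketch of adding depth storage to such an FBO (names and dimensions are illustrative; this assumes the FBO is already bound with its color attachments set up):

/* Assumes the FBO is bound and its g-buffer color textures are attached;
   width/height match those textures. */
GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

/* Depth testing must also be enabled, and the depth buffer cleared
   at the start of each frame rendered into the FBO. */
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);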

Related

GLSL: accessing framebuffer to get RGB and change it

I'd like to access the framebuffer to get the RGB values of each pixel and change them. This is because glReadPixels and glDrawPixels are too slow to use, so I should use shaders instead.
I have now written code and succeeded in displaying a three-dimensional model using GLSL shaders.
I drew two cubes as follows.
....
glDrawArrays(GL_TRIANGLES, 0, 12*6);
....
and fragment shader :
varying vec3 fragmentColor;
void main()
{
    gl_FragColor = vec4(fragmentColor, 1.0);
}
How can I access the RGB values and change them?
For example, if the pixel values at (u1, v1) and (u2, v2) on the window are (0, 0, 255), I want to change them to (255, 0, 0).
With the exception of an OpenGL ES-only extension, fragment shaders cannot just read from the current framebuffer. Otherwise, we wouldn't need blending.
You also can't just render to the image you're reading from in a shader. So if you need to do some sort of post-processing, then that is best done by rendering to a separate image. That is, you do your rendering to image 1, then bind that as a texture and change the FBO so that you're rendering to image 2.
Alternatively, if you have access to OpenGL 4.5 or the ARB/NV_texture_barrier extensions, then you can use texture barriers to handle this. Binding the current framebuffer's image as a texture permits a single read/modify/write pass: you issue the barrier before doing your read/modify/write, then sample from that texture while still rendering to the framebuffer it is attached to.
Also, this requires that the FS read from the exact texel that it would write to. Assuming a viewport anchored at 0,0, the code for this would be texelFetch(sampler, ivec2(gl_FragCoord.xy), 0). You can't read from someone else's texel and modify it.
Obviously you must be rendering to a texture; you cannot use the default framebuffer for this.
Texture barrier could be used for cases where you read from different texels than you write to. But that would require doing something similar to the first case of switching bound images. Though you wouldn't need to change the FBO exactly; you could change the region of the FBO that you render to. That is, so long as you're reading from a different area than you're rendering to, and you use barriers appropriately when switching between those regions, everything is fine.
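To make the texture-barrier path concrete, here is a hedged GLSL sketch; it assumes the FBO's color texture is also bound as framebufferTex, that the application calls glTextureBarrier() (or glTextureBarrierNV()) before this pass, and that the blue-to-red modification is purely illustrative:

// GL 4.5 / ARB_texture_barrier: read, modify and write the one texel
// this fragment covers, via a texture bound to the current color attachment.
#version 450
uniform sampler2D framebufferTex; // the texture attached to the bound FBO
out vec4 fragColor;

void main()
{
    // Read only the texel this fragment will write (viewport anchored at 0,0).
    vec4 current = texelFetch(framebufferTex, ivec2(gl_FragCoord.xy), 0);
    // Illustrative modification: turn blue-dominant pixels red.
    fragColor = (current.b > 0.5) ? vec4(1.0, 0.0, 0.0, current.a) : current;
}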

How to apply a vertex shader to all vertices in a scene in OpenGL?

I'm working on a small engine in OpenTK right now, and I've got shaders working so far. I wonder, though, how it is possible to apply a shader to an entire scene. I've seen this done in Minecraft, for example, where someone created a shader that warped the entire scene. But since every object is rendered with its own shader active, how would I achieve this?
You seem to be referring to a technique called post processing. The way it works is that you first render the entire scene to a texture using the shaders you already have. You can then render this texture to the screen using a fragment shader to apply various effects like motion blur, warping or depth of field.
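As a sketch of what that post-processing pass can look like (the sceneTex/time uniforms and the sine-wave warp are just illustrative), the fragment shader drawn over a screen-aligned quad might be:

// Post-process pass: the whole scene was first rendered into sceneTex.
#version 330
uniform sampler2D sceneTex;
uniform float time;   // illustrative uniform driving the warp
in vec2 texCoord;     // from the fullscreen-quad vertex shader
out vec4 fragColor;

void main()
{
    // Illustrative warp: wobble the sample position with a sine wave.
    vec2 warped = texCoord + 0.01 * vec2(sin(texCoord.y * 40.0 + time), 0.0);
    fragColor = texture(sceneTex, warped);
}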
"But since every object is rendered with its own shader active"
That's not how OpenGL works. In fact there's no such thing as "models" (what you probably mean by "object") in OpenGL. OpenGL draws primitives (points, lines and triangles) one at a time. Furthermore there's no hard association between a set of primitives and the shaders being used.
It's trivial to just bind a single shader program at the beginning of a batch and every primitive of that batch is subjected to this shader. If the batch consists of the whole scene, then the whole scene uses that shader.
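In code that is just one glUseProgram call around the batch; the draw helpers below are placeholders for your own draw calls:

glUseProgram(sceneProgram);  /* one program for everything that follows */
drawTerrain();               /* placeholder draw calls; all of them run  */
drawCharacters();            /* through the same vertex and fragment     */
drawParticles();             /* shaders until the program is changed     */
glUseProgram(0);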
AFAIK, you can only bind one vertex shader at a time.
What you may want to try is to render to a texture first, then re-render that texture onto the screen while applying some changes to it (warping it, for example). You can also extract the depth buffer and use it if you have a more complex effect to apply.
If you bind the shader program you want before the render loop, it will affect everything drawn until you unbind it by binding program id #0 (glUseProgram(0)). Note that enabling or disabling GL_TEXTURE_2D via glEnable()/glDisable() only affects fixed-function texturing, not shaders.

OpenGL - How to access depth buffer values? - Or: gl_FragCoord.z vs. Rendering depth to texture

I want to access the depth buffer value at the currently processed pixel in a pixel shader.
How can we achieve this? There seem to be two options:
Render depth to a texture. How can we do this, and what is the tradeoff?
Use the value provided by gl_FragCoord.z. But is this the correct value?
On question 1: You can't directly read from the depth buffer in the fragment shader (unless there are recent extensions I'm not familiar with). You need to render to a Frame Buffer Object (FBO). Typical steps:
Create and bind an FBO. Look up calls like glGenFramebuffers and glBindFramebuffer if you have not used FBOs before.
Create a texture or renderbuffer to be used as your color buffer, and attach it to the GL_COLOR_ATTACHMENT0 attachment point of your FBO with glFramebufferTexture2D or glFramebufferRenderbuffer. If you only care about the depth from this rendering pass, you can skip this and render without a color buffer.
Create a depth texture, and attach it to the GL_DEPTH_ATTACHMENT attachment point of the FBO.
Do your rendering that creates the depth you want to use.
Use glBindFramebuffer to switch back to the default framebuffer.
Bind your depth texture to a sampler used by your fragment shader.
Your fragment shader can now sample from the depth texture.
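A condensed, hedged sketch of those steps (error handling omitted; width/height are your render size):

GLuint fbo, depthTex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Depth texture attached at GL_DEPTH_ATTACHMENT. */
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

/* Depth-only pass: no color buffer is needed. */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

/* ... render the geometry that produces the depth you want ... */

/* Switch back to the default framebuffer and bind the depth
   texture to the sampler unit your fragment shader reads from. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);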
On question 2: gl_FragCoord.z is the depth value of the fragment that your shader is operating on, not the current value of the depth buffer at the fragment position.
gl_FragCoord.z is the window-space depth value of the current fragment. It has nothing to do with the value stored in the depth buffer. The value may later be written to the depth buffer, if the fragment is not discarded and it passes a stencil/depth test.
Technically there are some hardware optimizations that will write/test the depth early, but for all intents and purposes gl_FragCoord.z is not the value stored in the depth buffer.
Unless you render in multiple passes, you cannot read and write the depth buffer in a fragment shader. That is to say, you cannot use a depth texture to read the depth and then turn around and write a new depth. This is akin to trying to implement blending in the fragment shader: unless you do something exotic with DX11-class hardware and image load/store, it just is not going to work.
If you only need the depth of the final drawn scene for something like shadow mapping, then you can do a depth-only pre-pass to fill the depth buffer. In the second pass, you would read the depth buffer but not write to it.
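A brief sketch of such a two-pass setup, assuming drawScene() stands in for your draw calls:

/* Pass 1: depth only. Color writes off, depth writes on. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
drawScene();   /* placeholder for the scene's draw calls */

/* Pass 2: shading. Color writes on, depth read-only. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_LEQUAL);   /* accept fragments at exactly the stored depth */
drawScene();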

Setting neighbor fragment color via GLSL

I need to setup a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may not seem desirable, I'll provide a very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
1. sample secondary texture based on FBO pixel position
2. convert color at current pixel to image coordinate for the model's texture map
3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU bound, so I'm reading the pixels out of the FBO and then manipulating them. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack on the last few steps to the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?
It's not possible to have a fragment shader write anywhere other than the fragment it is processing. What you probably want is ping-pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
At a first run, you'd use (1) and (2) as source textures, to draw to (3). Next time through the loop you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are each attached to a framebuffer object, with the texture supplied as the colour buffer in place of a renderbuffer.
NVidia authored the GL_NV_texture_barrier extension in 2009 that allows you to compact (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely available it is.
Attempting to read from and write to the same texture (which is otherwise possible with FBOs) produces undefined results in OpenGL. The underlying hardware issues relate to caching and multisampling.
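A minimal ping-pong skeleton under those assumptions (fbo[i] has tex[i] as its color attachment; the names and drawFullscreenQuad() are illustrative):

int src = 0, dst = 1;   /* indices into tex[2]/fbo[2], i.e. (2) and (3) */
for (int pass = 0; pass < numPasses; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   /* write destination */

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, secondaryTex);    /* texture (1) */
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[src]);        /* read source */

    drawFullscreenQuad();   /* placeholder for the update pass */

    int tmp = src; src = dst; dst = tmp;           /* swap roles */
}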
As far as I understand, you need a scatter operation (uniform FBO pixel space -> random mesh UV texture destination) performed in OpenGL. There is a way to do this; it is not as simple as you might expect, and not even particularly fast, but I can't find a better one:
Run a draw call of type GL_POINTS and size equal to the width*height of your source FBO.
Select the model texture as the destination FBO color attachment, with no depth attachment
In a vertex shader, compute the original screen coordinate by using gl_VertexID.
Sample from the source FBO texture to get color and target position (assuming your original FBO surface was a texture). Assign a proper gl_Position and pass the target color to the fragment shader.
In a fragment shader, just copy the color to the output.
This makes the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture.
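A hedged sketch of the scatter pass's vertex shader, assuming an fboWidth uniform, the source FBO surface bound as srcColor (with the destination UV encoded in its red/green channels), and the secondary texture bound as secondary:

// One GL_POINTS vertex per source pixel; no vertex attributes needed.
// Requires glEnable(GL_PROGRAM_POINT_SIZE) in a core profile.
#version 330
uniform sampler2D srcColor;   // original FBO color (UV-encoded) texture
uniform sampler2D secondary;  // secondary texture being sampled
uniform int fboWidth;
out vec4 targetColor;

void main()
{
    // Recover the source pixel coordinate from the vertex index.
    ivec2 pix = ivec2(gl_VertexID % fboWidth, gl_VertexID / fboWidth);

    // The stored color encodes the destination UV in the model's texture map.
    vec2 destUV = texelFetch(srcColor, pix, 0).rg;

    // Color to scatter: the secondary texture sampled at this pixel.
    targetColor = texelFetch(secondary, pix, 0);

    // Place the point at the destination texel (UV [0,1] -> NDC [-1,1]).
    gl_Position = vec4(destUV * 2.0 - 1.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}

The matching fragment shader then just writes targetColor to its output.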

OpenGL - How do I compare pixel values from separate textures with the same location

I was wondering what is the best way to go about comparing a pixel that is currently being rendered (and accessed using a fragment shader) to a pixel at the same location in a previously stored, unbound texture (both textures are the same size)?
Now that the question is more clear, it's possible to give an answer.
The main issue is that the framebuffer contents at the fragment's position are not available in the fragment shader; you simply can't execute the "compare" operation while rendering to that framebuffer.
You have to render the model to a texture (search for render-to-texture using framebuffer objects), and then run a fragment shader (perhaps using GL_ARB_texture_rectangle) over an ortho view with a viewport the same size as the texture.
The fragment shader should take two textures as input: the first texture (containing the detected edges) and the texture-rendered wireframe model. Then it's easy to perform complex computations in the fragment shader once you can access each texel of both textures.
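A minimal sketch of that second pass's fragment shader (sampler names are illustrative; texCoord comes from the ortho fullscreen pass):

// Compare the two same-sized textures texel by texel.
#version 330
uniform sampler2D edgesTex;      // previously stored texture (detected edges)
uniform sampler2D wireframeTex;  // render-to-texture result of the model
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    vec4 a = texture(edgesTex, texCoord);
    vec4 b = texture(wireframeTex, texCoord);

    // Illustrative comparison: output the per-channel absolute difference.
    fragColor = abs(a - b);
}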
Hope this can help you.