Setting neighbor fragment color via GLSL

I need to set up a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may not seem desirable, I'll provide some very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
1. sample secondary texture based on FBO pixel position
2. convert color at current pixel to image coordinate for the model's texture map
3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU-bound, since I'm reading the pixels out of the FBO and then manipulating them on the CPU. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack the last few steps onto the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?

It's not possible to have a fragment shader write anywhere other than the fragment it is processing. What you probably want to do is ping-pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
On the first run, you'd use (1) and (2) as source textures to draw to (3). The next time through the loop you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are each attached to a framebuffer object, with the texture supplied as the colour buffer in place of a renderbuffer.
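A minimal sketch of that setup in C, assuming the three textures already exist; texSecondary, texA, texB, numPasses and drawFullscreenQuad are hypothetical names of my choosing, not part of the answer:

/* texA/texB are the two model texture maps (2) and (3); each gets an
 * FBO with the texture attached as its colour buffer. */
GLuint fbo[2];
GLuint tex[2] = { texA, texB };
glGenFramebuffers(2, fbo);
for (int i = 0; i < 2; ++i) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex[i], 0);
}

/* Each pass reads from one map and writes to the other, then swaps. */
int src = 0, dst = 1;
for (int pass = 0; pass < numPasses; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texSecondary);  /* texture (1) */
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[src]);      /* (2) or (3)  */
    drawFullscreenQuad();                        /* hypothetical helper */
    int tmp = src; src = dst; dst = tmp;         /* swap roles */
}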
NVidia authored the GL_NV_texture_barrier extension in 2009, which allows you to compact (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely available it is.
Otherwise, attempting to read from and write to the same texture (as FBOs make possible) produces undefined results in OpenGL. The hardware-level issues prompting that restriction relate to caching and multisampling.

As far as I understand, you need to perform a scatter operation (uniform FBO pixel space -> arbitrary mesh-UV texture destination) in OpenGL. There is a way to do this; it is neither as simple nor as fast as you might expect, but I can't find a better one:
Run a GL_POINTS draw call with a vertex count equal to the width*height of your source FBO.
Attach the model texture as the color attachment of a destination FBO, with no depth attachment.
In the vertex shader, compute the original screen coordinate from gl_VertexID.
Sample the source FBO texture to get the color and the target position (assuming your original FBO surface was a texture). Assign the proper gl_Position and pass the target color to the fragment shader.
In the fragment shader, just copy the color to the output.
This makes the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture.
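A sketch of the vertex shader for that point pass; all uniform names here are my own assumptions, not part of the recipe above:

#version 330 core

// Hypothetical uniforms: the original FBO rendered into a texture (whose
// colors encode target UVs in the model's texture map), the secondary
// texture supplying the new color, and the source FBO dimensions.
uniform sampler2D uFboTex;
uniform sampler2D uSecondaryTex;
uniform ivec2     uFboSize;

out vec4 vColor;   // target color, passed through to the fragment shader

void main() {
    // Recover which source pixel this point represents from gl_VertexID.
    ivec2 pix = ivec2(gl_VertexID % uFboSize.x, gl_VertexID / uFboSize.x);

    // The target UV is encoded in the FBO color at that pixel.
    vec2 targetUV = texelFetch(uFboTex, pix, 0).rg;

    // The color to scatter comes from the secondary texture at the same
    // source position.
    vColor = texelFetch(uSecondaryTex, pix, 0);

    // Position the point over the target texel of the destination texture,
    // mapping UV [0,1] to NDC [-1,1].
    gl_Position  = vec4(targetUV * 2.0 - 1.0, 0.0, 1.0);
    gl_PointSize = 1.0;  // needs GL_PROGRAM_POINT_SIZE enabled host-side
}

The matching fragment shader is then a one-liner that copies vColor to its output.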

Related

how to retrieve z depth and color of a rendered pixel

I would like to retrieve the z depth of each pixel of a rendered object in a scene.
I will need to retrieve the rendered color too.
What are the OpenGL techniques to implement this?
glReadPixels and CPU side code
Use glReadPixels to obtain both the RGB and depth buffers. Here are examples of both:
depth buffer got by glReadPixels is always 1
OpenGL Scale Single Pixel Line
That reads the buffers into CPU-accessible memory. This way is slow (due to the GPU/CPU sync) but should work on any platform.
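For reference, a minimal sketch of the readback itself; width/height are assumed to be your viewport size, with <stdlib.h> and a current GL context:

/* Read the color and depth buffers back into CPU-accessible memory. */
unsigned char *rgb   = malloc(width * height * 3);
float         *depth = malloc(width * height * sizeof(float));

glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* avoid row-padding surprises with GL_RGB */
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, rgb);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
/* Note: the depth values are the nonlinear [0,1] depth-buffer values,
 * not linear eye-space distances. */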
FBO render to texture and GPU shader
A faster method is to use an FBO to render to a texture, and then use that output in the next rendering pass as an input texture for computing your stuff inside shaders. This, however, will not run properly on Intel and might need additional tweaking of the code between nVidia and AMD.
If you have per-pixel output, use a single QUAD covering your screen as the second rendering pass.
If you have a single output for the whole screen, instead use a single POINT render and compute everything in the fragment shader (scanning the whole texture inside it), something like this:
How to implement 2D raycasting light effect in GLSL
The difference is that by using shaders and an FBO you are not transferring data between GPU and CPU, so it's way faster.
The content of the target textures can still be read by the CPU using the texture-related GL functions.
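For the per-pixel QUAD case, the second pass boils down to a fragment shader that samples the first pass's output; a minimal sketch with a uniform name of my choosing:

#version 330 core

uniform sampler2D uPass1Tex;   // texture the first pass rendered into
out vec4 fragColor;

void main() {
    // Fetch the first pass's result for this pixel and derive whatever
    // you need from it; here it is just copied through.
    vec4 c = texelFetch(uPass1Tex, ivec2(gl_FragCoord.xy), 0);
    fragColor = c;
}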
GPU compute shaders
There are also compute shaders out there, but I have not used them yet, so I am just guessing; however, with them it might be possible to do your stuff in a single pass, and the form of the result and computation should not be as limited.
My bet is that you are doing some post-processing similar to deferred shading, so googling such topics/tutorials might help.

GLSL : accessing framebuffer to get RGB and change it

I'd like to access the framebuffer to get the RGB values and change them for each pixel. This is because glReadPixels and glDrawPixels are too slow to use, so I should use shaders instead.
Now I have written code that successfully displays a three-dimensional model using GLSL shaders.
I drew two cubes as follows.
....
glDrawArrays(GL_TRIANGLES, 0, 12*6);
....
and the fragment shader:
varying vec3 fragmentColor;
void main()
{
    gl_FragColor = vec4(fragmentColor, 1);
}
Then, how can I access the RGB values and change them?
For example, if the pixel values at (u1, v1) and (u2, v2) on the window are (0,0,255), then I want to change them to (255,0,0).
With the exception of an OpenGL ES-only extension, fragment shaders cannot just read from the current framebuffer. Otherwise, we wouldn't need blending.
You also can't just render to the image you're reading from in a shader. So if you need to do some sort of post-processing, then that is best done by rendering to a separate image. That is, you do your rendering to image 1, then bind that as a texture and change the FBO so that you're rendering to image 2.
Alternatively, if you have access to OpenGL 4.5/ARB/NV_texture_barrier, then you can use texture barriers to handle this. This permits you a single read/modify/write pass, if you bind the current framebuffer's image as a texture. You'd issue the barrier before doing your read/modify/write, then bind that texture to a sampler while still rendering to that framebuffer.
Also, this requires that the FS read from the exact texel that it would write to. Assuming a viewport anchored at 0,0, the code for this would be texelFetch(sampler, ivec2(gl_FragCoord.xy), 0). You can't read from someone else's texel and modify it.
Obviously you must be rendering to a texture; you cannot use the default framebuffer for this.
Texture barrier could be used for cases where you read from different texels than you write to. But that would require doing something similar to the first case of switching bound images. Though you wouldn't need to change the FBO exactly; you could change the region of the FBO that you render to. That is, so long as you're reading from a different area than you're rendering to, and you use barriers appropriately when switching between those regions, everything is fine.
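A sketch of the single-pass read/modify/write described above, using the question's blue-to-red example; the host code must call glTextureBarrier() before the draw, and uSelf is a sampler name of my choosing, bound to the same texture that is the current color attachment:

#version 450 core

// uSelf samples the very image being rendered to; legal only because
// glTextureBarrier() was issued before this draw call.
uniform sampler2D uSelf;
out vec4 fragColor;

void main() {
    // Read exactly the texel this invocation will write (viewport at 0,0).
    vec4 c = texelFetch(uSelf, ivec2(gl_FragCoord.xy), 0);

    // Example modification, per the question: turn pure blue into red.
    if (c.rgb == vec3(0.0, 0.0, 1.0))
        c.rgb = vec3(1.0, 0.0, 0.0);

    fragColor = c;
}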

Reverse triangle lookup from affected pixels?

Assume I have a 3D triangle mesh, and an OpenGL framebuffer to which I can render the mesh.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
The only way I could think of doing this is to individually render each triangle from the mesh, then go through each pixel in the framebuffer to determine if it was affected by the triangle (using the depth buffer or a user-defined fragment shader output variable). I would then have to clear the framebuffer and do the same for the next triangle.
Is there a more efficient way to do this?
I considered, for each fragment in the fragment shader, writing out a triangle identifier, but GLSL doesn't allow outputting a list of integers.
For each rendered pixel, I need to build a list of triangles that rendered to that pixel, even those that are occluded.
You will not be able to do it for an entire scene. There's no structure that allows you to associate a "list" with every pixel.
You can get list of primitives that affected certain area using select buffer (see glRenderMode(GL_SELECT)).
You can get scene depth complexity using stencil buffer techniques.
If there are 8 triangles total, then you can get the list of triangles that affected every pixel using the stencil buffer: basically, assign a unique (1 << n) stencil value to each triangle, and OR it into the existing stencil buffer value on every stencil write, as in the sketch below.
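Since there is no OR among the stencil operations, the usual way to set one bit per triangle is through the stencil write mask; a sketch in C, where drawTriangle() and the readback buffer bits are hypothetical:

/* Mark, per pixel, which of up to 8 triangles touched it. Depth test is
 * off so occluded triangles still write their bit. */
glDisable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);

for (int n = 0; n < 8; ++n) {
    glStencilMask(1u << n);                        /* expose only this bit   */
    glStencilFunc(GL_ALWAYS, 0xFF, 0xFF);          /* always pass, ref 0xFF  */
    glStencilOp(GL_KEEP, GL_REPLACE, GL_REPLACE);  /* ref written thru mask  */
    drawTriangle(n);                               /* hypothetical helper    */
}

/* Each pixel's stencil byte is now the set of triangles that covered it. */
glReadPixels(0, 0, width, height, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, bits);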
But to solve it in the generic case, you'll need your own rasterizer and LOTS of memory to store the per-pixel triangle lists. The problem is quite similar to a multi-layered depth buffer, after all.
Is there a more efficient way to do this?
Actually, yes, but it is not hardware-accelerated and OpenGL has nothing to do with it. Store all rasterized triangles in an OCT-tree. Launch a "ray" through that OCT-tree for every pixel you want to test, and count the triangles the ray hits. That's a collision-detection problem.
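A rough sketch of that approach in C; every type and helper here is hypothetical:

/* For every pixel, cast a ray from the eye through the pixel center and
 * collect every triangle it intersects, occluded or not. */
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        Ray ray = ray_through_pixel(camera, x, y);   /* hypothetical */
        /* Walks only the OCT-tree cells the ray passes through and runs a
         * ray-triangle test on the triangles stored in them (hypothetical). */
        octree_ray_query(octree, ray, &pixel_tris[y * width + x]);
    }
}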

how to handle depth in glsl

I have a problem with FBOs and depth in OpenGL. I am passing projection, view and model matrices to a shader that writes to the G-buffer. When I unbind the FBO and write to gl_FragColor, the scene displays as it ought to. But when I write to gl_FragData[0] and then draw the accompanying texture to a screen-aligned quad, objects are drawn in the inverse of the order they were processed rather than by depth... I can see through objects processed first to objects processed after. Has anyone had the same problem, and do they know of a fix? Or could someone provide syntax for reading depth values from the vertex shader, querying the current depth, then writing to the depth buffer depending on a comparison, i.e., handling the operation manually in the fragment shader?
Your main framebuffer most likely has a depth buffer, while your manually created FBO might not. Therefore, when drawing to the screen you get depth-tested geometry, while your FBO cannot provide that: with no depth storage attached, it internally works with depth testing disabled.
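A likely fix is to give the FBO its own depth storage; a minimal sketch, where fbo, width and height are assumed to be yours:

/* Attach a depth renderbuffer so depth testing works inside the FBO. */
GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

/* And remember to enable and clear depth when rendering into the FBO. */
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);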

OpenGL - How do I compare pixel values from separate textures with the same location

I was wondering what is the best way to go about comparing a pixel that is currently being rendered (and accessed using a fragment shader) to a pixel at the same location in a previously stored, unbound texture (both textures are the same size)?
Now that the question is clearer, it's possible to give an answer.
The main issue is that the framebuffer contents and the fragment parameters (position) are not available in the fragment shader. Indeed, you can't execute the "compare" operation while rendering.
You have to render the model to a texture (search for render-to-texture using framebuffer objects), and then run a fragment shader (maybe using GL_ARB_texture_rectangle) on an ortho view with a viewport the same size as the texture.
The fragment shader shall take two textures as input: the first texture (containing the detected edges) and the texture-rendered wireframe model. Then it's easy to perform complex computations in the fragment shader once you can access each texel of both textures.
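A sketch of such a comparison shader; the uniform names are mine, and the "complex computation" is reduced to a simple per-texel difference:

#version 330 core

uniform sampler2D uEdgesTex;      // first texture: the detected edges
uniform sampler2D uWireframeTex;  // second: the texture-rendered model
out vec4 fragColor;

void main() {
    // Both textures are the same size as the viewport, so gl_FragCoord
    // addresses the matching texel in each.
    vec4 a = texelFetch(uEdgesTex,     ivec2(gl_FragCoord.xy), 0);
    vec4 b = texelFetch(uWireframeTex, ivec2(gl_FragCoord.xy), 0);

    // Example comparison: highlight where the two images differ.
    fragColor = vec4(vec3(distance(a.rgb, b.rgb)), 1.0);
}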
Hope this can help you.