GLSL: access coverage mask of a sampler2DMS texel - opengl

If I access a multisample texture in GLSL through a sampler2DMS, how do I know which of the samples in a texel of this multisample texture have actually been covered?
From the multisample extension reference:
"... Each pixel fragment thus consists of integer x and y grid coordinates, a color, SAMPLES_ARB depth values, texture coordinates, and a coverage value with a maximum of SAMPLES_ARB bits."
So what I would like to access is the coverage value of the texel. There is gl_SampleMask (https://www.opengl.org/sdk/docs/man/html/gl_SampleMask.xhtml) that I can use to WRITE the coverage value of the FRAGMENT currently processed, but how do I access the coverage value of the TEXEL I'm fetching from the multisample texture?

The idea with multisampling is that, when you render to a multisampled image, you only execute the fragment shader once for each pixel-sized area. The rasterizer-generated coverage mask determines which samples within the pixel the fragment's outputs go to.
But once that process is done, once the fragment shader writes its data, the multisample image itself has absolutely no idea what these coverage masks are. A multisample texture simply has multiple sample values per texel. It has no idea what fragments generated which samples with which sample masks.
Sample masks are only a part of rendering.
Think of it like this. This is a multisample texture's pixel:
vec4 pixel[SAMPLE_COUNT];
Your fragment shader, when you were rendering to the multisample texture, did the equivalent of this:
for(int sample_ix = 0; sample_ix < SAMPLE_COUNT; ++sample_ix)
{
    // Only the samples covered by the rasterizer-generated mask receive the output.
    if(sampleMask[sample_ix])
        pixel[sample_ix] = output;
}
pixel's data may have originally come from a sample mask. But pixel has no idea that this happened; it's just an array of vec4 values.
You can get the coverage value of your current fragment. But that has no relation to the actual coverage value(s) used to originally compose the pixels in a multisample texture.
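For completeness, here is what you can do in GLSL 1.50+: fetch each sample of a sampler2DMS individually with texelFetch. A minimal manual-resolve sketch (msTex and sampleCount are placeholder names) shows that there is simply no per-texel coverage mask to query alongside the samples:
#version 150
uniform sampler2DMS msTex;    // the multisample texture
uniform int sampleCount;      // must match the texture's sample count
out vec4 fragColor;

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    // Every sample can be fetched, but nothing tells you which samples
    // were actually covered when the texture was rendered to.
    for(int i = 0; i < sampleCount; ++i)
        sum += texelFetch(msTex, texel, i);
    fragColor = sum / float(sampleCount);
}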

Related

Manually change color of framebuffer

I have a scene containing thousands of little planes. The setup is such that the planes can occlude each other in depth.
The planes are red and green. Now I want to do the following in a shader:
Render all the planes. Wherever a plane is red, subtract 0.5 from the currently bound framebuffer, and wherever a plane is green, add 0.5 to the framebuffer.
That way, for each pixel in the framebuffer's texture I can tell: < 0 => more red planes at this pixel, = 0 => the same number of red and green planes, > 0 => more green planes.
This is just a very rough simplification of what I need to do, but the core of it is to change a pixel of a texture/framebuffer depending on the values of the planes in the scene that influence the current fragment. This should happen in the fragment shader.
So how do I change the values of the framebuffer using GLSL? Using gl_FragColor just sets a new color; it does not manipulate the color set before.
PS: I am also going to deactivate depth testing.
The fragment shader cannot read the (old) value from the framebuffer; it just generates a new value to put into it. When multiple fragments output to the same pixel (overlapping planes in your example), how those values combine is controlled by the blend function of the pipeline.
What you appear to want can be done by setting a custom blend state. The GL_FUNC_ADD blend equation adds the old value and the new value (with weights); what you want is probably something like:
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);
This will simply add each output pixel to the old pixel in the framebuffer (in all four channels; it's not clear from your question whether you're using a 1-channel, 3-channel, or 4-channel framebuffer). Then you just have your fragment shader output 0.5 or -0.5 as appropriate. For this to make sense, you need a framebuffer format that supports values outside the normal [0..1] range, such as GL_RGBA32F or GL_R32F.
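A minimal sketch of that setup, assuming a GL 3.x context; countTex, fbo, width and height are placeholder names, and error checking is omitted:
// Single-channel float attachment so values can go below 0 and above 1.
GLuint countTex, fbo;
glGenTextures(1, &countTex);
glBindTexture(GL_TEXTURE_2D, countTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, countTex, 0);

glDisable(GL_DEPTH_TEST);   // as planned in the question
glEnable(GL_BLEND);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);

// The fragment shader then just outputs the signed contribution, e.g.:
//     out float delta;
//     uniform bool isGreen;   // hypothetical per-draw flag
//     void main() { delta = isGreen ? 0.5 : -0.5; }
After the draw, reading the attachment back gives negative values where red planes dominate and positive values where green ones do.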

Use shader on texture instead of screen

I've written a simple GL fragment shader which performs an RGB gamma adjustment on an image:
uniform sampler2D tex;
uniform vec3 gamma;
void main()
{
    vec3 texel = texture2D(tex, gl_TexCoord[0].st).rgb;
    texel = pow(texel, gamma);
    gl_FragColor.rgb = texel;
}
The texture covers most of the screen, and it has occurred to me that the adjustment is applied per output pixel on the screen instead of per input pixel of the texture. Although this doesn't change its appearance, the texture is small compared to the screen.
For efficiency, how can I make the shader process the texture pixels instead of the screen pixels? If it helps, I am changing/reloading this texture's data on every frame anyway, so I don't mind if the texture gets permanently altered.
and it's occurred to me that this is applying the adjustment per output pixel on the screen
Almost. Fragment shaders are executed per output fragment (hence the name). A fragment is the smallest unit of rasterization, produced before anything is written into a pixel. Every pixel that's covered by a piece of visible rendered geometry is turned into one or more fragments (yes, there may even be more fragments than covered pixels, for example when drawing to an antialiased framebuffer).
For efficiency,
Modern GPUs won't even "notice" the slightly reduced load. This is a kind of micro-optimization that's on the brink of non-measurability. My advice: don't worry about it.
how can I make the shader process the texture pixels instead of the screen pixels?
You could preprocess the texture by first rendering it through a texture-sized, non-antialiased framebuffer object into an intermediate texture. However, if your change is nonlinear, and a gamma adjustment is exactly that, then you should not do this: you want to process images in a linear color space and apply nonlinear transformations only as late as possible.
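If you decide you want the preprocessing pass anyway, its rough shape is a texture-sized render-to-texture pass. A sketch, where fbo, intermediateTex and drawFullscreenQuad are placeholder names:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, intermediateTex, 0);
glViewport(0, 0, texWidth, texHeight);   // texture-sized, not screen-sized

// One quad covering the whole viewport, with the gamma shader bound:
// every texel of the source texture is processed exactly once.
drawFullscreenQuad();                    // hypothetical helper

// Then render the scene as usual, sampling intermediateTex with a plain
// pass-through shader instead of applying the gamma shader per screen pixel.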

How to obtain the list of pixels/fragments generated by the fragment shader?

The fragment shader draws to the framebuffer.
But how can I efficiently obtain just the pixels/fragments that were generated as a result of executing the fragment shader?
Set up a stencil mask so that each time you draw a fragment, it sets that mask to 1.
Then retrieve the stencil mask and the color buffer with the glReadPixels function.
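A sketch of that setup, assuming the framebuffer was created with a stencil buffer; width, height and maskBuffer are placeholder names:
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
// Every fragment that makes it past the shader writes a 1 into the mask.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);

// ... draw the scene ...

// Non-zero bytes mark the pixels that fragments were written to.
glReadPixels(0, 0, width, height, GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, maskBuffer);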
In the general case, you don't.
You could have the fragment shader write a specific color value to an image. Then you can read back from the image and test where that color is. That would get you the information you want. If you write to a floating-point framebuffer, you can even use an additive blend mode so that you can see how much you write to each sample location.
But that's about it.
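A sketch of that additive variant, assuming a single-channel float (GL_R32F) color attachment so the counts are not clamped to [0..1]:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // each write adds to the stored value

// The fragment shader writes a fixed marker; after the draw, each pixel
// of the float attachment holds the number of fragments written to it:
//     out float marker;
//     void main() { marker = 1.0; }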
The fragment shader draws to the framebuffer.
Not directly. Although later versions of OpenGL support scatter operations in the fragment shader (for example through image load/store), gather operations come more naturally to it.
Before the fragment processing stage is executed, the rasterization stage first determines which fragments are covered by the currently processed primitive, for example with a scanline range estimator. I.e. the set of fragments to be processed is determined before the fragment shader executes. The only thing the fragment shader then does is compute the values that the following blending stage combines into the framebuffer.

Setting neighbor fragment color via GLSL

I need to set up a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may seem undesirable, I'll provide some very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
    1. sample secondary texture based on FBO pixel position
    2. convert color at current pixel to image coordinate for the model's texture map
    3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU bound, so I'm reading the pixels out of the FBO and then manipulating them. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack on the last few steps to the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?
It's not possible to have a fragment shader write anywhere other than the fragment it is processing. What you probably want is ping-pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
At a first run, you'd use (1) and (2) as source textures, to draw to (3). Next time through the loop you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are each attached to a framebuffer object, with the texture supplied as the colour buffer in place of a renderbuffer.
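A sketch of the swap logic; the names and the pass count are placeholders:
GLuint mapTex[2];   // (2) and (3): the two texture maps
GLuint mapFBO[2];   // one FBO per map, texture attached as its colour buffer
int src = 0, dst = 1;

for(int pass = 0; pass < passCount; ++pass)
{
    glBindFramebuffer(GL_FRAMEBUFFER, mapFBO[dst]);
    glBindTexture(GL_TEXTURE_2D, mapTex[src]);   // read from the other map
    // ... also bind the secondary texture (1) and draw the update pass ...

    // Swap roles: this pass's destination is the next pass's source.
    int tmp = src; src = dst; dst = tmp;
}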
NVidia authored the GL_NV_texture_barrier extension in 2009, which allows you to collapse (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely supported it is.
Attempting to read from and write to the same texture (as is otherwise possible with FBOs) produces undefined results in OpenGL. The underlying hardware issues relate to caching and multisampling.
As far as I understand, you need a scatter operation (uniform FBO pixel space -> random mesh UV texture destination) to be performed in OpenGL. There is a way to do this; it is not as simple as you might expect, and not even as fast, but I can't find a better one:
Run a draw call of type GL_POINTS with a vertex count equal to the width*height of your source FBO.
Select the model texture as the destination FBO color layer, with no depth layer attached.
In the vertex shader, compute the original screen coordinate from gl_VertexID.
Sample the source FBO texture to get the color and target position (assuming your original FBO surface was a texture). Assign a proper gl_Position and pass the target color to the fragment shader.
In the fragment shader, just copy the color to the output.
This will make the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture.
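A sketch of the vertex and fragment shaders for the last three steps (GLSL 3.30; fboTex, secondaryTex and srcSize are placeholder names, and the source FBO's RG channels are assumed to encode the destination UV):
// Vertex shader: one GL_POINTS vertex per source-FBO pixel.
#version 330 core
uniform sampler2D fboTex;        // first-pass FBO color texture
uniform sampler2D secondaryTex;  // texture whose pixels get scattered
uniform ivec2 srcSize;           // width/height of the source FBO
out vec4 scatterColor;

void main()
{
    // Recover this point's source pixel from gl_VertexID.
    ivec2 pix = ivec2(gl_VertexID % srcSize.x, gl_VertexID / srcSize.x);
    // The FBO color encodes the destination UV; the color to scatter
    // comes from the same location in the secondary texture.
    vec2 dstUV = texelFetch(fboTex, pix, 0).rg;
    scatterColor = texelFetch(secondaryTex, pix, 0);
    gl_Position = vec4(dstUV * 2.0 - 1.0, 0.0, 1.0);   // land on the target texel
}

// Fragment shader: just copy the color through.
// #version 330 core
// in vec4 scatterColor;
// out vec4 outColor;
// void main() { outColor = scatterColor; }
The draw itself is then glDrawArrays(GL_POINTS, 0, srcWidth * srcHeight) with no vertex attributes bound; an empty VAO is enough, since only gl_VertexID is used.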

How are depth values resolved in OpenGL textures when multisampling?

I'm using an FBO to render my scene to a depth texture (GL_DEPTH_COMPONENT). When I enable multisampling in my application, those samples are resolved to a single texel, but how are they combined? Is the depth of the nearest sample stored to the texture, or the average of the samples? Is this behavior vendor-dependent?
See the multisample specification document:
"If the depth test passes, all multisample buffer depth sample values
are set to the depth of the fragment's centermost sample's depth
value, and all multisample buffer color sample values are set to
the color value of the incoming fragment."