Framebuffer's RGB colors are different when alpha value is modified - opengl

I have a fragment shader that writes to a framebuffer's texture, and two shaders that use that output texture: the first uses just its alpha, the second just its RGB. When the framebuffer's shader returns fragmentColor.a = 0, the RGB components completely disappear in the second shader, even when I set fragmentColor.a to 1 manually there. Is it possible to prevent this RGB loss, or am I hitting a bug? Yes, I could generate a different texture for each shader, but that costs a huge amount of performance, because even a single extra draw is very expensive. If I could output to two textures at the same time, that would also solve the whole problem. Does anyone have any advice or a solution?

You probably have blending enabled with something like glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which also scales the destination alpha values. To overcome this there is separate blending: set it up with glBlendFuncSeparate[i] and choose the source and destination factors so that the RGB values are passed through as needed for your application. Use the "i" variant to set different blending functions for each fragment shader output target (if you have a single fragment shader with multiple outputs), or call the non-"i" version multiple times, before each render pass, to set it up for your individual fragment shaders.

Related

Conditional Blending with OpenGL ES

I want to implement a fragment/blending operation using OpenGL ES 3.1 that fulfills the following requirements:
If the pixel produced by the fragment shader fulfills a certain condition (that can be determined as early as in the vertex shader) then its color value should be added to the one in the framebuffer.
If the pixel doesn't fulfill the condition, then the color should completely replace the one in the framebuffer.
Can this be done via the usual blending functions, alpha tricks etc.?
I think you could just use standard premultiplied alpha blending:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
If you want to replace, then you output an alpha value of 1 from your fragment shader. If you want to do additive then you output an alpha value of 0 from your fragment shader.
That assumes you're only really interested in the RGB values that end up in your framebuffer.
If you know this at vertex shading time, I presume whole triangles use either one blend rule or the other. This is an ideal case for stencil testing, provided you don't have too much geometry.
Clear stencil to zero.
Draw shape with color writes disabled, and stencil being set to one for regions which match one of the two blend rules.
Set blend rule one.
Draw shape with stencil testing enabled and passing when stencil == 0.
Set blend rule two.
Draw shape with stencil testing enabled and passing when stencil == 1.
How much this costs depends on your geometry, as you need to pass it in to rendering multiple times, but the stencil testing part of this is normally close to "free".

Background pixel in fragment shader

Is there some method to access the background pixel in a fragment shader, in order to change the alpha blending function?
I'm trying to implement the fragment shader from page 5 of Weighted Blended Order-Independent Transparency, but I don't know how to get Ci.
In standard OpenGL, you can't read the current value in the color buffer in your fragment shader. As far as I'm aware, the only place this functionality is available is as an extension in OpenGL ES (EXT_shader_framebuffer_fetch).
I didn't study the paper you linked, but there are two main options to blend your current rendering with previously rendered content:
Fixed function blending
If the blending functionality you need is covered by the blending functions/equations supported by OpenGL, this is the easiest and likely most efficient option. You set up the blending with glBlendFunc() and glBlendEquation() (or their more flexible variants glBlendFuncSeparate() and glBlendEquationSeparate()), enable blending with glEnable(GL_BLEND), and you're ready to draw.
There are also extensions that enable more variations, like KHR_blend_equation_advanced. Of course, like with all extensions, you can't count on them being supported on all platforms.
Multiple Passes
If you really do need programmable control over the blending, you can always do that with more rendering passes.
Say you render two passes that need to be blended together, and want the result in framebuffer C. The conventional sequence would be:
set current framebuffer to C
render pass 1
set up and enable blending
render pass 2
Now if this is not enough, you can render pass 1 and pass 2 into separate framebuffers, and then combine them:
set current framebuffer to A
render pass 1
set current framebuffer to B
render pass 2
set current framebuffer to C
bind color buffers from framebuffer A and B as textures
draw screen size quad, and sample/combine A and B in fragment shader
A and B in this sequence are FBOs with texture attachments. So you end up with the result of each rendering pass in a texture. You can then bind both of the textures for a final pass, sample them both in your fragment shader, and combine the colors in a fully programmable fashion to produce the final output.

How to obtain the list of pixels/fragments generated by the fragment shader?

The fragment shader draws to the framebuffer.
But how can I efficiently obtain just the pixels/fragments generated as a result of execution
of the fragment shader?
Set up a stencil mask so that each time a fragment is drawn, the mask is set to 1.
Retrieve the stencil mask and the color buffer with the glReadPixels function.
In the general case, you don't.
You could have the fragment shader write a specific color value to an image. Then you can read back from the image and test where that color is. That would get you the information you want. If you write to a floating-point framebuffer, you can even use an additive blend mode so that you can see how much you write to each sample location.
But that's about it.
The fragment shader draws to the framebuffer.
Not directly. Although later versions of OpenGL support scatter operations in the fragment shader, gather operations come more naturally to it.
Before the fragment processing stage is executed, the rasterization stage first determines which fragments are covered by the currently processed primitive, e.g. through a scanline range estimator. That is, the set of fragments to process is determined before the fragment shader runs. The only thing the fragment shader then does is compute the values that the following blending stage combines into the framebuffer.

Setting neighbor fragment color via GLSL

I need to setup a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may not seem desirable, I'll provide a very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
1. sample secondary texture based on FBO pixel position
2. convert color at current pixel to image coordinate for the model's texture map
3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU bound, so I'm reading the pixels out of the FBO and then manipulating them. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack on the last few steps to the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?
It's not possible to have a fragment shader write to anywhere other than the fragment it is processing. What you probably want to do is ping pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
On the first run, you'd use (1) and (2) as source textures to draw to (3). The next time through the loop you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are connected with framebuffer objects with the textures supplied as the colour buffer in place of a renderbuffer.
NVidia authored the GL_NV_texture_barrier extension in 2009 that allows you to compact (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely available it is.
Attempting to read and write the same texture (which is otherwise possible with FBOs) produces undefined results in OpenGL. The underlying hardware issues relate to caching and multisampling.
As far as I understand, you need a scatter operation (uniform FBO pixel space -> arbitrary mesh UV texture destination) to be performed in OpenGL. There is a way to do this; it's not as simple as you might expect, and not especially fast either, but I can't find a better one:
Run a draw call of type GL_POINTS with a vertex count equal to the width*height of your source FBO.
Select the model texture as the destination FBO color attachment, with no depth attachment.
In a vertex shader, compute the original screen coordinate by using gl_VertexID.
Sample from the source FBO texture to get color and target position (assuming your original FBO surface was a texture). Assign a proper gl_Position and pass the target color to the fragment shader.
In a fragment shader, just copy the color to the output.
This will make the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture.

What color does a fragment get if there are two vertices at the very same position with two different colors?

I have a question concerning the OpenGL rendering pipeline.
I have recently been reading theory about the GLSL's Geometry Shader. I think I do understand the basics of how to emit new geometry and assign colors to the new vertices. I am, however, not sure what color a fragment would get if one of those new vertices would have the very same position as one coming in from the Vertex shader.
Consider this example:
As far as I understand it, I can handle a single vertex in the vertex shader. I apply some transformation and store the position in gl_Position. It is furthermore possible to assign a color to that vertex, e.g. by storing it in gl_FrontColor. As an example, I give it the color red. If all channels have 32 bits, that would be 0xFFFFFFFF'00000000'00000000'00000000, right?
Next, the geometry shader is involved. I want my geometry shader to emit some additional vertices. At least one of them is at the very same position as the original vertex coming in from the vertex shader. However, it is assigned another color, e.g. green. That would be 0x00000000'FFFFFFFF'00000000'00000000, right?
Sooner or later, after every vertex has been dealt with, rasterization takes place. As I understand it, both vertices are rasterized and will therefore produce the very same fragment. So, there we go: what color will that particular fragment get? Is there some kind of automatic blending so that the fragment becomes yellow? Or is it red, or rather green?
This question might be silly. But I am simply not clear on that and would appreciate if somebody could clarify that for me.
If there is no blending (which I assume), how could I possibly create a blending effect?
Assuming you're rendering points (which seems to be what you're describing), the two vertices with the different colors will result in two fragments (one for each vertex) at the same location. What final color will be written to the output depends on the Z values for each, the blending function set and the order in which they are processed (which is effectively random -- you can't count on either order unless you do some extra sync stuff, so you need to set your blend func/Z-culling such that it doesn't matter).
I think they will be Z-fighting if they have the exact same values for x, y and z.
About blending:
This is separate from the programmable pipeline, so you don't have to do most of the work in the shaders for it.
First enable blending with glEnable(GL_BLEND),
then specify your desired blending function with glBlendFunc, most commonly glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Now your fragments only need an alpha value set via gl_FragColor.a and their colors will blend.