Is there some method to access the background pixel in a fragment shader in order to change the alpha blending function?
I am trying to implement the fragment shader from page 5 of Weighted Blended Order-Independent Transparency, but I don't know how to get Ci.
In standard OpenGL, you can't read the current value in the color buffer in your fragment shader. As far as I'm aware, the only place this functionality is available is as an extension in OpenGL ES (EXT_shader_framebuffer_fetch).
I didn't study the paper you linked, but there are two main options to blend your current rendering with previously rendered content:
Fixed function blending
If the blending functionality you need is covered by the blending functions/equations supported by OpenGL, this is the easiest and likely most efficient option. You set up the blending with glBlendFunc() and glBlendEquation() (or their more flexible variations glBlendFuncSeparate() and glBlendEquationSeparate()), enable blending with glEnable(GL_BLEND), and you're ready to draw.
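For example, a minimal setup for conventional alpha blending (assuming that's the blend you want) could look like this:

// enable blending and pick the function/equation; adjust to the blend you need
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... issue your draw calls ...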
There are also extensions that enable more variations, like KHR_blend_equation_advanced. Of course, like with all extensions, you can't count on them being supported on all platforms.
Multiple Passes
If you really do need programmable control over the blending, you can always do that with more rendering passes.
Say you render two passes that need to be blended together, and want the result in framebuffer C. The conventional sequence would be:
set current framebuffer to C
render pass 1
set up and enable blending
render pass 2
Now if this is not enough, you can render pass 1 and pass 2 into separate framebuffers, and then combine them:
set current framebuffer to A
render pass 1
set current framebuffer to B
render pass 2
set current framebuffer to C
bind color buffers from framebuffer A and B as textures
draw screen size quad, and sample/combine A and B in fragment shader
A and B in this sequence are FBOs with texture attachments. So you end up with the result of each rendering pass in a texture. You can then bind both of the textures for a final pass, sample them both in your fragment shader, and combine the colors in a fully programmable fashion to produce the final output.
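The final pass's fragment shader could then look roughly like this (texA, texB and uv are placeholder names; the combination shown is just an ordinary "over" blend, replace it with whatever you need):

#version 330 core
uniform sampler2D texA;   // color attachment of framebuffer A
uniform sampler2D texB;   // color attachment of framebuffer B
in vec2 uv;               // from the screen-size quad
out vec4 fragColor;
void main() {
    vec4 a = texture(texA, uv);
    vec4 b = texture(texB, uv);
    fragColor = vec4(mix(a.rgb, b.rgb, b.a), 1.0);   // any programmable combination goes here
}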
Related
I have a fragment shader that writes to a framebuffer's texture, and two shaders that use that output texture: the first uses just the alpha channel and the second just the RGB. When the framebuffer's shader returns fragmentColor.a = 0, the RGB components completely disappear in the second shader, even when I set fragmentColor.a to 1 manually. Is it possible to prevent this "RGB disappearing", or am I hitting some bug? I could generate a separate texture for each shader, but that costs a huge amount of performance, because even a single extra draw is very costly. If I could output to two textures at the same time, that would also solve the whole problem. Does anyone have any advice or a solution?
You probably have blending enabled and set to something like glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), which also affects the destination alpha values. To overcome this there is separable blending: set it up with glBlendFuncSeparate[i] and choose the source and destination factors so that the alpha (and therefore the RGB read later) is passed through as your application needs. Use the "i" variant to set a different blend function per fragment shader output target (if you have a single fragment shader with multiple outputs), or call the non-"i" version before each render pass to set it up for your individual fragment shaders.
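For example, assuming you want normal alpha blending on RGB while leaving the destination alpha untouched, a sketch would be:

// RGB blends as usual, destination alpha is left exactly as it was
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE);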
I want to implement a fragment/blending operation using OpenGL ES 3.1 that fulfills the following requirements:
If the pixel produced by the fragment shader fulfills a certain condition (that can be determined as early as in the vertex shader) then its color value should be added to the one in the framebuffer.
If the pixel doesn't fulfill the condition, then the color should completely replace the one in the framebuffer.
Can this be done via the usual blending functions, alpha tricks etc.?
I think you could just use standard premultiplied alpha blending:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
If you want to replace, then you output an alpha value of 1 from your fragment shader. If you want to do additive then you output an alpha value of 0 from your fragment shader.
That assumes you're only really interested in the RGB values that end up in your framebuffer.
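As a sketch, the fragment shader could look roughly like this (the additive flag is just a placeholder for however you determine the condition):

#version 310 es
precision mediump float;
uniform bool additive;   // placeholder for your condition
out vec4 fragColor;
void main() {
    vec3 color = vec3(1.0);                        // whatever color you computed
    // with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA):
    // alpha = 1.0 replaces the framebuffer value, alpha = 0.0 adds to it
    fragColor = vec4(color, additive ? 0.0 : 1.0);
}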
If you know this at vertex shading time, I presume whole triangles use either one blend mode or the other. This is an ideal case for stencil testing, provided you don't have too much geometry.
Clear stencil to zero.
Draw shape with color writes disabled, and stencil being set to one for regions which match one of the two blend rules.
Set blend rule one.
Draw shape with stencil testing enabled and passing when stencil == 0.
Set blend rule two.
Draw shape with stencil testing enabled and passing when stencil == 1.
How much this costs depends on your geometry, as you need to submit it for rendering multiple times, but the stencil testing part of this is normally close to "free".
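A rough sketch of the state setup (drawShape() is a hypothetical draw call, and the marking pass assumes you can restrict it to the geometry, or discard the fragments, that match the second rule):

glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);

// mark regions for blend rule two, without touching the color buffer
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawShape();

// blend rule one where stencil == 0 (replace: source overwrites destination)
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glDisable(GL_BLEND);
drawShape();

// blend rule two where stencil == 1 (additive)
glStencilFunc(GL_EQUAL, 1, 0xFF);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
drawShape();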
I am trying to experiment with different alpha blending equations for transparent objects in OpenGL, but it looks like fragment shaders only operate on the color of fragments of a single object and can't take into account the scene behind the object.
On the other hand, there doesn't seem to be a way to intercept the blending stage with arbitrary GLSL code; for example, I can't think of a way to reproduce the soft-light blend mode with the current OpenGL primitives.
Is there a way to reconcile these?
There are a couple of relatively well-supported extensions:
KHR_blend_equation_advanced - implements common blending modes (including soft light).
EXT_shader_framebuffer_fetch - provides destination color from the framebuffer for fully custom blending in the shader.
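With EXT_shader_framebuffer_fetch, the destination color shows up in the shader roughly like this (GLSL ES 3.00 style, where the color output is declared inout; the blend shown is an arbitrary placeholder):

#version 300 es
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;
inout vec4 fragColor;                 // contains the current framebuffer value on entry
void main() {
    vec3 src = vec3(1.0, 0.5, 0.0);   // your source color (placeholder)
    vec3 dst = fragColor.rgb;         // destination color fetched from the framebuffer
    fragColor = vec4(mix(dst, src, 0.5), 1.0);   // any custom blend you like
}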
Blending is still one of those few parts of the fragment pipeline that is a hardwired circuit on the GPU, hence it's not programmable. Your best bet is rendering to a texture and doing the blending in a postprocessing pass.
Copy the render target and draw your object with it as a texture.
If there are many small objects, you can copy only the relevant part of the render target.
First pass: draw the object into texture_2, using the render target as a texture.
Second pass: draw the object to the render target, using texture_2.
I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in GL via the features brought in by GL_ARB_texture_multisample (core since GL 3.2). Basically, this adds new multisample texture types and corresponding samplers like sampler2DMS, from which you can explicitly fetch a particular sample index. Whether this approach can be used efficiently to implement your post-processing effect, I don't know.
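For example, fetching individual samples of a multisample texture looks roughly like this (the sample count 4 and the uniform name are assumptions):

#version 150
uniform sampler2DMS sceneMS;              // multisample color attachment
out vec4 fragColor;
void main() {
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 4; ++i)           // 4 = sample count the FBO was created with
        sum += texelFetch(sceneMS, coord, i);
    fragColor = sum / 4.0;                // per-sample processing could replace this simple average
}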
Option 2 is a little different from what you describe: the shader will not do the multisample resolve. You can render into a multisample FBO (you don't need a texture for that, a renderbuffer will do), and do the resolve explicitly using glBlitFramebuffer into another, non-multisampled FBO (this time with a texture attachment). That non-multisampled texture can then be used as input for the post-processing, and neither the post-processing nor the default framebuffer needs to be aware of multisampling at all.
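The resolve step itself is just a blit between the two FBOs, roughly like this (msaaFBO and resolveFBO are placeholder names):

glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);     // multisampled renderbuffer FBO
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);  // single-sampled FBO with a texture attachment
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// the texture attached to resolveFBO is now the resolved image for the post-processing pass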
I need to setup a GLSL fragment shader to change the color of a fragment other than the one currently being processed. Since that may not seem desirable, I'll provide a very brief context.
The project utilizes a render pass whereby a given model is drawn into an FBO with unique colors that correspond to UV coordinates in the texture map. These colors are then sampled and converted to image coordinates so that the texture map for the model can be updated based on what's visible to the camera. Essentially:
Render model to FBO
For each FBO pixel
1. sample secondary texture based on FBO pixel position
2. convert color at current pixel to image coordinate for the model's texture map
3. update model's texture with sampled secondary texture at calculated coordinate
End loop
The problem is that the current implementation is very CPU bound, so I'm reading the pixels out of the FBO and then manipulating them. Ideally, since I already have the color of the fragment to work with in the fragment shader, I want to just tack on the last few steps to the process and keep everything on the GPU.
The specific issue I'm having is that I don't quite know how (or if it's even possible) to have a fragment shader set the color of a fragment that it is not processing. If I can't work something up by using an extra large FBO and just offsetting the fragment that I want to set the color on, can I work something up that writes directly into a texture?
Any help/advice?
It's not possible to have a fragment shader write to anywhere other than the fragment it is processing. What you probably want to do is ping pong rendering.
In your code, you'd have three textures, matching your listed tasks:
the secondary texture
the source model texture map
the destination model texture map
At a first run, you'd use (1) and (2) as source textures, to draw to (3). Next time through the loop you'd use (1) and (3) to write to (2). Then you'd switch back to using (1) and (2) to write to (3). And so on.
So (2) and (3) are connected with framebuffer objects with the textures supplied as the colour buffer in place of a renderbuffer.
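In host code the ping-ponging is essentially just swapping which FBO you render into and which texture you sample, along these lines (all names and the draw call are placeholders):

GLuint fbo[2], tex[2];      // two FBOs, each with one of the model texture maps attached as color
int src = 0, dst = 1;
for (int i = 0; i < passes; ++i) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);    // write into the destination map
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, secondaryTexture); // (1) the secondary texture
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[src]);         // the current source map, (2) or (3)
    drawUpdatePass();                               // hypothetical draw call
    int tmp = src; src = dst; dst = tmp;            // swap roles for the next pass
}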
NVidia authored the GL_NV_texture_barrier extension in 2009 that allows you to compact (2) and (3) into a single texture, provided you are explicit about the dividing line between where you're reading and where you're writing. I don't have the expertise to say how widely available it is.
Attempting to read from and write to the same texture (which is otherwise possible with FBOs) produces undefined results in OpenGL. The underlying hardware issues are related to caching and multisampling.
As far as I understand, you need a scatter operation (uniform FBO pixel space -> arbitrary mesh UV texture destination) to be performed in OpenGL. There is a way to do this; it is not as simple as you might expect, and not even fast, but I can't find a better one:
Run a draw call of type GL_POINTS with a vertex count equal to width*height of your source FBO.
Select the model texture as the destination FBO color attachment, with no depth attachment.
In the vertex shader, compute the original screen coordinate from gl_VertexID.
Sample the source FBO texture to get the color and the target position (assuming your original FBO surface was a texture). Assign a proper gl_Position and pass the target color to the fragment shader.
In the fragment shader, just copy the color to the output.
This will make the GPU go through each of your original FBO pixels and scatter the computed colors over the destination texture.
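A sketch of the vertex shader for such a GL_POINTS pass could look like this (srcColor, srcTarget and fboWidth are assumed names; it also assumes the first pass encoded the target UV per pixel):

#version 330 core
uniform sampler2D srcColor;    // color rendered into the source FBO
uniform sampler2D srcTarget;   // target UV in the model's texture map, encoded per pixel
uniform int fboWidth;
out vec4 vColor;
void main() {
    ivec2 pix = ivec2(gl_VertexID % fboWidth, gl_VertexID / fboWidth);
    vColor = texelFetch(srcColor, pix, 0);
    vec2 uv = texelFetch(srcTarget, pix, 0).xy;
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);   // UV -> clip space of the destination FBO
}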