Custom blending equations (shader) for OpenGL

I am trying to experiment with different alpha blending equations for transparent objects in OpenGL, but it looks like fragment shaders only operate on the color of fragments from a single object and can't take into account the scene behind it.
On the other hand, there doesn't seem to be a way to intercept the blending stage with arbitrary GLSL code; for example, I can't think of a way to reproduce the soft-light blend mode with the current OpenGL primitives.
Is there a way to reconcile these?

There are a couple of relatively well-supported extensions:
KHR_blend_equation_advanced - implements common blending modes (including soft light).
EXT_shader_framebuffer_fetch - provides destination color from the framebuffer for fully custom blending in the shader.
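A minimal sketch of how each of these is used, assuming a current GL context, a loader such as GLEW or glad, and that the driver actually reports the extensions; the variable names are illustrative only:

    #include <GL/glew.h>

    // KHR_blend_equation_advanced: the fragment shader opts in with a layout qualifier,
    // and the equation is then selected like any other blend equation.
    const char* softLightFrag = R"(
    #version 450
    #extension GL_KHR_blend_equation_advanced : require
    layout(blend_support_softlight) out;   // or blend_support_all_equations
    out vec4 fragColor;
    in vec4 vColor;
    void main() { fragColor = vColor; }
    )";

    void enableSoftLightBlending()
    {
        glEnable(GL_BLEND);
        glBlendEquation(GL_SOFTLIGHT_KHR);   // enum added by the extension
        // glBlendBarrierKHR() may be required between overlapping draws,
        // depending on the coherency the implementation advertises.
    }

    // EXT_shader_framebuffer_fetch (mainly OpenGL ES): the destination color is
    // readable directly in the shader, here via gl_LastFragData in ES 2.0 GLSL.
    const char* fetchFrag = R"(
    #version 100
    #extension GL_EXT_shader_framebuffer_fetch : require
    precision mediump float;
    void main()
    {
        vec4 dst = gl_LastFragData[0];             // current framebuffer color
        gl_FragColor = vec4(dst.rgb * 0.5, 1.0);   // any custom blend goes here
    }
    )";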

Blending is still one of those few parts of the fragment pipeline that is a hardwired circuit on the GPU, hence it's not programmable. Your best bet is rendering to a texture and doing a blending post-processing pass.

Copy the render target and draw your object with that copy bound as a texture (a rough sketch of this follows below).
If there are many small objects, you can copy only the relevant part of the render target:
First pass: draw the object into texture_2, with the render target bound as a texture.
Second pass: draw the object into the render target, with texture_2 bound as a texture.
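A rough sketch of the copy-and-sample approach, assuming a current GL context and a loader header; backdropTex, uBackdrop, vSrcColor and the simplified softLight() helper are illustrative names, not part of any existing API:

    #include <GL/glew.h>

    // Copy what is currently in the read framebuffer behind the object into a texture.
    void copyBackdrop(GLuint backdropTex, int width, int height)
    {
        glBindTexture(GL_TEXTURE_2D, backdropTex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
    }

    // Fragment shader for the object: read the copied backdrop at the fragment's
    // own window position and apply a custom blend (a simplified soft-light here).
    const char* customBlendFrag = R"(
    #version 330 core
    uniform sampler2D uBackdrop;
    in vec4 vSrcColor;
    out vec4 outColor;

    vec3 softLight(vec3 dst, vec3 src)
    {
        vec3 lo = dst - (1.0 - 2.0 * src) * dst * (1.0 - dst);
        vec3 hi = dst + (2.0 * src - 1.0) * (sqrt(dst) - dst);
        return mix(lo, hi, step(0.5, src));
    }

    void main()
    {
        vec3 dst = texelFetch(uBackdrop, ivec2(gl_FragCoord.xy), 0).rgb;
        outColor = vec4(softLight(dst, vSrcColor.rgb), 1.0);
    }
    )";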

Related

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but it is notorious for not playing nicely with MSAA.
I can clearly see why, but I came up with an idea that might solve it.
It's simple: do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not antialiased.
Here comes the further processing: next, draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
It sounds silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, it means we re-render the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU should be able to perform MSAA with aliased color but super-sampled depth. (It's similar to picking only the 'center' of a fragment and evaluating that, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense or not. I tested it using OpenGL, but it made no difference compared to plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).

Background pixel in fragment shader

Is there some method to access the background pixel in a fragment shader in order to change the alpha blending function?
I am trying to implement the fragment shader from page 5 of Weighted Blended Order-Independent Transparency, but I don't know how to get Ci.
In standard OpenGL, you can't read the current value in the color buffer in your fragment shader. As far as I'm aware, the only place this functionality is available is as an extension in OpenGL ES (EXT_shader_framebuffer_fetch).
I didn't study the paper you linked, but there are two main options to blend your current rendering with previously rendered content:
Fixed function blending
If the blending functionality you need is covered by the blending functions/equations supported by OpenGL, this is the easiest and likely most efficient option. You set up the blending with glBlendFunc() and glBlendEquation() (or their more flexible variations glBlendFuncSeparate() and glBlendEquationSeparate()), enable blending with glEnable(GL_BLEND), and you're ready to draw.
There are also extensions that enable more variations, like KHR_blend_equation_advanced. Of course, like with all extensions, you can't count on them being supported on all platforms.
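A minimal sketch of the basic fixed-function setup described above, assuming a current GL context and a loader header such as GLEW; standard "over" alpha blending is used as the example equation:

    #include <GL/glew.h>

    void setupAlphaBlending()
    {
        glEnable(GL_BLEND);
        glBlendEquation(GL_FUNC_ADD);                        // result = src*srcFactor + dst*dstFactor
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // classic alpha blending
        // Separate factors for RGB and alpha are also possible:
        // glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
        //                     GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }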
Multiple Passes
If you really do need programmable control over the blending, you can always do that with more rendering passes.
Say you render two passes that need to be blended together, and want the result in framebuffer C. The conventional sequence would be:
set current framebuffer to C
render pass 1
set up and enable blending
render pass 2
Now if this is not enough, you can render pass 1 and pass 2 into separate framebuffers, and then combine them:
set current framebuffer to A
render pass 1
set current framebuffer to B
render pass 2
set current framebuffer to C
bind color buffers from framebuffer A and B as textures
draw screen size quad, and sample/combine A and B in fragment shader
A and B in this sequence are FBOs with texture attachments. So you end up with the result of each rendering pass in a texture. You can then bind both of the textures for a final pass, sample them both in your fragment shader, and combine the colors in a fully programmable fashion to produce the final output.
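A hedged sketch of that final combine pass, assuming fboA and fboB have already been rendered and their color attachments are the textures texA and texB; the program, the quad VAO, and the uniform names (uPassA, uPassB) are illustrative:

    #include <GL/glew.h>

    void combinePasses(GLuint combineProgram, GLuint quadVao,
                       GLuint texA, GLuint texB, GLuint fboC)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fboC);   // use 0 for the default framebuffer
        glDisable(GL_DEPTH_TEST);

        glUseProgram(combineProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texA);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texB);
        glUniform1i(glGetUniformLocation(combineProgram, "uPassA"), 0);
        glUniform1i(glGetUniformLocation(combineProgram, "uPassB"), 1);

        glBindVertexArray(quadVao);                // screen-size quad
        glDrawArrays(GL_TRIANGLES, 0, 6);
    }

    // The combine shader can then mix A and B in any programmable way, e.g.:
    const char* combineFrag = R"(
    #version 330 core
    uniform sampler2D uPassA;
    uniform sampler2D uPassB;
    in vec2 vUv;
    out vec4 outColor;
    void main()
    {
        vec4 a = texture(uPassA, vUv);
        vec4 b = texture(uPassB, vUv);
        outColor = vec4(mix(a.rgb, b.rgb, b.a), 1.0);   // arbitrary custom blend
    }
    )";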

OpenGL: Post-Processing + Multisampling =?

I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in the GL via the features brought in by GL_ARB_texture_multisample (in core since GL 3.2). Basically, this brings new multisample texture types and corresponding samplers like sampler2DMS, from which you can explicitly fetch a particular sample index. Whether this approach can be used efficiently to implement your post-processing effect, I don't know.
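A small sketch of option 1, under the assumption of a 4-sample multisample texture attached to the scene FBO; uScene is an illustrative name, and the averaging at the end is just one possibility (a real effect might process each sample individually instead):

    // Post-processing fragment shader reading individual samples of a multisample texture.
    const char* perSampleFrag = R"(
    #version 330 core
    uniform sampler2DMS uScene;
    out vec4 outColor;
    void main()
    {
        const int samples = 4;   // must match the texture's sample count
        vec4 sum = vec4(0.0);
        for (int i = 0; i < samples; ++i)
            sum += texelFetch(uScene, ivec2(gl_FragCoord.xy), i);   // explicit sample index
        outColor = sum / float(samples);
    }
    )";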
Option 2 is a little bit different from what you describe. It's not the shader that does the multisample resolve. You can render into a multisample FBO (you don't need a texture for that, a renderbuffer will do as well) and do the resolve explicitly using glBlitFramebuffer, into another, non-multisampled FBO (this time, with a texture). This non-multisampled texture can then be used as input for the post-processing, and neither the post-processing nor the default framebuffer needs to be aware of multisampling at all.
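A sketch of the resolve step for option 2, assuming a current GL context; msFbo, resolveFbo and the dimensions are placeholders for your own objects:

    #include <GL/glew.h>

    void resolveMsaa(GLuint msFbo, GLuint resolveFbo, int width, int height)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);        // multisampled source
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);   // single-sampled destination with a texture attachment
        // The blit performs the multisample resolve; GL_NEAREST is the usual filter choice.
        glBlitFramebuffer(0, 0, width, height,
                          0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }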

Blend FBO onto default framebuffer

To clarify, when I say 'default framebuffer' I mean the one provided by the windowing system and what ends up on your monitor.
To improve my rendering speeds for a CAD app, I've managed to separate out the 3D elements from the Qt-handled 2D ones, and they now each render into their own FBO. When the time comes to get them onto the screen, I blit the 3D FBO onto the default FB, and then I want to blend my 2D FBO on top of it.
I've gotten to the blitting part fine, but I can't see how to blend my 2D FBO on top of it. Both FBOs are identical in size and format, and they are both the same as the default FB.
I'm sure it's a simple operation, but I can't find anything on the net - presumably I'm missing the right term for what I am trying to do. Although I'm using Qt, I can use native OpenGL commands without issue.
A blit operation is ultimately a pixel copy operation. If you want to layer one image on top of another, you can't blit it. You must instead render a full-screen quad with that image as a texture and set the proper blending parameters for your blending operation.
You can use GL_EXT_framebuffer_blit to blit the contents of a framebuffer object to the application framebuffer (or to any other). However, as the spec states, it is not possible to use blending:
"The pixel copy bypasses the fragment pipeline. The only fragment operations which affect the blit are the pixel ownership test and the scissor test."
So any blending means using a fragment shader as suggested. One fullscreen pass with blending should be pretty cheap; I believe there is nothing to worry about.
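A sketch of that fullscreen blended pass, assuming a current GL context and that the default framebuffer already holds the blitted 3D content; quadProgram, quadVao and tex2d are placeholders for your own resources:

    #include <GL/glew.h>

    void composite2dOver3d(GLuint quadProgram, GLuint quadVao, GLuint tex2d)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);                // default framebuffer
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // 2D layer "over" the 3D layer
        glDisable(GL_DEPTH_TEST);

        glUseProgram(quadProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex2d);                 // color attachment of the 2D FBO

        glBindVertexArray(quadVao);                          // fullscreen quad
        glDrawArrays(GL_TRIANGLES, 0, 6);

        glDisable(GL_BLEND);
    }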
Use a shader to read back from the framebuffer. This is an OpenGL ES extension, not supported by all hardware.
https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt

Defining a custom Blend Function (OpenGL)

To implement a physically accurate motion blur by actually rendering the object at intermediate locations, it seems that I need a special blending function to do this correctly. Additive blending would only work on a black background, and the standard "transparency" function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) may look okay for small numbers of samples, but it is physically inaccurate because samples rendered later contribute more to the resulting color.
The function I need has to produce a color which is the weighted average of the original and destination colors, depending on the number of samples covering a fragment. However, I can generalize this to better account for rendering differences between samples: suppose I am to render a blurred object n times. Treating color as a 3-vector, let D be the color DEST - SRC. I want each render to add D/n to the source color.
Can this be done using the fixed-function pipeline? The glBlendFunc reference is rather cryptic, at least to me. It seems like this is either trivial or impossible. It seems like I would want to set alpha to 1/n. For the behavior I just described, am I in need of a GL_DEST_MINUS_SRC_COLOR option?
I also have a related question: at which stage does this blending operation occur, before or after the fragment shader program? Would I be able to access the source and destination colors in a fragment shader?
I know that one way to accomplish what I want is by using an accumulation buffer. I do not want to do this because it is a waste of memory and fillrate.
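For reference, one fixed-function possibility can be sketched with the constant blend color (a standard OpenGL feature), under the assumption that result = src*(1/n) + dst*(1 - 1/n) is the weighting wanted; whether this matches the exact D/n formulation above is worth double-checking:

    #include <GL/glew.h>

    void setupOneOverNBlend(int n)
    {
        glEnable(GL_BLEND);
        glBlendColor(0.0f, 0.0f, 0.0f, 1.0f / float(n));   // constant blend color with alpha = 1/n
        glBlendEquation(GL_FUNC_ADD);
        glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
        // result = src * (1/n) + dst * (1 - 1/n)
    }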
The solution I ended up using to implement my effect is a combination of additive blending and a render target that I access as a texture from the fragment shader.