How to handle alpha compositing correctly with OpenGL

I was using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for alpha compositing, as the documentation said (and the Direct3D documentation says the same thing).
Everything was fine at first, until I read the result back from the GPU and saved it as a PNG image. The alpha component of the result was wrong. Before drawing, I had cleared the frame buffer with opaque black, yet after I drew something semi-transparent, the frame buffer became semi-transparent.

Well, the reason is obvious. With glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), we ignore the destination alpha channel and assume it is always 1. This is fine when we treat the frame buffer as something opaque.
But what if we need the correct alpha value? Use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and make the source premultiplied (use a premultiplied texture/vertex color, or multiply the color components by the alpha component before writing to gl_FragColor).
glBlendFunc can only multiply the source color components by one factor, but alpha compositing needs the destination color to be multiplied by both one_minus_src_alpha and dst_alpha. So the colors must be premultiplied. We can't do the premultiplication in the frame buffer, but as long as the source and destination are both premultiplied, the result stays premultiplied. That is, we first clear the frame buffer with a premultiplied color (for example 0.5, 0.5, 0.5, 0.5 for 50%-transparent white, instead of 1.0, 1.0, 1.0, 0.5), then draw premultiplied fragments onto it with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), and the result will have a correct alpha channel. But remember to undo the premultiplication if it is not desired in the final result.
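The difference between the two setups can be checked numerically. Below is a small Python sketch (not OpenGL itself, just the per-pixel arithmetic glBlendFunc performs) comparing the alpha channel produced by (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) on a straight-alpha source against (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) on a premultiplied source:

```python
def blend(src, dst, sf, df):
    # emulate one glBlendFunc step: out[i] = src[i] * sf + dst[i] * df
    return tuple(round(s * sf + d * df, 6) for s, d in zip(src, dst))

# destination: frame buffer cleared fully transparent
dst = (0.0, 0.0, 0.0, 0.0)

# draw 50%-transparent white, straight (non-premultiplied) alpha
straight = (1.0, 1.0, 1.0, 0.5)
a = straight[3]

# GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA: the alpha channel also gets
# multiplied by src alpha, giving 0.5 * 0.5 = 0.25 instead of 0.5
wrong = blend(straight, dst, a, 1.0 - a)

# GL_ONE, GL_ONE_MINUS_SRC_ALPHA on the premultiplied source:
# alpha is the correct source-over value 0.5 + 0.0 * (1 - 0.5) = 0.5
premul = (0.5, 0.5, 0.5, 0.5)
right = blend(premul, dst, 1.0, 1.0 - premul[3])
```

Here `wrong` ends up with alpha 0.25 while `right` carries the correct 0.5, matching the over-transparent-PNG symptom described above.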

Related

WebGL / How to remove the dark edges that appear along the border of the transparent part of the image after applying a warp effect?

Demo https://codepen.io/Andreslav/pen/wvmjzwe
Scheme: Top left - the original.
Top right - the result.
Bottom right - with coordinates rounded when sampling the color.
The problem can be solved this way, but then the result is less smooth:
coord = floor(coord) + .5;
How can I do better? Can I make the program ignore the color of transparent pixels when computing the average color?
Maybe there are some settings I haven't figured out.
Updated the demo
The result is even better after such an adjustment:
vec4 color = texture2D(texture, coord / texSize);
vec4 color_ = texture2D(texture, coordNoRound / texSize);
if (color_.a != color.a) {
    color.a *= color_.a;
}
On the preview: bottom left. But this is not an ideal option; the correction is only partial. The problem remains open.
This appears to be a premultiplied-alpha problem. And it's not so much a GLSL problem as it is a glfx problem.
Here's what happens:
Consider the RGBA values of two adjacent pixels at the edge of your source image. It would be something like this:
[R,   G,   B,   A  ]   [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]   [?, ?, ?, 0]
Meaning that there is a fully opaque, fully-white pixel to the left, and then comes a fully-transparent (A=0) pixel to the right.
But what are the RGB values of a completely transparent pixel?
They are technically ill-defined (this fact is the core problem which needs to be solved). In practice, pretty much every image processing software will put [0, 0, 0] there.
So the pixels are actually like this:
[R,   G,   B,   A  ]   [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]   [0, 0, 0, 0]
What happens if your swirl shader samples the texture halfway between those 2 pixels? You get [0.5, 0.5, 0.5, 0.5]. That's color [0.5 0.5 0.5], with 0.5 Alpha. Which is gray, not white.
The generally chosen solution to this problem is premultiplied alpha. Which means that, for any given RGBA color, the RGB components are defined so that they don't range from 0 .. 1.0, but instead from 0 .. A. With that definition, color [0.5 0.5 0.5 0.5] is now "0.5 A, with maximum RGB, which is white". One side effect of this definition is that the RGB values of a fully transparent pixel are no longer ill-defined; they must now be exactly [0, 0, 0].
As you can see, we didn't really change any values, instead, we just defined that our result is now correct. Of course, we still need to tell the other parts of the graphics pipeline of our definition.
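The gray-edge effect and the premultiplied reinterpretation can be reproduced with plain interpolation arithmetic. A Python sketch (linear interpolation, which is what bilinear texture sampling does along one axis):

```python
def lerp(p, q, t):
    # linear interpolation between two RGBA pixels, as texture sampling does
    return [pi + (qi - pi) * t for pi, qi in zip(p, q)]

opaque_white = [1.0, 1.0, 1.0, 1.0]
transparent  = [0.0, 0.0, 0.0, 0.0]

# sample halfway between the two edge pixels
sample = lerp(opaque_white, transparent, 0.5)  # [0.5, 0.5, 0.5, 0.5]

# interpreted as straight alpha: a gray pixel at 50% opacity (the dark edge)
straight_rgb = sample[:3]

# interpreted as premultiplied alpha: divide by A to recover the real color
a = sample[3]
recovered_rgb = [c / a for c in sample[:3]]  # white again
```

Same numbers, two interpretations: the premultiplied reading recovers white at 50% alpha, which is exactly the "we just defined the result to be correct" point above.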
Premultiplied alpha is not the only solution to the problem. Now that you know what's happening, you might be able to come up with your own solution. But pretty much all modern graphics pipelines expect that you are working with premultiplied alpha all the time. So the correct solution would be to make that true. That means:
(1) You need to make sure that your input texture also has premultiplied alpha, i.e. all its RGB values must be multiplied with their alpha value. This is generally what game engines do, all their textures have premultiplied alpha. Either every pixel must already be edited in the source file, or you do the multiplication once for each pixel in the texture after loading the image.
AND
(2) You need to convince every alpha blending component in your rendering pipeline to use premultiplied alpha blending, instead of "normal" alpha blending. It seems you use the "glfx" framework; I don't know glfx, so I can't say how to make it blend correctly (maybe check the docs). In case you are using raw OpenGL/WebGL, this is the way to tell the pipeline that it should assume premultiplied alpha values when blending:
gl.blendEquation(gl.FUNC_ADD); // Normally not needed because it's the default
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
(This can be derived by analyzing the formula for source-over alpha blending, omitting the final division step.)
The above code tells OpenGL/WebGL that every time it blends two pixels on top of each other, it should calculate the final RGBA values in a way that is correct assuming both the "top" and the "bottom" pixel have premultiplied alpha applied to them.
For higher-level APIs (for example, GDI+), you can typically specify the pixel format of images, with a distinction between straight-alpha RGBA and premultiplied-alpha formats, in which case the API automatically chooses the correct blending. That may not be true for glfx, though. In essence, you always need to be aware of whether the pixel format of your textures and drawing surfaces uses premultiplied alpha or not; there is no magic code that always works correctly.
(Also note that using premultiplied alpha has other advantages too.)
For a quick fix, it appears that the framework you're using performs alpha blending expecting non-premultiplied alpha values. So you could just undo the premultiplication by adding this at the end (guarding against division by zero for fully transparent pixels):
if (color.a > 0.0) { color.rgb /= color.a; }
gl_FragColor = color;
But for correctness, you still need to premultiply the alpha values of your input texture.
Because at the rounded corners, your input texture contains pixels which are fully white, but semi-transparent; their RGBA values would look like this:
[1.0, 1.0, 1.0, 0.8]
For the blending code to work correctly, the values should be
[0.8, 0.8, 0.8, 0.8],
because otherwise the line color.rgb /= color.a; would give you RGB values greater than 1.
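Step (1), premultiplying the input texture, is a one-time per-pixel operation after image load. A minimal Python sketch, assuming pixels as normalized RGBA tuples (the names here are illustrative, not any particular library's API):

```python
def premultiply(pixels):
    # multiply each pixel's RGB by its alpha; run once after loading the texture
    return [(r * a, g * a, b * a, a) for (r, g, b, a) in pixels]

# the semi-transparent white corner pixel from the example above:
corner = premultiply([(1.0, 1.0, 1.0, 0.8)])
# -> [(0.8, 0.8, 0.8, 0.8)], the value the blending code expects
```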

OpenGL (libgdx) - blending alpha map

I am trying to blend a white texture with varying alpha values with a colored background. I am expecting the result to retain colors from the background, and have alpha values replaced by ones from the blended texture.
So, for background I use:
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendEquationSeparate(GL20.GL_FUNC_ADD, GL20.GL_FUNC_ADD);
Gdx.gl.glBlendFuncSeparate(GL20.GL_ONE, GL20.GL_ZERO, GL20.GL_ONE, GL20.GL_ZERO);
I expect the background triangles mesh to override the destination, both color and alpha.
Question 1 - why is the alpha value being ignored with those blendFunc parameters?
If I set the blend func to (GL_ONE, GL_ONE, GL_ZERO, GL_ZERO), the filled mesh is rendered with the proper alpha level - but both the source and destination alpha are supposed to be multiplied by zero, so why does this work?
====
Now to blend the alpha map I use:
Gdx.gl.glBlendEquationSeparate(GL20.GL_FUNC_ADD, GL20.GL_FUNC_ADD);
Gdx.gl.glBlendFuncSeparate(GL20.GL_ZERO, GL20.GL_ONE, GL20.GL_ONE, GL20.GL_ZERO);
Question 2 - this is supposed to keep the destination color and replace the alpha. However, when I render the texture with those blend-func parameters, I see no change in the output at all...
I've been reading the OpenGL blending chapter over and over to understand what I'm missing. Please share your insight into how these parameters actually work.
I use this:
Gdx.gl.glEnable(GL20.GL_BLEND);
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
It works, but only if you draw in back-to-front order. To account for depth you will need alpha testing, which libGDX does not include.
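The arithmetic glBlendFuncSeparate performs can be sketched outside GL to check expectations. A Python emulation with factors given as numbers (GL_ONE = 1, GL_ZERO = 0); if the rendered result differs from this math, something else, such as SpriteBatch resetting the blend function, is likely overriding the state:

```python
def blend_separate(src, dst, src_rgb_f, dst_rgb_f, src_a_f, dst_a_f):
    # emulate glBlendFuncSeparate with constant per-channel factors:
    # RGB uses (src_rgb_f, dst_rgb_f); alpha uses (src_a_f, dst_a_f)
    rgb = tuple(s * src_rgb_f + d * dst_rgb_f for s, d in zip(src[:3], dst[:3]))
    a = src[3] * src_a_f + dst[3] * dst_a_f
    return rgb + (a,)

src = (0.2, 0.4, 0.6, 0.3)   # incoming fragment
dst = (1.0, 0.0, 0.0, 1.0)   # current frame buffer contents

# (GL_ONE, GL_ZERO, GL_ONE, GL_ZERO): output is exactly the source,
# color AND alpha - mathematically, nothing is ignored
replaced = blend_separate(src, dst, 1, 0, 1, 0)

# (GL_ZERO, GL_ONE, GL_ONE, GL_ZERO): destination color is kept,
# alpha is replaced by the source alpha
alpha_only = blend_separate(src, dst, 0, 1, 1, 0)
```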

OpenGL , How to accumulate alpha and use glAlphaFunc to alpha test?

glAlphaFunc(GL_GEQUAL, 0.5) displays only the parts of the image where alpha >= 0.5.
Can OpenGL display the accumulation of alpha?
Example:
I have 2 images; some parts of them are not displayed on their own, because their alpha < 0.5.
Now parts of them overlap and their alphas sum to 0.6 - how do I display the overlapping part?
I'm trying to make a metaball example with OpenGL; if you have any idea, please give me a hint.
Thank you so much.
I would render the images to a separate buffer (with alpha test off, adding their alpha values), then render that buffer onto the screen with alpha test on.
First, create an empty buffer and set its alpha to 0 for every pixel.
Then, render all of your objects into that buffer, using your blend function for the colors and adding the alpha values.
Then, re-render the buffer on screen with alpha test turned on.
And the answer to "Can OpenGL display the accumulation of alpha?" is yes. You can render alpha as grayscale, for example.
Worked for me with glBlendFunc(GL_SRC_ALPHA, GL_ONE);
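The accumulate-then-test idea boils down to simple arithmetic. A Python sketch of what happens at one pixel where two metaballs overlap (GL clamps accumulated channel values to [0, 1]):

```python
def accumulate_alpha(alphas):
    # additive alpha accumulation, as with a GL_ONE destination factor
    return min(sum(alphas), 1.0)

def alpha_test_gequal(alpha, ref=0.5):
    # emulate glAlphaFunc(GL_GEQUAL, ref): keep the pixel only if alpha >= ref
    return alpha >= ref

single  = accumulate_alpha([0.3])        # 0.3 -> fails the alpha test
overlap = accumulate_alpha([0.3, 0.3])   # 0.6 -> passes, overlap is displayed
```

Each blob alone stays invisible, but where the accumulated alpha crosses the threshold the overlap region appears, which is exactly the metaball thresholding effect.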

Coloring grayscale image in OpenGL

I'm trying to draw a grayscale image in color, as a texture in OpenGL, using two colors: black maps to color1 and white maps to color2. The texture is loaded using GL_RGBA.
I have tried two solutions:
1)
Load image as texture
Draw image on screen
Enable blending
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO); and draw rectangle with color1
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ONE); and draw rectangle with color2
But... when I apply the first color, there is no black on screen, and when the second color is applied it gets combined with the first color too.
2)
Save the image as a texture, but instead of the grayscale image use a white image whose alpha channel matches the grayscale values
Draw rectangle with color1
Draw image
But... when the image is drawn, it doesn't use color1 where the image is transparent; instead it uses the current color set with glColor.
Any help will come in handy :)
In general, when dealing with the OpenGL texturing pipeline, I recommend writing down the end result you want. Here, you want your grayscale value to be used as an interpolant between your two colors:
out_color = mix(color1, color2, texValue)
The math for this actually is something like:
out_color = color1 + (color2 - color1) * texValue
So... is there a texture environment mode that helps do that? Yes, and it's also called GL_BLEND (not to be confused with the blending to the frame buffer that glEnable(GL_BLEND) enables).
So... something like
// pass color1 as the incoming color to the texture unit
glColor4fv(color1);
GLfloat color2[4] = { ... };
// ask for the texture to be blended/mixed/lerped with the incoming color
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND);
// specify color2 as per the TexEnv documentation
// (note: the target is GL_TEXTURE_ENV, not GL_TEXTURE_2D)
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, color2);
There is no need to draw multiple times or do anything more complicated than this, as you tried to do. The texturing pipeline offers plenty of control; learn to use it to your advantage!
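The mix() formula above is easy to sanity-check outside the pipeline. A Python sketch of the per-channel math GL_BLEND applies, where tex_value is the grayscale sample (names here are illustrative):

```python
def texenv_blend(color1, color2, tex_value):
    # out = mix(color1, color2, t) = color1 + (color2 - color1) * t, per channel
    return tuple(c1 + (c2 - c1) * tex_value for c1, c2 in zip(color1, color2))

color1 = (0.0, 0.25, 1.0)  # what black should map to
color2 = (1.0, 1.0, 0.0)   # what white should map to

black_out = texenv_blend(color1, color2, 0.0)  # a black texel yields color1
white_out = texenv_blend(color1, color2, 1.0)  # a white texel yields color2
mid_out   = texenv_blend(color1, color2, 0.5)  # mid-gray yields the midpoint
```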
Your #2 idea would work too, but it seems you didn't set up blending correctly.
It should be:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

How to setup blending for additive color overlays?

Is it possible in OpenGL to set up blending to achieve additive color overlays?
Red + green = yellow, cyan + magenta = white, etc. (see diagram)
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
should do it.
Have a look at the full description of glBlendFunc
EDIT: Old tutorial link seems to be dead (403 Forbidden). Wayback'd.
Simple additive blending is achieved with glBlendFunc(GL_ONE, GL_ONE). Be aware that OpenGL's color range is limited to [0, 1] and values greater than 1 are clamped to 1, so adding bright colors may not produce physically correct results. If you want that, you will have to add and scale the colors in your own software, or write a shader program that does it while rendering.
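The clamping behavior is easy to see numerically. A Python sketch of GL_ONE, GL_ONE additive blending on RGB:

```python
def additive_blend(src, dst):
    # glBlendFunc(GL_ONE, GL_ONE): out = src + dst, clamped to [0, 1] per channel
    return tuple(min(s + d, 1.0) for s, d in zip(src, dst))

red     = (1.0, 0.0, 0.0)
green   = (0.0, 1.0, 0.0)
cyan    = (0.0, 1.0, 1.0)
magenta = (1.0, 0.0, 1.0)

yellow = additive_blend(red, green)     # (1.0, 1.0, 0.0)
white  = additive_blend(cyan, magenta)  # blue channel clamps from 2.0 to 1.0
```

Both overlay results from the question fall out of the sum; the cyan + magenta case is also where the clamping caveat kicks in.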