Background color issue when blending with transparency in OpenGL - opengl

Let's take the simplest case of rendering two overlapping transparent rectangles, one red and one green, both with alpha=0.5. Assume that the drawing order is from back to front, meaning that the rectangle farther from the camera is drawn first.
In realistic scenarios, irrespective of which rectangle happens to be in front, the overlapping color should be the same, i.e. RGBA = [0.5, 0.5, 0.0, 0.5].
In practice, however, assuming that we are blending with weights SRC_ALPHA and ONE_MINUS_SRC_ALPHA, the overlapping color is dominated by the color of the front rectangle, as in this image:
I believe this happens because the first rectangle is blended with the background color, and the second rectangle is then blended with the resultant color. With this logic, assuming white background, the overlapping color in the two cases works out to be:
Red on top: 0.5*(0.5*[1,1,1,0] + 0.5*[0,1,0,0.5]) + 0.5*[1,0,0,0.5] = [0.75, 0.50, 0.25, 0.375]
Green on top: 0.5*(0.5*[1,1,1,0] + 0.5*[1,0,0,0.5]) + 0.5*[0,1,0,0.5] = [0.50, 0.75, 0.25, 0.375]
which explains the dominance of the color on top. In principle, this could be easily corrected if all the objects were blended first, and the resultant color is blended with the background color.
Is there a way to achieve this in OpenGL?

Ideally, irrespective of which rectangle happens to be in front, the overlapping color should be the same
No, because when you use "SourceAlpha, InvSourceAlpha" blending, the formula for calculating the final color is:
destRGB = destRGB * (1-sourceAlpha) + sourceRGB * sourceAlpha
As a result, the color of the rectangle drawn first is multiplied by its alpha channel and added to the framebuffer. When the second rectangle is drawn, the content of the framebuffer (which includes the color of the first rectangle) is multiplied again, this time by the inverse alpha channel of the second rectangle.
The color of the second rectangle is multiplied by the alpha channel of the 2nd rectangle only:
destRGB = (destRGB * (1-Alpha_1) + RGB_1 * Alpha_1) * (1-Alpha_2) + RGB_2 * Alpha_2
or
destRGB = destRGB * (1-Alpha_1)*(1-Alpha_2) + RGB_1 * Alpha_1*(1-Alpha_2) + RGB_2 * Alpha_2
While RGB_2 is multiplied by Alpha_2, RGB_1 is multiplied by Alpha_1 * (1-Alpha_2).
So the result depends on the drawing order whenever the color already in the framebuffer is modified by the alpha channel of the new (source) fragment.
If you want to achieve an order-independent effect, then the color in the framebuffer must not be modified by the alpha channel of the source fragment, e.g.:
destRGB = destRGB * 1 + sourceRGB * sourceAlpha
Which can be achieved by the parameter GL_ONE for the destination factor of glBlendFunc:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
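The order independence can be checked by simulating both blend equations on the CPU. This is just a sketch with plain arrays, not actual GL state, using the rectangles and white background from the question:

```javascript
// Simulate one blend step: out = dst * dstFactor + src * srcFactor,
// applied to all four channels, as glBlendFunc does by default.
// additive = true  -> glBlendFunc(GL_SRC_ALPHA, GL_ONE)
// additive = false -> glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
function blend(dst, src, additive) {
  const a = src[3];
  const dstFactor = additive ? 1 : 1 - a;
  return dst.map((d, i) => d * dstFactor + src[i] * a);
}

const red   = [1, 0, 0, 0.5];
const green = [0, 1, 0, 0.5];
const white = [1, 1, 1, 0];   // background, as in the question

// Source-over blending: the result depends on the draw order.
const overRG = blend(blend(white, red, false), green, false); // [0.5, 0.75, 0.25, 0.375]
const overGR = blend(blend(white, green, false), red, false); // [0.75, 0.5, 0.25, 0.375]

// Additive blending: both orders give the same [1.5, 1.5, 1, 0.5]
// (which the framebuffer then clamps to [1, 1, 1, 0.5]).
const addRG = blend(blend(white, red, true), green, true);
const addGR = blend(blend(white, green, true), red, true);
```

Note that additive blending brightens the result and saturates quickly against a light background; it is order-independent, but it is not the same picture that back-to-front source-over blending produces.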

Drawing transparent surfaces depends a lot on order. Most issues happen because you're using depth tests and writing to the depth buffer (in which case the result depends not only on which triangle is in front, but also on which triangle is drawn first). But if you ignore depth and just want to draw triangles one after another, your results still depend on the order in which you draw them, unless you use certain commutative blend functions.
Since you've been talking about stained glass, here's one option that works roughly like stained glass:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
This essentially multiplies each color channel of the destination by the corresponding color channel of the source. So if you draw a triangle with color (0.5, 1.0, 1.0), then it will basically divide the red channel of whatever it's been drawn onto by two. Drawing on a black destination will keep the pixel black, just like stained glass does.
To reduce the "opacity" of your stained glass, you'll have to mix your colors with (1.0, 1.0, 1.0). The alpha value is ignored.
As a bonus, this blend function is independent of the order you draw your shapes (assuming you've locked the depth buffer or disabled depth testing).
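To illustrate that, here is a small CPU sketch of the multiplicative blend; the pane colors are made up for the example:

```javascript
// glBlendFunc(GL_ZERO, GL_SRC_COLOR): out = dst * srcColor, per channel.
// The alpha value is ignored entirely.
function multiplyBlend(dst, src) {
  return dst.map((d, i) => d * src[i]);
}

const whiteBg   = [1.0, 1.0, 1.0];
const cyanPane  = [0.5, 1.0, 1.0]; // halves the red channel
const amberPane = [1.0, 1.0, 0.5]; // halves the blue channel

// Both orders give [0.5, 1, 0.5]: per-channel multiplication commutes.
const ab = multiplyBlend(multiplyBlend(whiteBg, cyanPane), amberPane);
const ba = multiplyBlend(multiplyBlend(whiteBg, amberPane), cyanPane);
```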

WebGL / How to remove the dark edges that appear on the border with the transparent part of the image after applying the warp effect?

Demo https://codepen.io/Andreslav/pen/wvmjzwe
Scheme: Top left - the original.
Top right - the result.
Bottom right - rounding coordinates when extracting color.
The problem can be solved this way, but then the result is less smoothed:
coord = floor(coord) + .5;
How can this be made better? Perhaps by making the program ignore the color of fully transparent pixels when calculating the average color?
Maybe there are some settings that I haven't figured out...
Update: I have updated the demo.
The result is even better after such an adjustment:
vec4 color = texture2D(texture, coord / texSize);
vec4 color_ = texture2D(texture, coordNoRound / texSize);
if(color_.a != color.a) {
color.a *= color_.a;
}
Shown on the preview at the bottom left. But this is not an ideal option: the correction is only partial. The problem is still open.
This appears to be a premultiplied alpha problem. And it's not as much of a glsl problem as it is a glfx problem.
Here's what happens:
Consider the RGBA values of two adjacent pixels at the edge of your source image. It would be something like this:
[R,   G,   B,   A  ]    [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]    [?, ?, ?, 0]
Meaning that there is a fully opaque, fully-white pixel to the left, and then comes a fully-transparent (A=0) pixel to the right.
But what are the RGB values of a completely transparent pixel?
They are technically ill-defined (this fact is the core problem which needs to be solved). In practice, pretty much every image processing software will put [0, 0, 0] there.
So the pixels are actually like this:
[R,   G,   B,   A  ]    [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]    [0, 0, 0, 0]
What happens if your swirl shader samples the texture halfway between those 2 pixels? You get [0.5, 0.5, 0.5, 0.5]. That's color [0.5 0.5 0.5], with 0.5 Alpha. Which is gray, not white.
The generally chosen solution to this problem is premultiplied alpha. Which means that, for any given RGBA color, the RGB components are defined so that they don't range from 0 .. 1.0, but instead from 0 .. A. With that definition, color [0.5 0.5 0.5 0.5] is now "0.5 A, with maximum RGB, which is white". One side effect of this definition is that the RGB values of a fully transparent pixel are no longer ill-defined; they must now be exactly [0, 0, 0].
As you can see, we didn't really change any values, instead, we just defined that our result is now correct. Of course, we still need to tell the other parts of the graphics pipeline of our definition.
Premultiplied alpha is not the only solution to the problem. Now that you know what's happening, you might be able to come up with your own solution. But pretty much all modern graphics pipelines expect that you are working with premultiplied alpha all the time. So the correct solution would be to make that true. That means:
(1) You need to make sure that your input texture also has premultiplied alpha, i.e. all its RGB values must be multiplied with their alpha value. This is generally what game engines do, all their textures have premultiplied alpha. Either every pixel must already be edited in the source file, or you do the multiplication once for each pixel in the texture after loading the image.
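Step (1) can be done with a simple loop over the decoded pixel data. A sketch, assuming 8-bit RGBA data such as a canvas ImageData buffer (the function name is mine):

```javascript
// Premultiply straight-alpha RGBA pixel data in place, so each RGB value
// ranges from 0 .. A instead of 0 .. 255.
function premultiply(pixels) { // Uint8ClampedArray or plain array, RGBA order
  for (let i = 0; i < pixels.length; i += 4) {
    const a = pixels[i + 3] / 255;
    pixels[i]     = Math.round(pixels[i]     * a);
    pixels[i + 1] = Math.round(pixels[i + 1] * a);
    pixels[i + 2] = Math.round(pixels[i + 2] * a);
  }
  return pixels;
}
```

In raw WebGL you can also let the upload do this for you by calling gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true) before gl.texImage2D; whether glfx exposes that is another question.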
AND
(2) You need to convince every alpha blending component in your rendering pipeline to use premultiplied alpha blending, instead of "normal" alpha blending. It seems you use the "glfx" framework, I don't know glfx, so I don't know how you can make it blend correctly. Maybe check the docs. In case you are using raw OpenGL/WebGL, then this is the way to tell the pipeline that it should assume premultiplied alpha values when blending:
gl.blendEquation(gl.FUNC_ADD); // Normally not needed because it's the default
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
(This can be derived from analyzing the formula for source-over alpha blending, but without the last division step.)
The above code tells OpenGL/WebGL that every time it's blending two pixels on top of another, it should calculate the final RGBA values in a way that's correct assuming that both the "top" and the "bottom" pixel has premultiplied alpha applied to it.
For higher level APIs (for example, GDI+), you can typically specify the pixel format of images, with a separation between straight-alpha formats (ARGB) and premultiplied formats (PARGB), in which case the API will automatically choose the correct blending. That may not be true for glfx though. In essence, you always need to be aware of whether the pixel format of your textures and drawing surfaces has premultiplied alpha or not; there is no magic code that always works correctly.
(Also note that using premultiplied alpha has other advantages too.)
As a quick fix: the framework you're using appears to perform alpha blending expecting non-premultiplied alpha values, so you could just undo the premultiplication by adding this at the end:
color.rgb /= color.a;
gl_FragColor = color;
But for correctness, you still need to premultiply the alpha values of your input texture.
Because at the rounded corners, your input texture contains pixels which are fully white, but semi-transparent; their RGBA values would look like this:
[1.0, 1.0, 1.0, 0.8]
For the blending code to work correctly, the values should be
[0.8, 0.8, 0.8, 0.8]
because otherwise the line color.rgb /= color.a; would give you RGB values greater than 1.
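In that spirit, the unpremultiply step can be guarded so it skips fully transparent pixels and clamps the result. This is a CPU sketch of the same math, one pixel at a time; the function name is mine:

```javascript
// Undo premultiplication for one RGBA pixel with channels in [0, 1].
// Guards against dividing by zero alpha, and clamps so that input that was
// never premultiplied (e.g. [1, 1, 1, 0.8]) cannot produce RGB > 1.
function unpremultiply([r, g, b, a]) {
  if (a === 0) return [0, 0, 0, 0];
  return [Math.min(r / a, 1), Math.min(g / a, 1), Math.min(b / a, 1), a];
}
```

For a correctly premultiplied input, unpremultiply([0.8, 0.8, 0.8, 0.8]) gives back [1, 1, 1, 0.8], i.e. semi-transparent white.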

Blend equation considering target alpha, that leaves source color untouched when target alpha is 0 [duplicate]

This question already has answers here:
Blend mode on a transparent and semi transparent background
(3 answers)
Closed last year.
The "normal" source-over blend equation is
outColor = srcAlpha * srcColor + (1 - srcAlpha) * dstColor
This equation does not consider the destination alpha, and as such produces poor results when the destination alpha is not 1.0.
For example, consider the case of a 50%-opaque yellow source color over a destination that is fully transparent, but has a red color. [Edit: e.g. the RGBA buffer holds the values [255, 0, 0, 0].] The above equation results in 50% yellow blended with 50% red, tainting the yellow even though the background is fully transparent.
What is a blend equation that works with destination alpha, such that a source image with semi-transparent pixels blended over a fully-transparent target remains unchanged?
You should draw translucent objects from furthest to closest.
If you are going to draw N objects sorted by distance, when you render object i you only need take into account the alpha of i. All the alphas for objects < i have already been taken into account when they were drawn.
(image from: https://www.sterlingpartyrentals.com/product/color-gels-for-par-light/)
The dstAlpha is almost never used.
In your example, having transparent red in the destination... how was the red drawn in the first place if it's fully transparent?
You should always draw fully opaque objects first, and make sure that the whole screen gets something opaque drawn in to it. In 3D you can for example use cubemaps for making sure that this is the case.
(image from: https://learnopengl.com/Advanced-OpenGL/Cubemaps)
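For completeness, the equation the question actually asks for is Porter-Duff source-over with destination alpha taken into account (it is what the linked duplicate derives). A CPU sketch for straight, non-premultiplied alpha:

```javascript
// Porter-Duff source-over for straight (non-premultiplied) alpha:
//   outAlpha = srcA + dstA * (1 - srcA)
//   outColor = (srcA * srcColor + dstA * dstColor * (1 - srcA)) / outAlpha
function sourceOver(src, dst) {
  const [sr, sg, sb, sa] = src, [dr, dg, db, da] = dst;
  const outA = sa + da * (1 - sa);
  if (outA === 0) return [0, 0, 0, 0];
  const c = (s, d) => (s * sa + d * da * (1 - sa)) / outA;
  return [c(sr, dr), c(sg, dg), c(sb, db), outA];
}

// 50%-opaque yellow over fully transparent red: the red leaves no taint.
sourceOver([1, 1, 0, 0.5], [1, 0, 0, 0]); // -> [1, 1, 0, 0.5]

// Over opaque red it reduces to the familiar source-over result.
sourceOver([1, 1, 0, 0.5], [1, 0, 0, 1]); // -> [1, 0.5, 0, 1]
```

The final division is why this does not map directly onto glBlendFunc; with premultiplied alpha the division disappears and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) implements it exactly.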

Negate Image Without gray Overlapping

Image negative effect is typically done like this:
pixel = rgb(1, 1 ,1) - pixel
but if the pixel color is close to gray, then:
pixel = rgb(1, 1, 1) - rgb(0.5, 0.5, 0.5) = rgb(0.5, 0.5, 0.5)
That is expected, and normally it's not a problem, but it is for me. I am making a crosshair texture for my 3D game, drawn at the center of the screen, and I want it to have a negative effect for the sake of clarity: if I made the crosshair white, it would not be visible against white objects. (I know I could give it a black outline so it stays visible, but that's ugly.) Negation, however, still has the problem described above for grayish colors. What can be done to fix that?

Why border texels get the same color when magnified/scaled up using Bilinear filtering?

As in Bilinear filtering the sampled color is calculated as the weighted average of the 4 closest texels, why do corner texels get exactly the same color when magnified?
E.g.:
In this case (image below), when a 3x3 image is magnified/scaled up to a 5x5 pixel image (using Bilinear filtering), the corner 'Red' pixels keep exactly the same color, and the border 'Green' ones do as well.
Some documents explain that corner texels are extended with the same color to provide 4 adjacent texels, which explains why the corner 'Red' texels keep their color in the 5x5 image. But how come the border 'Green' texels also keep the same color, if they are calculated as a weighted average of the 4 closest texels?
When you are using bilinear texture sampling, the texels in the texture are not treated as colored squares but as samples of a continuous color field. Here is this field for a red-green checkerboard, where the texture border is outlined:
The circles represent the texels, i.e., the sample locations of the texture. The colors between the samples are calculated by bilinear interpolation. As a special case, the interpolation between two adjacent texels is a simple linear interpolation. When x is between 0 and 1, then: color = (1 - x) * leftColor + x * rightColor.
The interpolation scheme only defines what happens in the area between the samples, i.e. not even up to the edge of the texture. What OpenGL uses to determine the missing area is the texture's or sampler's wrap mode. If you use GL_CLAMP_TO_EDGE, the texel values from the edge will just be repeated like in the example above. With this, we have defined the color field for arbitrary texture coordinates.
Now, when we render a 5x5 image, the fragments' colors are evaluated at the pixel centers. This looks like the following picture, where the fragment evaluation positions are marked with black dots:
Assuming that you draw a full-screen quad with texture coordinates ranging from 0 to 1, the texture coordinates at the fragment evaluation positions are interpolations of the vertices' texture coordinates. We can now just overlay the color field from before with the fragments and we will find the color that the bilinear sampler produces:
We can see a couple of things:
The central fragment coincides exactly with the red texel and therefore gets a perfect red color.
The central fragments on the edges fall exactly between two green samples (where one sample is a virtual sample outside of the texture). Therefore, they get a perfect green color. This is due to the wrap mode. Other wrap modes produce different colors. The interpolation is then: color = (1 - t) * outsideColor + t * insideColor, where t = 3 * (0.5 / 5 + 0.5 / 3) = 0.8 is the interpolation parameter.
The corner fragments are also interpolations from four texel colors (1 real inside the texture and three virtual outside). Again, due to the wrap mode, these will get a perfect red color.
All other colors are some interpolation of red and green.
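The sampling described above can be reproduced on the CPU. This sketch stores only a single channel per texel (1 for red, 0 for green) and implements bilinear filtering with GL_CLAMP_TO_EDGE and texel centers at (i + 0.5) / size; the helper names are mine:

```javascript
// Bilinear sampling with CLAMP_TO_EDGE of a single-channel size x size texture.
function sample(tex, size, u, v) {
  const clamp = (i) => Math.max(0, Math.min(size - 1, i));
  const texel = (i, j) => tex[clamp(j) * size + clamp(i)];
  const lerp = (a, b, t) => a * (1 - t) + b * t;
  // Shift so that texel centers sit at integer coordinates.
  const x = u * size - 0.5, y = v * size - 0.5;
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const tx = x - x0, ty = y - y0;
  return lerp(lerp(texel(x0, y0),     texel(x0 + 1, y0),     tx),
              lerp(texel(x0, y0 + 1), texel(x0 + 1, y0 + 1), tx), ty);
}

// 3x3 checkerboard, storing just the "redness": 1 = red, 0 = green.
const tex = [1, 0, 1,
             0, 1, 0,
             1, 0, 1];

// Fragment centers of the 5x5 render sit at (0.5 + i) / 5:
sample(tex, 3, 0.5, 0.5);         // center fragment -> 1 (pure red)
sample(tex, 3, 0.5, 0.5 / 5);     // edge fragment   -> 0 (pure green)
sample(tex, 3, 0.5 / 5, 0.5 / 5); // corner fragment -> 1 (pure red)
```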
You're looking at bilinear interpolation incorrectly. Look at it as a mapping from the destination pixel position to the source pixel position. So for each desintation pixel, there is a source coordinate that corresponds to it. This source coordinate is what determines the 4 neighboring pixels, as well as the bilinear weights assigned to them.
Let us number your pixels with (0, 0) at the top left.
Pixel (0, 0) in the destination image maps to the coordinate (0, 0) in the source image. The four neighboring pixels in the source image are (0, 0), (1, 0), (0, 1) and (1, 1). We compute the bilinear weights with simple math: the weight in the X direction for a particular pixel is 1 - (pixel.x - source.x), where source is the source coordinate, and the same goes for Y. So the (X, Y) weights for the four neighboring pixels are (respective to the above order): (1, 1), (0, 1), (1, 0) and (0, 0), giving combined weights of 1, 0, 0 and 0.
In short, because the destination pixel mapped exactly to a source pixel, it gets exactly that source pixel's value. This is as it should be.

How to handle alpha compositing correctly with OpenGL

I was using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for alpha composing as the document said (and actually same thing was said in the Direct3D document).
Everything was fine at first, until I read the result back from the GPU and saved it as a PNG image: the resulting alpha component was wrong. Before drawing, I had cleared the frame buffer with opaque black colour, and after I drew something semi-transparent, the frame buffer became semi-transparent.
Well the reason is obvious. With glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), we actually ignore the destination alpha channel and assume it always be 1. This is OK when we treat the frame buffer as something opaque.
But what if we need the correct alpha value? Use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and make the source premultiplied (use a premultiplied texture/vertex color, or multiply the color components by the alpha component before writing to gl_FragColor).
glBlendFunc can only multiply the original color components by one factor, but alpha compositing needs the destination multiplied by both one_minus_src_alpha and dst_alpha. So it must be premultiplied. We can't do the premultiplication in the frame buffer, but as long as the source and destination are both premultiplied, the result is premultiplied. That is, we first clear the frame buffer with any premultiplied color (for example: 0.5, 0.5, 0.5, 0.5 for 50% transparent white, instead of 1.0, 1.0, 1.0, 0.5), then draw premultiplied fragments on it with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), and we will have the correct alpha channel in the result. But remember to undo the premultiplication if it is not desired for the final result.
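A CPU sketch of that setup, under the assumption that both source and destination hold premultiplied RGBA values in [0, 1]:

```javascript
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) on premultiplied colors:
// out = src + dst * (1 - srcAlpha), applied to all four channels.
function blendPremultiplied(dst, src) {
  return dst.map((d, i) => src[i] + d * (1 - src[3]));
}

// Clear with opaque black ([0, 0, 0, 1] is already premultiplied), then
// draw 50%-transparent white, premultiplied: [1, 1, 1, 0.5] -> [0.5, 0.5, 0.5, 0.5].
const result = blendPremultiplied([0, 0, 0, 1], [0.5, 0.5, 0.5, 0.5]);
// result is [0.5, 0.5, 0.5, 1]: the framebuffer stays fully opaque, whereas
// GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA would have written alpha = 0.75.
```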