WebGL / How to remove the dark edges that appear on the border with the transparent part of the image after applying the warp effect?

Demo: https://codepen.io/Andreslav/pen/wvmjzwe
Scheme: top left is the original; top right is the result; bottom right is the result when rounding the coordinates before sampling.
The problem can be solved this way, but then the result is less smooth:
coord = floor(coord) + .5;
How can this be done better? Is there a way to make the averaging ignore the color of transparent pixels?
Maybe there are some settings that I haven't figured out.
Update: I have updated the demo. The result is even better after this adjustment:
vec4 color = texture2D(texture, coord / texSize);         // sample at the rounded coordinate
vec4 color_ = texture2D(texture, coordNoRound / texSize); // sample at the original coordinate
if (color_.a != color.a) {
    color.a *= color_.a;
}
In the preview: bottom left. But this is not an ideal option; the correction is only partial. The problem still stands.

This appears to be a premultiplied alpha problem. And it's not so much a GLSL problem as it is a glfx problem.
Here's what happens:
Consider the RGBA values of two adjacent pixels at the edge of your source image. It would be something like this:
[R,   G,   B,   A  ]   [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]   [?, ?, ?, 0]
Meaning that there is a fully opaque, fully-white pixel to the left, and then comes a fully-transparent (A=0) pixel to the right.
But what are the RGB values of a completely transparent pixel?
They are technically ill-defined (this fact is the core problem which needs to be solved). In practice, pretty much every image processing software will put [0, 0, 0] there.
So the pixels are actually like this:
[R,   G,   B,   A  ]   [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]   [0, 0, 0, 0]
What happens if your swirl shader samples the texture halfway between those 2 pixels? You get [0.5, 0.5, 0.5, 0.5]. That's color [0.5, 0.5, 0.5], with 0.5 alpha. Which is gray, not white.
The generally chosen solution to this problem is premultiplied alpha. Which means that, for any given RGBA color, the RGB components are defined so that they don't range from 0 .. 1.0, but instead from 0 .. A. With that definition, color [0.5, 0.5, 0.5, 0.5] is now "0.5 alpha, with maximum RGB, which is white". One side effect of this definition is that the RGB values of a fully transparent pixel are no longer ill-defined; they must now be exactly [0, 0, 0].
As you can see, we didn't really change any values, instead, we just defined that our result is now correct. Of course, we still need to tell the other parts of the graphics pipeline of our definition.
Premultiplied alpha is not the only solution to the problem. Now that you know what's happening, you might be able to come up with your own solution. But pretty much all modern graphics pipelines expect that you are working with premultiplied alpha all the time. So the correct solution would be to make that true. That means:
(1) You need to make sure that your input texture also has premultiplied alpha, i.e. all its RGB values must be multiplied with their alpha value. This is generally what game engines do; all their textures have premultiplied alpha. Either every pixel must already be edited in the source file, or you do the multiplication once for each pixel in the texture after loading the image (see the WebGL sketch after this list).
AND
(2) You need to convince every alpha blending component in your rendering pipeline to use premultiplied alpha blending, instead of "normal" alpha blending. It seems you use the "glfx" framework, I don't know glfx, so I don't know how you can make it blend correctly. Maybe check the docs. In case you are using raw OpenGL/WebGL, then this is the way to tell the pipeline that it should assume premultiplied alpha values when blending:
gl.blendEquation(gl.FUNC_ADD); // Normally not needed because it's the default
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
(This can be derived from analyzing the formula for source-over alpha blending, but without the last division step.)
The above code tells OpenGL/WebGL that every time it blends two pixels on top of one another, it should calculate the final RGBA values in a way that's correct assuming that both the "top" and the "bottom" pixel have premultiplied alpha applied to them.
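In raw WebGL, step (1) can also be done at upload time; a minimal sketch, assuming the texture is uploaded from a loaded image element:
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true); // premultiply RGB by A during upload
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);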
For higher level APIs (for example, GDI+), you can typically specify the pixel format of images, where there is a separation between RGBA and RGBPA, in which case the API will automatically choose correct blending. That may not be true for glfx though. In essence, you always need to be aware whether the pixel format of your textures and drawing surfaces have premultiplied alpha or not, there is no magic code that always works correctly.
(Also note that using premultiplied alpha has other advantages too.)
For a quick fix: it appears that the framework you're using performs alpha blending expecting non-premultiplied alpha values. So you could just undo the premultiplication by adding this at the end:
if (color.a > 0.0) color.rgb /= color.a; // guard: fully transparent pixels would divide by zero
gl_FragColor = color;
But for correctness, you still need to premultiply the alpha values of your input texture.
Because at the rounded corners, your input texture contains pixels which are fully white, but semi-transparent; their RGBA values would look like this:
[1.0, 1.0, 1.0, 0.8]
For the blending code to work correctly, the values should be
[0.8, 0.8, 0.8, 0.8]
because otherwise the line color.rgb /= color.a; would give you RGB values greater than 1.

Related

Colors in range [0, 255] don't correspond to colors in range [0, 1]

I am trying to implement in my shader a way of reading normals from a normal map. However, I found a problem when reading colors that prevents it.
I thought that one color such as (0, 0, 255) (blue) was equivalent to (0, 0, 1) in the shader. However, recently I found out that, for instance, if I pass a texture with the color (128, 128, 255), it is not equivalent to ~(0.5, 0.5, 1) in the shader.
In a fragment shader I write the following code:
vec3 col = texture(texSampler[0], vec2(1, 1)).rgb; // texture with color (128, 128, 255)
if (inFragPos.x > 0)
    outColor = vec4(0.5, 0.5, 1, 1); // I get (188, 188, 255)
else
    outColor = vec4(col, 1); // I get (128, 128, 255)
For x < 0 I get the color (128, 128, 255), which is expected. But for x > 0 I get the color (188, 188, 255), which I didn't expect; I expected both colors to be the same. What am I missing?
But in x>0 I get the color (188, 188, 255), which I didn't expect.
Did you render these values to a swapchain image, by chance?
If so, swapchain images are almost always in the sRGB colorspace. Which means that all floats written to them will be expected to be in a linear colorspace and therefore will be converted into sRGB.
If the source image was also in the sRGB colorspace, reading from it will reverse the transformation into a linear RGB colorspace. But since these are inverse transformations, the overall output you get will be the same as the input.
If you want to treat data in a texture as data rather than as colors, you must not use image formats that use the sRGB colorspace. And swapchain images are almost always sRGB, so you'll have to use a user-created image for such outputs.
Also, 128 will never yield exactly 0.5. 128/255 is slightly larger than 0.5.
After some research, I could solve it, so I will explain the solution. Nicol Bolas' answer shed some light on the problem too (thank you!).
In the old days, images were in (linear) RGB. Today, images are expected to be in (non-linear) sRGB. The sRGB color space gives more resolution to darker colors and less to lighter colors, because the human eye distinguishes darker colors better.
Internet images (including normal maps) are almost always in sRGB by convention. When I analyze the colors of an image with Paint, I get the sRGB colors. When I pass that image as a texture to the shader, it is automatically converted to linear RGB (if you told Vulkan the image is sRGB), because the linear color space is more appropriate for performing operations on colors. Then, when the shader outputs the result, it is automatically converted back to sRGB.
My mistake was to consider the color information I got from the source image (using Paint) to be linear RGB, while it was really sRGB. When the color was converted to linear RGB in the shader, I was confused because I expected the same color I got in Paint. Since I want to use the texture as data rather than as color, I see two ways to solve this:
Save the normals in a linear RGB image and tell Vulkan about this (most correct option).
Transform the sampled value back to sRGB in the shader (my solution). Since the data was saved in the image as sRGB colors, re-encoding the sampled value as sRGB recovers the original data (a sketch follows below).
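A minimal GLSL sketch of that second option; it re-applies the sRGB transfer function (constants from the sRGB specification) to undo the sampler's automatic sRGB-to-linear conversion. The uv coordinate name is assumed:
// Re-encode a linear value back to sRGB, per channel.
vec3 linearToSrgb(vec3 c) {
    vec3 lo = c * 12.92;
    vec3 hi = 1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055;
    return mix(lo, hi, step(vec3(0.0031308), c));
}
// In main(): recover the raw bytes stored in the file.
// vec3 normalData = linearToSrgb(texture(texSampler[0], uv).rgb);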
Now, talking about Vulkan, we have to specify the color space for the surface format and the swap chain (for instance VK_COLOR_SPACE_SRGB_NONLINEAR_KHR). This way the swapchain/display interprets the values correctly when the image is presented. Also, we have to specify the color space of the Vulkan images we create.
References
Linear Vs Non-linear RGB: Great answer from Dan Hulme
Vulkan color space: Vulkan related info
Normal mapping 1 & Normal mapping 2

Background color issue when blending with transparency in OpenGL

Let's take the simplest case of rendering two overlapping transparent rectangles, one red and one green, both with alpha=0.5. Assume that the drawing order is from back to front, meaning that the rectangle farther from the camera is drawn first.
In realistic scenarios, irrespective of which rectangle happens to be in front, the overlapping color should be the same, i.e. RGBA = [0.5, 0.5, 0.0, 0.5].
In practice, however, assuming that we are blending with weights SRC_ALPHA and ONE_MINUS_SRC_ALPHA, the overlapping color is dominated by the color of the front rectangle.
I believe this happens because the first rectangle is blended with the background color, and the second rectangle is then blended with the resultant color. With this logic, assuming white background, the overlapping color in the two cases works out to be:
Red on top: 0.5*(0.5*[1,1,1,0] + 0.5*[0,1,0,0.5]) + 0.5*[1,0,0,0.5] = [0.75, 0.50, 0.25, 0.375]
Green on top: 0.5*(0.5*[1,1,1,0] + 0.5*[1,0,0,0.5]) + 0.5*[0,1,0,0.5] = [0.50, 0.75, 0.25, 0.375]
which explains the dominance of the color on top. In principle, this could be easily corrected if all the objects were blended first, and the resultant color is blended with the background color.
Is there a way to achieve this in OpenGL?
Ideally, irrespective of which rectangle happens to be in front, the overlapping color should be the same
No, because when you use "SourceAlpha, InvSourceAlpha" blending, the formula for calculating the final color is:
destRGB = destRGB * (1-sourceAlpha) + sourceRGB * sourceAlpha
This means that the color of the rectangle which is drawn first is multiplied by its alpha channel and added to the framebuffer. When the second rectangle is drawn, the content of the framebuffer (which includes the color of the first rectangle) is multiplied again, but now by the inverse alpha channel of the second rectangle.
The color of the second rectangle is multiplied by the alpha channel of the 2nd rectangle only:
destRGB = (destRGB * (1-Alpha_1) + RGB_1 * Alpha_1) * (1-Alpha_2) + RGB_2 * Alpha_2
or
destRGB = destRGB * (1-Alpha_1)*(1-Alpha_2) + RGB_1 * Alpha_1*(1-Alpha_2) + RGB_2 * Alpha_2
While RGB_2 is multiplied by Alpha_2, RGB_1 is multiplied by Alpha_1 * (1-Alpha_2).
So the result depends on the drawing order whenever the color already in the framebuffer is modified by the alpha channel of the new (source) color.
If you want to achieve an order-independent effect, then the color in the framebuffer must not be modified by the alpha channel of the source fragment, e.g.:
destRGB = destRGB * 1 + sourceRGB * sourceAlpha
Which can be achieved by the parameter GL_ONE for the destination factor of glBlendFunc:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
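With this function each draw simply adds its own alpha-weighted color, so the accumulated result is the same for both drawing orders:
destRGB = destRGB + RGB_1 * Alpha_1 + RGB_2 * Alpha_2
(Note that the sum can exceed 1.0 and will be clamped in a fixed-point framebuffer, so this works best on dark backgrounds.)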
Drawing transparent surfaces depends a lot on order. Most issues happen because you're using depth tests and writing to the depth buffer (in which case the result depends not only on which triangle is in front, but also on which triangle is drawn first). But if you ignore depth and just want to draw triangles one after another, your results still depend on the order in which you draw them, unless you use certain commutative blend functions.
Since you've been talking about stained glass, here's one option that works roughly like stained glass:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
This essentially multiplies each color channel of the destination by the corresponding color channel of the source. So if you draw a triangle with color (0.5, 1.0, 1.0), then it will basically divide the red channel of whatever it's been drawn onto by two. Drawing on a black destination will keep the pixel black, just like stained glass does.
To reduce the "opacity" of your stained glass, you'll have to mix your colors with (1.0, 1.0, 1.0); the alpha value is ignored. A fragment-shader sketch of that mixing step follows.
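A minimal sketch, where glassColor and strength are hypothetical uniforms:
precision mediump float;
uniform vec3 glassColor; // the glass tint, e.g. (0.5, 1.0, 1.0)
uniform float strength;  // 0.0 .. 1.0 strength of the tint
void main() {
    // Mixing toward white weakens the multiplicative filter:
    // strength = 0.0 multiplies the destination by 1.0 (no change),
    // strength = 1.0 applies the full tint.
    gl_FragColor = vec4(mix(vec3(1.0), glassColor, strength), 1.0); // alpha is ignored by GL_ZERO / GL_SRC_COLOR
}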
As a bonus, this blend function is independent of the order you draw your shapes (assuming you've locked the depth buffer or disabled depth testing).

OpenGL alpha blending suddenly stops

I'm using OpenGL to draw a screen size quad to the same position with low alpha (lesser than 0.1) on every frame, without glClear(GL_COLOR_BUFFER_BIT) between them. This way the quad should increasingly damp the visibility of the drawings of the previous frames.
However, the damping effect stops after a few seconds. If I use an alpha value no lower than 0.1 for the quad, it works as expected. It seems to me that the OpenGL blending equation fails after a number of iterations (higher alpha values need fewer iterations to accumulate to 1, so with alpha >= 0.1 the problem doesn't occur). The smallest representable alpha step in 8 bits is about 0.0039, i.e. 1/255, so alpha 0.01 should be fine.
I have tried several blending settings, using the following render method:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClear(GL_DEPTH_BUFFER_BIT);
// draw black quad with translucency (using glDrawArrays)
And the simple fragment shader:
#version 450
uniform vec4 Color;
out vec4 FragColor;
void main()
{
    FragColor = Color;
}
How could I fix this issue?
It seems to me that the OpenGL blending equation fails after a number of iterations (higher alpha values need fewer iterations to accumulate to 1, so with alpha >= 0.1 the problem doesn't occur). The smallest representable alpha step in 8 bits is about 0.0039 (1/255), so alpha 0.01 should be fine.
Your reasoning is wrong here. If you draw a black quad with an alpha of 0.01 and the blending setup you described, you basically get new_color = 0.99 * old_color with every iteration. As a function of the iteration number i, it would be new_color(i) = original_color * pow(0.99, i). With unlimited precision, this will move towards 0.
But as you already noted, the precision is not unlimited. You get a requantization with every step. So if your new color value does not fall below the threshold for the next integer value, it will stay the same as before. Now if we consider x the unnormalized color value in the range [0, 255], and we assume that the quantization is done by the usual rounding rules, the update must produce a difference of at least 0.5 to yield a different value: x - x * (1 - alpha) > 0.5, or simply x > 0.5 / alpha.
So in your case, you get x > 50, and that is where the blending will "stop" (and everything below that will stay as it was at the beginning, so you get a "shadow" of the dark parts). For alpha of 0.1, it will end at x=5, which is probably close enough to zero that you didn't notice it (with your particular display and settings).
EDIT
Let me recommend a strategy that will work. You must avoid the iterative computation (at least with non-floating-point framebuffers). You want to achieve a fade-to-black effect. So you could render your original content into a texture, and render that over and over again while blending it toward black, varying the alpha value from frame to frame, so that you end up with alpha as a function of time (or frame number). Using a linear transition would probably make the most sense, but you could even use some nonlinear function to get the slowdown of the fadeout that your original approach would have produced with unlimited precision.
Note that you do not need blending at all for that: you can simply multiply the color value from the texture with some uniform "alpha" value in the fragment shader, as in the sketch below.
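A minimal sketch of that shader, matching the GLSL version from the question; sceneTex, Fade and TexCoord are assumed names:
#version 450
uniform sampler2D sceneTex; // the original frame, rendered once into a texture
uniform float Fade;         // 1.0 at the start, decreasing toward 0.0 over time
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
    // No blending needed: scale the stored frame toward black directly.
    FragColor = vec4(texture(sceneTex, TexCoord).rgb * Fade, 1.0);
}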

Should the gl_FragColor value be normalized?

I am writing a Phong lighting shader and I have a hard time deciding whether the value I pass to gl_FragColor should be normalized or not.
If I use normalized values, the lighting is a bit weird. For example, an object far away from a light source (unlit) would have its color determined by the sum of the emissive component, the ambient component and the global ambient light. Let us say that adds up to (0.3, 0.3, 0.3). Normalized, this becomes roughly (0.57, 0.57, 0.57), which is quite a bit more luminous than what I'm expecting.
However, if I use non-normalized values, for close objects the specular areas get really, really bright and I have to make sure I generally use low values for my material constants.
As a note, I am normalizing only the RGB components; the alpha component is always 1.
I am a bit swamped and I could not find anything related to this. Either that or my searches were completely wrong.
No. Normalizing the color creates an interesting effect, but I think you don't really want it most, if not all, of the time.
Normalization of the color output causes loss of information, even though it may seem to provide greater detail to a scene in some cases. If all your fragments have their color normalized, it means that all RGB vectors have their norm equal to 1. This means that there are colors that simply cannot exist in your output: white (norm = sqrt(3)), bright colors such as yellow (norm = sqrt(2)), dark colors such as dark red (norm(0.5, 0.0, 0.0) = 0.5), etc. Another issue you may face is normalizing zero vectors (i.e. black).
Another way to understand why color normalization is wrong is to think about the less general case of rendering a grayscale image. As there is only one color component, normalization does not make sense at all, since it would make all your colors 1.0.
The problem with using the values without normalization arises from the fact that your output image has to have its color values clamped to a fixed interval: [0, 255] or [0.0, 1.0]. As the specular parts of your object reflect more light than those that only reflect diffuse light, the computed color value may well exceed (1.0, 1.0, 1.0) and get clamped to white over most of the specular area, so these areas may become too bright.
A simple solution would be to lower the material constant values, or the light intensity. You could go one step further and make sure that the values for the material constants and light intensity are chosen such that the computed color value cannot exceed (1.0, 1.0, 1.0). The same result could be achieved with a simple division of the computed color value if consistent values are used for all the materials and all the lights in the scene, but that is kind of overkill, as the scene would probably end up too dark.
The more complex, but better looking solution involves HDR rendering and exposure filters such as bloom to obtain more photo-realistic images. This basically means rendering the scene into a float buffer, which can handle a greater range than the [0, 255] RGB buffer, and then simulating the behavior of a camera or the human eye adapting to a certain light intensity, along with the image artefacts caused by this mechanism (i.e. bloom). A sketch of a simple tone-mapping pass follows.
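A minimal sketch of such a tone-mapping pass using the Reinhard operator; hdrTex, exposure and uv are assumed names:
uniform sampler2D hdrTex; // floating-point buffer holding the linear HDR scene
uniform float exposure;   // exposure control
varying vec2 uv;
void main()
{
    vec3 hdr = texture2D(hdrTex, uv).rgb * exposure;
    // Reinhard tone mapping compresses [0, inf) into [0, 1)
    // instead of clamping, so bright specular areas keep detail.
    gl_FragColor = vec4(hdr / (1.0 + hdr), 1.0);
}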

How to handle alpha compositing correctly with OpenGL

I was using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for alpha compositing, as the documentation said (and actually the same thing is said in the Direct3D documentation).
Everything was fine at first, until I downloaded the result from the GPU and saved it as a PNG image. The alpha component of the result was wrong. Before drawing, I had cleared the frame buffer with opaque black, and after I drew something semi-transparent, the frame buffer became semi-transparent.
Well, the reason is obvious. With glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), we actually ignore the destination alpha channel and assume it is always 1. This is OK when we treat the frame buffer as something opaque.
But what if we need the correct alpha value? Use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and make the source premultiplied (use premultiplied textures/vertex colors, or multiply the RGB components by the alpha component before writing gl_FragColor).
glBlendFunc can only multiply the original color components by one factor, but alpha compositing needs the destination to be multiplied by both one_minus_src_alpha and dst_alpha. So it must be premultiplied. We can't do the premultiplication in the frame buffer, but as long as the source and destination are both premultiplied, the result is premultiplied. That is: first clear the frame buffer with a premultiplied color (for example, 0.5, 0.5, 0.5, 0.5 for 50% transparent white instead of 1.0, 1.0, 1.0, 0.5), then draw premultiplied fragments on it with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), and the result will have a correct alpha channel. But remember to undo the premultiplication at the end if it is not desired for the final result.
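A minimal sketch of that setup in desktop OpenGL (WebGL is the same with the gl. prefix):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// Clear with a premultiplied color: 50% transparent white is
// (0.5, 0.5, 0.5, 0.5), not (1.0, 1.0, 1.0, 0.5).
glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
glClear(GL_COLOR_BUFFER_BIT);
// In the fragment shader, premultiply before writing the output:
// gl_FragColor = vec4(color.rgb * color.a, color.a);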