Shader - Color blending - OpenGL

I would like to know how to blend colors in a specific way.
Let's imagine that I have a color (A) and another color (B).
I would like to blend them in such a way that if I choose white for (B), then the output color is (A), but if I choose any other color for (B), it outputs a blend of (A) and (B).
I've tried additive blending, but it doesn't give the expected result.
I've tried multiplicative blending; it's quite good for a white (B), but it fails for a blue (B) and a red (A).
Any idea how to do that?

With GLSL, the simplest approach is probably to use a branch. If colA and colB are the two vectors (of type vec4) holding your colors A and B:
if (any(lessThan(colB.xyz, vec3(1.0)))) {
    outColor = colB;
} else {
    outColor = colA;
}
Or, if you really want to avoid a branch, you could rely more on built-in functions. For example, using the observation that if all components are in the range [0.0, 1.0], the dot product of the vector with itself is 3.0 for the vector (1.0, 1.0, 1.0), and smaller for all other vectors:
outColor = mix(colB, colA, step(3.0, dot(colB.xyz, colB.xyz)));
You will have to benchmark to find out which of these is faster.
There may be some concern about floating point precision in the comparisons for both variations above. I believe it should be fine, since 1.0 can be represented as a float exactly. But if you do run into problems, you may want to allow for some imprecision by changing the constants that colB is compared against to slightly smaller values.
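For example, a tolerance-adjusted version of the branchless variant could look like this (the 0.001 epsilon is arbitrary, purely illustrative):
outColor = mix(colB, colA, step(3.0 - 0.001, dot(colB.xyz, colB.xyz)));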

To avoid branching in a shader, one technique is to use lerp (linear interpolation). That is, use the would-be conditional variable as the lerp factor, so if it's 0 you get one color and if it's 1 you get the other color.
Be sure to reverse the argument order relative to the ternary, since the second argument is what it blends to when cond = 1. This also lets you blend halfway.
Example:
Instead of
Color result = cond ? A : B;
use:
Color result = lerp(B, A, cond);
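In GLSL the equivalent built-in is mix, with the same argument order (the blend factor comes last); a minimal sketch:
vec4 result = mix(B, A, cond); // cond = 0.0 yields B, cond = 1.0 yields A, 0.5 blends halfway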

Related

WebGL / How to remove the dark edges that appear on the border with the transparent part of the image after applying the warp effect?

Demo: https://codepen.io/Andreslav/pen/wvmjzwe
Scheme: top left - the original. Top right - the result. Bottom right - the result when rounding the coordinates before extracting the color.
The problem can be solved that way, but the result is then less smooth:
coord = floor(coord) + .5;
How can this be done better? Can the average-color calculation be made to ignore the color of transparent pixels?
Maybe there are some settings I haven't figured out.
I have updated the demo. The result is even better after this adjustment:
vec4 color = texture2D(texture, coord / texSize);
vec4 color_ = texture2D(texture, coordNoRound / texSize);
if (color_.a != color.a) {
    color.a *= color_.a;
}
On the preview: bottom left. But this is not an ideal option; the correction is only partial. The problem remains.
This appears to be a premultiplied alpha problem. And it's not so much a GLSL problem as it is a glfx problem.
Here's what happens:
Consider the RGBA values of two adjacent pixels at the edge of your source image. They would be something like this:
[R,   G,   B,   A  ]   [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]   [?, ?, ?, 0]
Meaning that there is a fully opaque, fully-white pixel to the left, and then comes a fully-transparent (A=0) pixel to the right.
But what are the RGB values of a completely transparent pixel?
They are technically ill-defined (this fact is the core problem which needs to be solved). In practice, pretty much every image processing software will put [0, 0, 0] there.
So the pixels are actually like this:
[R,   G,   B,   A  ]   [R, G, B, A]
[1.0, 1.0, 1.0, 1.0]   [0, 0, 0, 0]
What happens if your swirl shader samples the texture halfway between those 2 pixels? You get [0.5, 0.5, 0.5, 0.5]: color [0.5, 0.5, 0.5] with 0.5 alpha, which is gray, not white.
The generally chosen solution to this problem is premultiplied alpha. Which means that, for any given RGBA color, the RGB components are defined so that they don't range from 0 .. 1.0, but instead from 0 .. A. With that definition, color [0.5 0.5 0.5 0.5] is now "0.5 A, with maximum RGB, which is white". One side effect of this definition is that the RGB values of a fully transparent pixel are no longer ill-defined; they must now be exactly [0, 0, 0].
As you can see, we didn't really change any values, instead, we just defined that our result is now correct. Of course, we still need to tell the other parts of the graphics pipeline of our definition.
Premultiplied alpha is not the only solution to the problem. Now that you know what's happening, you might be able to come up with your own solution. But pretty much all modern graphics pipelines expect that you are working with premultiplied alpha all the time. So the correct solution would be to make that true. That means:
(1) You need to make sure that your input texture also has premultiplied alpha, i.e. all its RGB values must be multiplied by their alpha value. This is generally what game engines do; all their textures have premultiplied alpha. Either every pixel is already edited in the source file, or you do the multiplication once per pixel after loading the image (in WebGL this can even be done at texture upload time; see the snippet below).
AND
(2) You need to convince every alpha blending component in your rendering pipeline to use premultiplied alpha blending, instead of "normal" alpha blending. It seems you use the "glfx" framework; I don't know glfx, so I can't say how to make it blend correctly - maybe check the docs. In case you are using raw OpenGL/WebGL, this is the way to tell the pipeline that it should assume premultiplied alpha values when blending:
gl.blendEquation(gl.FUNC_ADD); // Normally not needed because it's the default
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
(This can be derived by analyzing the formula for source-over alpha blending, omitting the final division step.)
The above code tells OpenGL/WebGL that every time it blends two pixels on top of each other, it should calculate the final RGBA values in a way that is correct assuming both the "top" and the "bottom" pixel have premultiplied alpha applied to them.
For higher-level APIs (for example, GDI+), you can typically specify the pixel format of images, with a separation between RGBA and RGBPA, in which case the API will automatically choose the correct blending. That may not be true for glfx though. In essence, you always need to be aware of whether the pixel formats of your textures and drawing surfaces have premultiplied alpha or not; there is no magic code that works correctly everywhere.
(Also note that using premultiplied alpha has other advantages too.)
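In WebGL specifically, point (1) can often be handled at upload time: the browser will premultiply the RGB values by alpha for you when creating the texture from an image. A minimal sketch (this uses the standard WebGL API, though it is not part of the original answer):
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true); // premultiply on upload
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);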
For a quick fix: it appears that the framework you're using performs alpha blending expecting non-premultiplied alpha values. So you could simply undo the premultiplication by adding this at the end:
color.rgb /= color.a;
gl_FragColor = color;
But for correctness, you still need to premultiply the alpha values of your input texture.
This is because, at the rounded corners, your input texture contains pixels which are fully white but semi-transparent; their RGBA values look like this:
[1.0, 1.0, 1.0, 0.8]
For the blending code to work correctly, the values should be
[0.8, 0.8, 0.8, 0.8]
because otherwise the line color.rgb /= color.a; would give you RGB values greater than 1.
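Note also that the quick fix divides by color.a, which is undefined when the sampled alpha is exactly 0. A guarded variant of the same idea (my sketch, not part of the original answer):
if (color.a > 0.0) {
    color.rgb /= color.a; // undo the premultiplication
}
gl_FragColor = color;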

Comparing two textures in OpenGL

I'm new to OpenGL and I'm looking to compare two textures to understand how similar they are to each other. I know how to do this with two bitmap images, but I really need a method to compare two textures.
Question is: is there any way to compare two textures as we compare two images, like comparing them pixel by pixel?
What you seem to be asking for is actually not possible, or at least not as easy to accomplish on the GPU as it would seem. The problem is that the GPU is designed to do as many small tasks as possible in the shortest amount of time. Iterating over an array of data such as pixels is not one of them, so getting back something like a single integer or floating-point value can be a bit hard.
There is one very interesting procedure you may try, though I cannot say whether the result will be appropriate for your case:
You could first create a new texture that is the difference between the two input textures, and then keep downsampling the result until you reach a 1x1 pixel texture, whose single value tells you how different the images are.
To achieve this it is best to use a fixed-size target buffer which is POT (power of two), for instance 256x256; if you didn't use a fixed size, the result could vary a lot with the image sizes.
So in the first pass you would draw the two textures into a third one (using an FBO - frame buffer object). The shader you would use is simply:
vec4 a = texture2D(iChannel0,uv);
vec4 b = texture2D(iChannel1,uv);
fragColor = abs(a-b);
So now you have a texture which represents the difference between the two images per pixel, per color component. If the two images are identical, the result is a totally black picture.
Now you need to create a new FBO which is scaled by half in every dimension, i.e. 128x128 in this example. Drawing to this buffer must use GL_NEAREST as the texture parameter so that no interpolation is done on the texel fetches. Then for each new pixel, sum the 4 nearest pixels of the source image:
vec2 originalTextCoord = varyingTextCoord;
vec2 textCoordRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y);
vec2 textCoordBottom = vec2(varyingTextCoord.x, varyingTextCoord.y + 1.0/256.0);
vec2 textCoordBottomRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y + 1.0/256.0);
fragColor = texture2D(iChannel0, originalTextCoord) +
            texture2D(iChannel0, textCoordRight) +
            texture2D(iChannel0, textCoordBottom) +
            texture2D(iChannel0, textCoordBottomRight);
The 256 value is the size of the source texture, so it should come in as a uniform, letting you reuse the same shader for every pass.
After this is drawn you keep dropping down: 64, 32, 16... until you reach 1x1, then read the pixel back to the CPU and see the result.
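A minimal readback sketch for that final step, assuming fbo1x1 is a hypothetical handle to the framebuffer of the last 1x1 pass:
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo1x1); // hypothetical FBO holding the final 1x1 result
var px = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
// px now holds the per-channel difference; all zeros means the images matched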
Now, unfortunately, this procedure may produce very unwanted results. Since the colors are simply summed together, it will overflow for any pair of images that is not similar enough (resulting in a white pixel, or rather (1,1,1,0) where not transparent). This can be mitigated by applying a scale in the first shader pass, dividing the output by a large enough value; that still might not suffice, and you may need to average in the second shader as well (multiply all the texture2D calls by .25).
In the end the result might still be a bit strange. You get 4 color components on the CPU which represent the sum (or the average) of the image differential. You could sum them up and pick a threshold for when you consider the images alike. But if you want the result to be more meaningful, you might want to treat the whole pixel as a single 32-bit floating-point value (this is a bit tricky, but you can find answers on Stack Overflow). That way you can compute the values without overflow and get quite exact results from the algorithm: write the floating-point value as if it were a color, starting with the first shader output and continuing through every subsequent draw call (fetch the texel, convert it to float, sum it, convert it back to vec4 and assign it as output). GL_NEAREST is essential here.
If not, you may simplify the procedure: use GL_LINEAR instead of GL_NEAREST and simply keep redrawing the differential texture until it reaches a single pixel in size (no need for the 4 coordinates). This produces a pixel which represents the average of all the pixels in the differential texture, i.e. the average difference between the pixels of the two images. This procedure should also be quite fast.
If you then want a slightly smarter algorithm, you can do some wonders when creating the differential texture. Simply subtracting the colors may not be the best approach; it can make more sense to blur one of the images and then compare it to the other. This loses precision for images that are very similar, but for everything else it gives a much better result. For instance, you could say you are only interested when a pixel differs by more than 30% from the corresponding pixel of the other (blurred) image, so you would discard and rescale that 30% for every component, such as: result.r = clamp(abs(a.r - b.r) - 30.0/100.0, 0.0, 1.0) / ((100.0 - 30.0)/100.0);
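Written out for all three color components of the differential shader (0.3 being the 30% threshold; this is just the same formula vectorized):
vec3 diff = abs(a.rgb - b.rgb);                         // a = original, b = blurred image
vec3 scaled = clamp(diff - vec3(0.3), 0.0, 1.0) / 0.7;  // discard differences under 30%, rescale the rest
fragColor = vec4(scaled, 1.0);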
You can bind both textures to a shader and visit each pixel by drawing a full-screen quad, like this:
// Equal pixels are marked green. Different pixels are shown in red.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 a = texture2D(iChannel0, uv);
    vec4 b = texture2D(iChannel1, uv);
    if (a != b)
        fragColor = vec4(1, 0, 0, 1);
    else
        fragColor = vec4(0, 1, 0, 1);
}
You can test the shader on Shadertoy.
Alternatively, you can bind both textures to a compute shader and visit every pixel by iteration.
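A minimal sketch of that compute-shader idea, assuming OpenGL 4.3+ (the binding points and counter setup are illustrative, not from the original answer):
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba8, binding = 0) readonly uniform image2D imgA;
layout(rgba8, binding = 1) readonly uniform image2D imgB;
layout(binding = 0) uniform atomic_uint diffCount; // read back on the CPU afterwards

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size = imageSize(imgA);
    if (p.x >= size.x || p.y >= size.y) return; // skip out-of-bounds invocations
    // count every texel where the two images differ
    if (imageLoad(imgA, p) != imageLoad(imgB, p))
        atomicCounterIncrement(diffCount);
}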
Note that in GLSL, a != b on vectors is valid and yields a single bool (true if any component differs). If you want the explicit component-wise form, you can instead use
if (any(notEqual(a, b)))
Check the GLSL language spec for the details.

Should the gl_FragColor value be normalized?

I am writing a Phong lighting shader and I have a hard time deciding whether the value I pass to gl_FragColor should be normalized or not.
If I use normalized values, the lighting is a bit weird. For example, an object far away from a light source (un-lighted) would have its color determined by the sum of the emissive component, the ambient component and the global ambient light. Let us say that adds up to (0.3, 0.3, 0.3). Normalizing this gives roughly (0.57, 0.57, 0.57), which is quite a bit more luminous than what I'm expecting.
However, if I use non-normalized values, the specular areas of close objects get really, really bright and I have to make sure I generally use low values for my material constants.
As a note, I am normalizing only the RGB components; the alpha component is always 1.
I am a bit stumped and could not find anything related to this - either that, or my searches were completely off.
No. Normalizing the color creates an interesting effect, but I think you don't really want it most, if not all, of the time.
Normalizing the color output causes a loss of information, even though it may seem to provide greater detail to a scene in some cases. If all your fragments have their color normalized, it means that all RGB vectors have a norm of 1. This means there are colors that simply cannot exist in your output: white (norm = sqrt(3)), bright colors such as yellow (norm = sqrt(2)), dark colors such as dark red (norm(0.5, 0.0, 0.0) = 0.5), etc. Another issue you may face is normalizing the zero vector (i.e. black).
For another way to understand why color normalization is wrong, think about the less general case of rendering a grayscale image. As there is only one color component, normalization makes no sense at all, since it would turn every color into 1.0.
The problem with using the values without normalization arises from the fact that your output image has its color values clamped to a fixed interval: [0, 255] or [0.0, 1.0]. As the specular parts of your object reflect more light than those that only reflect diffuse light, the computed color value may well exceed (1.0, 1.0, 1.0) and get clamped to white over most of the specular area; those areas then become, perhaps, too bright.
A simple solution would be to lower the material constants or the light intensity. You could go one step further and make sure that the material constants and light intensities are chosen such that the computed color value cannot exceed (1.0, 1.0, 1.0). The same result could be achieved with a simple division of the computed color value if consistent values are used for all the materials and all the lights in the scene, but that is kind of overkill, as the scene would probably end up too dark.
The more complex, but better looking solution involves HDR rendering and exposure filters such as bloom to obtain more photo-realistic images. This basically means rendering the scene into a float buffer which can handle a greater range than the [0, 255] RGB buffer, then simulating the camera or human eye behavior of adapting to a certain light intensity and the image artefacts caused by this mechanism (i.e. bloom).
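As a minimal illustration of the tone-mapping step at the end of such a pipeline (hdrBuffer is a hypothetical floating-point texture holding the rendered scene; Reinhard mapping is just one common choice, not something the answer above prescribes):
varying vec2 uv;
uniform sampler2D hdrBuffer; // hypothetical float texture with the HDR scene
void main() {
    vec3 hdr = texture2D(hdrBuffer, uv).rgb;
    vec3 mapped = hdr / (hdr + vec3(1.0)); // Reinhard: compresses [0, inf) into [0, 1)
    gl_FragColor = vec4(mapped, 1.0);
}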

Precision loss with mod in GLSL

I'm repeating a texture in the vertex shader (for storage, not for repeating on the spot). Is this the right way? I seem to lose precision somewhere.
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
ADDED: I then save (store) the texcoords in the color, write them to a texture, and later use that texture again. When I retrieve the color from the texture, I recover the texcoords and use them to apply a texture in a postprocess. There's a reason I want it this way, which I won't go into. I understand that the texcoords will be limited by the color's precision; that is alright, as my texture is 256 in width and height.
I know normally I would set the texcoords higher than 1.0 with glTexCoord2f to repeat (using GL_REPEAT), but I am using a model loader which I am too lazy to edit, as I think it is not necessary / not the easiest way.
There are (at least) two ways in which this could go wrong:
Firstly yes, you will lose precision. You are essentially taking the fractional part of a floating point number, after scaling it up. This essentially throws some of the number away.
Secondly, this probably won't work anyway, not for most typical uses. You are trying to tile a texture per-vertex, but the texture is interpolated across a polygon. So this technique could tile the texture differently on different vertices of the same polygon, resulting in a bit of a mess.
i.e.
If vertex1 has a U of 1.5 (after scaling), and vertex2 has a U of 2.2, then you expect the interpolation to give increasing values between those points, with the half-way point having a U of 1.85.
If you take the modulo at each vertex, you will have a U of 0.5, and a U of 0.2 respectively, resulting in a decreasing U, and a half-way point with a U of 0.35...
Textures can be tiled just by enabling tiling on the texture/sampler and using coordinates outside the range 0->1. If you really want to increase sampling accuracy and have a large amount of tiling, you need to wrap the UV coordinates uniformly across whole polygons, rather than per-vertex. i.e. do it in your data, not in the vertex shader.
For your case, where you're trying to output the UV coordinates into a buffer for some later purpose, you could clamp/wrap the UVs in the pixel shader instead. So multiply up the UV in the vertex shader, interpolate it across the polygon correctly, and then apply the modulo only when writing to the buffer, as sketched below.
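A minimal sketch of that split, reusing the variable names from the question (fract(x) is equivalent to mod(x, 1.0)):
// vertex shader: scale only, do NOT wrap here
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy * 10.0;

// fragment shader: wrap after interpolation, when writing to the buffer
vec2 wrapped = fract(texcoordC);
gl_FragColor = vec4(wrapped, 0.0, 1.0);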
However I still think you'll have precision issues as you're losing all the sub-pixel information. Whether or not that's a problem for the technique you're using, I don't know.

GL_UNSIGNED_BYTE

I've got colors as GL_UNSIGNED_BYTE r, g, b values, but I want to use the alpha channel to hold a custom value that will be used inside the pixel shader to color the geometry differently.
There are two possible values, 0 and 127. Now my problem is that when I do this in the vertex shader:
[vertex]
varying float factor;
factor = gl_Color.w;
it seems that the factor is always 1.0 because if I do this:
[fragment]
varying float factor;
gl_FragColor = vec4(factor, 0.0, 0.0, 1.0);
The output is always red, whereas I would expect two different colors: one when the factor is 0 and one when the factor is 127.
So if I assign the two values 0 and 127, should I get 0 and ~0.5 in the vertex shader? Is this correct?
[Edit]
Ok, I see two different values now, but I don't know why I get them. Is there any operation the GPU performs on the gl_Color.w component that I am not aware of?
[Edit2]
As Nicholas has pointed out, I am using glColorPointer(4...);
Since you are using the gl_Color input, and you make reference to GL_UNSIGNED_BYTE, I surmise that you are using glColorPointer to specify these color values. So in your code, you're calling something to the effect of:
glColorPointer(4, GL_UNSIGNED_BYTE, ...);
(BTW, in the future, it would be best if you actually provided this information, rather than forcing us to deduce it.)
So, first issue: are you actually using 4 for the number of color components? You should be, but you don't say one way or the other.
Now that this has been corrected, let's get to the real issue (or at least what was the issue in the original form of the question):
factor = factor / 127.0; // (to normalize)
OpenGL already normalizes that for you. If you use any integer type with glColorPointer, the values you get in gl_Color will be normalized: to [0, 1] for UNSIGNED types, or to [-1, 1] for signed types.
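So with bytes 0 and 127 in the alpha channel, gl_Color.w arrives in the shader as 0.0 and 127.0/255.0 ≈ 0.498. A minimal fragment-shader sketch of branching on that (the 0.25 threshold is arbitrary, chosen for illustration):
varying float factor;
void main() {
    // factor is already normalized: byte 0 -> 0.0, byte 127 -> ~0.498
    if (factor > 0.25) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // came from byte value 127
    } else {
        gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0); // came from byte value 0
    }
}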