How to implement my own blend function? - C++

I want to implement the following blend function in my program which isn't using OpenGL.
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
In an OpenGL real-time test application I was able to blend colors on a white background with this function. The blended result should look like http://postimg.org/image/lwr9ossen/.
I have a white background and want to blend red points over it. A high density of red points should become opaque / black.
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
for (many many many times)
{
    glColor4f(0.75f, 0.0f, 0.1f, 0.85f);
    DrawPoint(..);
}
I tried something, but I had no success.
Does anyone have the equation for this blend function?

The blend function should translate directly to the operations you need if you want to implement the whole thing in your own code.
The first argument specifies a scaling factor for your source color, i.e. the color you're drawing the pixel with.
The second argument specifies a scaling factor for your destination color, i.e. the current color value at the pixel position in the output image.
These two terms are then added, and the result is written to the output image.
GL_DST_COLOR corresponds to the color in the destination, which is the output image.
GL_ONE_MINUS_SRC_ALPHA is 1.0 minus the alpha component of the pixel you are rendering.
Putting this all together, with (colR, colG, colB, colA) the color of the pixel you are rendering, and (imgR, imgG, imgB) the current color in the output image at the pixel position:
GL_DST_COLOR = (imgR, imgG, imgB)
GL_ONE_MINUS_SRC_ALPHA = (1.0 - colA, 1.0 - colA, 1.0 - colA)
result = GL_DST_COLOR * (colR, colG, colB) + GL_ONE_MINUS_SRC_ALPHA * (imgR, imgG, imgB)
       = (imgR, imgG, imgB) * (colR, colG, colB) + (1.0 - colA, 1.0 - colA, 1.0 - colA) * (imgR, imgG, imgB)
       = (imgR * colR + (1.0 - colA) * imgR,
          imgG * colG + (1.0 - colA) * imgG,
          imgB * colB + (1.0 - colA) * imgB)
This is the color you write to your image as the result of rendering the pixel.
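Putting that into code, here is a minimal C++ sketch of the same blend, assuming colors are stored as floats in [0, 1] and clamped like OpenGL's fixed-function pipeline (the struct and function names are just placeholders for illustration):
#include <algorithm>
struct RGBA { float r, g, b, a; };
// Software equivalent of glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA):
// result = dst_color * src_color + (1 - src_alpha) * dst_color
RGBA blendDstColorOneMinusSrcAlpha(const RGBA& src, const RGBA& dst)
{
    auto clamp01 = [](float v) { return std::min(1.0f, std::max(0.0f, v)); };
    RGBA out;
    out.r = clamp01(dst.r * src.r + (1.0f - src.a) * dst.r);
    out.g = clamp01(dst.g * src.g + (1.0f - src.a) * dst.g);
    out.b = clamp01(dst.b * src.b + (1.0f - src.a) * dst.b);
    // for the alpha channel, the GL_DST_COLOR factor means the destination alpha
    out.a = clamp01(dst.a * src.a + (1.0f - src.a) * dst.a);
    return out;
}
Call this for every point you draw, with src set to your point color and dst set to the pixel currently stored in your image, then write the result back to the image.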

Related

Alpha blending between two objects

I have two objects in my scene, a rectangle and a circle.
The rectangle is at 1 unit on the z axis and the circle is at 0 units on the z axis.
The rectangle has an opacity of 50 and the circle has an opacity of 100.
Why is the alpha of the rectangle reducing the alpha of the circle even though the circle has an opacity of 100?
This is what the alpha looks like.
This is the blend mode I am using.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
If you want a different equation for the RGB color than for the alpha, you can use:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA, GL_ONE);
A possible explanation is that the rectangle is "darker" than the circle.
When alpha blending is set by
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Then the formula for the final color is
color_dest = color_dest * (1-alpha_source) + color_source * alpha_source
Let's assume the color of the circle is 1.0 and the alpha channel of the circle is 1.0, too. The circle is drawn first. The content of the target buffer is black (0, 0, 0, 0).
When the circle is drawn then blending is applied:
color_dest = color_dest * (1-alpha_source) + color_source * alpha_source
(1, 1, 1, 1) = (0, 0, 0, 0) * (1 - 1.0) + (1, 1, 1, 1) * 1.0
The rectangle has a color of 0.5 and an alpha channel of 0.5, too. Again blending is applied:
color_dest = color_dest * (1-alpha_source) + color_source * alpha_source
(0.75, 0.75, 0.75, 0.75) = (1, 1, 1, 1) * (1 - 0.5) + (0.5, 0.5, 0.5, 0.5) * 0.5
So the final color at the fragments where the rectangle covers the circle is (0.75, 0.75, 0.75, 0.75). The "darker" rectangle darkens the circle.
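If you want to double-check the arithmetic, a tiny CPU-side sketch of those two blend steps (the function name is made up for illustration) gives the same 0.75:
#include <cstdio>
// one channel of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
float blendOver(float dest, float src, float srcAlpha)
{
    return dest * (1.0f - srcAlpha) + src * srcAlpha;
}
int main()
{
    float dest = 0.0f;                   // cleared, black target buffer
    dest = blendOver(dest, 1.0f, 1.0f);  // circle: color 1.0, alpha 1.0 -> 1.0
    dest = blendOver(dest, 0.5f, 0.5f);  // rectangle: color 0.5, alpha 0.5 -> 0.75
    std::printf("%f\n", dest);           // prints 0.750000
}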

OpenGL ReadPixels (Screenshot) Alpha

I'm using a Tile Rendering (using glReadPixels) setup to take screenshots of a scene. My output looks correct:
But looking at the alpha channel the pixels are still transparent even if the transparent texture is rendered on top of an opaque texture.
In particular the "inside" of the car should be completely opaque where we see the ceiling, but partially transparent when going through the back windows.
Is there a method for testing the alpha component of each texture at a pixel, not just the closest one?
I think you got some useful direction from the comments and the other answer, but not the solution in full detail. Instead of just giving the result, let me walk through it, so that you know how to figure it out yourself next time.
I'm assuming that you draw your translucent objects back to front, using a GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. You don't directly mention that, but it's fairly standard, and consistent with what you're seeing. The correct solution will also need an alpha component in the framebuffer, but you have that already, otherwise you would get nothing when reading back the alpha.
To illustrate the whole process, I'm going to use two examples on the way. I'll only list the (R, A) components for the colors, G and B would behave just like R.
Case 1: Draw a layer with color (R1, 1.0), then a layer with (R2, 0.4) on top of it.
Case 2: Draw a layer with color (R1, 0.5), then a layer with (R2, 0.4) on top of it.
The background color is (Rb, 0.0), you always want to clear with an alpha value of 0.0 for this kind of blending.
First, let's calculate the result we want to achieve for the colors:
For case 1, drawing the first layer completely covers the background, since it has alpha = 1.0. Then we blend the second layer on top of it. Since it has alpha = 0.4, we keep 60% of the first layer, and add 40% of the second layer. So the color we want is
0.6 * R1 + 0.4 * R2
For case 2, drawing the first layer keeps 50% of the background, since it has alpha = 0.5. So the color so far is
0.5 * Rb + 0.5 * R1
Then we blend the second layer on top of it. Again we keep 60% of the previous color, and add 40% of the second layer. So the color we want is
0.6 * (0.5 * Rb + 0.5 * R1) + 0.4 * R2
= 0.3 * Rb + 0.3 * R1 + 0.4 * R2
Now, let's figure out what we want the result for alpha to be:
For case 1, our first layer was completely opaque. One way of looking at opacity is as a measure of what fraction of light is absorbed by the object. Once we have a layer that absorbs all the light, anything else we render will not change that. Our total alpha should be
1.0
For case 2, we have one layer that absorbs 50% of the light, and one that absorbs 40% of the remaining light. Since 40% of 50% is 20%, a total of 70% is absorbed. Our total alpha should be
0.7
Using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for blending gives you the desired result for the color. But as you noticed, not for alpha. Doing the calculation on the example:
Case 1: Drawing layer 1, SRC_ALPHA is 1.0, the source value is S = (R1, 1.0) and the destination value is D = (Rb, 0.0). So the blend function evaluates as
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 1.0 * (R1, 1.0) + 0.0 * (Rb, 0.0)
= (R1, 1.0)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (R2, 0.4). Evaluating with 0.4 for SRC_ALPHA gives
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.4 * (R2, 0.4) + 0.6 * (R1, 1.0)
= (0.4 * R2 + 0.6 * R1, 0.76)
Case 2: Drawing layer 1, SRC_ALPHA is 0.5, the source value is S = (R1, 0.5) and the destination value is D = (Rb, 0.0). So the blend function evaluates as
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.5 * (R1, 0.5) + 0.5 * (Rb, 0.0)
= (0.5 * R1 + 0.5 * Rb, 0.25).
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (R2, 0.4). Evaluating with 0.4 for SRC_ALPHA gives
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.4 * (R2, 0.4) + 0.6 * (0.5 * R1 + 0.5 * Rb, 0.25)
= (0.4 * R2 + 0.3 * R1 + 0.3 * Rb, 0.31).
So we confirmed what you already knew: We get the desired colors, but the wrong alphas. How do we fix this? We need a different blend function for alpha. Fortunately OpenGL has glBlendFuncSeparate(), which allows us to do exactly that. All we need to figure out is what blend function to use for alpha. Here is the thought process:
Let's say we already rendered some translucent objects, with a total alpha of A1, which is stored in the framebuffer. What we rendered so far absorbs a fraction A1 of the total light, and lets a fraction 1.0 - A1 pass through. We render another layer with alpha A2 on top of it. This layer absorbs a fraction A2 of the light that passed through before, so it absorbs an additional (1.0 - A1) * A2 of all the light. We need to add this to the amount of light that was already absorbed, so that a total of (1.0 - A1) * A2 + A1 is now absorbed.
All that's left to do is translate that into an OpenGL blend equation. A2 is the source value S, and A1 the destination value D. So our desired alpha result becomes
(1.0 - A1) * A2 + A1
= (1.0 - A1) * S + 1.0 * D
What I called A1 is the alpha value in the framebuffer, which is referred to as DST_ALPHA in the blend function specification. So we use ONE_MINUS_DST_ALPHA to match our source multiplier of 1.0 - A1. We use GL_ONE to match the destination multiplier 1.0.
So the blend function parameters for alpha are (GL_ONE_MINUS_DST_ALPHA, GL_ONE), and the complete blend function call is:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE_MINUS_DST_ALPHA, GL_ONE);
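For completeness, a sketch of how the whole setup might look in the application code (assuming back-to-front rendering of the translucent objects and an RGBA framebuffer, as discussed above):
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);   // clear alpha must be 0.0, as noted above
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
// RGB: standard "over" blending; alpha: accumulate opacity
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE_MINUS_DST_ALPHA, GL_ONE);
// ... draw translucent objects back to front ...
// glReadPixels(..., GL_RGBA, ...) then returns the accumulated alpha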
We can double check the math for alpha with the examples one more time:
Case 1: Drawing layer 1, DST_ALPHA is 0.0, the source value is S = (.., 1.0) and the destination value is D = (.., 0.0). So the blend function evaluates as
ONE_MINUS_DST_ALPHA * S + ONE * D
= 1.0 * (.., 1.0) + 1.0 * (.., 0.0)
= (.., 1.0)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (.., 0.4), and DST_ALPHA is now 1.0. Evaluating the blend equation for layer 2 gives
ONE_MINUS_DST_ALPHA * S + ONE * D
= 0.0 * (.., 0.4) + 1.0 * (.., 1.0)
= (.., 1.0)
We got the desired alpha value of 1.0!
Case 2: Drawing layer 1, DST_ALPHA is 0.0, the source value is S = (.., 0.5) and the destination value is D = (.., 0.0). So the blend function evaluates as
ONE_MINUS_DST_ALPHA * S + ONE * D
= 1.0 * (.., 0.5) + 1.0 * (.., 0.0)
= (.., 0.5)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (.., 0.4), and DST_ALPHA is now 0.5. Evaluating the blend equation for layer 2 gives
ONE_MINUS_DST_ALPHA * S + ONE * D
= 0.5 * (.., 0.4) + 1.0 * (.., 0.5)
= (.., 0.7)
We got the desired alpha value of 0.7!
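The same check can be run on the CPU; this small sketch (illustrative names only) simulates just the alpha part of the separate blend function for both cases:
#include <cassert>
// alpha part of glBlendFuncSeparate(..., GL_ONE_MINUS_DST_ALPHA, GL_ONE)
float blendAlpha(float dstAlpha, float srcAlpha)
{
    return (1.0f - dstAlpha) * srcAlpha + 1.0f * dstAlpha;
}
int main()
{
    // Case 1: layers with alpha 1.0, then 0.4, over a buffer cleared to 0.0
    float a1 = blendAlpha(blendAlpha(0.0f, 1.0f), 0.4f);
    assert(a1 == 1.0f);
    // Case 2: layers with alpha 0.5, then 0.4
    float a2 = blendAlpha(blendAlpha(0.0f, 0.5f), 0.4f);
    assert(a2 > 0.69f && a2 < 0.71f);    // 0.7, up to float rounding
}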
Is there a method for testing the alpha component of each texture at a pixel, not just the closest one?
No. OpenGL stores one RGBA value for each pixel; there's no way to get the previous values (as that would need a ton of RAM).
What alpha values get written to the framebuffer depend on your alpha blending equation, which you can set with glBlendFunc or glBlendFuncSeparate. See the blending page on the OpenGL wiki for more info, and this JavaScript app lets you see the effects of various blending modes.

OpenGL - blending

I have raycasting with this kind of color blending in a for loop in the pixel shader (PS):
actual = <some color loaded from texture>;
actual.a *= 0.05; //reduce the alpha to have a more transparent result
//Front to back blending
actual.rgb *= actual.a;
last = (1.0 - last.a) * actual + last;
Can this equation be rewritten to use OpenGL 3 blending functions? The goal is to remove the loop from the PS by rendering more quads over each other.
So far I am using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but the result looks different.
EDIT:
last = accumulated color (aka the final color)
actual = current color from texture
The main problem is that you still have to premultiply the source color with the alpha value in your shader (actual.rgb *= actual.a).
I think for blending you have to use this function:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) - by using this, you get a formula like:
last.rgb = actual.a * actual.rgb + ( 1.0 - actual.a ) * last.rgb;
It's completely different from your shader formula:
actual.rgb *= actual.a;
last = (1.0 - last.a) * actual + last;
So you get a different result.
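In other words, if the shader keeps the premultiplication (actual.rgb *= actual.a) and the target buffer is cleared to zero, a setup along these lines should reproduce the front-to-back formula (a sketch, not tested against your exact pipeline):
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
// result = (1 - dst.a) * src + 1 * dst, for both RGB and alpha,
// which matches: last = (1.0 - last.a) * actual + last
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
// ... draw the quads front to back; each fragment outputs
//     vec4(actual.rgb * actual.a, actual.a) ...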

OpenGL tune from one color to another

I have made a small particle system. What I would now like to do is have the particles fade from one color to another during their lifetime, for example from black to white or yellow to red.
I use the glColor() functions to set the color of the particle.
How do I do this?
You have to blend the colors yourself:
calculate a blend factor between 0 and 1 and mix the colors:
float blend = lifeTime / maxLifeTime;   // 0.0 at the start of the particle's life, 1.0 at the end
float red   = (destRed   * blend) + (srcRed   * (1.0 - blend));
float green = (destGreen * blend) + (srcGreen * (1.0 - blend));
float blue  = (destBlue  * blend) + (srcBlue  * (1.0 - blend));
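For example, fading a particle from yellow to red over its lifetime could look roughly like this (the struct and field names are placeholders, assuming the immediate-mode rendering from the question):
struct Particle {
    float lifeTime, maxLifeTime;
    float srcRed, srcGreen, srcBlue;     // start color, e.g. yellow (1, 1, 0)
    float destRed, destGreen, destBlue;  // end color,   e.g. red    (1, 0, 0)
};
void setParticleColor(const Particle& p)
{
    float blend = p.lifeTime / p.maxLifeTime;
    float r = p.destRed   * blend + p.srcRed   * (1.0f - blend);
    float g = p.destGreen * blend + p.srcGreen * (1.0f - blend);
    float b = p.destBlue  * blend + p.srcBlue  * (1.0f - blend);
    glColor3f(r, g, b);   // then draw the particle's point/quad as before
}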
regards
ron

Why does this OpenGL shader use texture coordinates beyond 1.0?

I'm trying to get familiar with shaders in OpenGL. Here is some sample code that I found (working with openFrameworks). The code simply blurs an image in two passes, first horizontally, then vertically. Here is the code for the horizontal shader. My only confusion is the texture coordinates. They exceed 1.
void main( void )
{
vec2 st = gl_TexCoord[0].st;
//horizontal blur
//from http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/
vec4 color;
color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -4.0, 0));
color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -3.0, 0));
color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -2.0, 0));
color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -1.0, 0));
color += 5.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt , 0));
color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 1.0, 0));
color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 2.0, 0));
color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 3.0, 0));
color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 4.0, 0));
color /= 5.0;
gl_FragColor = color;
}
I can't make heads or tails out of this code. Texture coordinates are supposed to be between 0 and 1, and I've read a bit about what happens when they're greater than 1, but that's not the behavior I'm seeing (or I don't see the connection). blurAmnt varies between 0.0 and 6.4, so s can go from 0 to 25.6. The image just gets blurred more or less depending on the value, I don't see any repeating patterns.
My question boils down to this: what exactly is happening when the texture coordinate argument in the call to texture2DRect exceeds 1? And why does the blurring behavior still function perfectly despite this?
The [0, 1] texture coordinate range only applies to the GL_TEXTURE_2D texture target. Since that code uses texture2DRect (and a samplerRect), it's using the GL_TEXTURE_RECTANGLE_ARB texture target, and this target uses unnormalized texture coordinates, in the range [0, width] x [0, height].
That's why you have "weird" texture coords. Don't worry, they work fine with this texture target.
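As a rough illustration, sampling a rectangle texture at pixel coordinates (x, y) corresponds to sampling an equivalent GL_TEXTURE_2D at (x / width, y / height); the host-side setup for the rectangle target might look like this (tex, width, height and pixels are assumed to exist):
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// texture coordinates are then passed in pixels,
// e.g. (width, height) for the far corner instead of (1, 1)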
Depends on the host code. If you saw something like
glTexParameteri (GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
then out-of-bounds values in the s dimension will be zeros, IIRC. Similarly for t.