I'm using a tile-rendering setup (reading back with glReadPixels) to take screenshots of a scene. My output looks correct:
But looking at the alpha channel, the pixels are still transparent even where a transparent texture is rendered on top of an opaque texture.
In particular the "inside" of the car should be completely opaque where we see the ceiling, but partially transparent when going through the back windows.
Is there a method for testing the alpha component of each texture at a pixel, not just the closest one?
I think you got some useful direction from the comments and the other answer, but not the solution in full detail. Instead of just giving the result, let me walk through it, so that you know how to figure it out yourself next time.
I'm assuming that you draw your translucent objects back to front, using a GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend function. You don't directly mention that, but it's fairly standard, and consistent with what you're seeing. The correct solution will also need an alpha component in the framebuffer, but you have that already, otherwise you would get nothing when reading back the alpha.
To illustrate the whole process, I'm going to use two examples on the way. I'll only list the (R, A) components for the colors, G and B would behave just like R.
Draw a layer with color (R1, 1.0), then a layer with (R2, 0.4) on top of it.
Draw a layer with color (R1, 0.5), then a layer with (R2, 0.4) on top of it.
The background color is (Rb, 0.0), you always want to clear with an alpha value of 0.0 for this kind of blending.
First, let's calculate the result we want to achieve for the colors:
For case 1, drawing the first layer completely covers the background, since it has alpha = 1.0. Then we blend the second layer on top of it. Since it has alpha = 0.4, we keep 60% of the first layer, and add 40% of the second layer. So the color we want is
0.6 * R1 + 0.4 * R2
For case 2, drawing the first layer keeps 50% of the background, since it has alpha = 0.5. So the color so far is
0.5 * Rb + 0.5 * R1
Then we blend the second layer on top of it. Again we keep 60% of the previous color, and add 40% of the second layer. So the color we want is
0.6 * (0.5 * Rb + 0.5 * R1) + 0.4 * R2
= 0.3 * Rb + 0.3 * R1 + 0.4 * R2
Now, let's figure out what we want the result for alpha to be:
For case 1, our first layer was completely opaque. One way of looking at opacity is as a measure of what fraction of light is absorbed by the object. Once we have a layer that absorbs all the light, anything else we render will not change that. Our total alpha should be
1.0
For case 2, we have one layer that absorbs 50% of the light, and one that absorbs 40% of the remaining light. Since 40% of 50% is 20%, a total of 70% is absorbed. Our total alpha should be
0.7
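As a sanity check, both target results can be reproduced with a quick CPU-side sketch of the "over" compositing rule (plain Python here, with arbitrary values standing in for Rb, R1, R2):

```python
def over(src, dst):
    """Composite src over dst, back-to-front; each value is a (color, alpha) pair."""
    sc, sa = src
    dc, da = dst
    # Color: keep (1 - sa) of what's behind, add sa of the new layer.
    # Alpha: the new layer absorbs sa of the light the layers behind let through.
    return (sa * sc + (1.0 - sa) * dc, sa + (1.0 - sa) * da)

rb, r1, r2 = 0.9, 0.2, 0.6  # arbitrary stand-ins for Rb, R1, R2

case1 = over((r2, 0.4), over((r1, 1.0), (rb, 0.0)))  # alpha -> 1.0
case2 = over((r2, 0.4), over((r1, 0.5), (rb, 0.0)))  # alpha -> 0.7
```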
Using GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA for blending gives you the desired result for the color. But as you noticed, not for alpha. Doing the calculation on the example:
Case 1: Drawing layer 1, SRC_ALPHA is 1.0, the source value is S = (R1, 1.0) and the destination value is D = (Rb, 0.0). So the blend function evaluates as
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 1.0 * (R1, 1.0) + 0.0 * (Rb, 0.0)
= (R1, 1.0)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (R2, 0.4). Evaluating with 0.4 for SRC_ALPHA gives
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.4 * (R2, 0.4) + 0.6 * (R1, 1.0)
= (0.4 * R2 + 0.6 * R1, 0.76)
Case 2: Drawing layer 1, SRC_ALPHA is 0.5, the source value is S = (R1, 0.5) and the destination value is D = (Rb, 0.0). So the blend function evaluates as
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.5 * (R1, 0.5) + 0.5 * (Rb, 0.0)
= (0.5 * R1 + 0.5 * Rb, 0.25).
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (R2, 0.4). Evaluating with 0.4 for SRC_ALPHA gives
SRC_ALPHA * S + ONE_MINUS_SRC_ALPHA * D
= 0.4 * (R2, 0.4) + 0.6 * (0.5 * R1 + 0.5 * Rb, 0.25)
= (0.4 * R2 + 0.3 * R1 + 0.3 * Rb, 0.31).
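These two calculations are easy to replay on the CPU; a small sketch (plain Python, arbitrary stand-in values) that applies GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA to every channel ends up with exactly the wrong alphas computed above:

```python
def blend_src_alpha(src, dst):
    """glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applied to color AND alpha."""
    sc, sa = src
    dc, da = dst
    return (sa * sc + (1.0 - sa) * dc,   # color channel: correct
            sa * sa + (1.0 - sa) * da)   # alpha channel: wrong

rb, r1, r2 = 0.9, 0.2, 0.6  # arbitrary stand-ins for Rb, R1, R2

case1 = blend_src_alpha((r2, 0.4), blend_src_alpha((r1, 1.0), (rb, 0.0)))
case2 = blend_src_alpha((r2, 0.4), blend_src_alpha((r1, 0.5), (rb, 0.0)))
print(round(case1[1], 2), round(case2[1], 2))  # 0.76 0.31
```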
So we confirmed what you already knew: We get the desired colors, but the wrong alphas. How do we fix this? We need a different blend function for alpha. Fortunately OpenGL has glBlendFuncSeparate(), which allows us to do exactly that. All we need to figure out is what blend function to use for alpha. Here is the thought process:
Let's say we already rendered some translucent objects, with a total alpha of A1, which is stored in the framebuffer. What we rendered so far absorbs a fraction A1 of the total light, and lets a fraction 1.0 - A1 pass through. We render another layer with alpha A2 on top of it. This layer absorbs a fraction A2 of the light that passed through before, so it absorbs an additional (1.0 - A1) * A2 of all the light. We need to add this to the amount of light that was already absorbed, so that a total of (1.0 - A1) * A2 + A1 is now absorbed.
All that's left to do is translate that into an OpenGL blend equation. A2 is the source value S, and A1 the destination value D. So our desired alpha result becomes
(1.0 - A1) * A2 + A1
= (1.0 - A1) * S + 1.0 * D
What I called A1 is the alpha value in the framebuffer, which is referred to as DST_ALPHA in the blend function specification. So we use ONE_MINUS_DST_ALPHA to match our source multiplier of 1.0 - A1. We use GL_ONE to match the destination multiplier 1.0.
So the blend function parameters for alpha are (GL_ONE_MINUS_DST_ALPHA, GL_ONE), and the complete blend function call is:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
GL_ONE_MINUS_DST_ALPHA, GL_ONE);
We can double check the math for alpha with the examples one more time:
Case 1: Drawing layer 1, DST_ALPHA is 0.0, the source value is S = (.., 1.0) and the destination value is D = (.., 0.0). So the blend function evaluates as
ONE_MINUS_DST_ALPHA * S + ONE * D
= 1.0 * (.., 1.0) + 1.0 * (.., 0.0)
= (.., 1.0)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (.., 0.4), and DST_ALPHA is now 1.0. Evaluating the blend equation for layer 2 gives
ONE_MINUS_DST_ALPHA * S + ONE * D
= 0.0 * (.., 0.4) + 1.0 * (.., 1.0)
= (.., 1.0)
We got the desired alpha value of 1.0!
Case 2: Drawing layer 1, DST_ALPHA is 0.0, the source value is S = (.., 0.5) and the destination value is D = (.., 0.0). So the blend function evaluates as
ONE_MINUS_DST_ALPHA * S + ONE * D
= 1.0 * (.., 0.5) + 1.0 * (.., 0.0)
= (.., 0.5)
This is written to the framebuffer, and becomes the destination value for drawing layer 2. The source for layer 2 is (.., 0.4), and DST_ALPHA is now 0.5. Evaluating the blend equation for layer 2 gives
ONE_MINUS_DST_ALPHA * S + ONE * D
= 0.5 * (.., 0.4) + 1.0 * (.., 0.5)
= (.., 0.7)
We got the desired alpha value of 0.7!
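The same kind of CPU-side sketch, switched to the separate alpha blend (plain Python, arbitrary stand-in values), confirms both channels at once:

```python
def blend_separate(src, dst):
    """glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                           GL_ONE_MINUS_DST_ALPHA, GL_ONE)"""
    sc, sa = src
    dc, da = dst
    return (sa * sc + (1.0 - sa) * dc,    # color: SRC_ALPHA / ONE_MINUS_SRC_ALPHA
            (1.0 - da) * sa + 1.0 * da)   # alpha: ONE_MINUS_DST_ALPHA / ONE

rb, r1, r2 = 0.9, 0.2, 0.6  # arbitrary stand-ins for Rb, R1, R2

case1 = blend_separate((r2, 0.4), blend_separate((r1, 1.0), (rb, 0.0)))
case2 = blend_separate((r2, 0.4), blend_separate((r1, 0.5), (rb, 0.0)))
print(round(case1[1], 2), round(case2[1], 2))  # 1.0 0.7
```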
Is there a method for testing the alpha component of each texture at a pixel, not just the closest one?
No. OpenGL stores only one RGBA value per pixel; there's no way to get at the previous values, since they're overwritten (keeping them all would need a ton of RAM).
What alpha values get written to the framebuffer depend on your alpha blending equation, which you can set with glBlendFunc or glBlendFuncSeparate. See the blending page on the OpenGL wiki for more info, and this JavaScript app lets you see the effects of various blending modes.
Related
I want to implement the following blend function in my program, which isn't using OpenGL:
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
In a realtime OpenGL test application I was able to blend colors on a white background with this function. The blended result should look like http://postimg.org/image/lwr9ossen/.
I have a white background and want to blend red points over it. A high density of red points should become opaque / black.
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
for (many many many times) {
    glColor4f(0.75f, 0.0f, 0.1f, 0.85f);
    DrawPoint(..);
}
I tried something, but I had no success.
Does anyone have the equation for this blend function?
The blend function should translate directly to the operations you need if you want to implement the whole thing in your own code.
The first argument specifies a scaling factor for your source color, i.e. the color you're drawing the pixel with.
The second argument specifies a scaling factor for your destination color, i.e. the current color value at the pixel position in the output image.
These two terms are then added, and the result written to the output image.
GL_DST_COLOR corresponds to the color in the destination, which is the output image.
GL_ONE_MINUS_SRC_ALPHA is 1.0 minus the alpha component of the pixel you are rendering.
Putting this all together, with (colR, colG, colB, colA) the color of the pixel you are rendering, and (imgR, imgG, imgB) the current color in the output image at the pixel position:
GL_DST_COLOR = (imgR, imgG, imgB)
GL_ONE_MINUS_SRC_ALPHA = (1.0 - colA, 1.0 - colA, 1.0 - colA)
GL_DST_COLOR * (colR, colG, colB) + GL_ONE_MINUS_SRC_ALPHA * (imgR, imgG, imgB)
= (imgR, imgG, imgB) * (colR, colG, colB) +
(1.0 - colA, 1.0 - colA, 1.0 - colA) * (imgR, imgG, imgB)
= (imgR * colR + (1.0 - colA) * imgR,
imgG * colG + (1.0 - colA) * imgG,
imgB * colB + (1.0 - colA) * imgB)
This is the color you write to your image as the result of rendering the pixel.
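That per-channel formula translates directly into software; a minimal sketch (plain Python, using the color values from the question) that also shows the "many red points over white turn dark" behavior:

```python
def blend_dst_color(src, dst):
    """Software equivalent of glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA)."""
    colR, colG, colB, colA = src
    imgR, imgG, imgB = dst
    return (imgR * colR + (1.0 - colA) * imgR,
            imgG * colG + (1.0 - colA) * imgG,
            imgB * colB + (1.0 - colA) * imgB)

img = (1.0, 1.0, 1.0)               # white background
point = (0.75, 0.0, 0.1, 0.85)      # the red point color from the question
img = blend_dst_color(point, img)   # one point: approximately (0.9, 0.15, 0.25)

for _ in range(49):                 # many overlapping points...
    img = blend_dst_color(point, img)
print(max(img) < 0.01)              # ...approach black: True
```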
I have a raycaster that does this kind of color blending in a loop in the pixel shader:
actual = <some color loaded from texture>;
actual.a *= 0.05; //reduce the alpha to have a more transparent result
//Front to back blending
actual.rgb *= actual.a;
last = (1.0 - last.a) * actual + last;
Can this equation be rewritten to use OpenGL 3 blending functions? The goal is to remove the loop from the pixel shader by rendering several quads over each other.
So far I am using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but the result looks different.
EDIT:
last = cumulated color (aka. final color)
actual = current color from texture
The main problem is that you still have to premultiply the source color with the alpha value in your shader (actual.rgb *= actual.a).
I think for blending you have to use this function:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
glBlendEquation(GL_FUNC_ADD);
With glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) you get a formula like:
last.rgb = actual.a * actual.rgb + ( 1.0 - actual.a ) * last.rgb;
It's completely different from your shader's formula:
actual.rgb *= actual.a;
last = (1.0 - last.a) * actual + last;
So you get a different result.
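To make the correspondence concrete, here is a one-channel CPU sketch (plain Python, made-up sample values) showing that glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE) reproduces the shader loop exactly, provided the shader keeps outputting premultiplied rgb:

```python
def shader_loop(samples):
    """The original front-to-back loop: last = (1.0 - last.a) * actual + last."""
    last = (0.0, 0.0)                      # (rgb as one channel, a)
    for c, a in samples:
        a *= 0.05                          # actual.a *= 0.05
        c *= a                             # actual.rgb *= actual.a (premultiply)
        last = ((1.0 - last[1]) * c + last[0],
                (1.0 - last[1]) * a + last[1])
    return last

def gl_blend(samples):
    """Same math via glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE),
    with the premultiply kept in the fragment shader."""
    dst = (0.0, 0.0)                       # framebuffer cleared to 0
    for c, a in samples:
        a *= 0.05
        src = (c * a, a)                   # fragment shader output
        dst = ((1.0 - dst[1]) * src[0] + dst[0],   # ONE_MINUS_DST_ALPHA * S + ONE * D
               (1.0 - dst[1]) * src[1] + dst[1])
    return dst

samples = [(0.8, 0.9), (0.3, 0.5), (0.6, 0.7)]    # made-up (color, alpha) pairs
print(shader_loop(samples) == gl_blend(samples))  # True
```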
I need to write a function which takes a sub-rectangle from a 2D texture (non-power-of-two) and copies it to a destination sub-rectangle of an output 2D texture, using a shader (no glSubImage or similar).
Also, the source and the destination may not have the same size, so I need to use linear filtering (or even mipmaps).
void CopyToTex(GLuint dest_tex,GLuint src_tex,
GLuint src_width,GLuint src_height,
GLuint dest_width,GLuint dest_height,
float srcRect[4],
GLuint destRect[4]);
Here srcRect is in normalized [0,1] coordinates, i.e. the rectangle [0,1]x[0,1] touches the centers of the border pixels of the input texture.
To get a good result when the source and destination dimensions don't match, I want to use GL_LINEAR filtering.
I want this function to behave in a coherent manner, i.e. calling it multiple times with many subrects should produce the same result as one invocation with the union of the subrects; that is, the linear sampler should sample the exact centers of the input pixels.
Moreover, if the input rectangle fits the destination rectangle exactly, an exact copy should occur.
This seems to be particularly hard.
What I've got now is something like this:
//Setup RTT, filtering and program
float vertices[4] = {
float(destRect[0]) / dest_width * 2.0 - 1.0,
float(destRect[1]) / dest_height * 2.0 - 1.0,
//etc..
};
float texcoords[4] = {
(srcRect[0] * (src_width - 1) + 0.5) / src_width - 0.5 / dest_width,
(srcRect[1] * (src_height - 1) + 0.5) / src_height - 0.5 / dest_height,
(srcRect[2] * (src_width - 1) + 0.5) / src_width + 0.5 / dest_width,
(srcRect[3] * (src_height - 1) + 0.5) / src_height + 0.5 / dest_height,
};
glBegin(GL_QUADS);
glTexCoord2f(texcoords[0], texcoords[1]);
glVertex2f(vertices[0], vertices[1]);
glTexCoord2f(texcoords[2], texcoords[1]);
glVertex2f(vertices[2], vertices[1]);
//etc...
glEnd();
To write this code I followed the information from this page.
This seems to work as intended in some corner cases (exact copy, copying a row or a column of one pixel).
My hardest test case is to perform an exact copy of a 2xN rectangle when both the input and output textures are bigger than 2xN.
I probably have some problem with offsets and scaling (the trivial ones don't work).
Solution:
The 0.5/tex_width part in the definition of the texcoords was wrong.
An easy workaround is to remove that part completely.
float texcoords[4] = {
(srcRect[0] * (src_width - 1) + 0.5) / src_width,
(srcRect[1] * (src_height - 1) + 0.5) / src_height,
(srcRect[2] * (src_width - 1) + 0.5) / src_width,
(srcRect[3] * (src_height - 1) + 0.5) / src_height
};
Instead, we draw a smaller quad, by offsetting the vertices by:
float dx = 1.0 / (destRect[2] - destRect[0]) - epsilon;
float dy = 1.0 / (destRect[3] - destRect[1]) - epsilon;
// assume glTexCoord for every vertex
glVertex2f(vertices[0] + dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[1] + dy);
glVertex2f(vertices[2] - dx, vertices[3] - dy);
glVertex2f(vertices[0] + dx, vertices[3] - dy);
In this way we draw a quad that passes through the exact center of every border pixel.
Since OpenGL may or may not rasterize the border pixels in this case, we need the epsilons.
I believe that my original solution (don't offset the vertex coords) can still work, but it needs a bit of extra math to compute the right offsets for the texcoords.
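The texel-center property of the fixed texcoord formula is easy to verify numerically; a small sketch (plain Python, with a hypothetical 8-texel-wide texture):

```python
def texcoord(s, width):
    """The corrected mapping: (srcRect coord * (width - 1) + 0.5) / width."""
    return (s * (width - 1) + 0.5) / width

w = 8  # hypothetical texture width
# In texel units, s = 0 and s = 1 land exactly on the centers of the
# first and last texels (0.5 and width - 0.5):
print(texcoord(0.0, w) * w, texcoord(1.0, w) * w)  # 0.5 7.5
```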
I have made a small particle system. What I would now like to do is have the particles fade from one color to another during their lifetime, for example from black to white or from yellow to red.
I use the glColor() functions to set the color of a particle.
How do I do this?
You have to blend the colors yourself:
calculate a blend factor between 0 and 1 and mix the colors.
float blend = lifeTime / maxLifeTime;
float red = (destRed * blend) + (srcRed * (1.0 - blend));
float green = (destGreen * blend) + (srcGreen * (1.0 - blend));
float blue = (destBlue * blend) + (srcBlue * (1.0 - blend));
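Wrapped into a small helper (a plain Python sketch; the function and parameter names are made up), with the blend factor clamped so particles past their lifetime don't overshoot:

```python
def fade_color(src, dst, life_time, max_life_time):
    """Linearly fade from src to dst over the particle's lifetime."""
    t = min(max(life_time / max_life_time, 0.0), 1.0)  # clamp blend factor to [0, 1]
    return tuple(d * t + s * (1.0 - t) for s, d in zip(src, dst))

yellow, red = (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)
print(fade_color(yellow, red, 0.0, 2.0))   # (1.0, 1.0, 0.0) -- just born
print(fade_color(yellow, red, 2.0, 2.0))   # (1.0, 0.0, 0.0) -- end of life
```

Each frame you would pass the resulting channels to glColor4f (alpha can be faded the same way if needed).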
I'm trying to get familiar with shaders in OpenGL. Here is some sample code that I found (it uses openFrameworks). The code simply blurs an image in two passes, first horizontally, then vertically. Here is the code for the horizontal shader. My only confusion is the texture coordinates: they exceed 1.
void main( void )
{
vec2 st = gl_TexCoord[0].st;
//horizontal blur
//from http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/
vec4 color = vec4(0.0); // must be initialized before accumulating
color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -4.0, 0));
color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -3.0, 0));
color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -2.0, 0));
color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -1.0, 0));
color += 5.0 * texture2DRect(src_tex_unit0, st); // center tap, zero offset
color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 1.0, 0));
color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 2.0, 0));
color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 3.0, 0));
color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 4.0, 0));
color /= 25.0; // the weights 1+2+3+4+5+4+3+2+1 sum to 25
gl_FragColor = color;
}
I can't make heads or tails of this code. Texture coordinates are supposed to be between 0 and 1, and I've read a bit about what happens when they're greater than 1, but that's not the behavior I'm seeing (or I don't see the connection). blurAmnt varies between 0.0 and 6.4, so the sample offsets can reach 25.6. The image just gets blurred more or less depending on the value; I don't see any repeating patterns.
My question boils down to this: what exactly is happening when the texture coordinate argument in the call to texture2DRect exceeds 1? And why does the blurring behavior still function perfectly despite this?
The [0, 1] texture coordinate range only applies to the GL_TEXTURE_2D texture target. Since that code uses texture2DRect (and a sampler2DRect), it's using the GL_TEXTURE_RECTANGLE_ARB texture target, and this target uses unnormalized texture coordinates, in the range [0, width] x [0, height].
That's why you have "weird" texture coords. Don't worry, they work fine with this texture target.
That depends on the host code. If you saw something like
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
then the out-of-bounds s coordinate will sample zeros, IIRC. Similarly for t.