OpenGL Light Blending

I am writing a Lights/Shadows system for my game using Java alongside LWJGL. For each of the light-emitting entities I generate a texture like this:
I should warn you that these textures are full of (0, 0, 1) or (1, 0, 0) pixels, and the gradient effect is achieved through the alpha channel. I interpret the alpha channel as a gradient factor.
Afterwards, I wish to blend every light/shadow texture together onto a single texture, each at its respective position. For that, I use a framebuffer. I tried to achieve the desired effect using the following blend equation/function combination:
glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE);
I chose GL_ONE/GL_ONE for the alpha channel blend function arbitrarily, since GL_MAX only computes max(Sa, Da), as stated here, which means the scaling factors are not used. The result of this combination is the following:
This image was obtained with Apple's OpenGL Driver Profiler, so I did not render it using my application (which could mess with the final result). The next step would be to render this texture over the actual game using multiply-blending, in order to darken the image, but the lights/shadows texture is obviously wrong, because we can see the edges of individual light/shadow textures over each other.
How should I proceed to achieve the desired result?
Edit:
I forgot to explain my choices for the scaling factors:
I think it would be right to simply add the colors of each light (weighting each of them by its respective alpha value) and choose the alpha of the final fragment to be the largest among the overlapping lights.

Imagine that one of your texture rectangles was extended outside its current border with some arbitrary pattern, like pure green. Imagine further that we were somehow allowed to use two different blending functions, one inside the border, and one outside. You would get the same image you have here (none of the green showing) if outside the border you used the blend function
glBlendFuncSeparateEXT(GL_ZERO, GL_ONE, GL_ONE, GL_ONE)
We would then want whatever blending function we use inside to give us a continuous blending result. The blending function
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE)
would not give us such a result. It is not the first parameter that is the problem, since GL_SRC_ALPHA already means ignoring the source near the border (small alpha at the border, if I read your image description correctly). So it must be the second parameter. We want the destination only when the source alpha is small. Change GL_DST_ALPHA to GL_ONE_MINUS_SRC_ALPHA. This would be more standard, but maybe I'm not understanding your objectives?
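In LWJGL terms, a minimal sketch of that suggestion might look like the following, assuming the framebuffer holding the combined light map is already bound; drawLightQuad(...) and the lights collection are placeholders for your own code, and the core glBlendEquationSeparate/glBlendFuncSeparate calls are used instead of the EXT variants:
// Accumulate every light/shadow texture into the currently bound FBO.
GL11.glEnable(GL11.GL_BLEND);
// RGB: standard "over" blending driven by source alpha.
// Alpha: keep the larger of source and destination (GL_MAX ignores the factors).
GL20.glBlendEquationSeparate(GL14.GL_FUNC_ADD, GL14.GL_MAX);
GL14.glBlendFuncSeparate(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA,
                         GL11.GL_ONE, GL11.GL_ONE);
for (LightTexture light : lights) {   // hypothetical type and collection
    drawLightQuad(light);             // hypothetical helper: one quad per light
}
GL11.glDisable(GL11.GL_BLEND);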

Related

Blending sprite with pre-existing texture

I am just learning the intricacies of OpenGL. What I would like to do is render a sprite onto a pre-existing texture. The texture will consist of terrain with some points alpha=1 and some points alpha=0. I would like the sprite to appear on a pixel of the texture if and only if the corresponding texture's pixel's alpha = 0. That is, for each pixel of the sprite, the output colour is:
Color of the sprite, if terrain alpha = 0.
Color of the terrain, if terrain alpha = 1.
Is this possible to do with a blend function? If not, how should I do it?
This is the exact opposite of the traditional blending function. The usual blend function is a linear interpolation between the source and destination colors, based on the source alpha.
What you want is a linear interpolation between the source and destination colors, based on the destination alpha. But you also want to invert the usual meaning; a destination alpha of 1 means that the destination color should be taken, not the source color.
That's pretty easy.
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
However, the above assumes that your sprites do not themselves have some form of inherent transparency. And most sprites do. That is, if the sprite alpha is 0 at some pixel, you don't want to overwrite the terrain color, no matter what the terrain's alpha is.
That makes this whole process excessively difficult. Pre-multiplying the alpha will not save you either, since black will just as easily overwrite the color in the terrain if there is no terrain color there.
In effect, you would need to do a linear interpolation based on neither the source nor the destination, but on a combination of them. I think multiplication of the two (src-alpha * (1 - dst-alpha)) would do a good job.
This is not possible with OpenGL's standard blending system. You would need to employ some form of programmatic blending technique. This typically involves read/modify/write operations using NV/ARB_texture_barrier or otherwise ping-ponging between bound textures.
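If you go the programmable route, a sketch of such a fragment shader could look like this, assuming the current terrain is available as a texture you can bind alongside the sprite (via a copy or the texture-barrier approach mentioned above); the sampler and varying names are illustrative, and the quad would be drawn with blending disabled since the shader does the mixing itself:
// GLSL 1.20-style fragment shader stored as a Java string.
String spriteOverTerrainFS =
    "uniform sampler2D sprite;\n" +
    "uniform sampler2D terrain;\n" +
    "varying vec2 spriteUV;   // texcoord within the sprite\n" +
    "varying vec2 screenUV;   // this fragment's position in the terrain texture\n" +
    "void main() {\n" +
    "    vec4 s = texture2D(sprite, spriteUV);\n" +
    "    vec4 t = texture2D(terrain, screenUV);\n" +
    "    float f = s.a * (1.0 - t.a);   // the combined factor suggested above\n" +
    "    // The output alpha here is one possible choice, not the only one.\n" +
    "    gl_FragColor = vec4(mix(t.rgb, s.rgb, f), max(t.a, s.a));\n" +
    "}\n";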

How to set up the correct blend function?

I would like to achieve the effect like the image shown:
When the star passes behind a mask, the part of the star under the mask is not shown.
I tried to use a blend function, but I don't know how to set up the correct one.
I followed this example ( https://gist.github.com/mattdesl/6076846 ), but still can't figure out how to achieve the result I want.
Can anyone show me how to find the blend function to achieve this effect?
You can't do that with the blend function alone.
One possible solution is to draw only the part of the star that is visible, i.e. just draw a smaller quad (see the sketch below).
Another solution is to multiply the star texture's alpha by the alpha of the gray texture in the fragment shader (so you need to bind both textures when drawing the star). But that only works if the gray part of the screen is a whole texture and not made of several tiles. I would choose the first solution.
A third solution is to draw the background again on top of the star to hide it.
In the first two cases, you only need the usual alpha blending function. To draw the background you can just disable blending.
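Here is a sketch of the first solution in Java, clipping the star's quad against the left edge of the mask and adjusting the texture coordinates to match; maskLeft, the star fields and drawTexturedQuad(...) are illustrative names, and the mask is assumed to cover everything to its right:
// Draw only the part of the star that lies left of the mask edge.
float starLeft     = star.x;
float starRight    = star.x + star.width;
float visibleRight = Math.min(starRight, maskLeft);
if (visibleRight > starLeft) {
    float visibleFraction = (visibleRight - starLeft) / star.width;
    drawTexturedQuad(starLeft, star.y,                       // position
                     visibleRight - starLeft, star.height,   // clipped size
                     0.0f, 0.0f, visibleFraction, 1.0f);     // u0, v0, u1, v1
}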

How can I blend two colours to a mix of them in OpenGL?

I'm rendering both a blue and a red line (in the context of an anaglyph). When the red line and blue line overlap I want a purple color to be rendered instead of the line in front.
I am using OpenGL. Some of the code I have tried so far is this:
glBlendFunc(GL_ONE, GL_DST_ALPHA);
This causes the overlap to render white, and the line appears as follows:
I thought maybe using an RGB scale factor on top of this blend would be the right thing to do.
So I tried using the glBlendFuncSeparate which takes parameters:
Source Factor RGB
Destination Factor RGB
Source Factor Alpha
Destination Factor Alpha
I could not find parameters which made this work for me.
I also attempted using glBlendEquation with an additive equation, but didn't notice any success in that method.
How do I produce a function which successfully blends the two lines into a purple color?
Edit:
I've noticed that glBlendFunc(GL_ONE, GL_DST_ALPHA) does perform some blending to produce intermediate colors (the actual lines are nonsensical here; this was just to show some blending).
I'm rendering both a blue and a red line (in the context of an anaglyph)
Not the answer you expect, but the answer you need: the usual approach to rendering anaglyph images in OpenGL is not to use blending. Blending is hard enough to get right on its own; you don't want to mess things up further with the anaglyph part.
There are two commonly used methods:
Rendering each view into an FBO-attached texture and combining them in a postprocessing step.
Using glColorMask to select the active color channels for each rendering step.
I think for now you're good with the color mask method. Do it as follows:
display:
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glViewport(…)
    # prepare left view
    setup_left_view_projection()
    glColorMask(1, 0, 0, 1)   # red is commonly used for the left eye
    draw_scene()
    glClear(GL_DEPTH_BUFFER_BIT)   # clear the depth buffer (and just the depth buffer!)
    # prepare right view
    setup_right_view_projection()
    glColorMask(0, 1, 1, 1)   # cyan is commonly used for the right eye
    draw_scene()
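The same steps in LWJGL would look roughly like this; setupLeftViewProjection(), setupRightViewProjection() and drawScene() stand in for your own camera and rendering code:
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
// Left eye: write only the red channel (keep alpha writes enabled).
setupLeftViewProjection();
GL11.glColorMask(true, false, false, true);
drawScene();
GL11.glClear(GL11.GL_DEPTH_BUFFER_BIT);   // depth only, so the red image stays
// Right eye: write only the green and blue channels (cyan).
setupRightViewProjection();
GL11.glColorMask(false, true, true, true);
drawScene();
GL11.glColorMask(true, true, true, true); // restore the mask for later rendering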
The typical blend function is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); note that the final color changes according to the draw order.
Blend functions are not so obvious, so a graphical representation is sometimes better; check out, for example, this site.

OpenGL - mask with multiple textures

I have implemented masking in OpenGL according to the following concept:
The mask is composed of black and white colors.
A foreground texture should only be visible in the white parts of the mask.
A background texture should only be visible in the black parts of the mask.
I can make the white part or the black part work as supposed by using glBlendFunc(), but not the two at the same time, because the foreground layer not only blends onto the mask, but also onto the background layer.
Is there anyone who knows how to accomplish this in the best way? I have been searching the net and read something about fragment shaders. Is this the way to go?
This should work:
glEnable(GL_BLEND);
// Use a simple blendfunc for drawing the background
glBlendFunc(GL_ONE, GL_ZERO);
// Draw entire background without masking
drawQuad(backgroundTexture);
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
// Finally, we want a blendfunc that makes the foreground visible only in
// areas with high alpha.
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
This is fairly tricky, so tell me if anything is unclear.
Don't forget to request an alpha buffer when creating the GL context. Otherwise it's possible to get a context without an alpha buffer.
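How you request it depends on your windowing setup; with GLFW (as used by LWJGL 3), for example, it is a window hint set before the window is created:
// Ask for an 8-bit alpha channel in the default framebuffer.
GLFW.glfwWindowHint(GLFW.GLFW_ALPHA_BITS, 8);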
Edit: Here, I made an illustration.
Edit: Since writing this answer, I've learned that there are better ways to do this:
If you're limited to OpenGL's fixed-function pipeline, use texture environments
If you can use shaders, use a fragment shader.
The way described in this answer works and is not particularly worse in performance than these 2 better options, but is less elegant and less flexible.
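For reference, the fragment-shader route mentioned above boils down to a single mix(); here is a sketch of such a shader, with illustrative sampler names and assuming the mask's red channel encodes its whiteness:
// GLSL 1.20-style fragment shader stored as a Java string.
String maskBlendFS =
    "uniform sampler2D background;\n" +
    "uniform sampler2D foreground;\n" +
    "uniform sampler2D mask;\n" +
    "varying vec2 uv;\n" +
    "void main() {\n" +
    "    float m = texture2D(mask, uv).r;   // 1.0 where the mask is white\n" +
    "    gl_FragColor = mix(texture2D(background, uv), texture2D(foreground, uv), m);\n" +
    "}\n";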
Stefan Monov's answer is great! But for those who still have issues getting it to work:
You need to check GLES20.glGetIntegerv(GLES20.GL_ALPHA_BITS, ib) and make sure you get a non-zero result.
If you get 0, go back to your EGLConfig and ensure that you pass alpha bits:
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8, <- I was missing this one and lost a lot of time over it
EGL14.EGL_DEPTH_SIZE, 16,

opengl - blending with previous contents of framebuffer

I am rendering to a texture through a framebuffer object, and when I draw transparent primitives, the primitives are blended properly with other primitives drawn in that single draw step, but they are not blended properly with the previous contents of the framebuffer.
Is there a way to properly blend the contents of the texture with the new data coming in?
EDIT: More information requested; I will attempt to explain more clearly.
The blend mode I am using is GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA (I believe that is the standard blend mode).
I am creating an application that tracks mouse movement. It draws lines connecting the previous mouse position to the current mouse position, and as I do not want to draw the lines over again each frame, I figured I would draw to a texture, never clear the texture and then just draw a rectangle with that texture on it to display it.
This all works fine, except that when I draw shapes with alpha less than 1 onto the texture, it does not blend properly with the texture's previous contents. Let's say I have some black lines with alpha = .6 drawn onto the texture. A couple draw cycles later, I then draw a black circle with alpha = .4 over those lines. The lines "underneath" the circle are completely overwritten. Although the circle is not flat black (It blends properly with the white background) there are no "darker lines" underneath the circle as you would expect.
If I draw the lines and the circle in the same frame, however, they blend properly. My guess is that the texture just does not blend with its previous contents. It's like it's only blending with the glClearColor (which, in this case, is <1.0f, 1.0f, 1.0f, 1.0f>).
I think there are two possible problems here.
Remember that all of the overlay lines are blended twice here. Once when they are blended into the FBO texture, and again when the FBO texture is blended over the scene.
So the first possibility is that you don't have blending enabled when drawing one line over another in the FBO overlay. When you draw into an RGBA surface with blending off, the current alpha is simply written directly into the FBO overlay's alpha channel. Then later when you blend the whole FBO texture over the scene, that alpha makes your lines translucent. So if you have blending against "the world" but not between overlay elements, it is possible that no blending is happening.
Another related problem: when you blend one line over another in "standard" blend mode (src alpha, 1 - src alpha) in the FBO, the alpha channel of the "blended" part is going to contain a blend of the alphas of the two overlay elements. This is probably not what you want.
For example, if you draw two 50% alpha lines over each other in the overlay, then to get the equivalent effect when you blit the FBO you would need the FBO's alpha to be 75% (that is, 1 - (1 - 0.5) * (1 - 0.5), which is what would happen if you just drew the two 50% alpha lines directly over your scene). But when you draw the two 50% lines into the FBO, you only get 50% alpha there (a blend of 50% with 50%).
This brings up the final issue: by pre-mixing the lines with each other before you blend them over the world, you are changing the draw order. Whereas you might have had:
blend(blend(blend(background color, model), first line), second line);
now you will have
blend(blend(first line, second line), blend(background color, model)).
In other words, pre-mixing the overlay lines into an FBO changes the order of blending and thus changes the final look in a way you may not want.
First, the simple way to get around this: don't use an FBO. I realize this is a "go redesign your app" kind of answer, but using an FBO is not the cheapest thing, and modern GL cards are very good at drawing lines. So one option would be: instead of blending lines into an FBO, write the line geometry into a vertex buffer object (VBO). Simply extend the VBO a little bit each time. If you are drawing less than, say, 40,000 lines at a time, this will almost certainly be as fast as what you were doing before.
(One tip if you go this route: use glBufferSubData to write the lines in, not glMapBuffer - mapping can be expensive and doesn't work on sub-ranges on many drivers...better to just let the driver copy the few new vertices.)
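Here is a sketch of that accumulation in LWJGL, assuming a GL 1.5+ context; the buffer is over-allocated once and each new mouse segment is appended with glBufferSubData (prevX/prevY/curX/curY and the sizes are illustrative):
// One-time setup: allocate an empty vertex buffer with room for many 2D vertices.
int maxVertices = 80000;                  // ~40,000 line segments
int vbo = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, maxVertices * 2L * 4L, GL15.GL_DYNAMIC_DRAW);
int vertexCount = 0;

// Each frame: append only the newest segment instead of rebuilding the buffer.
FloatBuffer segment = BufferUtils.createFloatBuffer(4);
segment.put(prevX).put(prevY).put(curX).put(curY).flip();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, vertexCount * 2L * 4L, segment);
vertexCount += 2;

// Draw everything accumulated so far.
GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
GL11.glVertexPointer(2, GL11.GL_FLOAT, 0, 0L);
GL11.glDrawArrays(GL11.GL_LINES, 0, vertexCount);
GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);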
If that isn't an option (for example, if you draw a mix of shape types or use a mix of GL state, such that "remembering" what you did is a lot more complex than just accumulating vertices) then you may want to change how you draw into the VBO.
Basically what you'll need to do is enable separate blending: initialize the overlay to black + 0% alpha (0, 0, 0, 0), blend the RGB with "standard" blending, but blend the alpha channels additively. This still isn't quite correct for the alpha channel, but it's generally a lot closer; without it, over-drawn areas will be too transparent.
Then, when drawing the FBO, use "pre-multiplied" alpha, that is, (one, one-minus-src-alpha).
Here's why that last step is needed: when you draw into the FBO, you have already multiplied every draw call by its alpha channel (if blending is on). Since you are drawing over black, a green (0, 1, 0, 0.5) line is now dark green (0, 0.5, 0, 0.5). If alpha is on and you blend normally again, the alpha is reapplied and you'll have (0, 0.25, 0, 0.5). By simply using the FBO color as is, you avoid the second alpha multiplication.
This is sometimes called "pre-multiplied" alpha because the alpha has already been multiplied into the RGB color. In this case you want it to get correct results, but in other cases programmers use it for speed. (Pre-multiplying removes one multiply per pixel when the blend op is performed.)
Hope that helps! Getting blending right when the layers are not mixed in order gets really tricky, and separate blend isn't available on old hardware, so simply drawing the lines every time may be the path of least misery.
Clear the FBO with transparent black (0, 0, 0, 0), draw into it back-to-front with
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
and draw the FBO with
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
to get the exact result.
As Ben Supnik wrote, the FBO contains colour already multiplied with the alpha channel, so instead of doing that again with GL_SRC_ALPHA, it is drawn with GL_ONE. The destination colour is attenuated normally with GL_ONE_MINUS_SRC_ALPHA.
The reason for blending the alpha channel in the buffer this way is different:
The formula to combine transparency is
resultTr = sTr * dTr
(I use s and d because of the parallel to OpenGL's source and destination, but as you can see the order doesn't matter.)
Written with opacities (alpha values) this becomes
1 - rA = (1 - sA) * (1 - dA)
<=> rA = 1 - (1 - sA) * (1 - dA)
= 1 - 1 + sA + dA - sA * dA
= sA + (1 - sA) * dA
which is the same as the blend function (source and destination factors) (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) with the default blend equation GL_FUNC_ADD.
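Put together as an LWJGL sketch, assuming GL 3.0-style framebuffer objects (use the EXT entry points otherwise); drawOverlayGeometry() and drawFullscreenQuad(...) are placeholders for your own drawing code:
// Pass 1: render the translucent overlay into the FBO (the result ends up premultiplied).
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
GL11.glClearColor(0f, 0f, 0f, 0f);        // transparent black
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
GL11.glEnable(GL11.GL_BLEND);
GL14.glBlendFuncSeparate(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA,
                         GL11.GL_ONE, GL11.GL_ONE_MINUS_SRC_ALPHA);
drawOverlayGeometry();

// Pass 2: composite the FBO texture over the scene as premultiplied colour.
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(fboColourTexture);     // hypothetical texture handle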
As an aside:
The above answers the specific problem from the question, but if you can easily choose the draw order it may in theory be better to draw premultiplied colour into the buffer front-to-back with
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
and otherwise use the same method.
My reasoning behind this is that the graphics card may be able to skip shader execution for regions that are already solid. I haven't tested this though, so it may make no difference in practice.
As Ben Supnik said, the best way to do this is to render the entire scene with separate blend functions for color and alpha. If you are using the classic non-premultiplied blend function, try glBlendFuncSeparateOES(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE) to render your scene to the FBO, and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) to render the FBO to the screen.
It is not 100% accurate, but in most cases it will create no unexpected transparency.
Keep in mind that old hardware and some mobile devices (mostly OpenGL ES 1.x devices, like the original iPhone and 3G) do not support separate blend functions. :(