Blending sprite with pre-existing texture - opengl

I am just learning the intricacies of OpenGL. What I would like to do is render a sprite onto a pre-existing texture. The texture will consist of terrain with some points alpha=1 and some points alpha=0. I would like the sprite to appear on a pixel of the texture if and only if the corresponding texture pixel's alpha is 0. That is, for each pixel of the sprite, the output colour is:
Color of the sprite, if terrain alpha = 0.
Color of the terrain, if terrain alpha = 1.
Is this possible to do with a blending function? If not, how should I do it?

This is the exact opposite of the traditional blending function. The usual blend function is a linear interpolation between the source and destination colors, based on the source alpha.
What you want is a linear interpolation between the source and destination colors, based on the destination alpha. But you also want to invert the usual meaning; a destination alpha of 1 means that the destination color should be taken, not the source color.
That's pretty easy.
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
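For reference, a minimal sketch of that state in use (drawSprite() is a placeholder for your own sprite draw call, and the colour buffer is assumed to actually have an alpha channel, otherwise GL_DST_ALPHA has nothing to read):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
drawSprite();   // the sprite only shows up where the terrain's alpha is 0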
However, the above assumes that your sprites do not themselves have some form of inherent transparency. And most sprites do. That is, if the sprite alpha is 0 at some pixel, you don't want to overwrite the terrain color, no matter what the terrain's alpha is.
That makes this whole process excessively difficult. Pre-multiplying the alpha will not save you either, since black will just as easily overwrite the color in the terrain if there is no terrain color there.
In effect, you would need to do a linear interpolation based on neither the source nor the destination, but on a combination of them. I think multiplication of the two (src-alpha * (1 - dst-alpha)) would do a good job.
This is not possible with OpenGL's standard blending system. You would need to employ some form of programmatic blending technique. This typically involves read/modify/write operations using NV/ARB_texture_barrier or otherwise ping-ponging between bound textures.
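As a rough illustration of that "programmatic blending" idea, here is a fragment-shader sketch that assumes you have ping-ponged the terrain into a second texture (terrain_copy) so it can be read while the sprite is drawn; all names are placeholders, and the output alpha is just one reasonable choice:
uniform sampler2D sprite_tex;     // the sprite's own texture
uniform sampler2D terrain_copy;   // copy of the current render target (ping-pong)
uniform vec2 screen_size;

void main()
{
    vec4 src = texture2D(sprite_tex, gl_TexCoord[0].st);
    vec4 dst = texture2D(terrain_copy, gl_FragCoord.xy / screen_size);
    // Show the sprite only where the terrain is transparent,
    // while still respecting the sprite's own alpha.
    float f = src.a * (1.0 - dst.a);
    gl_FragColor = vec4(mix(dst.rgb, src.rgb, f), max(dst.a, f));
}
Fixed-function blending would be disabled while this shader runs, since the shader already produces the final colour.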

Drawing a Circle on a plane, Boolean Subtraction - OpenGL

I'm hoping to draw a plane in OpenGL, using C++, with a hole in the center, much like the green of a golf course for example.
I was wondering what the easiest way to achieve this is?
It's fairly simple to draw a circle and a plane (tutorials all over google will show this for those curious), but I was wondering if there is a boolean subtraction technique like you can get when modelling in 3Ds Max or similar software? Where you create both objects, then take the intersection/union etc to leave a new object/shape? In this case subtract the circle from the plane, creating a hole.
Another way I thought of doing it is giving the circle alpha values and making it transparent, but then of course it still leaves the plane's surface visible anyway.
Any help or points in the right direction?
I would avoid messing around with transparency, blending mode, and the like. Just create a mesh with the shape you need and draw it. Remember OpenGL is for graphics, not modelling.
There are a couple ways you could do this. The first way is the one you already stated which is to draw the circle as transparent. The caveat is that you must draw the circle first before you draw the plane so that the alpha blending will blend the circle with the background. Then when you render the plane the parts that are covered by the circle will be discarded in the depth test.
The second method you could try is with texture mapping. You could create a texture that is basically a mask with everything set to opaque white except the circle portion which is set to have an alpha value of 0. In your shader you would then multiply your fragment color by this mask texture color so that the portions where the circle is located are now transparent.
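A minimal fragment-shader sketch of that mask idea (the uniform names are made up; blending, or a discard on low alpha, still has to be enabled for the masked fragments to actually disappear):
uniform sampler2D surface_tex;   // the plane's regular texture
uniform sampler2D mask_tex;      // opaque white everywhere, alpha 0 inside the circle

void main()
{
    vec4 color = texture2D(surface_tex, gl_TexCoord[0].st);
    vec4 mask  = texture2D(mask_tex,    gl_TexCoord[0].st);
    gl_FragColor = color * mask;   // alpha becomes 0 where the circle is
}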
Both of these methods would work with shapes other than a circle as well.
I suggest the stencil buffer. Use the stencil buffer to mark the area where you want the hole to be by masking the color and depth buffers and drawing only to the stencil buffer, then unmask your color and depth, avoid drawing to the stencil buffer, and draw your plane with a stencil function telling OpenGL to discard all pixels where the stencil buffer "markings" are.
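A sketch of that two-pass stencil setup in legacy OpenGL (drawCircle() and drawPlane() are placeholders, and the context must have been created with a stencil buffer):
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

// Pass 1: write only to the stencil buffer, marking the hole.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawCircle();

// Pass 2: draw the plane only where the stencil is still 0.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawPlane();

glDisable(GL_STENCIL_TEST);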

OpenGL Light Blending

I am writing a Lights/Shadows system for my game using Java alongside the LWJGL. For each one of the Light-Emitting Entities I generate such a texture:
I should warn you that these Textures are full of (0, 0, 1) or (1, 0, 0) pixels, and the gradient effect is achieved with the alpha channel. I interpret the Alpha channel as a gradient factor.
Afterwards, I wish to blend every light/shadow texture together onto a single texture, each at its respective position. For that, I use a Framebuffer. I tried to achieve the desired effect using the following blend equation/function combination:
glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE);
I chose GL_ONE/GL_ONE for the Alpha Channel Blend Function arbitrarily, for GL_MAX will only do max(Sa, Da), as stated here, which means that the scaling factors are not used. The result of this combination is the following:
This image was obtained with Apple's OpenGL Driver Profiler, so I did not render it using my application (which could mess with the final result). The next step would be to render this texture over the actual game using multiply-blending, in order to darken the image, but the lights/shadows texture is obviously wrong, because we can see the edges of individual light/shadow textures over each other.
How should I proceed to achieve the desired result?
Edit:
I forgot to explain my choices for the scaling factors:
I think that it would be right to simply add the colors of each light (weighting each of them by its respective alpha value) and choose the alpha of the final fragment to be the largest alpha among the overlapping lights.
Imagine that one of your texture rectangles was extended outside its current border with some arbitrary pattern, like pure green. Imagine further that we were somehow allowed to use two different blending functions, one inside the border, and one outside. You would get the same image you have here (none of the green showing) if outside the border you used the blend function
glBlendFuncSeparateEXT(GL_ZERO, GL_ONE, GL_ONE, GL_ONE)
We would then want whatever blending function we use inside to give us a continuous blending result. The blending function
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ONE)
would not give us such a result. The problem is not the first parameter, which simply causes the source to be ignored near the border (small alpha on the border, if I read your image description correctly). So it must be the second parameter. We want the destination to show through only where the source alpha is small. Change GL_DST_ALPHA to GL_ONE_MINUS_SRC_ALPHA. This would be more standard, but maybe I'm not understanding your objectives?
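In code, the only change from the question's setup would then be (keeping the GL_MAX alpha equation as before):
glBlendEquationSeparateEXT(GL_FUNC_ADD, GL_MAX);
glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);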

Pixel manipulation in OpenGL

Let's say I have this image and in it is an object (a cube). That object is being tracked (with labels) and I manage to render a virtual cube onto it (augmented reality). Now that I can render a virtual cube onto it, I want to be able to make the object 'disappear' with a really basic diminished-reality technique called "inpainting". The inpainting in question is pretty simple (it has to be, or the FPS will suffer) and it requires me to do some operations on pixels and their neighbors (as with Gaussian blur or other basic image processing).
To do that I first need:
A mask: black background with a white cube in it.
Access each pixel of the initial image (at coordinates x and y) as well as its neighborhood and do stuff based on the pixel value of the mask at the same x and y coordinates. So basically the mask serves as a way to say ignore this pixel or use this pixel.
How do I do this using OpenGL? I want to be able to access pixel values 1 by 1 preferably in 2D because of the neighbors.
Do I use FBOs or PBOs? I've read many things about buffers and methods like glDrawPixels() but I'm having trouble putting them all together. The paper I saw this method in used the GL_BACK buffer, but mine is already used. Some sample code (C++) would be really appreciated with all the formalities (OpenGL calls) since I'm still a beginner in OpenGL.
I'm even thinking of using OpenCV if pixel manipulation is too hard in OpenGL since my AR library (Aruco) works on top of OpenCV. In that case I will still need to get the mask (white cube on black background), convert it to a cv::Mat and then do my processing.
I know this approach is inefficient (going back and forth from the GPU/CPU) but my goal (for now) is to at least make the basics work.
Setup a framebuffer object to render your original image + virtual cube. Here's a tutorial.
Next you can attach that framebuffer texture as an input (sampler) texture of your next stage and render a quad (two triangles) that covers your mask.
In the fragment shader you can obtain your "screen coordinate" by reading the built-in variable gl_FragCoord. With the texture filter functions set to GL_NEAREST, you can access the exact texels. The neighboring pixels are then available with a displacement of one texel (deltaX = 1/width, deltaY = 1/height in normalized texture coordinates).
Using a previous framebuffer texture as source is mandatory, as the currently active framebuffer is write only.
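A minimal fragment-shader sketch of that setup, assuming the previous pass's colour attachment and the mask are bound as samplers (all uniform names here are made up), with a simple 4-neighbour average standing in for your inpainting step:
uniform sampler2D scene_tex;    // colour attachment from the previous FBO pass
uniform sampler2D mask_tex;     // black background, white cube
uniform vec2 screen_size;       // framebuffer width/height in pixels
uniform vec2 texel;             // (1.0/width, 1.0/height)

void main()
{
    vec2 uv    = gl_FragCoord.xy / screen_size;
    vec4 here  = texture2D(scene_tex, uv);
    float mask = texture2D(mask_tex, uv).r;   // ~1 inside the cube, ~0 outside

    // Placeholder neighbourhood operation: average of the four direct neighbours.
    vec4 sum = texture2D(scene_tex, uv + vec2( texel.x, 0.0))
             + texture2D(scene_tex, uv + vec2(-texel.x, 0.0))
             + texture2D(scene_tex, uv + vec2(0.0,  texel.y))
             + texture2D(scene_tex, uv + vec2(0.0, -texel.y));
    gl_FragColor = mix(here, sum * 0.25, mask);   // only touch masked pixels
}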

Good way to deal with alpha channels in 8-bit bitmap? - OpenGL - C++

I am loading bitmaps with OpenGL to texture a 3d mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels and I need to figure out the best way to
obtain the values of transparency for each pixel
and
render them with the transparency applied
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R,G,B,A) gets 8 bits. When you upload your texture, specify a 32bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, eg: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
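For example, the upload and blend state described above might look like this (assuming pixels already holds 32-bit RGBA data):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);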
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (ie: using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. Then, when texture coordinates are interpolated, they won't be blended towards the original transparency colour from your BMP.
If your pixmaps are 8-bit single channel, they are either grayscale or use a palette. What you first need to do is convert the pixmap data into RGBA format. For this, allocate a buffer large enough to hold a 4-channel pixmap of the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (look-up table) and put that color value into the corresponding pixel of the RGBA buffer. Once finished, upload to OpenGL using glTexImage2D.
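A sketch of that CPU-side expansion (variable names are illustrative; palette is assumed to hold RGBA entries, with alpha 0 for the transparent one):
std::vector<unsigned char> rgba(width * height * 4);
for (int i = 0; i < width * height; ++i) {
    const unsigned char idx = indices[i];   // 8-bit palette index for this pixel
    rgba[i * 4 + 0] = palette[idx].r;
    rgba[i * 4 + 1] = palette[idx].g;
    rgba[i * 4 + 2] = palette[idx].b;
    rgba[i * 4 + 3] = palette[idx].a;       // 0 for the transparency-mask entry
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());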
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;      // the 8-bit index data, uploaded as GL_RED/GL_LUMINANCE
uniform sampler1D palette_lut;  // the palette, uploaded as a 1D GL_RGBA texture

void main()
{
    // Read the palette index from the red channel, then use it as the
    // coordinate into the 1D palette look-up texture.
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    vec4 color = texture1D(palette_lut, palette_index);
    gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this affects objects as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh every frame before rendering. A method to avoid this is breaking meshes down into convex submeshes (of course a mesh that is already convex cannot be broken down further). Then use the following method:
enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the back side gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the front side gets rendered)
    render convex_submesh again
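A rough translation of that pseudocode into legacy OpenGL calls (the submesh container and draw() method are assumed):
glEnable(GL_CULL_FACE);
for (ConvexSubmesh* m : sortedSubmeshes) {   // sorted far to near
    glCullFace(GL_FRONT);    // first pass: only the back faces are rendered
    m->draw();
    glCullFace(GL_BACK);     // second pass: only the front faces are rendered
    m->draw();
}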

opengl - blending with previous contents of framebuffer

I am rendering to a texture through a framebuffer object, and when I draw transparent primitives, the primitives are blended properly with other primitives drawn in that single draw step, but they are not blended properly with the previous contents of the framebuffer.
Is there a way to properly blend the contents of the texture with the new data coming in?
EDIT: More information requested; I will attempt to explain more clearly:
The blendmode I am using is GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. (I believe that is typically the standard blendmode)
I am creating an application that tracks mouse movement. It draws lines connecting the previous mouse position to the current mouse position, and as I do not want to draw the lines over again each frame, I figured I would draw to a texture, never clear the texture and then just draw a rectangle with that texture on it to display it.
This all works fine, except that when I draw shapes with alpha less than 1 onto the texture, it does not blend properly with the texture's previous contents. Let's say I have some black lines with alpha = .6 drawn onto the texture. A couple draw cycles later, I then draw a black circle with alpha = .4 over those lines. The lines "underneath" the circle are completely overwritten. Although the circle is not flat black (It blends properly with the white background) there are no "darker lines" underneath the circle as you would expect.
If I draw the lines and the circle in the same frame, however, they blend properly. My guess is that the texture just does not blend with its previous contents. It's like it's only blending with the glClearColor (which, in this case, is <1.0f, 1.0f, 1.0f, 1.0f>).
I think there are two possible problems here.
Remember that all of the overlay lines are blended twice here. Once when they are blended into the FBO texture, and again when the FBO texture is blended over the scene.
So the first possibility is that you don't have blending enabled when drawing one line over another in the FBO overlay. When you draw into an RGBA surface with blending off, the current alpha is simply written directly into the FBO overlay's alpha channel. Then later when you blend the whole FBO texture over the scene, that alpha makes your lines translucent. So if you have blending against "the world" but not between overlay elements, it is possible that no blending is happening.
Another related problem: when you blend one line over another in "standard" blend mode (src alpha, 1 - src alpha) in the FBO, the alpha channel of the "blended" part is going to contain a blend of the alphas of the two overlay elements. This is probably not what you want.
For example, if you draw two 50% alpha lines over each other in the overlay, then to get the equivalent effect when you blit the FBO you need the FBO's alpha to be 75% (that is, 1 - (1 - 0.5) * (1 - 0.5), which is what would happen if you just drew two 50% alpha lines directly over your scene). But when you draw the two 50% lines into the FBO, you'll get 50% alpha there (a blend of 50% with 50%).
This brings up the final issue: by pre-mixing the lines with each other before you blend them over the world, you are changing the draw order. Whereas you might have had:
blend(blend(blend(background color, model), first line), second line);
now you will have
blend(blend(background color, model), blend(first line, second line)).
In other words, pre-mixing the overlay lines into an FBO changes the order of blending and thus changes the final look in a way you may not want.
First, the simple way to get around this: don't use an FBO. I realize this is a "go redesign your app" kind of answer, but using an FBO is not the cheapest thing, and modern GL cards are very good at drawing lines. So one option would be: instead of blending lines into an FBO, write the line geometry into a vertex buffer object (VBO). Simply extend the VBO a little bit each time. If you are drawing less than, say, 40,000 lines at a time, this will almost certainly be as fast as what you were doing before.
(One tip if you go this route: use glBufferSubData to write the lines in, not glMapBuffer - mapping can be expensive and doesn't work on sub-ranges on many drivers...better to just let the driver copy the few new vertices.)
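A sketch of that append-only pattern (the names and the Vertex type are made up; the VBO is assumed to have been created with a generous size via glBufferData up front):
glBindBuffer(GL_ARRAY_BUFFER, lineVbo);
glBufferSubData(GL_ARRAY_BUFFER,
                lineCount * 2 * sizeof(Vertex),      // byte offset of the first free slot
                newLineCount * 2 * sizeof(Vertex),   // size of the vertices added this frame
                newVertices);
lineCount += newLineCount;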
If that isn't an option (for example, if you draw a mix of shape types or use a mix of GL state, such that "remembering" what you did is a lot more complex than just accumulating vertices), then you may want to change how you draw into the FBO.
Basically what you'll need to do is enable separate blending; initialize the overlay to black + 0% alpha (0,0,0,0) and blend by "standard blending" the RGB but additive blending the alpha channels. This still isn't quite correct for the alpha channel but it's generally a lot closer - without this, over-drawn areas will be too transparent.
Then, when drawing the FBO, use "pre-multiplied" alpha, that is, (one, one-minus-src-alpha).
Here's why that last step is needed: when you draw into the FBO, you have already multiplied every draw call by its alpha channel (if blending is on). Since you are drawing over black, a green (0, 1, 0, 0.5) line is now dark green (0, 0.5, 0, 0.5). If you blend normally again when drawing the FBO, the alpha is reapplied and you'll have (0, 0.25, 0, 0.5). By simply using the FBO color as is, you avoid the second alpha multiplication.
This is sometimes called "pre-multiplied" alpha because the alpha has already been multiplied into the RGB color. In this case you want it to get correct results, but in other cases, programmers use it for speed. (By pre-multiplying, it removes a mult per pixel when the blend op is performed.)
Hope that helps! Getting blending right when the layers are not mixed in order gets really tricky, and separate blend isn't available on old hardware, so simply drawing the lines every time may be the path of least misery.
Clear the FBO with transparent black (0, 0, 0, 0), draw into it back-to-front with
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
and draw the FBO with
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
to get the exact result.
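Put together, a frame using that recipe might look roughly like this (fbo, overlayTexture and the draw calls are placeholders; in the mouse-trail use case from the question the FBO is cleared to (0, 0, 0, 0) only once, at creation):
// Accumulate the new lines into the FBO.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
drawNewLines();

// Composite the (premultiplied) FBO texture over the scene.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(overlayTexture);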
As Ben Supnik wrote, the FBO contains colour already multiplied with the alpha channel, so instead of doing that again with GL_SRC_ALPHA, it is drawn with GL_ONE. The destination colour is attenuated normally with GL_ONE_MINUS_SRC_ALPHA.
The reason for blending the alpha channel in the buffer this way is different:
The formula to combine transparency is
resultTr = sTr * dTr
(I use s and d because of the parallel to OpenGL's source and destination, but as you can see the order doesn't matter.)
Written with opacities (alpha values) this becomes
1 - rA = (1 - sA) * (1 - dA)
<=> rA = 1 - (1 - sA) * (1 - dA)
= 1 - 1 + sA + dA - sA * dA
= sA + (1 - sA) * dA
which is the same as the blend function (source and destination factors) (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) with the default blend equation GL_FUNC_ADD.
As an aside:
The above answers the specific problem from the question, but if you can easily choose the draw order it may in theory be better to draw premultiplied colour into the buffer front-to-back with
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
and otherwise use the same method.
My reasoning behind this is that the graphics card may be able to skip shader execution for regions that are already solid. I haven't tested this though, so it may make no difference in practice.
As Ben Supnik said, the best way to do this is to render the entire scene with separate blend functions for color and alpha. If you are using the classic non-premultiplied blend function, try glBlendFuncSeparateOES(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE) to render your scene to the FBO, and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) to render the FBO to the screen.
It is not 100% accurate, but in most of the cases that will create no unexpected transparency.
Keep in mind that old hardware and some mobile devices (mostly OpenGL ES 1.x devices, like the original iPhone and 3G) do not support separate blend functions. :(