Good way to deal with alpha channels in 8-bit bitmap? - OpenGL - C++

I am loading bitmaps with OpenGL to texture a 3D mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels, and I need to figure out the best way to:
obtain the transparency value for each pixel, and
render the mesh with that transparency applied.
Does anyone have a good example of this? Does OpenGL support this?

First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R, G, B, A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, e.g. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (i.e. using a palette with one colour specified as the transparency mask), then it's probably easiest to convert it to RGBA, setting the alpha value to 0 wherever the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. That way, when texture coordinates are interpolated during sampling, the colours won't blend towards the original transparency colour from your BMP.

If your pixmaps are 8-bit single channel, they are either grayscale or use a palette. The first thing to do is convert the pixmap data into RGBA format. For this, allocate a buffer large enough to hold a 4-channel pixmap of the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (look-up table) and put the resulting colour value into the corresponding pixel of the RGBA buffer. Once finished, upload to OpenGL using glTexImage2D.
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;
uniform sampler1D palette_lut;

void main()
{
    // The 8-bit index arrives as red = index / 255.0; remap it so the
    // lookup lands on the centre of texel `index` in a 256-entry palette.
    float index = texture2D(texture, gl_TexCoord[0].st).r;
    float palette_coord = index * (255.0 / 256.0) + (0.5 / 256.0);
    gl_FragColor = texture1D(palette_lut, palette_coord);
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this applies to objects as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh every frame. One way to avoid this is to break meshes down into convex submeshes (a mesh that is already convex cannot be broken down further). Then use the following method:
Enable face culling
for convex_submesh in sorted(meshes, far to near):
set face culling to front faces (i.e. the backside gets rendered)
render convex_submesh
set face culling to back faces (i.e. the frontside gets rendered)
render convex_submesh again

Related

How can I use openGL to draw a graph based on the values of a texture?

I want to take an RGB texture, convert it to YUV and draw a graph based on the UV components of each pixel, essentially, a vectorscope (https://en.wikipedia.org/wiki/Vectorscope).
I have no problem getting openGL to convert the texture to YUV in a fragment shader and even to draw the texture itself (even if it looks goofy because it is in YUV color space), but beyond that I am at a bit of a loss. Since I'm basically drawing a line from one UV coord to the next, using a fragment shader seems horribly inefficient (lots of discarded fragments).
I just don't know enough about what I can do with openGL to know what my next step is. I did do a CPU rendered version that I discarded since it simply wasn't fast enough (100ms for a single 1080p frame). My source image updates at up to 60fps.
Just for clarity, I am currently using openTK. Any help nudging me in a workable direction is very appreciated.
Assuming that the image you want a graph of is the texture, I suggest two steps.
First step, convert the RGB texture to YUV which you've done. Render this to an offscreen framebuffer/texture target instead of a window so you have the YUV texture map for the next step.
Second step, draw a line W x H times, i.e. once for each pixel in the texture. Use instanced rendering to draw one line N times rather than actually creating geometry for all of them; the coordinates for the ends of the line can be dummies, since the vertex shader computes the real positions.
In the vertex shader, gl_InstanceID will be the number of this line from 0 to N - 1. Convert to 2D texture coords for the pixel in the YUV texture that you want to graph. I've never written a vectorscope myself, but presumably you know how to convert this YUV color you get from the texture into 2D/3D coords.

Blending sprite with pre-existing texture

I am just learning the intricacies of OpenGL. What I would like to do is render a sprite onto a pre-existing texture. The texture will consist of terrain with some points alpha=1 and some points alpha=0. I would like the sprite to appear on a pixel of the texture if and only if the corresponding texture's pixel's alpha = 0. That is, for each pixel of the sprite, the output colour is:
Color of the sprite, if terrain alpha = 0.
Color of the terrain, if terrain alpha = 1.
Is this possible to do with a blending function? If not, how should I do it?
This is the exact opposite of the traditional blending function. The usual blend function is a linear interpolation between the source and destination colors, based on the source alpha.
What you want is a linear interpolation between the source and destination colors, based on the destination alpha. But you also want to invert the usual meaning; a destination alpha of 1 means that the destination color should be taken, not the source color.
That's pretty easy.
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
However, the above assumes that your sprites do not themselves have some form of inherent transparency. And most sprites do. That is, if the sprite alpha is 0 at some pixel, you don't want to overwrite the terrain color, no matter what the terrain's alpha is.
That makes this whole process excessively difficult. Pre-multiplying the alpha will not save you either, since black will just as easily overwrite the color in the terrain if there is no terrain color there.
In effect, you would need to do a linear interpolation based on neither the source nor the destination, but on a combination of them. I think multiplication of the two (src-alpha * (1 - dst-alpha)) would do a good job.
This is not possible with OpenGL's standard blending system. You would need to employ some form of programmatic blending technique. This typically involves read/modify/write operations using NV/ARB_texture_barrier or otherwise ping-ponging between bound textures.

OpenGL blending: texture on top overlaps its pixels which should be transparent (alpha = 0)

I am drawing a map texture and, on top of it, a colorbar texture. Both have alpha channels and I am using blending, set up as
// Turn on blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
However, the following happens:
The texture on top (colorbar) with alpha channel imposes its black pixels, which I don't want to happen. The map texture should appear behind where the colorbar alpha = 0.
Is this related to the blending definitions? How should I change it?
Assuming the texture has an alpha channel and it's transparent in the right places, I suspect the issue is with the rendering order and depth testing.
Let's say you render the scale texture first. It blends correctly with a black background. Then you render the orange texture behind it, but the pixels from the scale texture are closer in depth and cause the orange texture's fragments there to be discarded by the depth test.
So, make sure you render all your transparent stuff in back to front order, or farthest to nearest.
Without getting into order independent transparency, a common approach to alpha transparency is as follows:
Enable the depth buffer
Render all your opaque geometry
Disable depth writes (glDepthMask)
Enable alpha blending (as in the OP's code)
Render your transparent geometry in farthest to nearest order
For particles you can sometimes get away without sorting and it'll still look OK. Another approach is using the alpha test or using alpha to coverage with a multisample framebuffer.

Manually change color of framebuffer

I have a scene containing thousands of little planes. The setup is such that the planes can occlude each other in depth.
The planes are red and green. Now I want to do the following in a shader:
Render all the planes. Whenever a plane is red, subtract 0.5 from the currently bound framebuffer; if it is green, add 0.5 to the framebuffer.
That way, for each pixel of the framebuffer's texture, I should be able to tell: < 0 => more red planes at this pixel, = 0 => the same number of red and green planes, and > 0 => more green planes.
This is just a very rough simplification of what I need to do, but the core of it is to change a pixel of a texture/framebuffer depending on the values of the planes influencing the current fragment. This should happen in the fragment shader.
So how do I change the values of the framebuffer using GLSL? Using gl_FragColor just sets a new color, but does not let me manipulate the color already there.
PS: I am also going to deactivate depth testing.
The fragment shader cannot read the (old) value from the framebuffer; it just generates a new value to put into the framebuffer. When multiple fragments output to the same pixel (overlapping planes in your example), how those value combine is controlled by the BLEND function of the pipeline.
What you appear to want can be done by setting a custom blending function. The GL_FUNC_ADD blending function allows adding the old value and new value (with weights); what you want is probably something like:
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);
This will simply add each output pixel to the old pixel in the framebuffer (in all four channels; it's not clear from your question whether you're using a 1-channel, 3-channel, or 4-channel framebuffer). Then, you just have your fragment shader output 0.5 or -0.5 depending on the plane's colour. For this to make sense, you need a framebuffer format that supports values outside the normal [0..1] range, such as GL_RGBA32F or GL_R32F.

How to render grayscale texture without using fragment shaders? (OpenGL)

Is it possible to draw an RGB texture as grayscale without using fragment shaders, using only fixed-pipeline OpenGL?
Otherwise I'd have to create two versions of texture, one in color and one in black and white.
I don't know how to do this with an RGB texture and the fixed function pipeline.
If you create the texture from RGB source data but specify the internal format as GL_LUMINANCE, OpenGL will convert the color data into greyscale for you. Use the standard white material and MODULATE mode.
Hope this helps.
No. Texture environment combiners are not capable of performing a dot product without doing the scale/bias operation. That is, it always pretends that [0, 1] values are encoded as [-1, 1] values. Since you can't turn that off, you can't do a proper dot product.