What I want to do in OpenGL using C++ and GLSL:
When the texture has alpha (texture.a != 1.0), the pixel should not be written to the depth buffer (it is still written to the color buffer).
The depth write should occur only when texture.a == 1.0.
Discarding in the shader is not a solution, because then the pixel is not written to the color buffer either.
Any ideas?
#UPDATE:
Example: I've got some UI images rendered by OpenGL. Some of them have alpha in corners.
In scene rendering I have a depth prepass so I can save work by not calculating lighting on occluded pixels.
I want to include the UI images in that prepass as well, but only their completely opaque pixels (alpha = 1.0).
As mentioned in the comments by Andon: "You always have to write a depth value when you write a fragment, there is no way to selectively enable or disable this at the shader level."
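Since depth writes cannot be toggled per fragment, one common workaround (a sketch, not something proposed in the thread; uiPrepassProgram, uiProgram and drawUI() are made-up names) is to split the UI draw into a depth-only prepass that discards non-opaque texels and a colour-only pass that draws everything:
#include <GL/glew.h>   // or any other loader that provides glUseProgram

extern GLuint uiPrepassProgram;  // its fragment shader does: if (texture2D(tex, uv).a < 1.0) discard;
extern GLuint uiProgram;         // the normal UI shader
void drawUI();                   // placeholder: draws the UI quads

void uiDepthPrepassThenColour()
{
    // Pass 1: depth only. Discarding here costs nothing in the colour buffer,
    // because colour writes are masked off; only fully opaque pixels write depth.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glUseProgram(uiPrepassProgram);
    drawUI();

    // Pass 2: colour only. Every texel is written to the colour buffer and the
    // depth buffer is left untouched, so the discard no longer loses colour.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_LEQUAL);   // opaque pixels written in pass 1 still pass the depth test
    glUseProgram(uiProgram);
    drawUI();
    glDepthMask(GL_TRUE);     // restore state
}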
Related
I'm programming a game in Allegro 5 and I'm currently working on my drawing algorithm. After the calculations I end up with two ALLEGRO_BITMAP* objects: one is my "scene" with the terrain drawn on it, the other is a shadow-map.
The scene is simply the game-element textures drawn onto the bitmap.
The shadow-map is a bitmap, rendered previously, that uses black for lit areas and white for shadows.
For drawing those bitmaps to the screen I use al_draw_scaled_bitmap(...) and al_set_blender(ALLEGRO_DEST_MINUS_SRC, ALLEGRO_ONE, ALLEGRO_ONE) to subtract the white elements of the shadow-map from the scene, making the shadows visible.
The problem is that I want all pixels that are white on the shadow-map to be tinted with a world-color (which is recalculated every frame), while black elements stay unmodified (gray means partially tinted).
The final color could be calculated as p.r * c.r + (1 - p.r), with p = the pixel color on the scene and c = the world-color, applied per channel (r, g, b).
Is there any way to achieve a partial tinting effect in Allegro 5 (possibly without massive overdraw)?
I thought of using shaders, but I haven't found a way to use them with my ALLEGRO_BITMAP* objects.
Allegro's blenders are fairly simple, so you would need a shader for this. Write a shader, call al_use_shader with it while you're drawing your shadow-map, and call al_use_shader(NULL) when you're done. The shader can use the default vertex source, which you can get with al_get_default_shader_source, so you only have to write the fragment shader.
Your fragment shader should have a uniform vec4 which is the tint colour. You would also have x, y floats which represent the x and y values of the destination, and a sampler which is the scene bitmap (which should be the target, since you're drawing to it). Sample from the sampler at x, y (x and y should be from 0 to 1; you can get them as xpixel / width, and the same for y), do your calculation with the input colour, and store the result in gl_FragColor. (This assumes you're using GLSL/OpenGL; the same principle applies to Direct3D.)
Check out the default shaders in src/shader_source.inc in the Allegro source code for more info on writing shaders.
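A rough sketch of those steps (not code from the original answer): world_color and the function names are illustrative, al_tex and varying_texcoord are the names Allegro's stock GLSL shaders use, and the actual per-pixel calculation is left as a placeholder for the formula above.
#include <allegro5/allegro5.h>

// Build the shader once: Allegro's default vertex source plus a custom fragment shader.
ALLEGRO_SHADER *create_tint_shader(void)
{
    ALLEGRO_SHADER *shader = al_create_shader(ALLEGRO_SHADER_GLSL);
    al_attach_shader_source(shader, ALLEGRO_VERTEX_SHADER,
        al_get_default_shader_source(ALLEGRO_SHADER_GLSL, ALLEGRO_VERTEX_SHADER));
    al_attach_shader_source(shader, ALLEGRO_PIXEL_SHADER, R"(
        uniform sampler2D al_tex;      /* the shadow-map being drawn */
        uniform vec4 world_color;      /* per-frame world colour     */
        varying vec2 varying_texcoord;
        void main() {
            float shadow = texture2D(al_tex, varying_texcoord).r; /* 0 = lit, 1 = shadowed */
            /* put "your calculation" here; this placeholder just emits the
               world colour, weighted by how shadowed the texel is */
            gl_FragColor = vec4(world_color.rgb, shadow);
        }
    )");
    al_build_shader(shader);
    return shader;
}

// Every frame, while drawing the shadow-map onto the scene:
void draw_tinted_shadow_map(ALLEGRO_SHADER *shader, ALLEGRO_BITMAP *scene,
                            ALLEGRO_BITMAP *shadow_map, float world_color[4])
{
    al_set_target_bitmap(scene);
    al_use_shader(shader);
    al_set_shader_float_vector("world_color", 4, world_color, 1);
    al_draw_scaled_bitmap(shadow_map,
        0, 0, al_get_bitmap_width(shadow_map), al_get_bitmap_height(shadow_map),
        0, 0, al_get_bitmap_width(scene), al_get_bitmap_height(scene), 0);
    al_use_shader(NULL);
}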
I am drawing a map texture and, on top of it, a colorbar texture. Both have alpha channels and I am using blending, set up as:
// Turn on blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
However, the following happens:
The texture on top (the colorbar) imposes its black pixels where it should be transparent, which I don't want to happen. The map texture should appear behind it wherever the colorbar alpha = 0.
Is this related to the blending definitions? How should I change it?
Assuming the texture has an alpha channel and it's transparent in the right places, I suspect the issue is with the rendering order and depth testing.
Let's say you render the scale texture first. It blends correctly with the black background. Then you render the orange texture behind it, but where the scale texture has already been drawn its depth values are nearer, so the orange texture's fragments fail the depth test there and are discarded.
So, make sure you render all your transparent stuff in back to front order, or farthest to nearest.
Without getting into order independent transparency, a common approach to alpha transparency is as follows:
Enable the depth buffer
Render all your opaque geometry
Disable depth writes (glDepthMask)
Enable alpha blending (as in the OP's code)
Render your transparent geometry in farthest to nearest order
For particles you can sometimes get away without sorting and it'll still look OK. Another approach is using the alpha test or using alpha to coverage with a multisample framebuffer.
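In OpenGL calls, that sequence might look like this (a sketch; the two draw functions are placeholders for your own rendering code):
#include <GL/gl.h>

void drawOpaqueGeometry();            // placeholder: your opaque draw calls
void drawTransparentBackToFront();    // placeholder: your sorted transparent draw calls

void renderFrame()
{
    glEnable(GL_DEPTH_TEST);                              // 1. depth buffer enabled
    glDepthMask(GL_TRUE);
    drawOpaqueGeometry();                                 // 2. opaque pass writes depth

    glDepthMask(GL_FALSE);                                // 3. keep testing, stop writing depth
    glEnable(GL_BLEND);                                   // 4. standard alpha blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawTransparentBackToFront();                         // 5. farthest-to-nearest transparent pass

    glDepthMask(GL_TRUE);                                 // restore state for the next frame
    glDisable(GL_BLEND);
}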
Please have a look at this image.
I'd like to show a clipped detail of the texture. The clipping rect can be animated, so I cannot crop the image up front. The position of the image is animated too.
I'd like to show it in front of a background. The background is a color or a texture itself.
I'd like to blend the image and the background, combined, to the destination with an opacity < 1.0.
The real requirement here is to render it in one step, avoiding a temporary buffer. Obviously a (simple) shader is needed for that.
What I already tried to achieve this:
Rendering the background first and then the image each with opacity < 1. The problem here: It lets the background shine through the image. The background is not allowed to be visible where the image itself is opaque.
It works when rendering both into a temporary buffer using opacity = 1 and then rendering this buffer to the destination with opacity < 1, but this needs more (too many) resources.
I can combine the two textures (background, image) in a shader, transforming the texture coordinates with a different transformation matrix for each. The problem here is that I'm not able to clip the image. The rendered geometry is a simple rectangle consisting of two triangles.
Can anybody point me in the right direction?
You're basically trying to render this.
(Image blended with background) blended with destination
The part in parentheses, you can do with a shader, the blending with destination, you have to do with glBlendFunc, since the destination isn't available in the pixel shader.
It sounds like you know how to clip the image in the shader and rotate it by animating texture coordinates.
Let's call your image with the children on it ImageA, and the grey square ImageB.
You want your shader to produce this at each pixel:
outputColor.rgb = ImageA.rgb * ImageA.a + ImageB.rgb * (1.0 - ImageA.a);
This blends your two images exactly as you want. Now set the alpha output from your pixel shader to be your desired alpha (<1.0)
outputColor.a = <some alpha value>
Then, when you render your quad with your shader, set the blend function as follows.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
<draw quad>
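Putting those pieces together, the fragment shader might look roughly like this (a sketch; the sampler, varying and uniform names are made up, and the clip test the asker already has is omitted):
// GLSL fragment shader source, shown here as a C++ string literal.
const char *fragmentSrc = R"(
    uniform sampler2D imageA;     // the clipped, animated image
    uniform sampler2D imageB;     // the background texture (or a 1x1 colour texture)
    uniform float globalOpacity;  // the desired < 1.0 opacity
    varying vec2 texCoordA;
    varying vec2 texCoordB;
    void main()
    {
        vec4 a = texture2D(imageA, texCoordA);   // the clip/discard test would go here
        vec4 b = texture2D(imageB, texCoordB);
        // blend the image over the background inside the shader ...
        vec3 rgb = a.rgb * a.a + b.rgb * (1.0 - a.a);
        // ... and let glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
        // blend the combined result with the destination.
        gl_FragColor = vec4(rgb, globalOpacity);
    }
)";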
I am loading bitmaps with OpenGL to texture a 3d mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels and I need to figure out the best way to
obtain the values of transparency for each pixel
and
render them with the transparency applied
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R, G, B, A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, eg: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
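A minimal sketch of that upload and blend setup (the pixel buffer, width and height are placeholder names; the data is assumed to already be converted to 8 bits per channel RGBA):
#include <GL/gl.h>

// Upload the converted 32-bit RGBA data as a texture.
GLuint uploadRgbaTexture(const unsigned char *rgba, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    return tex;
}

// When rendering, mix the texture's RGB with the background using its alpha.
void enableAlphaBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}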
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (ie: using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. That way, when texture coordinates are interpolated, the sampled colours won't be blended towards the original transparency colour from your BMP.
If your pixmaps are 8-bit single channel, they are either grayscale or use a palette. What you first need to do is convert the pixmap data into RGBA format. For this, allocate a buffer large enough to hold a 4-channel pixmap with the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (lookup table) and put that colour value into the corresponding pixel of the RGBA buffer. Once finished, upload it to OpenGL using glTexImage2D.
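That conversion might look like this (a sketch; the palette layout and the names are assumptions for illustration):
#include <vector>
#include <GL/gl.h>

// Expand 8-bit palette indices into a 32-bit RGBA buffer and upload it.
void uploadPalettedPixmap(const unsigned char *indices,
                          const unsigned char palette[256][4],
                          int width, int height)
{
    std::vector<unsigned char> rgba(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        const unsigned char *entry = palette[indices[i]];  // look the colour up
        rgba[i * 4 + 0] = entry[0];
        rgba[i * 4 + 1] = entry[1];
        rgba[i * 4 + 2] = entry[2];
        rgba[i * 4 + 3] = entry[3];
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
}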
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;     // the 8-bit index pixmap (GL_RED / GL_LUMINANCE)
uniform sampler1D palette_lut; // the palette as a 1D RGBA texture
void main()
{
    // read the palette index, then look the final colour up in the palette
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    vec4 color = texture1D(palette_lut, palette_index);
    gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this applies to objects as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh each and every frame. A method to avoid this is breaking meshes down into convex submeshes (of course, a mesh that's already convex cannot be broken down further). Then use the following method:
enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the back side gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the front side gets rendered)
    render convex_submesh again
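A C++/OpenGL sketch of that loop (Mesh, sortFarToNear() and draw() are placeholder names, not from the answer):
#include <GL/gl.h>
#include <vector>

struct Mesh { void draw() const; };                           // placeholder mesh type
std::vector<Mesh> sortFarToNear(const std::vector<Mesh> &m);  // placeholder sort helper

void drawTransparentMeshes(const std::vector<Mesh> &meshes)
{
    glEnable(GL_CULL_FACE);
    for (const Mesh &submesh : sortFarToNear(meshes)) {
        glCullFace(GL_FRONT);   // cull front faces: only the back side is rendered
        submesh.draw();
        glCullFace(GL_BACK);    // cull back faces: only the front side is rendered
        submesh.draw();         // render the same submesh again
    }
}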
This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
StencilEnable = true,
StencilFunction = CompareFunction.GreaterEqual,
ReferenceStencil = 254,
DepthBufferEnable = true
};
DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;
// Only opaque texels should be drawn.
DrawTexture1();
gd.DepthStencilState = old;
// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to only draw pixels that are within epsilon of full Alpha (1.0, 255) the first time, while not affecting pixels that are within epsilon of full Alpha the second.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on (alpha - val), where val is a number in (0,1) that limits the alpha values drawn.
I have written a more detailed answer here:
Stencil testing in XNA 4