Camera shader to change the target render texture alpha value - glsl

Is there a pre-made Unity shader so that my camera will render different alpha values depending on the color of what it sees?
For example, if I had a gray square with a black square inside, the camera would render into the texture the current texture color with 50% alpha where the gray square is, and the current texture color with 100% alpha where the black one is.
If there isn't one already made, I'd at least like to confirm that this is the correct approach, so that I can try making the shader myself.
Thanks
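In case a sketch helps to judge the approach: a fragment shader doing this mapping could be as small as the following GLSL (all names are hypothetical; alpha is derived from the darkness of the sampled colour, so black gives alpha 1.0 and mid-gray roughly 0.5).
uniform sampler2D source_tex;  // whatever the camera sees (hypothetical name)
varying vec2 uv;
void main()
{
    vec4 c = texture2D(source_tex, uv);
    // perceptual luminance; darker colours get a higher alpha
    float luma = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(c.rgb, 1.0 - luma);  // black -> 1.0, mid-gray -> ~0.5
}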

Related

Is there a way to partially tint bitmaps in Allegro?

I'm programming a game in Allegro 5 and I'm currently working on my drawing algorithm. After calculation I end up with two ALLEGRO_BITMAP* objects: one is my "scene" with the terrain drawn on it, and the other is a shadow-map.
The scene is simply the textures of game elements drawn onto the bitmap.
The shadow-map is a bitmap, rendered in a previous pass, that uses black for light and white for shadow.
To draw those bitmaps on the screen I use al_draw_scaled_bitmap(...) with al_set_blender(ALLEGRO_DEST_MINUS_SRC, ALLEGRO_ONE, ALLEGRO_ONE) to subtract the white elements of the shadow-map from the scene, making the shadows visible.
The problem I have is that I want all pixels that are white on the shadow-map to be tinted with a world-color, which is recalculated every frame, while all black elements stay unmodified (gray means partially tinted).
The final color could be calculated as p.r * c.r + (1 - p.r), with p = the pixel color on the scene and c = the world color, applied likewise to the green and blue channels.
Is there any way to achieve a partial tinting effect in Allegro 5 (possibly without massive overdraw)?
I thought of using shaders, but I haven't found a way to use them with my ALLEGRO_BITMAP* objects.
Allegro's blenders are fairly simple, so you would need a shader for this. Simply write a shader; when you're drawing your shadow map, activate it with al_use_shader, and call al_use_shader(NULL) when you're done. The shader can use the default vertex source, which you can get with al_get_default_shader_source, so you only have to write the fragment shader.
Your fragment shader should have a uniform vec4 which is the tint colour. It would also have x, y floats which represent the x and y values of the destination, and a sampler which is the scene bitmap (which should be the target, since you're drawing to it). You'd sample from the sampler at x, y (x and y should be from 0 to 1; you can get these with xpixel / width, and likewise for y), do your calculation with the input colour, and store the result in gl_FragColor. (This assumes you're using GLSL (OpenGL); the same principle applies to Direct3D.)
Check out the default shaders in src/shader_source.inc in the Allegro source code for more info on writing shaders.
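For concreteness, a minimal GLSL fragment shader along those lines might look like the sketch below. It assumes Allegro's default varying and sampler names (varying_texcoord, al_tex), a made-up uniform tint for the per-frame world colour, a made-up second sampler scene_tex for the scene bitmap, and that the shadow map is drawn 1:1 over the scene so one texture coordinate addresses both.
uniform sampler2D al_tex;      // the shadow map being drawn (Allegro's default sampler)
uniform sampler2D scene_tex;   // the scene bitmap (hypothetical second sampler)
uniform vec4 tint;             // world colour, updated every frame (hypothetical uniform)
varying vec2 varying_texcoord;
void main()
{
    float p = texture2D(al_tex, varying_texcoord).r;      // 0 = light, 1 = shadow
    vec4 scene = texture2D(scene_tex, varying_texcoord);  // assumes a 1:1 draw over the scene
    // p * c + (1 - p): lit areas keep the scene colour, white shadow areas get tinted
    gl_FragColor = vec4(scene.rgb * (p * tint.rgb + vec3(1.0 - p)), scene.a);
}
While the shader is active you would upload the colour each frame with al_set_shader_float_vector("tint", 4, values, 1) and bind the scene bitmap with al_set_shader_sampler("scene_tex", scene, 1).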

Multisampling with triangles sharing the edge

According to my understanding:
Multisampling:
Consider a primitive rendered with 4x multisampling where only 1 sample is covered by the primitive. The coverage percentage (based on how many samples passed the test), i.e. 25%, is calculated and blended with the background color, and the resulting blended color for that pixel is copied to all 4 samples when the fragment shader is executed.
Antialiasing:
OpenGL calculates the amount of a pixel that is covered by a primitive (point, line, or triangle) and uses it to generate an alpha value for each fragment. This alpha value is multiplied by the alpha value of the fragment produced by your shader and so has an effect on blending when either the source or destination blend factor includes the source alpha term.
Please correct me if this is wrong.
In the case of multisampling we get a 50-50 blend, but with antialiasing we get a 25-25-50 blend; how is that?
In the case of multisampling, when we render one triangle it will update the FBO with a 25-75 pixel color, and when the second triangle is rendered it will blend with the background color, so we should get the background color instead of 50-50, but this is not the case.
Here by multisampling I mean enabling it with glEnable(GL_MULTISAMPLE), and by antialiasing I mean enabling it with GL_POLYGON_SMOOTH.
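For concreteness, here is the arithmetic behind those numbers, assuming two triangles A and B that each cover half the pixel over a background bg, with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) active in the smoothing case:
With GL_POLYGON_SMOOTH, coverage becomes the fragment's alpha and each triangle is blended in turn: after A the pixel holds 0.5*A + 0.5*bg, and after B it holds 0.5*B + 0.5*(0.5*A + 0.5*bg) = 0.5*B + 0.25*A + 0.25*bg, which is the 25-25-50 result.
With GL_MULTISAMPLE, no alpha is derived from coverage; each of the 4 samples simply stores the colour of whichever triangle covers it, so the resolve averages 2 samples of A and 2 of B into 0.5*A + 0.5*B, with no background term, which is the 50-50 result.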

UV Texture oddity in textureUnits, openscenegraph

I am having issues with textures. I have the model open as a .osg, so I will refer to it as such here. I have one texture in textureUnit 0 which acts as a base texture. Then I have a second texture in textureUnit 1 which acts as a label of sorts. I apply an RGBA texture there, which should then be transparent on the model in openscenegraph. However, I get this:
The gray areas are the base texture. The darker areas are where the UV coordinates move off the edge of the texture itself. I can't seem to remove the dark areas. Any ideas?
You probably need to set the edge clamping mode -- the dark is probably some of the texture border color creeping in. Try setting GL_CLAMP_TO_EDGE as your texture wrap mode.
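In raw OpenGL terms that would be something like the following (label_texture is a hypothetical id for the texture in unit 1; in OpenSceneGraph the equivalent is osg::Texture::setWrap with osg::Texture::CLAMP_TO_EDGE):
glActiveTexture(GL_TEXTURE1);                 /* the label texture lives in unit 1 */
glBindTexture(GL_TEXTURE_2D, label_texture);  /* hypothetical texture id */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);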

Blend a clipped image together with a background into destination in one step

Please have a look at this image.
I'd like to show a clipped detail of the texture; the clipping rect can be animated, so I cannot crop the image up front. The position of the image is animated too.
I'd like to show it in front of a background. The background is a color or a texture itself.
I'd like to blend both the image and the background, combined, with opacity < 1.0 to the destination.
The real requirement here is to render it in one step, avoiding a temporary buffer. Obviously a (simple) shader is needed for that.
What I already tried to achieve this:
Rendering the background first and then the image, each with opacity < 1. The problem here: it lets the background shine through the image, but the background must not be visible where the image itself is opaque.
It works when rendering both into a temporary buffer using opacity = 1 and then rendering this buffer to the destination with opacity < 1, but this needs more (too many) resources.
I can combine the two textures (background, image) in a shader and transform the texture coordinates, each with a different transformation matrix. The problem here is that I'm not able to clip the image. The rendered geometry is a simple rectangle consisting of two triangles.
Can anybody point me in the right direction?
You're basically trying to render this:
(Image blended with background) blended with destination
The part in parentheses you can do with a shader; the blending with the destination you have to do with glBlendFunc, since the destination isn't available in the pixel shader.
It sounds like you know how to clip the image in the shader and rotate it by animating texture coordinates.
Let's call your image with the children on it ImageA, and the grey square ImageB.
You want your shader to produce this at each pixel:
outputColor.rgb = ImageA.rgb * ImageA.a + ImageB.rgb * (1.0 - ImageA.a);
This blends your two images exactly as you want. Now set the alpha output from your pixel shader to your desired alpha (< 1.0):
outputColor.a = <some alpha value>
Then, when you render your quad with your shader, set the blend function as follows.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
<draw quad>
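Putting the pieces together, the fragment shader could look like this minimal GLSL sketch (uniform and varying names are made up; coordA carries the animated, possibly out-of-range coordinates for ImageA, and the clip treats ImageA as fully transparent outside [0, 1]):
uniform sampler2D imageA;   // the clipped, animated picture
uniform sampler2D imageB;   // the background colour/texture
uniform float opacity;      // the overall alpha (< 1.0)
varying vec2 coordA;        // animated coordinates for imageA
varying vec2 coordB;        // coordinates for imageB
void main()
{
    vec4 a = texture2D(imageA, coordA);
    vec4 b = texture2D(imageB, coordB);
    // clip: outside the rect, imageA contributes nothing
    if (any(lessThan(coordA, vec2(0.0))) || any(greaterThan(coordA, vec2(1.0))))
        a = vec4(0.0);
    vec3 rgb = a.rgb * a.a + b.rgb * (1.0 - a.a);
    gl_FragColor = vec4(rgb, opacity);
}
Rendered with the blend function above, this produces (image blended with background) blended with destination in a single pass, with no temporary buffer.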

Rendering a texture with alpha test and blend it with a color and the scene behind

I would like to render this image (alpha = 0 on white)
Onto a square (actually a quad strip, but I guess it doesn't matter), blending it with a color; the blending should be performed on the whole surface, that is, also where the texture's alpha equals zero.
Then the result should in turn be blended onto the scene.
How can I achieve that?
I obviously don't want all the code, just the key piece, like the blend function (I suppose).
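For what it's worth, the key piece would presumably follow the same pattern as the previous answer: do the colour blend in the shader over the whole quad, then let glBlendFunc composite the result onto the scene. A rough GLSL sketch under one reading of the requirement (all names hypothetical):
uniform sampler2D tex;      // the image, alpha = 0 on white
uniform vec4 blendColor;    // the colour blended over the whole surface
uniform float sceneAlpha;   // overall alpha for blending onto the scene
varying vec2 uv;
void main()
{
    vec4 t = texture2D(tex, uv);
    // the colour covers the whole surface, including where t.a == 0
    vec3 rgb = mix(blendColor.rgb, t.rgb, t.a);
    gl_FragColor = vec4(rgb, sceneAlpha);  // draw with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
}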