Manually change color of framebuffer - OpenGL

I have a scene consisting of thousands of little planes. The setup is such that the planes can occlude each other in depth.
The planes are red and green. Now I want to do the following in a shader:
Render all the planes. Whenever a plane is red, subtract 0.5 from the currently bound framebuffer, and if the plane is green, add 0.5 to the framebuffer.
That way, for each pixel in the framebuffer's texture I should be able to tell: < 0 => more red planes at this pixel, = 0 => equal numbers of red and green planes, > 0 => more green planes, and also by how much they differ.
This is a very rough simplification of what I need to do, but the core of it is changing a pixel of a texture/framebuffer depending on the values of the planes in the scene that influence the current fragment. This should happen in the fragment shader.
So how do I change the values already in the framebuffer using GLSL? Using gl_FragColor just sets a new color, but does not manipulate the color written before.
PS: I am also going to deactivate depth testing.

The fragment shader cannot read the (old) value from the framebuffer; it just generates a new value to put into the framebuffer. When multiple fragments output to the same pixel (overlapping planes in your example), how those values combine is controlled by the blend function of the pipeline.
What you appear to want can be done by setting a custom blending function. The GL_FUNC_ADD blend equation adds the old value and the new value (with weights); what you want is probably something like:
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);
This will simply add each output pixel to the old pixel in the framebuffer (in all four channels; it's not clear from your question whether you're using a 1-channel, 3-channel, or 4-channel framebuffer). Then you just have your fragment shader output 0.5 or -0.5 as appropriate. For this to make sense, you need a framebuffer format that supports values outside the normal [0..1] range, such as GL_RGBA32F or GL_R32F.
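A minimal sketch of that setup might look like the following, assuming a GL 3.3+ context; the names fbo, colorTex, width, height and the isGreen uniform are illustrative, not something from the question:
// single-channel float colour buffer so sums below 0 and above 1 survive
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
             GL_RED, GL_FLOAT, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// accumulate: every fragment is simply added to what is already stored
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);

// fragment shader: +0.5 for green planes, -0.5 for red ones
const char *fragSrc = R"(
    #version 330 core
    uniform bool isGreen;
    out float accum;
    void main() { accum = isGreen ? 0.5 : -0.5; }
)";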

Related

OpenGL - tex coord in fragment shader outside of specified range

I'm trying to draw a rectangle with a texture in OpenGL. I'm simply trying to render an entire .jpg image, so I specify the texture coordinates as [0, 0] to [1, 1] in the vertex buffer. I expect all the interpolated texture coordinates in the fragment shader to be between [0, 0] and [1, 1], however, depending on where the texture is drawn, I sometimes get a texture coordinate that is less than 0 (I know this is the case because I tried outputting red from the fragment shader if the tex coord is less than 0).
How come I get an interpolated value outside of the specified range? I currently visualize vertices/fragments like the following image (https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing):
If I imagine a rectangle instead, then if the pixel sample is inside the rectangle, then the interpolated texture coord must be at least 0, since the very left of the rectangle represents 0, right? So how do I end up with a value less than 0?
Edit: after some basic testing, it looks like the fragment shader is called if a shape simply intersects that pixel, not if the pixel sample point is inside the shape. I tested this by placing the start of the rectangle slightly before and slightly after the middle of a pixel - when it starts slightly before the middle of the pixel, I don't get a negative value, but if I place it slightly after the middle, then I do get a negative value. This contradicts what the website I linked to said - perhaps it's driver-dependent?
Edit: the previous test I did was with multisampling on. If I turn multisampling off, then even if the shape is past the middle, I don't get a negative value...
Turns out I just needed to keep reading the article I linked:
This is where multisampling becomes interesting. We determined that 2 subsamples were covered by the triangle so the next step is to determine a color for this specific pixel. Our initial guess would be that we run the fragment shader for each covered subsample and later average the colors of each subsample per pixel. In this case we'd run the fragment shader twice on the interpolated vertex data at each subsample and store the resulting color in those sample points. This is (fortunately) not how it works, because this basically means we need to run a lot more fragment shaders than without multisampling, drastically reducing performance.
How MSAA really works is that the fragment shader is only run once per pixel (for each primitive) regardless of how many subsamples the triangle covers. The fragment shader is run with the vertex data interpolated to the center of the pixel and the resulting color is then stored inside each of the covered subsamples. Once the color buffer's subsamples are filled with all the colors of the primitives we've rendered, all these colors are then averaged per pixel resulting in a single color per pixel. Because only two of the 4 samples were covered in the previous image, the color of the pixel was averaged with the triangle's color and the color stored at the other 2 sample points (in this case: the clear color) resulting in a light blue-ish color.
So I was getting a negative value because the fragment shader was run for a pixel that had at least one of its sub-sample points covered by the shape, while the shape itself started slightly after the mid-point of the pixel; since "the fragment shader is run with the vertex data interpolated to the center of the pixel", the interpolated texture coordinate came out negative.
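For reference, the "output red when the coordinate goes negative" test mentioned above can be written roughly like this (a sketch; the in/out and uniform names are just placeholders):
const char *debugFragSrc = R"(
    #version 330 core
    in vec2 texCoord;
    uniform sampler2D tex;
    out vec4 fragColor;
    void main() {
        if (texCoord.x < 0.0 || texCoord.y < 0.0)
            fragColor = vec4(1.0, 0.0, 0.0, 1.0); // flag out-of-range coordinate
        else
            fragColor = texture(tex, texCoord);
    }
)";
If the extrapolation itself is a problem, the centroid interpolation qualifier (centroid in vec2 texCoord;) is the usual way to force the attribute to be sampled at a covered location inside the primitive instead of at the pixel center.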

Is there a way to partially tint bitmaps in Allegro?

I'm programming a game in Allegro 5 and I'm currently working on my drawing algorithm. After calculation I end up with two ALLEGRO_BITMAP*-objects where one of them is my "scene" with the terrain drawn on it and the other one is a shadow-map.
The scene are simply textures of game-elements drawn on the bitmap.
The shadow-map is a bitmap, rendered previously, that uses black for light and white for shadow.
For drawing those bitmaps on the screen I use al_draw_scaled_bitmap(...) and al_set_blender(ALLEGRO_DEST_MINUS_SRC, ALLEGRO_ONE, ALLEGRO_ONE) to subtract the white elements of the shadow-map from the scene to make the shadows visible.
The problem is that I want all pixels that are white on the shadow-map to be tinted with a world-color, which is calculated every frame beforehand, and all black elements left unmodified (gray means partially tinted).
The final color could be calculated as p.r * c.r + (1 - p.r), with p the pixel color on the scene and c the world color, applied to the red, green and blue channels alike.
Is there any way to achieve a partial tinting effect in Allegro 5 (possibly without massive overdraw)?
I thought of using shaders, but I haven't found a solution to implement these with my ALLEGRO_BITMAP*-objects.
Allegro's blenders are fairly simple, so you would need a shader for this. Write a shader, call al_use_shader with it when you're drawing your shadow map, and al_use_shader(NULL) when you're done. The shader can use the default vertex source, which you can get with al_get_default_shader_source, so you only have to write the fragment shader. Your fragment shader should have a uniform vec4 which is the tint colour. It would also have x, y floats representing the x and y values of the destination, and a sampler which is the scene bitmap (which should be the target, since you're drawing to it). You'd sample from the sampler at x, y (x and y should be from 0 to 1; you can get them via xpixel / width, and likewise for y), do your calculation with the input colour, and store the result in gl_FragColor (this assumes you're using GLSL/OpenGL; the same principle applies to Direct3D). Check out the default shaders in src/shader_source.inc in the Allegro source code for more info on writing shaders.
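A hedged sketch of that idea follows; the uniform name world_color, the tint formula and the multiplicative blender are my assumptions, Allegro's GLSL built-ins such as al_tex and varying_texcoord may differ between versions, and unlike the description above this samples the shadow map being drawn rather than the scene itself:
ALLEGRO_SHADER *shader = al_create_shader(ALLEGRO_SHADER_GLSL);
al_attach_shader_source(shader, ALLEGRO_VERTEX_SHADER,
    al_get_default_shader_source(ALLEGRO_SHADER_GLSL, ALLEGRO_VERTEX_SHADER));
al_attach_shader_source(shader, ALLEGRO_PIXEL_SHADER,
    "uniform sampler2D al_tex;\n"        /* the shadow-map bitmap being drawn */
    "uniform vec4 world_color;\n"        /* tint calculated each frame */
    "varying vec2 varying_texcoord;\n"
    "void main() {\n"
    "    vec4 s = texture2D(al_tex, varying_texcoord);\n"
    "    /* white (s.r = 1) -> full tint, black (s.r = 0) -> unchanged */\n"
    "    gl_FragColor = vec4(mix(vec3(1.0), world_color.rgb, s.r), 1.0);\n"
    "}\n");
al_build_shader(shader);

/* while the scene bitmap is the current target: */
al_set_blender(ALLEGRO_ADD, ALLEGRO_DEST_COLOR, ALLEGRO_ZERO); /* dest *= src */
al_use_shader(shader);
float wc[4] = { world_r, world_g, world_b, 1.0f };  /* your per-frame world colour */
al_set_shader_float_vector("world_color", 4, wc, 1);
al_draw_scaled_bitmap(shadow_map, /* ... same arguments as before ... */);
al_use_shader(NULL);
Note that Allegro's shader API generally requires the display to have been created with the ALLEGRO_PROGRAMMABLE_PIPELINE flag.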

Multisampling with triangles sharing the edge

According to my understanding:
Multisampling:
Consider a primitive rendered with 4x multisampling where only 1 sample is covered by the primitive. OpenGL calculates the coverage percentage (based on how many samples passed the test), i.e. 25%, blends it with the background color, and we get a blended color for that pixel which is copied to all 4 samples when the fragment shader is executed.
Antialiasing:
OpenGL calculates the amount of a pixel that is covered by a primitive (point, line, or triangle) and uses it to generate an alpha value for each fragment. This alpha value is multiplied by the alpha value of the fragment produced by your shader and so has an effect on blending when either the source or destination blend factor includes the source alpha term.
Please correct me if this is wrong.
In the case of multisampling we get a 50-50 blend, but with anti-aliasing we get a 25-25-50 blend; how is that?
In the case of multisampling, when we render one triangle it will update the FBO with the 25-75 pixel color, and when the second triangle is rendered it will blend with the background color, so we should get the background color instead of 50-50, but this is not the case.
Here by multisampling I mean enabling it with glEnable(GL_MULTISAMPLE), and by antialiasing I mean enabling it with GL_POLYGON_SMOOTH.
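For reference, the two modes being compared are typically enabled like this (a sketch; the GL_SRC_ALPHA_SATURATE blend setup is the classic recipe for polygon smoothing and assumes geometry is drawn front to back):
/* (a) multisampling: coverage is resolved per sample by the hardware */
glEnable(GL_MULTISAMPLE);

/* (b) polygon-smooth antialiasing: coverage becomes the fragment's alpha,
   so it only has a visible effect when blending uses source alpha */
glDisable(GL_MULTISAMPLE);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);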

Why do fragments not necessarily correspond to pixels one-to-one?

Here is a good explanation what is a fragment:
https://gamedev.stackexchange.com/questions/8977/what-is-a-fragment/8981#8981
But here (and not only here) I have read that "I want to stress the fact that one pixel is not necessarily one fragment, multiple fragment can be combined to make one pixel....". But I don't clearly understand what fragments are and why they do not necessarily correspond to pixels one-to-one.
EDIT: When multiple fragments form one pixel, is it only because they overlap after projection, or is it because the pixel is bigger than the fragment, so that you need to put multiple fragments with the same color next to each other to form a pixel?
A fragment has a location that can be queried via its built-in gl_FragCoord variable where the x and y component directly correspond to pixels on your screen. So you could say that a fragment indeed corresponds to a pixel.
However, a fragment outputs a color and stores that color in a color buffer at its coordinates. This does not mean this color is the actual pixel color that is shown to the viewer.
Because a fragment shader is run for each object, it could happen that other objects are drawn after your first object that also output a fragment at the same screen coordinate. When taking depth-testing, stencil testing and blending into account, the resulting color value in the color buffer might get overwritten and/or merged with new colors.
Think of it like this:
Object 1 gets drawn and draws the color purple at screen coordinate (200, 300);
Object 2 gets drawn and draws the color red at the same coordinate, overwriting it.
Object 3 (which is blue) has a transparency of 50% at the same coordinate, and merges colors.
The final fragment color output is then a 50% combination of red and blue.
The final resulting pixel could then be a color from a single fragment shader run, a color that is overwritten by many other fragment shader runs, or a combination of colors via blending.
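The merging step in the example above corresponds to standard alpha blending, which (assuming this configuration) computes result = src.rgb * src.a + dst.rgb * (1 - src.a), i.e. 0.5 * blue + 0.5 * red here:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);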
A fragment is not equal to a pixel when multisample anti-aliasing (MSAA) or any of the other modes that change the ratio of rendered pixels to screen pixels is activated.
In the case of 4x MSAA, each screen pixel will be represented by 4 (2x2) fragments in the display buffer. The fragment shader for a particular polygon will only be run once for the screen pixel no matter how many of the fragments are covered by the polygon. Since a polygon may not cover all the fragments within a pixel it will only store color into the fragments it covers. This is repeated for every polygon that may cover one or more of the fragments. Then at the final display all 4 fragments are blended to produce the final screen pixel.

Good way to deal with alpha channels in 8-bit bitmap? - OpenGL - C++

I am loading bitmaps with OpenGL to texture a 3d mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels and I need to figure out the best way to
obtain the values of transparency for each pixel
and
render them with the transparency applied
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R,G,B,A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, eg: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (ie: using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture-pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. That way, when the texture is sampled with filtering, colours won't be blended towards the original transparency colour from your BMP.
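A rough sketch of that 8-bit-to-RGBA conversion (pixels8, palette and transparentIndex are assumptions about how your loader stores the BMP data):
#include <vector>

std::vector<unsigned char> rgba(width * height * 4);
for (int i = 0; i < width * height; ++i) {
    unsigned char index = pixels8[i];        // palette index from the 8-bit BMP
    rgba[i * 4 + 0] = palette[index].r;
    rgba[i * 4 + 1] = palette[index].g;
    rgba[i * 4 + 2] = palette[index].b;
    // fully transparent where the entry matches the transparency mask
    rgba[i * 4 + 3] = (index == transparentIndex) ? 0 : 255;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

// then, when rendering:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);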
If your pixmaps are 8-bit single-channel, they are either grayscale or use a palette. What you first need to do is convert the pixmap data into RGBA format. For this you allocate a buffer large enough to hold a 4-channel pixmap with the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (look-up table) and put that colour value into the corresponding pixel of the RGBA buffer. Once finished, upload to OpenGL using glTexImage2D.
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;      // 8-bit image, index values in the red channel
uniform sampler1D palette_lut;  // 256-entry RGBA palette

void main()
{
    // read the palette index (normalized to [0..1])
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    // look up the actual colour in the palette
    vec4 color = texture1D(palette_lut, palette_index);
    gl_FragColor = color;
}
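The matching upload code might look roughly like this (texture unit numbers and variable names are illustrative; GL_LUMINANCE needs the compatibility profile, on a core profile you would use GL_RED instead):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, indexTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels8);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_1D, paletteTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, paletteRGBA);

glUniform1i(glGetUniformLocation(program, "texture"), 0);
glUniform1i(glGetUniformLocation(program, "palette_lut"), 1);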
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this affects objects as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh for each and every frame you render. A way to avoid this is to break down meshes into convex submeshes (of course a mesh that is already convex cannot be broken down further). Then use the following method:
Enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the back side gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the front side gets rendered)
    render convex_submesh again
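In plain OpenGL that loop might look roughly like this (drawSubmesh, the Submesh type and the far-to-near sorting are assumed to exist elsewhere in your code):
glEnable(GL_CULL_FACE);
for (Submesh &submesh : sortedFarToNear(meshes)) {
    glCullFace(GL_FRONT);   // first pass: only the back side is rendered
    drawSubmesh(submesh);
    glCullFace(GL_BACK);    // second pass: only the front side is rendered
    drawSubmesh(submesh);
}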