OpenGL alpha blending suddenly stops - c++

I'm using OpenGL to draw a screen-sized quad at the same position with low alpha (less than 0.1) on every frame, without calling glClear(GL_COLOR_BUFFER_BIT) between frames. This way the quad should progressively damp the visibility of the previous frames' drawings.
However, the damping effect stops after a few seconds. If I use an alpha value no lower than 0.1 for the quad, it works as expected. It seems to me that the OpenGL blending equation fails after a number of iterations (higher alpha values need fewer iterations to accumulate to 1, so with alpha >= 0.1 the problem doesn't occur). The lower limit of alpha in 8 bits is about 0.0039, i.e. 1/255, so alpha 0.01 should be fine.
I have tried several blending settings, using the following render method:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClear(GL_DEPTH_BUFFER_BIT);
// draw black quad with translucency (using glDrawArrays)
And the simple fragment shader:
#version 450
uniform vec4 Color;
out vec4 FragColor;
void main()
{
    FragColor = Color;
}
How could I fix this issue?

It seems to me, that the OpenGL blending equation fails after a number of iterations (higher alpha values need less iteration to accumulate to 1, so if alpha >= 0.1 the problem doesn't occur). The lower limit of the alpha in 8bit is about 0.0039 (1/255), so alpha 0.01 should be fine.
Your reasoning is wrong here. If you draw a black quad with an alpha of 0.01 and the blending setup you described, you basically get new_color = 0.99 * old_color with every iteration. As a function of the iteration number i, that is new_color(i) = original_color * pow(0.99, i). With unlimited precision, this would approach 0.
But as you already noted, the precision is not unlimited. You get a requantization with every step. So if your new color value does not fall below the threshold for the next lower integer value, it will stay the same as before. Now if we call x the unnormalized color value in the range [0, 255], and we assume that the quantization is done by the usual rounding rules, the difference must be at least 0.5 to reach a different value: x - x * (1 - alpha) > 0.5, or simply x > 0.5 / alpha.
So in your case you get x > 50, and that is where the blending will "stop" (everything at or below that stays as it was at the beginning, so you get a "shadow" of the dark parts). For example, at x = 50 with alpha = 0.01 the blended value is 49.5, which rounds straight back to 50. For an alpha of 0.1, it will end at x = 5, which is probably close enough to zero that you didn't notice it (with your particular display and settings).
EDIT
Let me recommend a strategy that will work: you must avoid the iterative computation (at least with non-floating-point framebuffers). You want to achieve a fade-to-black effect, so you could render your original content into a texture and render that over and over again, blending it towards black by varying the alpha value from frame to frame, so that alpha becomes a function of time (or of the frame number). A linear transition would probably make the most sense, but you could even use a nonlinear function to reproduce the slowdown of the fadeout that your original approach would have shown with unlimited precision.
Note that you do not need blending at all for that; you can simply multiply the color value from the texture with some uniform "alpha" value in the fragment shader.
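For example, a minimal sketch of such a fragment shader, assuming the original content was rendered once into a texture bound to a sceneTex uniform and that a fade uniform steps from 1.0 towards 0.0 over the frames (both names are placeholders):
#version 450
uniform sampler2D sceneTex; // the original frame, rendered once to a texture
uniform float fade;         // 1.0 on the first frame, decreasing per frame
in vec2 TexCoord;
out vec4 FragColor;
void main()
{
    // no blending and no feedback loop: just scale the stored color
    FragColor = vec4(texture(sceneTex, TexCoord).rgb * fade, 1.0);
}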

Related

OpenGL weird line linear filtering

I have programmed the following shader for testing how linear filtering works in OpenGL.
Here we have a 5x1 texture splatted onto a face of a cube (the magenta region is just the color of the background).
The texture is this one (it's very small).
The bottom-left corner corresponds to uv = (0, 0) and the top-right corresponds to uv = (1, 1).
Linear filtering is enabled.
The shader splits the v coordinate vertically into 5 rows (from top to bottom):
1. Continuous sampling. Just sample normally.
2. Green if u is in [0, 1], red otherwise. Just for testing purposes.
3. The u coordinate in gray scale.
4. Sampling at the left of the texel.
5. Sampling at the center of the texel.
The problem is that between rows 3 and 4 there is a one-pixel row that flickers. The flickering changes with the camera distance, and sometimes you can even make it disappear. The problem seems to be in the shader code that handles the fourth row.
// sample at the left of the pixel
// the following line can fix the problem if I add any number different from 0
tc.y += 0.000000; // replace by any number other than 0 and works fine
tc.x = floor(5 * tc.x) * 0.2;
c = texture(tex0, tc);
This looks weird to me because in that zone the v coordinate is not near any edge of the texture.
Your code relies on undefined values during the texture fetch.
The GLSL 4.60 specification states in Section 8.9 Texture Functions (emphasis mine):
Some texture functions (non-"Lod" and non-"Grad" versions) may require implicit derivatives. Implicit derivatives are undefined within non-uniform control flow and for non-fragment-shader texture fetches.
While most people think that those derivatives are only required for mip-mapping, that is not correct. The LOD factor is also needed to determine if the texture is magnified or minified (and also for anisotropic filtering in the non-mipmapped case, but that is not of interest here).
GPUs usually approximate the derivatives by finite differencing between neighboring pixels in a 2x2 pixel quad.
What's happening is that at the edge between your various rows you have non-uniform control flow: for one line of pixels you execute the texture filtering code path, and for the line above you don't. The finite differencing then tries to access the texture coordinates used by the sampling operation in the upper row, which are not guaranteed to have been calculated at all, since that shader invocation did not actively execute that code path. This is why the spec treats them as undefined.
Now depending on where in the 2x2 pixel quad your edge lies, you get correct results or you don't. For the cases where you don't, one possible outcome is that the GL uses the minification filter, which is GL_NEAREST in your example.
It would probably help to just set both filters to GL_LINEAR. However, that would still not be correct code, as the results are still undefined as per the spec.
The only correct solution would be to move the texture sampling out of the non-uniform control flow, like
vec4 c1 = texture(tex, tc);                   // sample directly at tc
vec4 c2 = texture(tex, some_function_of(tc)); // sample somewhere else
vec4 c3 = texture(tex, ...);

// select the output color in some non-uniform way
vec4 c;
if (foo) {
    c = c1;
} else if (bar) {
    c = c2;
} else {
    c = c3;
}
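As an aside (not part of the fix above): per the spec language quoted earlier, the "Lod" and "Grad" variants do not need implicit derivatives, so an explicit-LOD fetch is also defined inside non-uniform control flow. A sketch:
// explicit LOD: no implicit derivatives involved, so this is defined even in
// non-uniform control flow; an LOD of 0 also selects the magnification filter
vec4 c = textureLod(tex0, tc, 0.0);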

How to Get Rid of Tiling Borders When Mipmapping is Disabled

I created a fragment shader that tiles a texture (it is actually a texture array, as you will see in the code). However, I get very ugly 1 pixel borders around each of the tiles. I know this can be caused by discontinuity of the function, causing weird stuff to happen while mipmapping, but I have mipmapping disabled (although I am using antialiasing). Here is my code:
vec2 tiledCoords = fract(pass_textureCoords * 256.0) / 4.0; // pass_textureCoords are the texture coordinates
out_Color = texture(testArray, vec3(tiledCoords.x, tiledCoords.y, 0)); // testArray is the sampler2DArray
I thought I'd have to give a bit of tolerance for rounding errors, but I can't seem to figure out where to put that. How can I get rid of these ugly borders? Thanks in advance!
Here is a screenshot:
I'm not exactly sure what you are trying to do with the calculations in your code. The first line doesn't seem to contribute to the result at all (it is overwritten in the second line). Since you didn't set any filter or wrap modes, you'll have the default settings, which are linear filtering and repeat mode.
It seems that you are using tiledCoords in the range [0, 0.25] and trying to wrap from 0.25 back to 0. Linear filtering is not going to work automatically in this case, since there is no automatic wrap mode for the 0.25 -> 0 transition. The black lines are caused by the linear interpolation in the range [0, 1/(2 * num_pixels)], because this area interpolates between the 0th texel and the "-1st" texel (which, due to the repeat wrap mode, is the last texel of your texture). The same thing happens in the range [0.25 - 1/(2 * num_pixels), 0.25], where the texel directly left of texture coordinate 0.25 is interpolated with the texel right of it.
There is no way to get hardware linear interpolation for your scenario. You can either use nearest-neighbor sampling or calculate the linear interpolation manually, as sketched below.
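For the manual route, here is a minimal sketch, assuming the tile occupies the first quarter [0, 0.25] of layer 0 as in your code, and introducing a hypothetical uniform tileTexels holding the tile size in texels (texelFetch requires GLSL 1.30+ and bypasses filtering and wrap state entirely):
uniform sampler2DArray testArray;
uniform vec2 tileTexels; // hypothetical: size of one tile in texels, e.g. vec2(16.0)

vec4 sampleTiledLinear(vec2 pass_textureCoords) {
    // position inside the tile, in texel units, relative to texel centers
    vec2 t = fract(pass_textureCoords * 256.0) * tileTexels - 0.5;
    vec2 i = floor(t);
    vec2 f = t - i; // bilinear weights
    // wrap the four neighboring texel indices inside the tile
    ivec2 i00 = ivec2(mod(i,              tileTexels));
    ivec2 i10 = ivec2(mod(i + vec2(1, 0), tileTexels));
    ivec2 i01 = ivec2(mod(i + vec2(0, 1), tileTexels));
    ivec2 i11 = ivec2(mod(i + vec2(1, 1), tileTexels));
    // fetch the four texels and blend them manually
    vec4 c00 = texelFetch(testArray, ivec3(i00, 0), 0);
    vec4 c10 = texelFetch(testArray, ivec3(i10, 0), 0);
    vec4 c01 = texelFetch(testArray, ivec3(i01, 0), 0);
    vec4 c11 = texelFetch(testArray, ivec3(i11, 0), 0);
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}
Then out_Color = sampleTiledLinear(pass_textureCoords); would replace the two lines from the question.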

Why is my GLSL shader rendering a cleavage?

I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources using the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
Which is exactly what I want - attenuation from the light source. However, when two light sources are near each other, when they overlap, they seem to produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with GL_MAX blend equation they would smoothly blend into each other, using the maximal value of the fragments in each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008 (GL_MAX).
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;
uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;
void main() {
    float distanceToLight = length(v_positionRelativeToLight);
    float falloffVarA = 0.1;
    float falloffVarB = 1.0;
    float attenuation = 1.0 / (1.0 + (falloffVarA * distanceToLight) + (falloffVarB * distanceToLight * distanceToLight));
    float minDistanceOrAttenuation = min(attenuation, 1.0 - distanceToLight);
    float combined = minDistanceOrAttenuation * attenuation;
    gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables passed in there as this fragment shader is usually more complicated, but I've cut it down to just show how the attenuation and blending is behaving.
This happens wherever two of my rendered light sources meet: rather than the colour I'm expecting, the meeting point of two light sources (the equidistant point between the two quads) is a darker colour than I'm expecting. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background on the first isn't quite black, hence the yellowing on the right, but otherwise you can clearly see the black region on the left where original values were preserved, the darker arc where values from both lights were evaluated but the right was greater, then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
Your result is the expected appearance for maximum blending (which is just like the lighten blend mode in Photoshop). The dark seam looks out of place perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color to it, it will look much more objectionable.
You can get around this if you render your lights to a frame buffer with inverted colors and multiplicative blending, and then render the frame buffer with inverted colors. Then the math works out to not have the seams, but it won't look unusually bright like what additive blending produces.
Use a pure white clear color on your frame buffer and then render the lights with the standard GL_FUNC_ADD blend equation and the blend function GL_ONE_MINUS_DST_COLOR, GL_ZERO. Then render your FBO texture to the screen, inverting the colors again.
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with GL_ONE_MINUS_DST_COLOR, GL_ZERO and GL_ADD
The above result, inverted
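For that final inversion pass, a trivial fragment shader is enough when drawing the FBO texture to the screen; a minimal sketch, where u_lightBuffer is a hypothetical uniform holding the FBO's color texture:
varying vec2 v_texCoords;
uniform sampler2D u_lightBuffer; // hypothetical: the accumulated (inverted) light texture
void main() {
    // the buffer holds inverted light values; invert again for display
    gl_FragColor = vec4(vec3(1.0) - texture2D(u_lightBuffer, v_texCoords).rgb, 1.0);
}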

Precision loss with mod in GLSL

I'm repeating a texture in the vertex shader (for storage, not for repeating on the spot). Is this the right way? I seem to lose precision somewhere.
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
ADDED: I then save (store) the texcoord in the color, render it to the texture, and later use that texture again. When I retrieve the color from the texture, I find the texcoords and use them to apply a texture in a postprocess. There's a reason I want it this way that I won't go into. I get that the texcoords will be limited by the color's precision; that is alright, as my texture is 256 in width and height.
I know normally I would set the texcoords with glTexCoord2f to higher than 1.0 to repeat (along with GL_REPEAT), but I am using a model loader which I am too lazy to edit, as I think it is not necessary / not the easiest way.
There are (at least) two ways in which this could go wrong:
Firstly yes, you will lose precision. You are essentially taking the fractional part of a floating point number, after scaling it up. This essentially throws some of the number away.
Secondly, this probably won't work anyway, not for most typical uses. You are trying to tile a texture per-vertex, but the texture is interpolated across a polygon. So this technique could tile the texture differently on different vertices of the same polygon, resulting in a bit of a mess.
i.e.
If vertex1 has a U of 1.5 (after scaling), and vertex2 has a U of 2.2, then you expect the interpolation to give increasing values between those points, with the half-way point having a U of 1.85.
If you take the modulo at each vertex, you will have a U of 0.5, and a U of 0.2 respectively, resulting in a decreasing U, and a half-way point with a U of 0.35...
Textures can be tiled just by enabling tiling on the texture/sampler and using coordinates outside the range 0 to 1. If you really want to increase sampling accuracy and have a large amount of tiling, you need to wrap the UV coordinates uniformly across whole polygons, rather than per-vertex. i.e. do it in your data, not in the vertex shader.
For your case, where you're trying to output the UV coordinates into a buffer for some later purpose, you could clamp/wrap the UVs in the pixel shader. So multiply up the UV in the vertex shader, interpolate it across the polygon correctly, and then apply the modulo only when writing to the buffer.
However I still think you'll have precision issues as you're losing all the sub-pixel information. Whether or not that's a problem for the technique you're using, I don't know.
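A minimal sketch of that split, in the same old-style GLSL as your snippet; the scale stays in the vertex shader and the mod moves to the fragment shader:
// Vertex shader: scale only; the unwrapped value interpolates correctly
varying vec2 texcoordC;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texcoordC = gl_MultiTexCoord0.xy * 10.0; // no mod here
}

// Fragment shader: wrap only at the very end, when writing to the buffer
varying vec2 texcoordC;
void main() {
    vec2 wrapped = mod(texcoordC, 1.0);      // per-fragment wrap after interpolation
    gl_FragColor = vec4(wrapped, 0.0, 1.0);  // store the UVs in the color channels
}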

Texture lookup into rendered FBO is off by half a pixel

I have a scene that is rendered to texture via FBO and I am sampling it from a fragment shader, drawing regions of it using primitives rather than drawing a full-screen quad: I'm conserving resources by only generating the fragments I'll need.
To test this, I am issuing the exact same geometry as my texture-render, which means that the rasterization pattern produced should be exactly the same: When my fragment shader looks up its texture with the varying coordinate it was given it should match up perfectly with the other values it was given.
Here's how I'm giving my fragment shader the coordinates to auto-texture the geometry with my fullscreen texture:
// Vertex shader
uniform mat4 proj_modelview_mat;
in vec2 in_pos;
out vec2 f_sceneCoord;
void main(void) {
    gl_Position = proj_modelview_mat * vec4(in_pos, 0.0, 1.0);
    f_sceneCoord = (gl_Position.xy + vec2(1, 1)) * 0.5;
}
I'm working in 2D so I didn't concern myself with the perspective divide here. I just set the sceneCoord value using the clip-space position scaled back from [-1,1] to [0,1].
uniform sampler2D scene;
in vec2 f_sceneCoord;
//in vec4 gl_FragCoord;
in float f_alpha;
out vec4 out_fragColor;
void main(void) {
    //vec4 color = texelFetch(scene, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), 0);
    vec4 color = texture(scene, f_sceneCoord);
    if (color.a == f_alpha) {
        out_fragColor = vec4(color.rgb, 1);
    } else {
        out_fragColor = vec4(1, 0, 0, 1);
    }
}
Notice I output a red fragment if the alphas don't match up. The texture render sets the alpha for each rendered object to a specific index, so I know what matches up with what.
Sorry I don't have a picture to show, but it's very clear that my pixels are off by (0.5, 0.5): I get a thin, one-pixel red border around my objects, on their bottom and left sides, that pops in and out. It's quite "transient" looking. The giveaway is that it only shows up on the bottom and left sides of objects.
Notice I have a line commented out which uses texelFetch: this method works, and I no longer get my red fragments showing up. However, I'd like to get this working right with texture and normalized texture coordinates, because I think more hardware will support that. Perhaps the real question is: is it possible to get this right without sending in my viewport resolution via a uniform? There's gotta be a way to avoid that!
Update: I tried shifting the texture access by half a pixel, a quarter of a pixel, one hundredth of a pixel; it all made things worse and produced a solid border of wrong values all around the edges. It seems like my (gl_Position.xy + vec2(1,1)) * 0.5 trick sets the right values, but the sampling is just off by a little somehow. This is quite strange... See the red fragments? When objects are in motion they shimmer in and out ever so slightly. It means the alpha values I set aren't matching up perfectly on those pixels.
It's not critical for me to get pixel perfect accuracy for that alpha-index-check for my actual application but this behavior is just not what I expected.
Well, first consider dropping that f_sceneCoord varying and just using gl_FragCoord / screenSize as the texture coordinate (you already have this in your example, but the -0.5 is rubbish), with screenSize being a uniform (maybe pre-divided). This should be almost exact, because by default gl_FragCoord is at the pixel center (meaning i + 0.5), and OpenGL returns exact texel values when sampling the texture at the texel center ((i + 0.5) / textureSize).
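A minimal sketch of that variant, with screenSize being the uniform just mentioned:
uniform sampler2D scene;
uniform vec2 screenSize; // viewport size in pixels
out vec4 out_fragColor;
void main(void) {
    // gl_FragCoord.xy sits at the pixel center (i + 0.5), so dividing by the
    // screen size lands exactly on texel centers ((i + 0.5) / textureSize)
    out_fragColor = texture(scene, gl_FragCoord.xy / screenSize);
}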
This may still introduce very slight deviations from exact texel values (if any) due to finite precision and such. But then again, you will likely want to use a filtering mode of GL_NEAREST for such one-to-one texture-to-screen mappings anyway. Actually, your existing f_sceneCoord approach may already work well, and it's just those small rounding issues, prevented by GL_NEAREST, that create your artefacts. But then again, you still don't need that f_sceneCoord thing.
EDIT: Regarding the portability of texelFetch: that function was introduced with GLSL 1.30 (~SM4/GL3/DX10 hardware, ~GeForce 8), I think. But this version is already required by the new in/out syntax you're using (in contrast to the old varying/attribute syntax). So if you're not going to change these, assuming texelFetch as given is absolutely no problem, and it might even be slightly faster than texture (which also requires GLSL 1.30, in contrast to the old texture2D), by circumventing filtering completely.
If you are working in perfect X,Y [0,1] with no rounding errors, that's great... But sometimes, especially if working with polar coords, you might consider aligning your calculated coords to the texture 'grid'...
I use:
// align it to the nearest centered texel
curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));
works like a charm and I no longer get random rounding errors at the screen edges...