I created a fragment shader that tiles a texture (it is actually a texture array, as you will see in the code). However, I get very ugly one-pixel borders around each of the tiles. I know this can be caused by a discontinuity in the function, which makes mipmapping misbehave, but I have mipmapping disabled (although I am using antialiasing). Here is my code:
vec2 tiledCoords = fract(pass_textureCoords * 256.0) / 4.0; // pass_textureCoords are the texture coordinates
out_Color = texture(testArray, vec3(tiledCoords.x, tiledCoords.y, 0)); // testArray is the sampler2DArray
I thought I'd have to give a bit of tolerance for rounding errors, but I can't seem to figure out where to put that. How can I get rid of these ugly borders? Thanks in advance!
Here is a screenshot:
I'm not exactly sure what you are trying to achieve with the calculations in your code. Since you didn't set filter modes or wrap modes, you'll have the default settings, which are linear filtering and repeat wrapping.
It seems that you are using tiledCoords in the range [0, 0.25] and trying to wrap from 0.25 back to 0. Linear filtering is not going to work automatically in this case, since there is no automatic wrap mode for the 0.25 -> 0 transition. The black lines are caused by the linear interpolation in the range [0, 1/(2 * num_pixels)], because this area interpolates linearly between the 0th texel and the "-1st" texel (which, due to the repeat wrap mode, is the last texel of your texture). The same thing happens in the range [0.25 - 1/(2 * num_pixels), 0.25], where you interpolate the texel directly left of texture coordinate 0.25 with the texel right of it.
There is no way to get hardware linear interpolation for your scenario. You can either use nearest-neighbor sampling or compute the linear interpolation manually.
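For the manual route, here is a minimal sketch, assuming a square texture whose resolution is supplied through a hypothetical u_texSize uniform and a tile that covers a quarter of the texture in each direction (matching the division by 4 above). It fetches the four neighboring texels, wraps their indices inside the tile rather than the whole texture, and blends them with the usual bilinear weights:

uniform vec2 u_texSize; // full texture resolution in texels (assumed)

vec4 sampleTiled(sampler2DArray tex, vec2 uv, float layer)
{
    vec2 tileTexels = u_texSize / 4.0;               // texels covered by one tile
    vec2 pos = fract(uv * 256.0) * tileTexels - 0.5; // texel-space position inside the tile
    vec2 base = floor(pos);
    vec2 f = fract(pos);                             // bilinear weights
    // wrap the four texel indices inside the tile, then map back into [0, 0.25]
    vec2 t00 = (mod(base + vec2(0.0, 0.0), tileTexels) + 0.5) / u_texSize;
    vec2 t10 = (mod(base + vec2(1.0, 0.0), tileTexels) + 0.5) / u_texSize;
    vec2 t01 = (mod(base + vec2(0.0, 1.0), tileTexels) + 0.5) / u_texSize;
    vec2 t11 = (mod(base + vec2(1.0, 1.0), tileTexels) + 0.5) / u_texSize;
    // sampling exactly at texel centers returns unfiltered texel values
    vec4 c00 = texture(tex, vec3(t00, layer));
    vec4 c10 = texture(tex, vec3(t10, layer));
    vec4 c01 = texture(tex, vec3(t01, layer));
    vec4 c11 = texture(tex, vec3(t11, layer));
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}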
Related
I have programmed the following shader for testing how linear filtering works in OpenGL.
Here we have a 5x1 texture splatted onto a face of a cube (the magenta region is just the color of the background).
The texture is this one (it's very small).
The bottom-left corner corresponds to uv = (0, 0) and the top-right corresponds to uv = (1, 1).
Linear filtering is enabled.
The shader splits the v coordinate vertically into 5 rows (from top to bottom):
Continuous sampling. Just sample normally.
Green if u is in [0, 1], red otherwise. Just for testing purposes.
The u coordinate in gray scale.
Sampling at the left of the texel.
Sampling at the center of the texel.
The problem is that between rows 3 and 4 there is a one-pixel row that flickers. The flickering changes with the camera distance, and sometimes you can even make it disappear. The problem seems to be in the shader code that handles the fourth row.
// sample at the left of the texel
// the following line fixes the problem if the added constant is non-zero
tc.y += 0.000000; // replace 0 with any non-zero value and it works fine
tc.x = floor(5 * tc.x) * 0.2;
c = texture(tex0, tc);
This looks weird to me because in that zone the v coordinate is not near any edge of the texture.
Your code relies on undefined values during the texture fetch.
The GLSL 4.60 specification states in Section 8.9 Texture Functions (emphasis mine):
Some texture functions (non-“Lod” and non-“Grad” versions) may require implicit derivatives. Implicit derivatives are undefined within non-uniform control flow and for non-fragment-shader texture fetches.
While most people think that those derivatives are only required for mip-mapping, that is not correct. The LOD factor is also needed to determine whether the texture is magnified or minified (and also for anisotropic filtering in the non-mipmapped case, but that is not of interest here).
GPUs usually approximate the derivatives by finite differencing between neighboring pixels in a 2x2 pixel quad.
What's happening is that at the edge between your various options you have non-uniform control flow: for one row of pixels you do the texture filtering, and for the row above it you don't. The finite differencing then tries to access the texture coordinates of the sampling operation in the upper row, but those aren't guaranteed to have been calculated at all, since that shader invocation did not actively execute that code path - this is why the spec treats them as undefined.
Now depending on where in the 2x2 pixel quad your edge lies, you either get correct results or you don't. In the cases where you don't, one possible outcome is that the GL uses the minification filter, which is GL_NEAREST in your example.
It would probably help to just set both filters to GL_LINEAR. However, that would still not be correct code, as the results are still undefined as per the spec.
The only correct solution is to move the texture sampling out of the non-uniform control flow, like this:
vec4 c1 = texture(tex, tc);                   // sample directly at tc
vec4 c2 = texture(tex, some_function_of(tc)); // sample somewhere else
vec4 c3 = texture(tex, ...);

// select the output color in some non-uniform way
if (foo) {
    c = c1;
} else if (bar) {
    c = c2;
} else {
    c = c3;
}
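Note also that the quoted spec text exempts the "Lod" and "Grad" variants, so an explicit-LOD fetch would be defined even inside the branches. A sketch of that alternative, assuming level 0 is appropriate because no mipmaps are involved here:

if (foo) {
    c = textureLod(tex0, tc, 0.0); // explicit LOD, so no implicit derivatives are needed
}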
I'm trying to code a texture reprojection using a UV gBuffer (a texture that contains the desired UV value for mapping at each pixel).
I think this should be easy to understand just by looking at this picture (which I cannot attach due to low reputation):
http://www.andvfx.com/wp-content/uploads/2012/12/3-objectes.jpg
The first image (the black/yellow/red/green one) is the UV gBuffer, it represents the uv values, the second one is the diffuse channel and the third the desired result.
Doing this in OpenGL is pretty trivial.
Draw a simple rectangle and use this pseudo-code as the fragment shader:
vec2 newUV = texture(UVgbufferTex, gl_TexCoord[0].xy).xy;
vec3 finalcolor = texture(DIFFgbufferTex, newUV).rgb;
gl_FragColor = vec4(finalcolor, 0.0);
OpenGL takes care of selecting the mipmap level, the anisotropic filtering and so on, whereas if I do the same in a regular CPU process I get a single pixel for finalcolor, so my result is crispy (aliased).
Any advice here? I was wondering about manually computing a kind of mipmap chain and selecting the level by checking the contiguous pixels, but I'm not sure if this is the right way. I'm also unsure how to handle the fact that the UVs could change quickly in the horizontal direction but slowly in the vertical one, or vice versa.
In fact, I don't know how this is computed internally in OpenGL/DirectX; I've used this kind of code for a long time but never thought about the internals.
You are on the right track.
To select mipmap level or apply anisotropic filtering you need a gradient. That gradient comes naturally in GL (in fragment shaders) because it is computed for all interpolated variables after rasterization. This all becomes quite obvious if you ever try to sample a texture using mipmap filtering in a vertex shader.
You can compute the LOD (λ) as follows:
ρ = max( sqrt((du/dx)² + (dv/dx)²), sqrt((du/dy)² + (dv/dy)²) )
λ = log2(ρ)
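In a fragment shader, those derivatives can be approximated directly with GLSL's built-in dFdx/dFdy. A minimal sketch, with hypothetical tex, uv and texSize names (the multiplication by the texture size converts normalized UVs into texel units):

uniform sampler2D tex;
uniform vec2 texSize; // texture resolution in texels (assumed)

vec4 sampleWithManualLod(vec2 uv)
{
    vec2 dx = dFdx(uv) * texSize; // (du/dx, dv/dx) in texels
    vec2 dy = dFdy(uv) * texSize; // (du/dy, dv/dy) in texels
    float rho = max(length(dx), length(dy));
    float lambda = log2(rho);     // the LOD from the formula above
    return textureLod(tex, uv, lambda);
}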
The mipmap level is picked based on the size on screen after reprojection: after you emit a triangle, check the rasterization size and pick the appropriate mipmap.
As for filtering, it's not that hard to implement, e.g., bilinear filtering manually.
I'm using OpenGL to draw a screen-sized quad at the same position with low alpha (less than 0.1) on every frame, without glClear(GL_COLOR_BUFFER_BIT) in between. This way the quad should increasingly damp the visibility of the previous frames' drawings.
However, the damping effect stops after some seconds. If I use an alpha value no lower than 0.1 for the quad, it works as expected. It seems to me that the OpenGL blending equation fails after a number of iterations (higher alpha values need fewer iterations to accumulate to 1, so with alpha >= 0.1 the problem doesn't occur). The lower limit of alpha in 8 bit is about 0.0039, i.e. 1/255, so an alpha of 0.01 should be fine.
I have tried several blending settings, using the following render method:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glClear(GL_DEPTH_BUFFER_BIT);
// draw black quad with translucency (using glDrawArrays)
And the simple fragment shader:
#version 450
uniform vec4 Color;
out vec4 FragColor;

void main()
{
    FragColor = Color;
}
How could I fix this issue?
It seems to me that the OpenGL blending equation fails after a number of iterations (higher alpha values need fewer iterations to accumulate to 1, so with alpha >= 0.1 the problem doesn't occur). The lower limit of alpha in 8 bit is about 0.0039 (1/255), so an alpha of 0.01 should be fine.
Your reasoning is wrong here. If you draw a black quad with an alpha of 0.01 and the blending setup you described, you basically get new_color = 0.99 * old_color with every iteration. As a function of the iteration number i, that is new_color(i) = original_color * pow(0.99, i). With unlimited precision, this would tend toward 0.
But as you already noted, the precision is not unlimited. You get a requantization with every step. So if your new color value does not fall below the threshold for the next representable integer value, it will stay the same as before. Now if we call x the unnormalized color value in the range [0, 255], and we assume that the quantization is done by the usual rounding rules, there must be a difference of at least 0.5 to get a different value: x - x * (1 - alpha) > 0.5, or simply x > 0.5 / alpha.
So in your case, blending only proceeds while x > 50, and it "stops" once the value reaches 50 (everything below that stays as it was at the beginning, so you get a "shadow" of the dark parts). For an alpha of 0.1, it ends at x = 5, which is probably close enough to zero that you didn't notice it (with your particular display and settings).
EDIT
Let me recommend a strategy that will work: avoid the iterative computation altogether (at least with non-floating-point framebuffers). You want a fade-to-black effect, so you could render your original content into a texture and then render that texture over and over again, blending it toward black by varying the alpha value from frame to frame. That way, alpha becomes a function of time (or of the frame number). A linear transition would probably make the most sense, but you could even use a nonlinear function to reproduce the slowdown of the fadeout that your original approach would have shown with unlimited precision.
Note that you do not need blending at all for that; you can simply multiply the color value from the texture by some uniform "alpha" value in the fragment shader.
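A minimal sketch of that idea, with hypothetical sceneTex, fade and uv names (the application decreases fade from 1.0 toward 0.0 over time):

#version 450
uniform sampler2D sceneTex; // the original content, rendered once to a texture
uniform float fade;         // 1.0 at the start, decreasing toward 0.0 per frame
in vec2 uv;
out vec4 FragColor;

void main()
{
    // no blending required: scale the stored color toward black directly
    FragColor = vec4(texture(sceneTex, uv).rgb * fade, 1.0);
}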
I'm repeating a texture in the vertex shader (for storage, not for repeating at the spot). Is this the right way? I seem to lose precision somewhere.
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy;    // incoming texture coordinates
texcoordC *= 10.0;                   // scale up for 10x10 tiling
texcoordC.x = mod(texcoordC.x, 1.0); // wrap back into [0, 1)
texcoordC.y = mod(texcoordC.y, 1.0);
ADDED: I then save (store) the texcoord in the color, print it to the texture, and later use that texture again. When I retrieve the color from the texture, I find the texcoords and use them to apply a texture in a postprocess. There's a reason I want it this way that I won't go into. I get that the texcoords will be limited by the color's precision; that is alright, as my texture is 256 in width and height.
I know that normally I would set the texcoords higher than 1.0 with glTexCoord2f to repeat (using GL_REPEAT), but I am using a model loader which I am too lazy to edit, as I think that is unnecessary / not the easiest way.
There are (at least) two ways in which this could go wrong:
Firstly, yes, you will lose precision. You are essentially taking the fractional part of a floating-point number after scaling it up, which throws part of the number away.
Secondly, this probably won't work anyway, not for most typical uses. You are trying to tile a texture per-vertex, but the texture is interpolated across a polygon. So this technique could tile the texture differently on different vertices of the same polygon, resulting in a bit of a mess.
i.e.
If vertex1 has a U of 1.5 (after scaling), and vertex2 has a U of 2.2, then you expect the interpolation to give increasing values between those points, with the half-way point having a U of 1.85.
If you take the modulo at each vertex, you will have a U of 0.5, and a U of 0.2 respectively, resulting in a decreasing U, and a half-way point with a U of 0.35...
Textures can be tiled just by enabling tiling on the texture/sampler and using coordinates outside the range 0->1. If you really want to increase sampling accuracy and have a large amount of tiling, you need to wrap the UV coordinates uniformly across whole polygons rather than per-vertex - i.e. do it in your data, not in the vertex shader.
For your case, where you're trying to output the UV coordinates into a buffer for some later purpose, you could clamp/wrap the UVs in the pixel shader instead. So multiply up the UV in the vertex shader, interpolate it across the polygon correctly, and then apply the modulo only when writing to the buffer, as sketched below.
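A sketch of that arrangement in the question's old varying syntax (only the scaling stays in the vertex shader; the wrap moves to the fragment shader):

// vertex shader: scale, but do not wrap
varying vec2 texcoordC;
// ...
texcoordC = gl_MultiTexCoord0.xy * 10.0;

// fragment shader: wrap only when writing the UV out to the buffer
varying vec2 texcoordC;

void main()
{
    vec2 wrapped = mod(texcoordC, 1.0);     // per-fragment wrap, after interpolation
    gl_FragColor = vec4(wrapped, 0.0, 1.0); // store the UV in the color buffer
}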
However I still think you'll have precision issues as you're losing all the sub-pixel information. Whether or not that's a problem for the technique you're using, I don't know.
I have a scene that is rendered to texture via FBO and I am sampling it from a fragment shader, drawing regions of it using primitives rather than drawing a full-screen quad: I'm conserving resources by only generating the fragments I'll need.
To test this, I am issuing the exact same geometry as my texture render, which means that the rasterization pattern produced should be exactly the same: when my fragment shader looks up the texture with the varying coordinate it was given, it should match up perfectly with the other values it was given.
Here's how I'm giving my fragment shader the coordinates to auto-texture the geometry with my fullscreen texture:
// Vertex shader
uniform mat4 proj_modelview_mat;
in vec2 in_pos; // 2D vertex position
out vec2 f_sceneCoord;

void main(void) {
    gl_Position = proj_modelview_mat * vec4(in_pos, 0.0, 1.0);
    f_sceneCoord = (gl_Position.xy + vec2(1, 1)) * 0.5; // [-1, 1] -> [0, 1]
}
I'm working in 2D so I didn't concern myself with the perspective divide here. I just set the sceneCoord value using the clip-space position scaled back from [-1,1] to [0,1].
// Fragment shader
uniform sampler2D scene;
in vec2 f_sceneCoord;
//in vec4 gl_FragCoord;
in float f_alpha;
out vec4 out_fragColor;

void main(void) {
    //vec4 color = texelFetch(scene, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), 0);
    vec4 color = texture(scene, f_sceneCoord);
    if (color.a == f_alpha) {
        out_fragColor = vec4(color.rgb, 1);
    } else {
        out_fragColor = vec4(1, 0, 0, 1);
    }
}
Notice I output a red fragment if the alphas don't match up. The texture render sets the alpha for each rendered object to a specific index, so I know what matches up with what.
Sorry I don't have a picture to show, but it's very clear that my pixels are off by (0.5, 0.5): I get a thin, one-pixel red border around my objects, on their bottom and left sides, that pops in and out. It looks quite "transient". The giveaway is that it only shows up on the bottom and left sides of objects.
Notice I have a line commented out which uses texelFetch: that method works, and I no longer get red fragments showing up. However, I'd like to get this working with texture and normalized texture coordinates, because I think more hardware will support that. Perhaps the real question is: is it possible to get this right without sending in my viewport resolution via a uniform? There's gotta be a way to avoid that!
Update: I tried shifting the texture access by half a pixel, a quarter of a pixel, and a hundredth of a pixel, and it all made things worse, producing a solid border of wrong values all around the edges. It seems like my (gl_Position.xy + vec2(1, 1)) * 0.5 trick sets the right values, but the sampling is just off by a tiny amount somehow. This is quite strange... See the red fragments? When objects are in motion they shimmer in and out ever so slightly. It means the alpha values I set aren't matching up perfectly on those pixels.
It's not critical for me to get pixel perfect accuracy for that alpha-index-check for my actual application but this behavior is just not what I expected.
Well, first consider dropping that f_sceneCoord varying and just using gl_FragCoord / screenSize as the texture coordinate (you already have this in your example, but the -0.5 is rubbish), with screenSize being a uniform (maybe pre-divided). This should be almost exact, because by default gl_FragCoord is at the pixel center (meaning i + 0.5), and OpenGL returns exact texel values when sampling the texture at a texel center ((i + 0.5) / textureSize).
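A sketch of that suggestion, with screenSize as a hypothetical uniform holding the viewport resolution:

uniform sampler2D scene;
uniform vec2 screenSize; // viewport resolution (assumed uniform)
out vec4 out_fragColor;

void main(void) {
    // gl_FragCoord.xy sits at pixel centers (i + 0.5), which maps straight to
    // texel centers (i + 0.5) / textureSize for a texture of the same size
    out_fragColor = texture(scene, gl_FragCoord.xy / screenSize);
}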
This may still introduce very slight deviations from exact texel values (if any) due to finite precision and such. But then again, you will likely want to use a filtering mode of GL_NEAREST for such one-to-one texture-to-screen mappings anyway. Actually, your existing f_sceneCoord approach may already work well, and it's just those small rounding issues (which GL_NEAREST would prevent) that create your artefacts. But then again, you still don't need that f_sceneCoord thing.
EDIT: Regarding the portability of texelFetch: that function was introduced with GLSL 1.30 (~SM4/GL3/DX10 hardware, ~GeForce 8), I think. But that version is already required by the new in/out syntax you're using (in contrast to the old varying/attribute syntax). So if you're not going to change that syntax, relying on texelFetch is absolutely no problem, and it might even be slightly faster than texture (which also requires GLSL 1.30, in contrast to the old texture2D) by circumventing filtering completely.
If you are working with perfect X,Y coordinates in [0, 1] with no rounding errors, that's great... But sometimes, especially when working with polar coordinates, you might consider aligning your calculated coords to the texture 'grid'...
I use:
// align it to the nearest centered texel
curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));
works like a charm and I no longer get random rounding errors at the screen edges...