OpenGL compute shaders producing incorrect texture coordinates from invocation ID

I have a compute shader that samples a depth sampler2D. When I use texture() the result looks wrong, with some jagged edges; if I use texelFetch() it works fine, so I would guess it's an issue with computing the correct texture coordinates? FYI, the sampler uses nearest filtering and has no mips.
[Screenshots omitted: the texelFetch result looks correct, while the texture result shows the jagged edges.]

texelFetch expects integer texel coordinates; the ones you provide are exact and suffer no rounding error, so you see no artifacts.
The texture* family of functions expects floating-point coordinates and converts them back to texel coordinates with the formula floor(uv * size) (this assumes a GL_TEXTURE_2D with GL_NEAREST filters, which seems to be your case). This calculation is sensitive to floating-point rounding errors when uv sits right on the boundary between two texels, which is exactly what happens here.
To fix that, offset the uv coordinates by half a texel:
vec2 TexCoord = (vec2(iuv) + 0.5)/vec2(size);
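Why this works: floor(((i + 0.5) / size) * size) = floor(i + 0.5) = i, so the coordinate now sits half a texel away from the rounding boundary. For context, a minimal compute-shader sketch (depthTex and outImg are hypothetical names; your bindings will differ):

#version 430
layout(local_size_x = 8, local_size_y = 8) in;
uniform sampler2D depthTex;                    // nearest filtering, no mips
layout(r32f) writeonly uniform image2D outImg;
void main(void) {
    ivec2 iuv  = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size = textureSize(depthTex, 0);
    if (any(greaterThanEqual(iuv, size)))
        return;                                // guard partial workgroups
    vec2 TexCoord = (vec2(iuv) + 0.5) / vec2(size);
    // texture() now hits the same texel as texelFetch(depthTex, iuv, 0)
    float d = texture(depthTex, TexCoord).r;
    imageStore(outImg, iuv, vec4(d));
}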

Related

How to Get Rid of Tiling Borders When Mipmapping is Disabled

I created a fragment shader that tiles a texture (it is actually a texture array, as you will see in the code). However, I get very ugly 1 pixel borders around each of the tiles. I know this can be caused by discontinuity of the function, causing weird stuff to happen while mipmapping, but I have mipmapping disabled (although I am using antialiasing). Here is my code:
vec2 tiledCoords = fract(pass_textureCoords * 256.0) / 4.0; // pass_textureCoords are the texture coordinates
out_Color = texture(testArray, vec3(tiledCoords.x, tiledCoords.y, 0)); // testArray is the sampler2DArray
I thought I'd have to give a bit of tolerance for rounding errors, but I can't seem to figure out where to put that. How can I get rid of these ugly borders? Thanks in advance!
[Screenshot omitted: thin one-pixel borders around each tile.]
I'm not exactly sure what you are trying to do with the calculations in your code. The first line doesn't seem to contribute to the result at all (it is overwritten in the second line). Since you didn't set filter modes or wrap modes, you'll have the default settings, which are linear filtering and repeat mode.
It seems that you are using tiledCoords in the range [0, 0.25] and trying to wrap from 0.25 back to 0. Linear filtering is not going to work automatically in this case, since there is no automatic wrap mode for the 0.25 -> 0 transition. The black lines are caused by the linear interpolation in the range [0, 1/(2 * num_pixels)], because this area interpolates linearly between the 0th texel and the "-1st" texel (which, due to the wrap mode, is the last texel of your texture). The same thing happens in the range [0.25 - 1/(2 * num_pixels), 0.25], where the texel directly left of texture coordinate 0.25 is interpolated with the texel right of it.
There is no way to get hardware linear interpolation for your scenario. You can either use nearest-neighbor sampling or compute the linear interpolation manually.
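If you go the manual route, here is a sketch of bilinear filtering by hand (assuming the question's layout where each tile covers a quarter of the texture, and texelFetch for the raw taps; function and variable names are illustrative):

vec4 sampleTileLinear(sampler2DArray tex, vec2 uv, int layer) {
    // uv is in [0, 1) *within the tile*; the tile covers [0, 0.25] of the texture
    vec2 tileSize = vec2(textureSize(tex, 0).xy) / 4.0; // tile size in texels
    vec2 st = uv * tileSize - 0.5;                      // bilinear footprint origin
    vec2 i  = floor(st);
    vec2 f  = st - i;                                   // interpolation weights
    // wrap all four taps back into the tile before fetching
    ivec2 p00 = ivec2(mod(i,              tileSize));
    ivec2 p10 = ivec2(mod(i + vec2(1, 0), tileSize));
    ivec2 p01 = ivec2(mod(i + vec2(0, 1), tileSize));
    ivec2 p11 = ivec2(mod(i + vec2(1, 1), tileSize));
    vec4 c00 = texelFetch(tex, ivec3(p00, layer), 0);
    vec4 c10 = texelFetch(tex, ivec3(p10, layer), 0);
    vec4 c01 = texelFetch(tex, ivec3(p01, layer), 0);
    vec4 c11 = texelFetch(tex, ivec3(p11, layer), 0);
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}
// usage: out_Color = sampleTileLinear(testArray, fract(pass_textureCoords * 256.0), 0);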

Vertex to Pixel Shader TEXCOORD interpolation precision issues

I think I'm experiencing precision issues in the pixel shader when reading the texcoords that have been interpolated from the vertex shader.
My scene consists of some very large triangles (edges being up to 5000 units long, and texcoords ranging from 0 to 5000 units, so that the texture is tiled about 5000 times), and I have a camera that is looking very close up at one of those triangles (the camera might be so close that its viewport only covers a couple of meters of the large triangles). When I pan the camera along the plane of the triangle, the texture is lagging and jumpy. My thought is that I am experiencing a lack of precision in the interpolated texcoords.
Is there a way to increase the precision of the texcoords interpolation?
My first thought was to store texcoord u in double precision in the xy-components, and texcoord v in the zw-components. But I guess that will not work, since the shader interpolation assumes there are 4 separate components of single precision, and not 2 components of double precision?
If there is no solution on the shader side, I guess I'll just have to tessellate the triangles into finer pieces? I'd hate to do that just for this issue though... Any ideas?
EDIT: The problem is also visible when printing texcoords as colors on the screen, without any actual texture sampling at all.
You're right, it looks like a precision problem. If your card supports it, you can indeed use double precision floats for interpolation. Just declare the variables as dvec2 and it should work.
The shader interpolation does not assume there are 4 separate 8-bit components. On recent cards, each scalar (i.e. each component of a vec) is interpolated separately as a float (or a double). Older cards that could only interpolate vec4s also worked with full floats (but those probably don't support doubles).
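For scale, a rough back-of-the-envelope check of the precision hypothesis (a sketch, assuming 32-bit floats and a hypothetical 1024-texel texture tiled once per unit, as in the question):

ulp(5000) = 2^12 * 2^-23 = 2^-11 ≈ 0.00049   // spacing of adjacent floats near u = 5000
one texel = 1/1024 ≈ 0.00098 units           // one tile per unit

A single representable step in u is therefore about half a texel, which is easily enough to make the texture snap visibly while panning.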

glsl sampler2DShadow and shadow2D clarification

Quick background of where I'm at (to make sure we're on the same page, and sanity check if I'm missing/assuming something stupid):
Goal: I want to render my scene with shadows, using deferred lighting and shadow maps.
Struggle: finding clear and consistent documentation regarding how to use shadow2D and sampler2DShadow.
Here's what I'm currently doing:
In the fragment shader of my final rendering pass (the one that actually calculates final frag values), I have the MVP matrices from the light's-point-of-view pass, the depth texture from said pass (aka the "shadow map"), and the position/normal/color textures from my geometry buffer.
From what I understand, I need to find what UV of the shadow map the position of the current fragment corresponds to. I do that by the following:
//Bring the position value at this fragment (in world space) to screen space from the light's POV
vec2 UVinShadowMap = (lightProjMat * lightViewMat * vec4(texture(pos_tex, UV).xyz, 1.0)).xy;
//Convert screen space to 'texture space' (from [-1,1] to [0,1])
UVinShadowMap = (UVinShadowMap + 1.0) / 2.0;
Now that I have this UV, I can get the perceived 'depth' from the light's POV with
float depFromLightPOV = texture2D(shadowMap, UVinShadowMap).r;
and compare that against the distance between the position at the current fragment and the light:
float actualDistance = distance(texture2D(pos_tex, UV).xyz, lightPos);
The problem is that 'depth' is stored as values in 0-1, while the actual distance is in world coordinates. I've tried to do that conversion manually, but couldn't get it to work. And from searching online, it looks like the way I SHOULD be doing this is with a sampler2DShadow...
So here's my question(s):
What changes do I need to make to use shadow2D instead? What does shadow2D even do? Is it just more-or-less an auto-conversion-from-depth-to-world texture? Can I use the same depth texture, or do I need to render the depth texture a different way? And what do I pass in to shadow2D: the world-space position of the fragment I want to check, or the same UV as before?
If all these questions can be answered in a simple documentation page, I'd love if someone could just post that. But I swear I've been searching for hours and can't find anything that simply says what the heck is going on with shadow2D!
Thanks!
First of all, what version of GLSL are you using?
Beginning with GLSL 1.30, there is no special texture lookup function (by name, anyway) for use with sampler2DShadow. GLSL 1.30+ uses a set of overloads of texture (...) that are selected based on the type of sampler passed and the dimensions of the coordinates.
Second, if you do use sampler2DShadow you need to do two things differently:
Texture comparison must be enabled, or you will get undefined results:
GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE
The coordinates you pass to texture (...) are 3D instead of 2D. The new 3rd coordinate is the depth value you are going to compare against.
Last, you should understand what texture (...) returns when using sampler2DShadow:
If this comparison passes, texture (...) will return 1.0, if it fails it will return 0.0. If you use a GL_LINEAR texture filter on your depth texture, then texture (...) will perform 4 depth comparisons using the 4 closest depth values in your depth texture and return a value somewhere in-between 1.0 and 0.0 to give an idea of the number of samples that passed/failed.
That is the proper way to do hardware anti-aliasing of shadow maps. If you tried to use a regular sampler2D with GL_LINEAR and implement the depth test yourself you would get a single averaged depth back and a boolean pass/fail result instead of the behavior described above for sampler2DShadow.
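To make the contrast concrete, a tiny sketch of that manual route (illustrative names; depthTex is a regular sampler2D and coord.p holds the reference depth):

// GL_LINEAR on a regular sampler2D averages the depths FIRST,
// then this single comparison yields a hard 0.0 or 1.0:
float lit = float(texture(depthTex, coord.st).r >= coord.p);
// sampler2DShadow instead compares each of the 4 nearest depths
// against coord.p and averages the four pass/fail results.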
As for getting a depth value to test from a world-space position, you were on the right track (though you forgot perspective division).
There are three things you must do to generate a depth from a world-space position:
Multiply the world-space position by your (light's) projection and view matrices
Divide the resulting coordinate by its W component
Scale and bias the result (which will be in the range [-1,1]) into the range [0,1]
The final step assumes you are using the default depth range... if you have not called glDepthRange (...) then this will work.
The end result of step 3 serves as both a depth value (R) and texture coordinates (ST) for lookup into your depth map. This makes it possible to pass this value directly to texture (...). Recall that the first 2 components of the texture coordinates are the same as always, and that the 3rd is a depth value to test.
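Putting the three steps together, a minimal GLSL 1.30+ sketch (it reuses the question's lightProjMat/lightViewMat names; the comparison setup is shown as comments because it is a texture parameter, not shader state):

// once, at texture setup time (C side):
//   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
//   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

uniform sampler2DShadow shadowMap;
uniform mat4 lightProjMat;
uniform mat4 lightViewMat;

float shadowFactor(vec3 worldPos) {
    vec4 clip = lightProjMat * lightViewMat * vec4(worldPos, 1.0); // step 1
    vec3 ndc  = clip.xyz / clip.w;                                 // step 2: perspective division
    vec3 str  = ndc * 0.5 + 0.5;                                   // step 3: [-1,1] -> [0,1]
    // str.st are the shadow-map coordinates, str.p is the depth to compare
    return texture(shadowMap, str); // 1.0 = lit, 0.0 = shadowed (fractions with GL_LINEAR)
}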

Precision loss with mod in GLSL

I'm repeating a texture in the vertex shader (for storage, not for repeating at the spot). Is this the right way? I seem to lose precision somewhere.
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
ADDED: I then save (store) the texcoord in the color, print it to the texture and later use that texture again. When I retrieve the color from the texture, I find the texcoords and use them to apply a texture in postprocess. There's a reason I want it this way, which I won't go into. I get that the texcoords will be limited by the color's precision; that is alright, as my texture is 256 in width and height.
I know normally I would set the texcoords with glTexCoord2f to higher than 1.0 to repeat (using GL_REPEAT), but I am using a model loader which I am too lazy to edit, as I think it is not necessary/not the easiest way.
There are (at least) two ways in which this could go wrong:
Firstly yes, you will lose precision. You are essentially taking the fractional part of a floating point number, after scaling it up. This essentially throws some of the number away.
Secondly, this probably won't work anyway, not for most typical uses. You are trying to tile a texture per-vertex, but the texture is interpolated across a polygon. So this technique could tile the texture differently on different vertices of the same polygon, resulting in a bit of a mess.
i.e.
If vertex1 has a U of 1.5 (after scaling), and vertex2 has a U of 2.2, then you expect the interpolation to give increasing values between those points, with the half-way point having a U of 1.85.
If you take the modulo at each vertex, you will have a U of 0.5, and a U of 0.2 respectively, resulting in a decreasing U, and a half-way point with a U of 0.35...
Textures can be tiled just by enabling tiling on the texture/sampler and using coordinates outside the range 0->1. If you really want to increase sampling accuracy and have a large amount of tiling, you need to wrap the UV coordinates uniformly across whole polygons, rather than per-vertex, i.e. do it in your data, not in the vertex shader.
For your case, where you're trying to output the UV coordinates into a buffer for some later purpose, you could clamp/wrap the UVs in the pixel shader. So multiply up the UV in the vertex shader, interpolate it across the polygon correctly, and then apply the modulo only when writing to the buffer.
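A minimal sketch of that suggestion, using the question's old varying syntax (the color write stands in for whatever buffer output you're doing):

// vertex shader: scale only; let interpolation see the unwrapped values
varying vec2 texcoordC;
void main() {
    texcoordC   = gl_MultiTexCoord0.xy * 10.0; // no mod() here
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: wrap per-fragment, only at the point of storage
varying vec2 texcoordC;
void main() {
    gl_FragColor = vec4(fract(texcoordC), 0.0, 1.0); // wrapped UVs in RG
}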
However I still think you'll have precision issues as you're losing all the sub-pixel information. Whether or not that's a problem for the technique you're using, I don't know.

Texture lookup into rendered FBO is off by half a pixel

I have a scene that is rendered to texture via FBO and I am sampling it from a fragment shader, drawing regions of it using primitives rather than drawing a full-screen quad: I'm conserving resources by only generating the fragments I'll need.
To test this, I am issuing the exact same geometry as my texture-render, which means that the rasterization pattern produced should be exactly the same: When my fragment shader looks up its texture with the varying coordinate it was given it should match up perfectly with the other values it was given.
Here's how I'm giving my fragment shader the coordinates to auto-texture the geometry with my fullscreen texture:
// Vertex shader
uniform mat4 proj_modelview_mat;
in vec2 in_pos; // assumed 2D position, since it is padded with 0.0 and 1.0 below
out vec2 f_sceneCoord;
void main(void) {
    gl_Position  = proj_modelview_mat * vec4(in_pos, 0.0, 1.0);
    f_sceneCoord = (gl_Position.xy + vec2(1, 1)) * 0.5;
}
I'm working in 2D so I didn't concern myself with the perspective divide here. I just set the sceneCoord value using the clip-space position scaled back from [-1,1] to [0,1].
uniform sampler2D scene;
in vec2 f_sceneCoord;
//in vec4 gl_FragCoord;
in float f_alpha;
out vec4 out_fragColor;
void main(void) {
    //vec4 color = texelFetch(scene, ivec2(gl_FragCoord.xy - vec2(0.5, 0.5)), 0);
    vec4 color = texture(scene, f_sceneCoord);
    if (color.a == f_alpha) {
        out_fragColor = vec4(color.rgb, 1);
    } else {
        out_fragColor = vec4(1, 0, 0, 1);
    }
}
Notice I spit out a red fragment if my alphas don't match up. The texture render sets the alpha for each rendered object to a specific index so I know what matches up with what.
Sorry I don't have a picture to show, but it's very clear that my pixels are off by (0.5, 0.5): I get a thin, one-pixel red border around my objects, on their bottom and left sides, that pops in and out. It's quite "transient" looking. The giveaway is that it only shows up on the bottom and left sides of objects.
Notice I have a line commented out which uses texelFetch: This method works, and I no longer get my red fragments showing up. However I'd like to get this working right with texture and normalized texture coordinates because I think more hardware will support that. Perhaps the real question is, is it possible to get this right without sending in my viewport resolution via a uniform? There's gotta be a way to avoid that!
Update: I tried shifting the texture access by half a pixel, a quarter of a pixel, one hundredth of a pixel; it all made it worse and produced a solid border of wrong values all around the edges. It seems like my (gl_Position.xy + vec2(1,1)) * 0.5 trick sets the right values, but sampling is just off by a little somehow. This is quite strange... See the red fragments? When objects are in motion they shimmer in and out ever so slightly. It means the alpha values I set aren't matching up perfectly on those pixels.
It's not critical for me to get pixel perfect accuracy for that alpha-index-check for my actual application but this behavior is just not what I expected.
Well, first consider dropping that f_sceneCoord varying and just using gl_FragCoord / screenSize as the texture coordinate (you already have this in your example, but the -0.5 is rubbish), with screenSize being a uniform (maybe pre-divided). This should work almost exactly, because by default gl_FragCoord is at the pixel center (meaning i+0.5) and OpenGL returns exact texel values when sampling the texture at the texel center ((i+0.5)/textureSize).
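A minimal sketch of that approach (screenSize is an assumed uniform holding the viewport size in pixels):

uniform sampler2D scene;
uniform vec2 screenSize; // viewport size in pixels
out vec4 out_fragColor;
void main(void) {
    // gl_FragCoord.xy sits at the pixel center (i + 0.5), so dividing by the
    // screen size lands exactly on the texel center (i + 0.5) / textureSize
    vec2 uv = gl_FragCoord.xy / screenSize;
    out_fragColor = texture(scene, uv);
}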
This may still introduce very very very slight deviations from exact texel values (if any) due to finite precision and such. But then again, you will likely want to use a filtering mode of GL_NEAREST for such one-to-one texture-to-screen mappings anyway. Actually, your existing f_sceneCoord approach may already work well, and it's just those small rounding issues, prevented by GL_NEAREST, that create your artefacts. But then again, you still don't need that f_sceneCoord thing.
EDIT: Regarding the portability of texelFetch. That function was introduced with GLSL 1.30 (~SM4/GL3/DX10 hardware, ~GeForce 8), I think. But that version is already required by the new in/out syntax you're using (in contrast to the old varying/attribute syntax). So if you're not gonna change those, assuming texelFetch is available is absolutely no problem, and it might even be slightly faster than texture (which likewise requires GLSL 1.30, in contrast to the old texture2D) by circumventing filtering completely.
If you are working in perfect X,Y [0,1] with no rounding errors, that's great... But sometimes, especially if working with polar coords, you might consider aligning your calculated coords to the texture 'grid'...
I use:
// snap down to the nearest half-texel boundary
curPt -= mod(curPt, 0.5 / vec2(imgW, imgH));
works like a charm and I no longer get random rounding errors at the screen edges...