textureGather() behavior at texel center coordinates - opengl

Suppose I have a 100x100 texture and I do the following:
vec4 texelQuad = textureGather(sampler, vec2(50.5)/vec2(100.0));
The coordinate I am requesting is exactly at the center of texel (50, 50). So, will I get the quad of texels bounded by (49, 49) and (50, 50), or the one bounded by (50, 50) and (51, 51)? The spec is evasive on the subject. It merely states the following:
The rules for the LINEAR minification filter are applied to
identify the four selected texels.
The relevant section 8.14.2 Coordinate Wrapping and Texel Selection of the spec is not terribly clear either. My best hypothesis would be the following:
ivec2 lowerBoundTexelCoord = ivec2(floor(textureCoord * textureSize - 0.5));
Does that hypothesis hold in practice? No, it doesn't. In fact no other hypothesis would hold either, since different hardware returns different results for this particular case:
textureSize: 100x100
textureCoord: vec2(50.5)/vec2(100.0)
Hypothesis: (49, 49) to (50, 50)
GeForce 1050 Ti: (49, 49) to (50, 50)
Intel HD Graphics 630: (50, 50) to (51, 51)
Another case:
textureSize: 100x100
textureCoord: vec2(49.5)/vec2(100.0)
Hypothesis: (48, 48) to (49, 49)
GeForce 1050 Ti: (49, 49) to (50, 50)
Intel HD Graphics 630: (48, 48) to (49, 49)
Does that make textureGather() useless due to the unpredictable behavior at texel center coordinates? Not at all! While you may not be able to predict which 4 texels it will return in some borderline cases, you can still force it to return the ones you want by giving it a coordinate that lies between those 4 texels. That is, if I want the texels bounded by (49, 49) and (50, 50), I would call:
textureGather(sampler, vec2(50.0, 50.0)/textureSize);
Since the coordinate I am requesting this time is the point where the 4 texels meet, any implementation will surely return those 4 texels.
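To make that concrete, here is a minimal GLSL sketch (my own helper, not from any documentation) that requests a chosen quad by sampling at the corner shared by its four texels. Since the corner for lower-bound texel (i, j) lies at (i + 1, j + 1) in texel units, a couple of ULPs of rounding error in the division still leaves the coordinate well within half a texel of that corner:
vec4 gatherQuadAt(sampler2D s, ivec2 lowerBound) {
    vec2 texSize = vec2(textureSize(s, 0));
    vec2 corner = (vec2(lowerBound) + 1.0) / texSize; // e.g. (49, 49) -> vec2(50.0) / texSize
    return textureGather(s, corner);
}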
Now, the question: Is my analysis correct? Does everyone who uses textureGather() force it to return a particular quad of texels rather than figuring out which ones it would return by itself? If so, it's a shame this isn't reflected in any documentation.
EDIT
It was pointed out that OpenGL doesn't guarantee the same result when dividing identical floating-point numbers on different hardware. It therefore becomes necessary to mention that in my actual code I had vec2(50.5)/vec2(textureSize(sampler, 0)) rather than vec2(50.5)/vec2(100.0). That's important, since the presence of textureSize() prevents the division from being carried out at shader compilation time.
Let me also rephrase the question:
Suppose you've got a normalized texture coordinate from a black box. That coordinate is then passed to textureGather():
vec2 textureCoord = takeFromBlackBox();
vec4 texelQuad = textureGather(sampler, textureCoord);
Can anyone produce GLSL code that would return the integer pair of coordinates of the texel returned in texelQuad[3], which is the lower-bound corner of a 2x2 box? The obvious solution below doesn't work in all cases:
vec2 textureDims = textureSize(sampler, 0);
ivec2 lowerBoundTexelCoord = ivec2(floor(textureCoord * textureDims - 0.5));
Examples of tricky cases where the above approach may fail are:
vec2 textureCoord = vec2(49.5)/vec2(textureSize(sampler, 0));
vec2 textureCoord = vec2(50.5)/vec2(textureSize(sampler, 0));
where textureSize(sampler, 0) returns ivec2(100, 100).

Recall that the texel locations for GL_LINEAR ([OpenGL 4.6 (Core) §8.14 Texture Minification]) are selected by the following formulas:
i0 = wrap(⌊u′ - 1/2⌋)
j0 = wrap(⌊v′ - 1/2⌋)
...
The value of (u′,v′) in this case is equal to
(vec2(50.5) / vec2(100)) * vec2(100)
However, note that this is not guaranteed to be equal to vec2(50.5). See The OpenGL Shading Language 4.60 §4.7.1 Range and Precision:
a / b, 1.0 / b: 2.5 ULP for b in the range [2^-126, 2^126].
So the value of u′ might be slightly larger than 50.5, slightly smaller than 50.5, or it might be exactly 50.5. No promises! Well, the spec promises no more than 2.5 ULP of error, but that's nothing to write home about. You can see that when you subtract 0.5 and take the floor, you are going to get either 49 or 50, depending on which way the number was rounded:
i0 = wrap(⌊(50.5 / 100) * 100 - 1/2⌋)
i0 = wrap(⌊(.505 ± error) * 100 - 1/2⌋)
i0 = wrap(⌊50.5 ± error - 1/2⌋)
i0 = wrap(⌊50 ± error⌋)
i0 = 50 (if error >= 0) or 49 (if error < 0)
So in fact it is not textureGather() that is behaving unpredictably. The unpredictable part is the rounding error when you divide by 100, which is explained in the GLSL spec and in the classic article, What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Or in other words, textureGather() always gives you the same result, but 50.5/100.0 does not.
Note that you could get exact results if your texture were a power of two, since you could use 50.5 * 0.0078125 to compute 50.5 / 128, and the result would be exactly correct, since multiplication is correctly rounded and 0.0078125 is a power of two.
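For example (a sketch, assuming a hypothetical 128-texel-wide texture):
float coord = 50.5 * (1.0 / 128.0); // exactly 50.5 / 128
Both 1.0 / 128.0 and the product are exactly representable, so the texel selection becomes predictable again.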

Related

Precisely map World Position to UV Texture-Coordinates (OpenGL Compute Shader)

I need help to precisely sample from my 3D Texture in the OpenGL (4.5) Compute Shader given a world position (within the domain of the texture dimensions). More precisely, I need help with my uv() function which maps world coordinates to the exact corresponding texture coordinates.
I want linear interpolation of the data, so my current approach uses texture(). But this results in errors around 0.001 compared to the expected values.
However, if I use texelFetch() and mix() to manually mimic the linear interpolation of texture() as stated in the specification (p. 248), I can reduce the error to 0.0000001 (which is desired). You can see an example of how I implemented it below in the Code section.
This is the function which I currently use inside the Compute Shader to calculate my uv-coordinates:
vec3 uv(const vec3 position) {
return (position + 0.5) / textureSize(tex[0], 0);
}
Though this one is often suggested across the internet, my results are not perfectly aligned.
Example
To elaborate, I have floating point data stored in a Texture as GL_RGB32F. For simplicity my example here uses scalar GL_R32F. The data has dimensions of, e.g., 20x20x20 (but can be arbitrary). I operate in the data domain [0, 19]^3 and want to exactly map my current position to the texture domain [0, 1]^3 to index the data at this position.
I have a test texture which alternates between 0 and 1 on the x-axis and therefore should interpolate for vec3(2.2, 0, 0) to 0.2.
As stated above, I tested texture() and texelFetch() + mix(). My manual interpolation evaluates to 0.200000003, which is fine. But calling texture() evaluates to 0.199218750, quite a large error by comparison. Strangely, manual and automatic interpolation evaluate to the same (correct) value for integer positions and for midpoints between integer positions (e.g., for vec3(2.0, 0, 0), vec3(2.5, 0, 0) and vec3(3.0, 0, 0)).
A visual example with actual calculated values: uv maps the data domain [0, 19]^3 to the texture domain [0, 1]^3 via
uv(x, y, z) = ((x, y, z) + 0.5) / (20, 20, 20)
so, for instance, the position (2.2, 3.0) maps to the texture coordinate (0.135, 0.175).
Code
I use C++, OpenGL 4.5 and globjects as a wrapper for OpenGL. The texture buffers are created and configured as depicted below.
// Texture buffer creation
t = globjects::Texture::createDefault(gl::GLenum::GL_TEXTURE_3D);
t->setParameter(gl::GL_TEXTURE_WRAP_S, gl::GL_CLAMP_TO_EDGE);
t->setParameter(gl::GL_TEXTURE_WRAP_T, gl::GL_CLAMP_TO_EDGE);
t->setParameter(gl::GL_TEXTURE_WRAP_R, gl::GL_CLAMP_TO_EDGE);
t->setParameter(gl::GL_TEXTURE_MIN_FILTER, gl::GL_LINEAR);
t->setParameter(gl::GL_TEXTURE_MAG_FILTER, gl::GL_LINEAR);
The texture data is uploaded and the compute shader is invoked as follows.
// datatex holds image information
t->image3D(0, gl::GL_RGB32F, datatex->dimensions, 0, gl::GL_RGB, gl::GL_FLOAT, (const uint8_t*) datatex->data());
// ... (Make texture resident)
gl::glDispatchCompute(1, 1, 1);
// ... (Make texture not resident)
The Compute Shader, summarized to the important parts, is as follows:
#version 450
#extension GL_ARB_bindless_texture : enable
layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
layout(binding=0) uniform samplers
{
sampler3D tex[1];
};
vec3 uv(const vec3 position) {
return (position + 0.5) / textureSize(tex[0], 0);
}
void main() {
// Automatic interpolation
vec4 correct1 = texture(tex[0], uv(vec3(2.0, 0, 0)));
vec4 correct2 = texture(tex[0], uv(vec3(2.5, 0, 0)));
vec4 correct3 = texture(tex[0], uv(vec3(3.0, 0, 0)));
vec4 wrong = texture(tex[0], uv(vec3(2.1, 0, 0)));
// Manual interpolation on x-axis
vec3 pos = vec3(2.1,0,0);
vec4 v0 = texelFetch(tex[0], ivec3(floor(pos.x), pos.yz), 0);
vec4 v1 = texelFetch(tex[0], ivec3(ceil(pos.x), pos.yz), 0);
vec4 correct4 = mix(v0, v1, fract(pos.x));
}
I'd love your input; I'm at my wit's end. Thanks!
System
I'm trying to achieve this on an NVIDIA GPU.
The texture units of GPUs are only required to sample with 8-bit precision in the fraction, per the D3D11 spec. This explains the small error, which does not occur at (normalized) integer or mid-integer coordinates.
The fractional precision can also be queried in Vulkan via subTexelPrecisionBits, and the online Vulkan database shows that no GPU as of today offers more than 8 bits of precision in the fraction during sampling.
Performing the linear interpolation in the shader itself offers the full float32 precision.
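For completeness, here is a sketch of that manual interpolation extended to full trilinear filtering at float32 precision (my own function name; it assumes CLAMP_TO_EDGE wrapping and a position given in texel space, as in the question):
vec4 sampleTrilinear(sampler3D t, vec3 pos) {
    ivec3 size = textureSize(t, 0);
    ivec3 p0 = clamp(ivec3(floor(pos)), ivec3(0), size - 1);
    ivec3 p1 = min(p0 + 1, size - 1);          // clamp to edge
    vec3 f = fract(pos);                        // full float32 weights
    // fetch the 8 surrounding texels
    vec4 c000 = texelFetch(t, ivec3(p0.x, p0.y, p0.z), 0);
    vec4 c100 = texelFetch(t, ivec3(p1.x, p0.y, p0.z), 0);
    vec4 c010 = texelFetch(t, ivec3(p0.x, p1.y, p0.z), 0);
    vec4 c110 = texelFetch(t, ivec3(p1.x, p1.y, p0.z), 0);
    vec4 c001 = texelFetch(t, ivec3(p0.x, p0.y, p1.z), 0);
    vec4 c101 = texelFetch(t, ivec3(p1.x, p0.y, p1.z), 0);
    vec4 c011 = texelFetch(t, ivec3(p0.x, p1.y, p1.z), 0);
    vec4 c111 = texelFetch(t, ivec3(p1.x, p1.y, p1.z), 0);
    // blend along x, then y, then z
    vec4 c00 = mix(c000, c100, f.x), c10 = mix(c010, c110, f.x);
    vec4 c01 = mix(c001, c101, f.x), c11 = mix(c011, c111, f.x);
    vec4 c0 = mix(c00, c10, f.y), c1 = mix(c01, c11, f.y);
    return mix(c0, c1, f.z);
}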

What algorithm does GL_LINEAR use exactly?

The refpages say "Returns the weighted average of the four texture elements that are closest to the specified texture coordinates." How exactly are they weighted? And what about 3D textures, does it still only use 4 texels for interpolation or more?
In 2D textures, 4 samples are used, which means bilinear interpolation: 3 linear interpolations. The weights are the normalized distances of the target texel to its 4 neighbors.
So for example you want the texel at
(s,t)=(0.21,0.32)
but the nearby texels have coordinates:
(s0,t0)=(0.20,0.30)
(s0,t1)=(0.20,0.35)
(s1,t0)=(0.25,0.30)
(s1,t1)=(0.25,0.35)
the weights are:
ws = (s-s0)/(s1-s0) = 0.2
wt = (t-t0)/(t1-t0) = 0.4
so linearly interpolate the texels in the s direction:
c0 = texture(s0,t0) + (texture(s1,t0)-texture(s0,t0))*ws
c1 = texture(s0,t1) + (texture(s1,t1)-texture(s0,t1))*ws
and finally in the t direction:
c = c0 + (c1-c0)*wt
where texture(s,t) returns the texel color at (s,t) when the coordinate corresponds to an exact texel, and c is the final interpolated texel color.
In reality the s,t coordinates are multiplied by the texture resolution (xs,ys), which converts them to texel units. After that, s-s0 and t-t0 are already normalized, so there is no need to divide by s1-s0 and t1-t0, as they are both equal to one. So:
s=s*xs; s0=floor(s); s1=s0+1; ws=s-s0;
t=t*ys; t0=floor(t); t1=t0+1; wt=t-t0;
c0 = texture(s0,t0) + (texture(s1,t0)-texture(s0,t0))*ws;
c1 = texture(s0,t1) + (texture(s1,t1)-texture(s0,t1))*ws;
c = c0 + (c1-c0)*wt;
I never used 3D textures before, but in that case 8 texels are used and it is called trilinear interpolation, which is 2x bilinear interpolation: simply take the 2 nearest slices, compute each with bilinear interpolation, and then compute the final texel by linear interpolation along the u coordinate in exactly the same way ... so
u=u*zs; u0=floor(u); u1=u0+1; wu=u-u0;
c = cu0 + (cu1-cu0)*wu;
where zs is the number of slices, cu0 is the result of the bilinear interpolation in the slice at u0, and cu1 in the slice at u1. The same principle is also used for mipmaps...
All the coordinates may be offset by 0.5 texel, and the resolution multiplication can be done with xs-1 instead of xs, depending on your clamp settings ...
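For reference, the same procedure translated into a GLSL sketch (my own translation, assuming CLAMP_TO_EDGE wrapping and the 0.5 texel center offset just mentioned):
vec4 bilinearFetch(sampler2D tex, vec2 st) {
    vec2 size = vec2(textureSize(tex, 0));
    vec2 p = st * size - 0.5;                  // to texel units, centers on integers
    ivec2 i0 = max(ivec2(floor(p)), ivec2(0));
    ivec2 i1 = min(i0 + 1, ivec2(size) - 1);
    vec2 w = fract(p);                          // the ws, wt weights from above
    vec4 c00 = texelFetch(tex, i0, 0);
    vec4 c10 = texelFetch(tex, ivec2(i1.x, i0.y), 0);
    vec4 c01 = texelFetch(tex, ivec2(i0.x, i1.y), 0);
    vec4 c11 = texelFetch(tex, i1, 0);
    return mix(mix(c00, c10, w.x), mix(c01, c11, w.x), w.y);
}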
As well as the bilinear interpolation outlined in Spektre's answer, you should be aware of the precision of GL_LINEAR interpolation. Many GPUs (e.g. Nvidia, AMD) do the interpolation using fixed-point arithmetic, with only ~255 distinct values between the R,G,B,A values in the texture.
For example, here is pseudo code showing how GPUs might do the interpolation:
float interpolate_red(float red0, float red1, float f) {
    int g = (int)(f * 256);                      // weight quantized to 8 bits
    return (red0 * (256 - g) + red1 * g) / 256;
}
If your texture is for coloring and contains GL_UNSIGNED_BYTE values, then it is probably OK for you. But if your texture is a lookup table for some other calculation and it contains GL_UNSIGNED_SHORT or GL_FLOAT values, then this loss of precision could be a problem for you. In that case you should make your lookup table bigger, with in-between values calculated at float or double precision.
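As a concrete check of this model (a sketch; no GPU is specified to behave exactly this way): with f = 0.2, red0 = 0 and red1 = 1, the quantized weight becomes int(0.2 * 256) = 51, giving 51/256 = 0.19921875, which matches the 0.199218750 that texture() returned in the compute-shader question above.
float g = float(int(0.2 * 256.0));                 // 51.0
float r = (0.0 * (256.0 - g) + 1.0 * g) / 256.0;   // 0.19921875 instead of 0.2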

OpenGL: issues with converting floats from texture to integers in fragment shader

I render to a texture which is in the format GL_RGBA8.
When I render to this texture I have a fragment shader whose output is set to color = (1/255, 0, 0, 1). Triangles are overlapping each other and I set the blend mode to (GL_ONE, GL_ONE) so for example if 2 triangles overlap for a given fragment, the resulting pixel at that fragment position will have value (2/255.0).
I then use this texture in a second pass (applied to a quad filling up the screen). My goal at this point, when I read the values back from the texture, is to convert the values (which are in floating-point format in the range [0:1]) back to integers in the range [0:255]. If I look at the pixel that had value (2.0/255.0), I should get the result (2.0/255.0) * 255.0 = 2.0, but I don't.
If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a == 2.0) ? 1.0 : 0.0;
color = vec4(0, b, 0, 1);
I get a black image. If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a > 1.999 && a <= 2.0) ? 1.0 : 0.0;
color = vec4(0, b, 0, 1);
I get the expected result. So in summary it seems like the conversion back to [0:255] suffers from floating-point precision issues. Adding
precision highp float;
doesn't make a difference. I also turned filtering off (and use no mipmaps).
This would work:
float a = ceil(texture(colorTexture, texCoord).x * 255);
Though in general that doesn't seem very robust as a solution (why would ceil work and not floor, for example? why is the value 1.999999 rather than 2.00001, and can I be sure it will always be that way?). People must have done this before, so I am sure there's a much better way of guaranteeing an accurate result without too much fiddling with the numbers. Any hints would be greatly appreciated.
EDIT
As pointed out in 2 comments, it follows from the way floating-point numbers are encoded that you can't be guaranteed to get an "integer" number back, even if the stored value is mathematically a whole number (that's a good reminder of this important point). So let me reformulate the question: is there a preferred way in GLSL to round a number to its closest integer?
And that would be round:
float a = round(texture(colorTexture, texCoord).x * 255);
Hope this helps other people in the future.
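Putting it together for the overlap counter (a sketch using the names from the question):
int count = int(round(texture(colorTexture, texCoord).x * 255.0)); // exact as long as the accumulated error stays below 0.5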

Advanced moiré pattern reduction in HLSL / GLSL procedural texture shaders - antialiasing

I am working on a procedural texture. It looks fine, except that very far away the small texture pixels disintegrate into noise and moiré patterns.
I have set out to find a way to average and quantise the scale of the pattern by distance, so that close up it is in full detail, and far away it is rounded off so that one pixel of a distant mountain represents only one colour found there, and not 10 or 20 colours at that point.
It is easy to do by rounding the World_Position that the volumetric texture is based on using an if statement, i.e.:
if( camera-pixel_distance > 1200 meters ) {wpos = round(wpos/3)*3;}//---round far away pixels
return texturefunction(wpos);
The result of rounding far-away texture coordinates is that distant areas flatten to single colours (screenshot omitted).
The trouble with this is that I have to write about 5 if conditions for the various distances, and I have to guess a good rounding value for each.
I tried to make a function that cuts the pixel's distance into steps and applies a LOD divider to the pixel's world-position value to make it progressively rounder with distance, but I got nonsense results; the HLSL was totally flipping out. Here is the attempt:
float cmra= floor(_WorldSpaceCameraPos/500)*500; //round camera distance by steps of 500m
float dst= (1-distance(cmra,pos)/4500)*1000 ; //maximum faraway view is 4500 meters
pos= floor(pos/dst)*dst;//close pixels are rounded by 1000, far ones rounded by 20,30 etc
It returned nonsense patterns that I could not understand.
Are there well-documented algorithms for smoothing and rounding distant texture artifacts? Can I use the screen pixel resolution, combined with the distance of the pixel, to round each pixel to one colour that stays stable?
Are you familiar with the GLSL (and I would assume HLSL) functions dFdx() and dFdy() or fwidth()? They were made specifically to solve this problem. From the GLSL Spec:
genType dFdy (genType p)
Returns the derivative in y using local differencing for the input argument p.
These two functions are commonly used to estimate the filter width used to anti-alias procedural textures.
and
genType fwidth (genType p)
Returns the sum of the absolute derivative in x and y using local differencing for the input argument p, i.e.: abs (dFdx (p)) + abs (dFdy (p));
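One way to apply them to the world-position rounding above (a sketch; pattern(), patternFrequency and averageColor are placeholders for your own texture function, its feature frequency, and its average colour, none of which are in the original posts):
vec3 antialiasedPattern(vec3 wpos) {
    // how many pattern features fit under one screen pixel at this fragment
    float featuresPerPixel = fwidth(wpos.x) * patternFrequency;
    // fade to the pattern's average colour once features shrink below ~a pixel
    float fade = smoothstep(0.5, 1.0, featuresPerPixel);
    return mix(pattern(wpos), averageColor, fade);
}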
OK, I found some great code and a tutorial for the solution. It's simple code that can be tweaked by distance and many other parameters, from this tutorial:
http://www.yaldex.com/open-gl/ch17lev1sec4.html#ch17fig04
half4 frag (v2f i) : COLOR
{
    float Frequency = 0.020;
    float3 pos = mul(_Object2World, i.uv).xyz;
    float V = pos.z;
    float sawtooth = frac(V * Frequency);
    float triangle = abs(2.0 * sawtooth - 1.0);
    //return triangle;
    float dp = length(float2(ddx(V), ddy(V)));
    float edge = dp * Frequency * 8.0;
    float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);
    // gl_FragColor = vec4(vec3(square), 1.0);
    if (pos.x > 0.0) { return float4(square.xxx, 1.0); }
    return float4(triangle.xxx, 1.0); // pos.x <= 0, so every path returns a value
}

Finding nearest RGB colour

I was told to use the distance formula to find whether one color matches another, so I have:
struct RGB_SPACE
{
float R, G, B;
};
RGB_SPACE p = {255, 164, 32}; //pre-defined
RGB_SPACE u = {192, 35, 111}; //user defined
long distance = static_cast<long>(pow(u.R - p.R, 2) + pow(u.G - p.G, 2) + pow(u.B - p.B, 2));
This gives just a distance, but how would I know if the color matches the user-defined one by at least 25%?
I'm not quite sure, but I have an idea: check each color value to see if the difference is within 25%. For example:
float R = u.R/p.R * 100;
float G = u.G/p.G * 100;
float B = u.B/p.B * 100;
if (R <= 25 && G <= 25 && B <= 25)
{
//color matches with pre-defined color.
}
I would suggest not to check in RGB space. If you have (0,0,0) and (100,0,0), they are similar according to cababunga's formula (as well as according to casablanca's, which considers too many colors similar). However, they LOOK pretty different.
The HSL and HSV color models are based on human interpretation of colors and you can then easily specify a distance for hue, saturation and brightness independently of each other (depending on what "similar" means in your case).
"Matches by at least 25%" is not a well-defined problem. Matches by at least 25% of what, and according to what metric? There's tons of possible choices. If you compare RGB colors, the obvious ones are distance metrics derived from vector norms. The three most important ones are:
1-norm or "Manhattan distance": distance = abs(r1-r2) + abs(g1-g2) + abs(b1-b2)
2-norm or Euclidean distance: distance = sqrt(pow(r1-r2, 2) + pow(g1-g2, 2) + pow(b1-b2, 2)) (you compute the square of this, which is fine - you can avoid the sqrt if you're just checking against a threshold, by squaring the threshold too)
Infinity-norm: distance = max(abs(r1-r2), abs(g1-g2), abs(b1-b2))
There's lots of other possibilities, of course. You can check if they're within some distance of each other: If you want to allow up to 25% difference (over the range of possible RGB values) in one color channel, the thresholds to use for the 3 methods are 3/4*255, sqrt(3)/4*255 and 255/4, respectively. This is a very coarse metric though.
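For reference, here are the three metrics written out as a GLSL-style sketch (colors given as vec3 in the range [0, 255]):
float dist1(vec3 a, vec3 b) { vec3 d = abs(a - b); return d.r + d.g + d.b; }            // 1-norm (Manhattan)
float dist2(vec3 a, vec3 b) { return distance(a, b); }                                  // 2-norm (Euclidean)
float distInf(vec3 a, vec3 b) { vec3 d = abs(a - b); return max(d.r, max(d.g, d.b)); }  // infinity-norm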
A better way to measure distances between colors is to convert your colors to a perceptually uniform color space like CIELAB and do the comparison there; there's a fairly good Wikipedia article on the subject, too. That might be overkill depending on your intended application, but those are the color spaces where measured distances have the best correlation with distances perceived by the human visual system.
Note that the maximum possible distance is between (255, 255, 255) and (0, 0, 0), which are at a distance of 3 * 255^2. Obviously these two colours match the least (0% match) and they are a distance 100% apart. Then at least a 25% match means a distance less than 75%, i.e. 3 / 4 * 3 * 255^2 = 9 / 4 * 255 * 255. So you could just check if:
distance <= 9 / 4 * 255 * 255
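Or, as a sketch of the whole check under that convention (where "distance" is the sum of squared differences, as above):
bool matchesAtLeast25Percent(vec3 p, vec3 u) {
    vec3 d = u - p;
    return dot(d, d) <= 9.0 / 4.0 * 255.0 * 255.0; // at most 75% of the maximum distance
}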