I need help precisely sampling from my 3D texture in an OpenGL (4.5) Compute Shader, given a world position (within the domain of the texture dimensions). More precisely, I need help with my uv() function, which maps world coordinates to the exact corresponding texture coordinates.
I want linear interpolation of the data, so my current approach uses texture(). But this results in errors of around 0.001 compared to the expected values.
However, if I use texelFetch() and mix() to manually mimic the linear interpolation of texture() as stated in the specification (p. 248), I can reduce the error to 0.0000001 (which is what I want). You can see an example of how I implemented it below in the Code section.
This is the function which I currently use inside the Compute Shader to calculate my uv-coordinates:
vec3 uv(const vec3 position) {
return (position + 0.5) / textureSize(tex[0], 0);
}
Though this mapping is often suggested across the internet, my results are not perfectly aligned.
Example
To elaborate, I have floating point data stored in a Texture as GL_RGB32F. For simplicity my example here uses scalar GL_R32F. The data has dimensions of, e.g., 20x20x20 (but can be arbitrary). I operate in the data domain [0, 19]^3 and want to exactly map my current position to the texture domain [0, 1]^3 to index the data at this position.
I have a test texture which alternates between 0 and 1 along the x-axis and should therefore interpolate to 0.2 at vec3(2.2, 0, 0).
As stated above, I tested texture() and texelFetch() + mix(). My manual interpolation evaluates to 0.200000003, which is fine. But calling texture() evaluates to 0.199218750, quite a high error in comparison. Strangely, manual and automatic interpolation evaluate to the same (correct) value for integer positions and for the midpoints between integer positions (e.g., for vec3(2.0, 0, 0), vec3(2.5, 0, 0) and vec3(3.0, 0, 0)).
A visual example with actual calculated values:
uv(x, y, z) = ((x, y, z) + 0.5) / (20, 20, 20)
19 |                            1 |
 : |                              |
 : |             uv               |
   |   (2.2, 3.0)  ===>           |   (0.135, 0.175)
 1 |    x                         |    x
   |___________                   |___________
   0  1  ..  19                   0         1
Code
I use C++, OpenGL 4.5 and globjects as a wrapper for OpenGL. The texture buffers are created and configured as depicted below.
// Texture buffer creation
t = globjects::Texture::createDefault(gl::GLenum::GL_TEXTURE_3D);
t->setParameter(gl::GL_TEXTURE_WRAP_S, gl::GL_CLAMP_TO_EDGE);
t->setParameter(gl::GL_TEXTURE_WRAP_T, gl::GL_CLAMP_TO_EDGE);
t->setParameter(gl::GL_TEXTURE_WRAP_R, gl::GL_CLAMP_TO_EDGE);
t->setParameter(gl::GL_TEXTURE_MIN_FILTER, gl::GL_LINEAR);
t->setParameter(gl::GL_TEXTURE_MAG_FILTER, gl::GL_LINEAR);
The texture data is uploaded and the Compute Shader is invoked as follows.
// datatex holds image information
t->image3D(0, gl::GL_RGB32F, datatex->dimensions, 0, gl::GL_RGB, gl::GL_FLOAT, (const uint8_t*) datatex->data());
// ... (Make texture resident)
gl::glDispatchCompute(1, 1, 1);
// ... (Make texture not resident)
The Compute Shader, summarized to the important parts, is as follows:
#version 450
#extension GL_ARB_bindless_texture : enable
layout(local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
layout(binding=0) uniform samplers
{
sampler3D tex[1];
};
vec3 uv(const vec3 position) {
return (position + 0.5) / textureSize(tex[0], 0);
}
void main() {
// Automatic interpolation
vec4 correct1 = texture(tex[0], uv(vec3(2.0, 0, 0)));
vec4 correct2 = texture(tex[0], uv(vec3(2.5, 0, 0)));
vec4 correct3 = texture(tex[0], uv(vec3(3.0, 0, 0)));
vec4 wrong = texture(tex[0], uv(vec3(2.2, 0, 0)));
// Manual interpolation on x-axis
vec3 pos = vec3(2.2, 0, 0);
vec4 v0 = texelFetch(tex[0], ivec3(floor(pos.x), pos.yz), 0);
vec4 v1 = texelFetch(tex[0], ivec3(ceil(pos.x), pos.yz), 0);
vec4 correct4 = mix(v0, v1, fract(pos.x));
}
I'd love your input, I'm at my wits' end. Thanks!
System
I'm trying to achieve this on an NVIDIA GPU.
The texture units of GPUs are only required to sample with 8-bit precision in the interpolation fraction, as per the D3D11 specification. This explains the small error, and also why it does not occur at (normalized) integer or mid-integer coordinates: with 8 fractional bits, the weight 0.2 is quantized to 51/256 = 0.19921875, which is exactly the value texture() returned above, while weights of 0.0 and 0.5 are representable exactly.
The fractional precision can also be queried in Vulkan via subTexelPrecisionBits, and the online Vulkan hardware database shows that there is no GPU today which offers more than 8 bits of precision in the fraction during sampling.
Performing the linear interpolation in the shader itself gives you the full float32 precision.
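For reference, here is a minimal sketch of what the full trilinear variant of the manual interpolation could look like (the code in the question only interpolates along x). sampleTrilinear is a hypothetical helper name; it assumes the same sampler3D tex[0] and texel-space positions as above.
// Sketch, not from the original post: full trilinear interpolation via texelFetch(),
// blended with mix() in full float32 precision.
vec4 sampleTrilinear(const vec3 position) {
    ivec3 size = textureSize(tex[0], 0);
    ivec3 i0 = ivec3(floor(position));
    ivec3 i1 = min(i0 + 1, size - 1); // emulate GL_CLAMP_TO_EDGE at the upper border
    vec3 f = fract(position);

    // Fetch the 8 surrounding texels.
    vec4 c000 = texelFetch(tex[0], ivec3(i0.x, i0.y, i0.z), 0);
    vec4 c100 = texelFetch(tex[0], ivec3(i1.x, i0.y, i0.z), 0);
    vec4 c010 = texelFetch(tex[0], ivec3(i0.x, i1.y, i0.z), 0);
    vec4 c110 = texelFetch(tex[0], ivec3(i1.x, i1.y, i0.z), 0);
    vec4 c001 = texelFetch(tex[0], ivec3(i0.x, i0.y, i1.z), 0);
    vec4 c101 = texelFetch(tex[0], ivec3(i1.x, i0.y, i1.z), 0);
    vec4 c011 = texelFetch(tex[0], ivec3(i0.x, i1.y, i1.z), 0);
    vec4 c111 = texelFetch(tex[0], ivec3(i1.x, i1.y, i1.z), 0);

    // Interpolate along x, then y, then z.
    vec4 c00 = mix(c000, c100, f.x);
    vec4 c10 = mix(c010, c110, f.x);
    vec4 c01 = mix(c001, c101, f.x);
    vec4 c11 = mix(c011, c111, f.x);
    vec4 c0 = mix(c00, c10, f.y);
    vec4 c1 = mix(c01, c11, f.y);
    return mix(c0, c1, f.z);
}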
I'm trying to make something on Shadertoy: https://www.shadertoy.com/view/wsffDN
(original ref: https://www.shadertoy.com/view/3dtSD7)
In Buffer A, line 18, uv is declared as a vec2:
vec2 uv = (fragCoord.xy - iResolution.xy*.5) / iResolution.y;
but in this line
sceneColor = vec3((uv[0] + stagger) / initpack + 0.05*0., -0, 0.05);
uv[0] is used as a float.
How does this work, and what does uv[0]'s value become?
It is perfectly legal to access the components of any vec type (or mat type, for that matter) with array syntax. You can even use a non-constant array index (depending on the GLSL version; 1.30+ allows it). uv[0] does exactly what it looks like: it accesses the first component of the vector.
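To illustrate, a tiny made-up example (the values are hypothetical, just to show the equivalence):
vec2 uv = vec2(0.25, 0.75);
float a = uv[0]; // same component as uv.x, i.e. 0.25
float b = uv.x;  // identical value, just different syntax
int i = 1;
float c = uv[i]; // non-constant index, allowed in GLSL 1.30+, yields uv.y = 0.75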
I render to a texture which is in the format GL_RGBA8.
When I render to this texture I have a fragment shader whose output is set to color = (1/255, 0, 0, 1). Triangles overlap each other, and I set the blend mode to (GL_ONE, GL_ONE), so for example if 2 triangles overlap at a given fragment, the resulting pixel at that position will have the value 2/255.0.
I then use this texture in a second pass (applied to a quad filling up the screen). My goal at this point, when I read the values back from the texture, is to convert the values (which are floating-point values in the range [0:1]) back to integers in the range [0:255]. If I look at the pixel that had the value 2.0/255.0, I should get (2.0/255.0) * 255.0 = 2.0, but I don't.
If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a == 2) ? 1.0 : 0;
color = vec4(0, b, 0, 1);
I get a black image. If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a > 1.999 && a <= 2) ? 1.0 : 0;
color = vec4(0, b, 0, 1);
I get the expected result. So, in summary, it seems like the conversion back to [0:255] suffers from floating-point precision issues.
precision highp float;
This doesn't make a difference. I also turned filtering off (and there are no mipmaps).
This would work:
float a = ceil(texture(colorTexture, texCoord).x * 255);
Though in general that doesn't look very robust as a solution (why would ceil work and not floor, for example? Why is the value 1.999999 rather than 2.00001, and can I be sure it will always be that way?). People must have done this before, so I am sure there's a much better way of guaranteeing you get an accurate result without too much fiddling with the numbers. Any hints would be greatly appreciated.
EDIT
As pointed out in 2 comments, it follows directly from the way floating-point numbers are encoded that you have no guarantee of getting an "integer" number back, even if the value should be a whole number (that's an important point to be reminded of). So I reformulate my question: is there a preferred way in GLSL to round a number to its closest integer value?
And that would be round:
float a = round(texture(colorTexture, texCoord).x * 255);
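For example, if you then need an actual integer to work with (hypothetical follow-up code, assuming the same colorTexture and texCoord as above):
// round() first, then cast, so that a value that reads back as ~1.9999998 becomes exactly 2
int count = int(round(texture(colorTexture, texCoord).x * 255.0));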
Hope this helps other people in the future.
Is it possible to calculate my mesh normal vector when I have just the TANGENT and BINORMAL vectors?
float4 Binormal : BINORMAL ;
float4 Tangent : TANGENT ;
float4 Position : POSITION ;
As far as I understand it, a binormal vector is defined from the normal and tangent vectors through a cross product: binormal = tangent x normal.
Thus normal = binormal x tangent, that is, what you wrote is correct.
Since, according to the documentation, the cross product is only defined for vectors of size 3, you can do the following:
normal = float4(cross(binormal.xyz, tangent.xyz), 1.0);
This is using the cross product intrinsic from HLSL, which I recommend. But to get into more detail, here is what that cross product actually computes.
The real formula is the following, where u is the binormal, v is the tangent and s is the normal:
s = u x v = (u.y*v.z - u.z*v.y,  u.z*v.x - u.x*v.z,  u.x*v.y - u.y*v.x)
Written out by hand, the code for the cross product is therefore:
normal.x = binormal.y*tangent.z - binormal.z*tangent.y;
normal.y = binormal.z*tangent.x - binormal.x*tangent.z;
normal.z = binormal.x*tangent.y - binormal.y*tangent.x;
And an alternate, swizzled version (this one returns a vector of size 3; use float4(..., 1.0) if you want a 4-component vector):
normal = binormal.yzx*tangent.zxy - binormal.zxy*tangent.yzx;
I have a situation in GLSL where I need to calculate the divergence of a vector in a fragment shader:
vec3 posVector;
Divergence is mathematically given by
div F = ∇ · F = ∂F.x/∂x + ∂F.y/∂y + ∂F.z/∂z
i.e., it's the dot product between the nabla operator and the vector field.
Does anyone know how to compute this?
The divergence of the position vector is the divergence of the identity vector field
F: ℝ³ -> ℝ³, F(r) = r,
and the div of that is both constant and known:
div(r) = ∂x/∂x + ∂y/∂y + ∂z/∂z = 3.