smoothstep() returns different values for "identical" arguments - glsl

Running a Surface Laptop 3 with Intel® Iris® Plus Graphics (driver version 30.0.101.1191). Perhaps I'm facing a bug in Intel's shader compiler, though I have limited experience with shaders in general, so perhaps the behavior below is expected.
Head over to https://www.shadertoy.com/new and try the shader below. For some reason, defining a float d = 1.0 and passing it seems to produce a different result than passing the compile-time constant 1.0 directly.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float d = 1.0;
    // WHY does this return 1?
#if 1
    float x = smoothstep(0.0, 0.0, d);
#else
    // Whereas this returns 0?
    float x = smoothstep(0.0, 0.0, 1.0);
    // OR this, for that matter:
    // float x = smoothstep(0.000000000000001, 0.0, d);
#endif
    fragColor = vec4(1.0, 0.0, 1.0, 1.0) * x;
}
Thus, smoothstep(0.0, 0.0, 1.0) returns 0 but the equivalent smoothstep(0.0, 0.0, d) returns 1. Why?

smoothstep requires that the first parameter edge0 is strictly less than the second parameter edge1. The results are undefined when edge0 >= edge1.
The reason behind this is pretty simple. The GL docs (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/smoothstep.xhtml) give a reference implementation that looks like this:
genType t; /* Or genDType t; */
t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
return t * t * (3.0 - 2.0 * t);
Note the denominator,
(edge1 - edge0)
If those values are equal, that's a divide by zero. Shader code can't throw an exception, so the division simply produces an infinity or NaN and your implementation ends up returning a 0 or a 1.
Edit:
On the question of "why is it returning 0 or 1": the sample implementation from the GL docs clamps the result to [0, 1]. If edge0 > edge1, the resulting negative value is clamped to 0. If edge0 == edge1, the divide-by-zero yields positive infinity (the numerator here is positive), which is clamped to 1.
But that's all speculation, since the actual implementation in your GL driver is a black box. For cases where edge0 >= edge1, the result is undefined and should be ignored.
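If you need a well-defined result for the degenerate case, one option is to guard against it yourself before calling smoothstep. A minimal sketch (the helper name and the step() fallback are my own choice, not something the spec prescribes):
// Hypothetical helper: falls back to a hard step() when edge0 < edge1 does not hold
float safe_smoothstep(float edge0, float edge1, float x)
{
    if (edge0 < edge1)
        return smoothstep(edge0, edge1, x);
    return step(edge0, x); // degenerate edges: hard threshold at edge0
}
With that, both safe_smoothstep(0.0, 0.0, d) and safe_smoothstep(0.0, 0.0, 1.0) return 1.0, no matter how the compiler constant-folds the call.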

Related

Linearize depth

In OpenGL you can linearize a depth value like so:
float linearize_depth(float d, float zNear, float zFar)
{
    float z_n = 2.0 * d - 1.0;
    return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
(Source: https://stackoverflow.com/a/6657284/10011415)
However, Vulkan handles depth values somewhat differently (https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/). I don't quite understand the math behind it; what changes would I have to make to the function to linearize a depth value with Vulkan?
The important difference between OpenGL and Vulkan here is that the normalized device coordinates (NDC) have a different range for z (the depth). In OpenGL it's -1 to 1 and in Vulkan it's 0 to 1.
However, in OpenGL when the depth is stored into a depth texture and you read from it, the value is further normalized to 0 to 1. This seems to be the case in your example, since the first line of your function maps it back to -1 to 1.
In Vulkan, your depth is always between 0 and 1, so the above function works in Vulkan as well. You can simplify it a bit though:
float linearize_depth(float d, float zNear, float zFar)
{
    return zNear * zFar / (zFar + d * (zNear - zFar));
}
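For what it's worth, the simplification is just algebra: substitute z_n = 2.0 * d - 1.0 into the OpenGL formula and the factor of 2 cancels:
2.0 * zNear * zFar / (zFar + zNear - (2.0 * d - 1.0) * (zFar - zNear))
  = 2.0 * zNear * zFar / (2.0 * (zFar + d * (zNear - zFar)))
  = zNear * zFar / (zFar + d * (zNear - zFar))
which is exactly the body of the simplified function above.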

How does HLSL treat arbitrarily small numbers?

First of all, I'm rendering a unit square and applying a world transformation to translate it to the middle of the screen. Up to this point everything works fine. Here's the result image.
However, when I'm applying a view transformation I start to run into some problems. This is the code where I'm creating the view matrix.
XMFLOAT4X4 viewMatrix;
XMVECTOR eye = XMLoadFloat3(&m_Eye);
XMVECTOR focus = XMLoadFloat3(&m_LookAt);
XMVECTOR up = XMLoadFloat3(&m_Up);
XMStoreFloat4x4(&viewMatrix, XMMatrixLookAtLH(eye, eye + focus, up));
And so I tracked the problem down to the creation of the lookAt vector in the following code.
float x = cos(XMConvertToRadians(m_Yaw)) * cos(XMConvertToRadians(m_Pitch));
float y = sin(XMConvertToRadians(m_Pitch));
float z = sin(XMConvertToRadians(m_Yaw)) * cos(XMConvertToRadians(m_Pitch));
XMVECTOR m_LookAt = -XMVector3Normalize(XMVectorSet(x, y, z, 0.0f));
Applying the following parameters...
m_Yaw = -90.0f
m_Pitch = 0.0f
m_Eye = (0.0f, 0.0f, -5.0f)
... results in a (-4.37113883e-008, 0.0, -1.0) lookAt vector, in a
{ 1.0, 0.0, 4.37113883e-008, 0.0}
{ 0.0, 1.0, 0.0, 0.0}
{-4.37113883e-008, 0.0, 1.0, 0.0}
{-2.18556949e-007, 0.0, 5.0, 1.0}
view matrix and the following image.
So, I tried to create a hardcoded lookAt vector with the same m_Yaw and m_Pitch parameters.
x = cos(-90) * cos(0) = 0 * 1 = 0
y = sin(0) = 0
z = sin(-90) * cos(0) = -1 * 1 = -1
The only difference is that the x value resulted in 0 instead of -4.37113883e-008, which I believe is due to the conversion to radians. This alone "fixes" the problem and the result image is identical to the first one.
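For what it's worth, that -4.37113883e-008 is exactly what single precision predicts rather than anything DirectX-specific. The nearest 32-bit float to pi/2 is slightly larger than the true value, and cos() is roughly linear around pi/2, so the rounding error passes straight through (the numbers below are the standard float values, shown for illustration):
pi/2                  = 1.57079632679489662...
nearest 32-bit float  = 1.57079637050628662...
difference            = 4.37113883e-008
cos(pi/2 + e) ≈ -e    = -4.37113883e-008
So cos(XMConvertToRadians(m_Yaw)) for m_Yaw = -90 cannot come out as exactly 0.0f; the tiny negative x is the expected rounding error.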
However, since I cannot keep these values fixed, I created a small constant const float kEpsilon = 0.0001f; and clamped to zero any component of the lookAt vector that is smaller than kEpsilon and bigger than -kEpsilon.
XMVECTOR lookAt = XMVectorSet(
x < kEpsilon && x > -kEpsilon ? 0.0f : x,
y < kEpsilon && y > -kEpsilon ? 0.0f : y,
z < kEpsilon && z > -kEpsilon ? 0.0f : z,
0.0f
);
But this feels like a cheap solution, and I still don't understand why this bug was occurring. Is this due to some particularity of DirectX that I might be unaware of? Or is the error in my code?
Additional information:
Vertex shader: https://pastebin.com/Lrd79HMB

hlsl unexpected acos result

I found a few strange HLSL bugs, or else PIX is reporting nonsense:
I have two orthogonal vectors: A = { 0.0f, -1.0f, 0.0f } and B = { 0.0f, 0.0f, 1.0f }.
If I use the HLSL dot function, the output is -0.0f, which makes sense. But the acos of that output is -0.0000675917 (that's what PIX says, and what the shader outputs), which is not what I expected.
Even if I compute the dot product myself (A.x*B.x + A.y*B.y + etc.), the result is still 0.0f, but the acos of my result isn't zero.
I need the result of acos to be as precise as possible, because I want to color my vertices according to the angle between the triangle normal and a given vector.
float4 PS_MyPS(VS_OUTPUT input) : COLOR
{
    float Light = saturate(dot(input.Normal, g_LightDir)) + saturate(dot(-input.Normal, g_LightDir)); // compute the lighting
    if (dot(input.Vector, CameraDirection) < 0) // if the angle between the normal and the camera direction is greater than 90 degrees
    {
        input.Vector = -input.Vector; // use a mirrored normal
    }
    float angle = acos(0.0f) - acos(dot(input.Vector, Vector));
    float4 Color;
    if (angle > Angles.x) // set the color according to the angle
    {
        Color = Color1;
    }
    else if (angle > Angles.y)
    {
        Color = Color2;
    }
    else if (angle >= -abs(Angles.y))
    {
        Color = Color3;
    }
    else if (angle >= Angles.z)
    {
        Color = Color4;
    }
    else
    {
        Color = Color5;
    }
    return Light * Color;
}
It works fine for angles above 0.01 degrees, but gives wrong results for smaller values.
The other bugs I found: the "length" function in HLSL returns 1 for the vector (0, -0, -0, 0) in PIX, and the HLSL "any" function on that vector returns true as well. That would imply -0.0f != 0.0f.
Has anyone else encountered these and maybe has a workaround for my problem?
I tested it on an Intel HD Graphics 4600 and an Nvidia card with the same results.
One of the primary reasons acos may return bad results is that acos only accepts values between -1.0 and 1.0.
If the input exceeds that range even slightly (1.00001 instead of 1.0), it may return an incorrect result.
I deal with this problem by forced capping, i.e. putting in a check like:
if (something > 1.0)
    something = 1.0;
else if (something < -1.0)
    something = -1.0;
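Equivalently, since both HLSL and GLSL provide a clamp intrinsic, the same capping can be written in one line (a sketch, using the variable name from above):
something = clamp(something, -1.0, 1.0); // cap to acos's valid input range
and then pass the capped value to acos.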

x-coordinate modulo 2 == 1.0 needs different color

I need to write a shader where the color of the pixel is black when the following equation is true:
(x-coordinate of pixel) mod 2 == 1
If it is false, the pixel should be white. I searched the web, but what I found did not work.
More information:
I have an OpenGL scene at 800 x 600 resolution with a teapot in it. The teapot is red. Now I need to create that zebra look.
Here is some code I wrote, but it didn't work:
Fragment shader:
void main(){
    if (mod(gl_FragCoord[0].x * 800.0, 2.0) == 0){
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
}
Vertex shader:
void main(void)
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
As far as I know, gl_FragCoord.x is in the range (0, 1), therefore I need to multiply by the width.
Interesting that you mention the need to multiply by the width; have you tried without the * 800.0 in there? The range of gl_FragCoord is such that the distance between adjacent pixels is 1.0, for example [0.0, 800.0] or possibly [0.5, 800.5].
Remove the width multiplication and see if it works.
Instead of comparing directly to 0, try doing a test against 1.0, e.g.
void main(){
    if (mod(gl_FragCoord.x, 2.0) >= 1.0){
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // odd pixel column: black
    } else {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); // even pixel column: white
    }
}
That'll avoid precision errors and the cost of rounding.
As emackey points out, gl_FragCoord is specified in window coordinates, which:
... result from scaling and translating Normalized Device Coordinates by the viewport. The parameters to glViewport() and glDepthRange() control this transformation. With the viewport, you can map the Normalized Device Coordinate cube to any location in your window and depth buffer.
So you also don't actually want to multiply by 800 — the incoming coordinates are already in pixels.
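Putting both points together, the stripe test can also be written branch-free with step(). A minimal sketch, not the original code, matching the question's convention of black on odd columns:
void main() {
    // pixel centers sit at x = N + 0.5, so mod(x, 2.0) is >= 1.0 exactly on odd columns
    float odd = step(1.0, mod(gl_FragCoord.x, 2.0));
    gl_FragColor = vec4(vec3(1.0 - odd), 1.0); // odd columns black, even columns white
}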

Why does this OpenGL shader use texture coordinates beyond 1.0?

I'm trying to get familiar with shaders in OpenGL. Here is some sample code that I found (working with openFrameworks). The code simply blurs an image in two passes, first horizontally, then vertically. Here is the code for the horizontal shader. My only confusion is the texture coordinates: they exceed 1.
void main( void )
{
    vec2 st = gl_TexCoord[0].st;
    // horizontal blur
    // from http://www.gamerendering.com/2008/10/11/gaussian-blur-filter-shader/
    vec4 color = vec4(0.0); // accumulate from zero
    color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -4.0, 0));
    color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -3.0, 0));
    color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -2.0, 0));
    color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * -1.0, 0));
    color += 5.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt, 0));
    color += 4.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 1.0, 0));
    color += 3.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 2.0, 0));
    color += 2.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 3.0, 0));
    color += 1.0 * texture2DRect(src_tex_unit0, st + vec2(blurAmnt * 4.0, 0));
    color /= 5.0;
    gl_FragColor = color;
}
I can't make heads or tails out of this code. Texture coordinates are supposed to be between 0 and 1, and I've read a bit about what happens when they're greater than 1, but that's not the behavior I'm seeing (or I don't see the connection). blurAmnt varies between 0.0 and 6.4, so s can go from 0 to 25.6. The image just gets blurred more or less depending on the value, I don't see any repeating patterns.
My question boils down to this: what exactly is happening when the texture coordinate argument in the call to texture2DRect exceeds 1? And why does the blurring behavior still function perfectly despite this?
The [0, 1] texture coordinate range only applies to the GL_TEXTURE_2D texture target. Since that code uses texture2DRect (and a sampler2DRect), it's using the GL_TEXTURE_RECTANGLE_ARB texture target, and this target uses unnormalized texture coordinates, in the range [0, width] x [0, height].
That's why you have "weird" texture coords. Don't worry, they work fine with this texture target.
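If you ever switch this over to an ordinary GL_TEXTURE_2D with a sampler2D, you would divide the pixel coordinates by the texture size to get back into [0, 1]. A minimal sketch of a single blur tap under that assumption (texSize is a hypothetical uniform holding the image dimensions; it is not part of the original shader):
uniform sampler2D src_tex_unit0; // now bound as GL_TEXTURE_2D instead of a rectangle texture
uniform float blurAmnt;
uniform vec2 texSize;            // hypothetical: texture width and height in pixels

void main( void )
{
    vec2 st = gl_TexCoord[0].st;                            // still in pixel units here
    vec2 uv = (st + vec2(blurAmnt * -4.0, 0.0)) / texSize;  // normalize into [0, 1]
    gl_FragColor = texture2D(src_tex_unit0, uv);            // one tap of the blur, for illustration
}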
It depends on the host code. If you saw something like
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
then the out-of-bounds s dimension will be zeros, IIRC. Similar for t.