OpenGL: issues with converting floats from texture to integers in fragment shader

I render to a texture which is in the format GL_RGBA8.
When I render to this texture I have a fragment shader whose output is set to color = (1.0/255.0, 0, 0, 1). Triangles overlap each other, and I set the blend mode to (GL_ONE, GL_ONE), so, for example, if 2 triangles overlap for a given fragment, the resulting pixel at that fragment position will have the value 2.0/255.0.
I then use this texture in a second pass (applied to a quad filling up the screen). My goal at this point, when I read the values back from the texture, is to convert them (which are in floating-point format in the range [0:1]) back to integers in the range [0:255]. If I look at the pixel that holds the value 2.0/255.0, I should get (2.0/255.0) * 255.0 = 2.0, but I don't.
If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a == 2) ? 1.0 : 0;
color = vec4(0, b, 0, 1);
I get a black image. If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a > 1.999 && a <= 2) ? 1.0 : 0;
color = vec4(0, b, 0, 1);
I get the expected result. So, in summary, it seems the conversion back to [0:255] suffers from floating-point precision issues.
precision highp float;
Doesn't make a difference. I also turned filtering off (and no mipmaps).
This would work:
float a = ceil(texture(colorTexture, texCoord).x * 255);
Though in general that doesn't look very robust as a solution (why would ceil work and not floor, for example? why is the value 1.999999 rather than 2.000001, and can I be sure it will always be that way?). People must have run into this before, so I am sure there's a much better way of guaranteeing an accurate result without too much fiddling with the numbers. Any hints would be greatly appreciated.
EDIT
As pointed out in two comments, it follows from the way floating-point numbers are encoded that you can't be guaranteed to get an "integer" number back, even when the exact result would be a whole number (that's a good reminder of this important point). So I reformulate my question: is there a preferred way in GLSL to snap a number to its closest integer value?
And that would be round:
float a = round(texture(colorTexture, texCoord).x * 255);
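For completeness, here is a minimal sketch of the second pass putting round() to use (the texture and variable names are my own assumptions):
#version 330 core
uniform sampler2D colorTexture; // the GL_RGBA8 render target from the first pass
in vec2 texCoord;
out vec4 color;
void main()
{
    // Undo the [0:1] normalization and snap to the nearest integer.
    float count = round(texture(colorTexture, texCoord).x * 255.0);
    // For example, highlight fragments covered by exactly 2 triangles.
    color = vec4(0.0, (count == 2.0) ? 1.0 : 0.0, 0.0, 1.0);
}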
Hope this helps other people in the future.

Related

What is the correct gamma correction function?

Currently I use the following formula to gamma correct colors (convert them from RGB to sRGB color space) after the lighting pass:
output = pow(color, vec3(1.0/2.2));
Is this formula the correct formula for gamma correction? I ask because I have encountered a few people saying that it's not, and that the correct formula is more complicated, having something to do with a power of 2.4 rather than 2.2. I also heard something about the three color channels R, G, and B having different weights (something like 0.2126, 0.7152, 0.0722).
I am also curious which function does OpenGL use when GL_FRAMEBUFFER_SRGB is enabled.
Edit:
This is one of many topics covered in Guy Davidson's talk "Everything you know about color is wrong". The gamma correction function is covered here, but the whole talk is related to color spaces including sRGB and gamma correction.
Gamma correction may use any value, but for the linear RGB / non-linear sRGB conversion, 2.2 is an approximation, so your formula may be considered both wrong and correct:
https://en.wikipedia.org/wiki/SRGB#Theory_of_the_transformation
The real sRGB transfer function is based on a 2.4 gamma coefficient and is piecewise, with a linear segment at dark values, like this:
// Linear -> sRGB (encode).
float Convert_sRGB_FromLinear (float theLinearValue) {
  return theLinearValue <= 0.0031308f
       ? theLinearValue * 12.92f
       : powf (theLinearValue, 1.0f/2.4f) * 1.055f - 0.055f;
}
// sRGB -> linear (decode).
float Convert_sRGB_ToLinear (float thesRGBValue) {
  return thesRGBValue <= 0.04045f
       ? thesRGBValue / 12.92f
       : powf ((thesRGBValue + 0.055f) / 1.055f, 2.4f);
}
In fact, you may find even rougher approximations in some GLSL code using a 2.0 coefficient instead of 2.2 or 2.4, so as to avoid the expensive pow() (x*x and sqrt() are used instead). This achieves maximum performance (in the context of old graphics hardware) and code simplicity, while sacrificing color reproduction. Practically speaking, the sacrifice is not that noticeable, and most games apply additional tone mapping and a user-managed gamma-correction coefficient, so the result is not directly correlated to the sRGB standard.
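A minimal sketch of that cheap gamma-2.0 variant (the function names are my own):
// Approximate the transfer function with gamma 2.0:
// pow(x, 2.0) becomes x*x and pow(x, 1.0/2.0) becomes sqrt(x).
vec3 srgbToLinearApprox (vec3 theColor) { return theColor * theColor; }
vec3 linearToSrgbApprox (vec3 theColor) { return sqrt (theColor); }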
GL_FRAMEBUFFER_SRGB and sampling from GL_SRGB8 textures are expected to use the more correct formula (in the case of texture sampling it is more likely a pre-computed lookup table on the GPU rather than the real formula, as there are only 256 values to convert). See, for instance, the comments in the GL_ARB_framebuffer_sRGB extension:
Given a linear RGB component, cl, convert it to an sRGB component, cs, in the range [0,1], with this pseudo-code:
if (isnan(cl)) {
    /* Map IEEE-754 Not-a-number to zero. */
    cs = 0.0;
} else if (cl > 1.0) {
    cs = 1.0;
} else if (cl < 0.0) {
    cs = 0.0;
} else if (cl < 0.0031308) {
    cs = 12.92 * cl;
} else {
    cs = 1.055 * pow(cl, 0.41666) - 0.055;
}
The NaN behavior in the pseudo-code is recommended but not specified in the actual specification language.
sRGB components are typically stored as unsigned 8-bit fixed-point values.
If cs is computed with the above pseudo-code, cs can be converted to a [0,255] integer with this formula:
csi = floor(255.0 * cs + 0.5)
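Translated to GLSL and combined with the final quantization step, that pseudo-code might look like this (a sketch; the function names are my own):
// Encode a linear component as sRGB, following the spec pseudo-code above.
float linearToSrgb (float cl)
{
    if (isnan (cl)) { return 0.0; } // map NaN to zero
    cl = clamp (cl, 0.0, 1.0);
    return cl < 0.0031308
         ? 12.92 * cl
         : 1.055 * pow (cl, 0.41666) - 0.055;
}
// Quantize to the usual unsigned 8-bit storage.
int linearToSrgb8 (float cl)
{
    return int (floor (255.0 * linearToSrgb (cl) + 0.5));
}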
Here is another article describing sRGB usage in OpenGL applications, which you may find useful: https://unlimited3d.wordpress.com/2020/01/08/srgb-color-space-in-opengl/

What algorithm does GL_LINEAR use exactly?

The refpages say "Returns the weighted average of the four texture elements that are closest to the specified texture coordinates." How exactly are they weighted? And what about 3D textures, does it still only use 4 texels for interpolation or more?
In 2D textures 4 samples are used, which means bilinear interpolation (3 linear interpolations). The weights are the normalized distances of the target texel to its 4 neighbors.
So for example you want the texel at
(s,t)=(0.21,0.32)
but the nearby texels in the texture have the coordinates:
(s0,t0)=(0.20,0.30)
(s0,t1)=(0.20,0.35)
(s1,t0)=(0.25,0.30)
(s1,t1)=(0.25,0.35)
the weights are:
ws = (s-s0)/(s1-s0) = 0.2
wt = (t-t0)/(t1-t0) = 0.4
so first linearly interpolate the texels in the s direction:
c0 = texture(s0,t0) + (texture(s1,t0)-texture(s0,t0))*ws
c1 = texture(s0,t1) + (texture(s1,t1)-texture(s0,t1))*ws
and finally in t direction:
c = c0 + (c1-c0)*wt
where texture(s,t) returns the texel color at (s,t) when the coordinate corresponds to an exact texel, and c is the final interpolated texel color.
In reality the s,t coordinates are multiplied by the texture resolution (xs, ys), which converts them to texel units. After that, s-s0 and t-t0 are already normalized, so there is no need to divide by s1-s0 and t1-t0 as they are both equal to one. So:
s=s*xs; s0=floor(s); s1=s0+1; ws=s-s0;
t=t*ys; t0=floor(t); t1=t0+1; wt=t-t0;
c0 = texture(s0,t0) + (texture(s1,t0)-texture(s0,t0))*ws;
c1 = texture(s0,t1) + (texture(s1,t1)-texture(s0,t1))*ws;
c = c0 + (c1-c0)*wt;
I have never used 3D textures, but in such a case 8 texels are used and it is called trilinear interpolation, which is 2 bilinear interpolations plus a final linear one: simply take the 2 nearest slices, compute each with bilinear interpolation, and then compute the final texel by linear interpolation based on the u coordinate in the exact same way... so
u=u*zs; u0=floor(u); u1=u0+1; wu=u-u0;
c = cu0 + (cu1-cu0)*wu;
where zs is the number of slices, cu0 is the result of the bilinear interpolation in the slice at u0, and cu1 in the slice at u1. The same principle is also used for mipmaps...
All the coordinates may be offset by 0.5 texel, and the resolution multiplication can also be done with xs-1 instead of xs, depending on your clamp settings...
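For reference, a minimal GLSL sketch of the same computation done by hand with texelFetch() (border handling is omitted; the names are my own):
// Manual bilinear filtering of a 2D texture, mirroring the math above.
// Assumes theTexCoord in [0,1]; wrap/clamp handling at the borders is omitted.
vec4 bilinearFetch (sampler2D theSampler, vec2 theTexCoord)
{
    vec2  aSize = vec2 (textureSize (theSampler, 0));
    vec2  aPos  = theTexCoord * aSize - 0.5; // the 0.5 texel offset
    ivec2 aP0   = ivec2 (floor (aPos));
    vec2  aW    = aPos - vec2 (aP0);         // ws, wt
    vec4 aC00 = texelFetch (theSampler, aP0,                0);
    vec4 aC10 = texelFetch (theSampler, aP0 + ivec2 (1, 0), 0);
    vec4 aC01 = texelFetch (theSampler, aP0 + ivec2 (0, 1), 0);
    vec4 aC11 = texelFetch (theSampler, aP0 + ivec2 (1, 1), 0);
    vec4 aC0 = mix (aC00, aC10, aW.x); // interpolate in s
    vec4 aC1 = mix (aC01, aC11, aW.x);
    return mix (aC0, aC1, aW.y);       // interpolate in t
}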
As well as the bilinear interpolation outlined in Spektre's answer, you should be aware of the precision of GL_LINEAR interpolation. Many GPUs (e.g. Nvidia, AMD) do the interpolation using fixed-point arithmetic with only ~255 distinct values between the R,G,B,A values in the texture.
For example, here is pseudo-code showing how GPUs might do the interpolation:
float interpolate_red(float red0, float red1, float f) {
    int g = (int)(f * 256);
    return (red0 * (256 - g) + red1 * g) / 256;
}
If your texture is used for coloring and contains GL_UNSIGNED_BYTE values, then it is probably OK for you. But if your texture is a lookup table for some other calculation and it contains GL_UNSIGNED_SHORT or GL_FLOAT values, then this loss of precision could be a problem for you, in which case you should make your lookup table bigger, with the in-between values calculated at float or double precision.

Generate Color between two specific values

I have a lowest-speed Color and a highest-speed Color.
I have another variable called currentSpeed which gives me the current speed. I'd like to generate a Color between the two extremes using the current speed. Any hints?
The easiest solution is probably to linearly interpolate each of the RGB components (because that is probably the format your colours are in). However, it can lead to some strange results. If lowest is bright blue (0x0000FF) and highest is bright yellow (0xFFFF00), then midway will be dark grey (0x808080).
A better solution is probably:
Convert both colours to HSL (Hue, saturation, lightness)
Linearly interpolate those components
Convert the result back to RGB.
See this answer for how to do the conversion to and from HSL.
To do linear interpolation you will need something like:
double low_speed = 20.0, high_speed = 40.0; // The end points.
int low_sat = 50, high_sat = 200; // The value at the end points.
double current_speed = 35;
const auto scale_factor = (high_sat-low_sat)/(high_speed-low_speed);
int result_sat = low_sat + scale_factor * (current_speed - low_speed);
Two problems:
You will need to be careful about integer rounding if speeds are not actually double.
When you come to interpolate hue, you need to know that hues are represented as angles on a circle, so you have a choice of interpolating clockwise or anti-clockwise (and one direction will wrap through 360 back to 0).
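In GLSL-style code, the shortest-arc choice could look like this (a sketch; the names are my own):
// Interpolate a hue (in degrees) along the shorter arc of the circle.
float lerpHue (float theH0, float theH1, float theT)
{
    float aDelta = mod (theH1 - theH0 + 540.0, 360.0) - 180.0; // signed shortest distance
    return mod (theH0 + aDelta * theT, 360.0);
}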

Advanced moiré pattern reduction in HLSL / GLSL procedural texture shaders

I am working on a procedural texture; it looks fine, except that very far away the small texture details disintegrate into noise and moiré patterns.
I have set out to find a way to average and quantise the scale of the pattern far away and close up, so that close by it is in full detail, and far away it is rounded off, so that one pixel of a distant mountain represents only one colour found there, and not 10 or 20 colours at that point.
It is easy to do by rounding the world position that the volumetric texture is based on, using an if statement, i.e.:
if (camera_pixel_distance > 1200.0) { wpos = round(wpos / 3.0) * 3.0; } // round far-away pixels (1200 m threshold)
return textureFunction(wpos);
The result of rounding far-away textures is that they will look like this, except very far away:
The trouble with this is that I have to write about 5 if conditions for the various distances, and I have to estimate a good rounding value for each.
I tried to make a function that cuts the pixel's distance into steps and applies an LOD divider to the pixel_worldposition value to make it progressively rounder with distance, but I got nonsense results; the HLSL was totally flipping out. Here is the attempt:
float cmra= floor(_WorldSpaceCameraPos/500)*500; //round camera distance by steps of 500m
float dst= (1-distance(cmra,pos)/4500)*1000 ; //maximum faraway view is 4500 meters
pos= floor(pos/dst)*dst;//close pixels are rounded by 1000, far ones rounded by 20,30 etc
It returned nonsense patterns that I could not understand.
Are there well-documented algorithms for smoothing and rounding distant texture artifacts? Can I use the screen pixel resolution, combined with the distance of the pixel, to round each pixel to one stable color?
Are you familiar with the GLSL (and I would assume HLSL) functions dFdx() and dFdy() or fwidth()? They were made specifically to solve this problem. From the GLSL Spec:
genType dFdy (genType p)
Returns the derivative in y using local differencing for the input argument p.
These two functions are commonly used to estimate the filter width used to anti-alias procedural textures.
and
genType fwidth (genType p)
Returns the sum of the absolute derivative in x and y using local differencing for the input argument p, i.e.: abs (dFdx (p)) + abs (dFdy (p));
OK, I found some great code and a tutorial for the solution; it's simple code that can be tweaked by distance and many parameters.
from this tutorial:
http://www.yaldex.com/open-gl/ch17lev1sec4.html#ch17fig04
half4 frag (v2f i) : COLOR
{
    float Frequency = 0.020;
    float3 pos = mul (_Object2World, i.uv).xyz;
    float V = pos.z;
    float sawtooth = frac(V * Frequency);
    float triangle = abs(2.0 * sawtooth - 1.0);
    //return triangle;
    float dp = length(float2(ddx(V), ddy(V)));
    float edge = dp * Frequency * 8.0;
    float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);
    // gl_FragColor = vec4(vec3(square), 1.0);
    if (pos.x >= 0.0) { return float4(float3(square), 1.0); } // >= so that every path returns
    else              { return float4(float3(triangle), 1.0); }
}

Normal for an analytical surface

I'm calculating the surface normal for my analytical surface.
Some parts of the normal I'm getting are correct, but not all.
The code is:
SurfaceVertices3f[pos] = i;
SurfaceVertices3f[pos+1] = j;
SurfaceVertices3f[pos+2] = (cos(i)*sin(j));
/* a and b hold the output of the partial differentiation of the vertices from the three lines above: a is the partial derivative w.r.t. i and b is w.r.t. j */
a[0]=1;
a[1]=0;
a[2]=-sin(i)*sin(j);
b[0]=0;
b[1]=1;
b[2]=cos(i)*cos(j);
normal_var=Vec3Df::crossProduct( a, b);
normal_var.normalize();
My output looks like this; the right image is mine and the left one is the reference I'm using:
http://tinypic.com/view.php?pic=73l9co&s=5
Could anyone tell me what mistake I'm making?
Your normal calculation is correct. The reference image has just a different way to map normals to colors.
If you have a look at the green ground color, you will see that the color's norm is not 1. But normals should have a norm of 1. If we assume another common mapping from normal to color like this one:
color.rgb = normal.xyz / 2 + 0.5
We see that this is no unit vector either. So either they used yet another mapping, or they just don't have unit-length normals.
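For illustration, that mapping in GLSL would be (my own sketch):
// Visualize a unit normal as a color: remap [-1,1] to [0,1].
vec3 normalToColor (vec3 theNormal)
{
    return normalize (theNormal) * 0.5 + 0.5;
}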