Both the GLSL 4.6 and GLSL ES 3.2 spec say:
The range and granularity of offsets supported by this function [interpolateAtOffset] is implementation-dependent.
This seems too open-ended to be useful. How am I expected to know what will actually work across multiple vendors? Is there at least a minimum supported range specified somewhere in the standards? If not, is there something de-facto supported by the major vendors that I can rely on?
For Vulkan, the 1.2 specification requires the range and granularity to be reported with minInterpolationOffset, maxInterpolationOffset, and subPixelInterpolationOffsetBits, where:
The values minInterpolationOffset and maxInterpolationOffset describe the closed interval of supported interpolation offsets: [minInterpolationOffset, maxInterpolationOffset]. The ULP is determined by subPixelInterpolationOffsetBits. If subPixelInterpolationOffsetBits is 4, this provides increments of 1/2^4 = 0.0625, and thus the range of supported interpolation offsets would be [-0.5, 0.4375].
Based on the minimum values of these required by the specification, you can rely on at least [-0.5, 0.4375] being available if sampleRateShading is supported.
For OpenGL, the 4.6 specification says:
The built-in function interpolateAtOffset will sample variables at a specified (x, y) offset relative to the center of the pixel. The range and granularity of offsets supported by this function is implementation-dependent. If either component of the specified offset is less than the value of MIN_FRAGMENT_INTERPOLATION_OFFSET or greater than the value of MAX_FRAGMENT_INTERPOLATION_OFFSET, the position used to interpolate the variable is undefined. Not all values of offset may be supported; x and y offsets may be rounded to fixed-point values with the number of fraction bits given by the value of the implementation-dependent constant FRAGMENT_INTERPOLATION_OFFSET_BITS.
The required minimums are the same as Vulkan.
Searching through PDFs is painful, so I won't bother looking at the ES spec; I would assume it's the same.
I just need a random uint, ideally ranging from 0 to 6, but there is no enumeration type in OpenGL. I learned that I can get a random float in the range 0–1 from the code below:
fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453123)
I tried taking 1/(the value above) and applying floor(), but it doesn't work. So how can I get a random int? Or is there a way to extract the last digit of the float (which would presumably still be random)?
First, let's define what we mean by "random". In the context of this answer, a "random" variable is a variable whose values are unpredictable. That is, there is no function that determines/computes an outcome for the random variable when being evaluated (with any possible inputs). Or at least, no such function has been found (yet).
Obviously, when we are talking about computing here, there is no such thing as a true random variable as described above, because anything we do in computing (and by extension in a shader) is necessarily bound to the set of functions that are computable.
Your proposed function in the question:
f(uv) = fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453123)
is just a computable function. It takes as input a vector uv, which itself is a deterministic/computable value - such as derived from a built-in or custom varying variable giving you the "coordinates" of the current fragment.
After evaluation, the function's result is itself computable/deterministic: it is simply the value that the input vector uv maps to. Setting aside differences in IEEE 754 rounding rules and precision (which may vary between GPUs, such as desktop versus mobile ones), the function itself is purely deterministic/computable and therefore does not give you a random value.
We humans may think that the output is random, because we lack the intuition for the functions used to compute the result, such that when we "see" a number 0.623513632 followed by another number 0.9734126 for only slight variations in the input vector, we could draw the conclusion that "yeah, that looks pretty random", when in fact it obviously isn't. It is just what that function computed, given two input values.
So, when you already have a deterministic function like the above and want to obtain values in the closed range [0, 6] from it as a GLSL uint, you can simply scale the output of said function by multiplying it by 7.0 and truncating the result:
g(uv) = uint(f(uv) * 7.0)
If you wanted to obtain true random numbers drawn from a random variable (whose deterministic function simply hasn't been found yet), you can obtain such values from a physical entropy source, such as atmospheric noise (e.g. from random.org), and use that as an input to your shader (such as via textures or buffer objects).
But, from a computational perspective, a shader is just a function taking in values (ints, floats, ...) and computing (by means of computable functions) a deterministic result.
All we can do is to shuffle/scramble/diffuse the input bits in such a way, that the result "looks" like random to us. We then call these "pseudo-random" values.
Taking this a step further, we could now ask the question of the distribution quality of the obtained pseudo-random values. This has two qualities:
how evenly distributed are the pseudo-random values over their domain/interval? I.e. do all possible values have the same probability of occurring? Or do you even want uniformly distributed values, or should the values follow another distribution (like a Gaussian)?
how well are two values drawn from two sequential input values spaced apart? I.e. what is the frequency of the pseudo-random values?
There are different (deterministic) algorithms/functions depending on which distribution and which frequency spectrum your values should have. But first, you should define an answer to the two questions for your use-case.
And by the way, the commonly used function in your question to obtain pseudo-random numbers in a shader has a terrible distribution quality.
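For comparison, one commonly used alternative with much better distribution is a PCG-style 32-bit integer hash (popularized for GPU use by Jarzynski and Olano's "Hash Functions for GPU Rendering"). It ports directly to GLSL uints; here it is sketched in C:

```c
#include <stdint.h>

/* PCG-style 32-bit hash: a few multiplies, shifts and xors that
 * diffuse the input bits far better than the sin()-based trick. */
static uint32_t pcg_hash(uint32_t x)
{
    uint32_t state = x * 747796405u + 2891336453u;
    uint32_t word  = ((state >> ((state >> 28) + 4u)) ^ state) * 277803737u;
    return (word >> 22) ^ word;
}

/* Map the hash onto {0, ..., 6} for the use case in the question.
 * (The modulo introduces a tiny bias, negligible for 7 buckets
 * over a 32-bit range.) */
static uint32_t pcg_0_to_6(uint32_t x)
{
    return pcg_hash(x) % 7u;
}
```

In a shader you would typically feed it `uint(gl_FragCoord.x) ^ (uint(gl_FragCoord.y) << 16)` or a frame counter as input.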
Last but not least, it should also be mentioned that true randomness (i.e. non-determinism), like when you do use an entropy source as input values, is oftentimes an undesirable property in computation, because it:
makes it difficult to repeat the same computation/output when needed — repeatability that various algorithms, for example in the context of path tracing, rely on
makes it difficult to reproduce/debug/inspect your function for a particular run when every following execution/run will yield a different output
CUDA supports mathematical functions. But do they provide any guarantee, like if I compute sin(x), that the result would be the closest representable value to the mathematical value of sin(x)? If the answer is no, is there any alternative if we want to stay on the GPU? Something that always returns an upper bound or always a lower bound (but possibly not the closest possible one)?
The appendix on mathematical functions in the CUDA C Programming Guide shows that the values provided by the API are not exactly rounded: the documented maximum errors are on the order of a couple of ulps, so results can differ slightly between host and device.
However, for all practical purposes, these values are accurate.
In any case, if you want to perform symbolic operations that need exact results, working in float precision will be inaccurate regardless.
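To check how close a device result is to the host's, you can measure the distance between two floats in ulps. Here is a small host-side helper (an assumption for illustration, not part of the CUDA API), valid for finite floats of the same sign:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Distance in ulps between two finite same-sign floats: reinterpret the
 * bits as integers; adjacent representable floats differ by exactly 1. */
static int64_t ulp_distance(float a, float b)
{
    int32_t ia, ib;
    memcpy(&ia, &a, sizeof ia);  /* bit-cast without UB */
    memcpy(&ib, &b, sizeof ib);
    return llabs((int64_t)ia - (int64_t)ib);
}
```

Comparing `sinf(x)` computed on the host against the value copied back from a kernel with this helper lets you verify the documented ulp bounds empirically.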
I use 1D, 2D and 3D OpenGL textures containing 32-bit float data (GL_RGBA32F format) in a desktop application (Windows/Linux, GLSL 4.2).
These textures contain precomputed physical data and can contain some NaN values where the precomputation failed (this is "normal" in my application: some cases cannot be computed; this is rare but "normal").
I need to detect these values in the shader.
Is there any standard for handling NaN values in GLSL samplers?
More specifically :
is it guaranteed that a NaN value written to a texture is read back as NaN by the "textureXXX" GLSL functions?
if a "textureXXX" function is called with non-NEAREST filtering and one of the interpolated values is NaN, will I get NaN as the result?
Thanks.
Edit :
As the answers say, nothing is required about NaN support in the OpenGL specifications.
So I will :
replace NaN values with a "special" value, in case a NaN value in a texture is not read back as NaN in GLSL
implement the mipmapping by myself
test all values against the "special" value
This is extra work, but seems to be necessary to ensure support of my invalid values on different platforms / different hardware.
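The first and third steps can be sketched as follows (the sentinel name and value are assumptions; pick one that cannot occur in your physical data, with headroom below it so filtered mixes of valid and sentinel texels are also caught):

```c
#include <math.h>

/* Hypothetical sentinel, far outside any plausible physical result. */
#define INVALID (-1.0e30f)

/* Applied on the CPU before uploading texture data. */
static float encode_invalid(float v)
{
    return isnan(v) ? INVALID : v;
}

/* Applied to each fetched value (here in C; the same comparison works
 * in GLSL). Anything at or below half the sentinel is treated as
 * invalid, so a texel blended with a sentinel during filtering is
 * still flagged. */
static int is_invalid(float v)
{
    return v <= INVALID * 0.5f;
}
```

The half-sentinel threshold is what makes self-implemented mipmapping tractable: an average that included even one sentinel texel stays deep in the invalid range.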
Based on my interpretation of the spec, the answer is NO to both. The most conclusive evidence I found for this is in section 4.7.1 "Range and Precision" of recent GLSL specs (e.g. page 65 of the GLSL 4.20 spec):
Operations and built-in functions that operate on a NaN are not required to return a NaN as the result.
The texture() GLSL calls are built-in functions, so I believe this rule applies to them, and you can't count on them returning NaN for textures that contain NaN values.
Section 2.3.4.1 of the OpenGL 4.5 compatibility spec says that "Implementations are permitted, but not required, to support Infs and NaNs in their floating-point computations."
I didn't find any mention of NaN values in the texture minification section of the spec (8.14.2 in the OpenGL 4.5 compatibility spec). However, the equations used in texture minification seem fairly well specified, so I imagine implementations will respect NaN values in them. If you are worried, however, you can use raw texel access functions to implement mipmapping yourself. As Reto Koradi notes, texture functions may not return NaN values for filtered textures.
The OpenGL documentation says very little about these two functions. When it would make sense to use glTexParameterIiv instead of glTexParameteriv or even glTexParameterfv?
If the values for GL_TEXTURE_BORDER_COLOR are specified with glTexParameterIiv or glTexParameterIuiv, the values are stored unmodified with an internal data type of integer. If specified with glTexParameteriv, they are converted to floating point with the following equation: f = (2c + 1)/(2^b − 1). If specified with glTexParameterfv, they are stored unmodified as floating-point values.
You sort of answered your own question with the snippet you pasted. Traditional textures are fixed-point (unsigned normalized, where values like 255 are converted to 1.0 through normalization), but GL 3.0 introduced integral (signed / unsigned integer) texture types (where integer values stay integers).
If you had an integer texture and wanted to assign a border color (for use with the GL_CLAMP_TO_BORDER wrap mode), you would use one variant of those two functions (depending on whether you want signed or unsigned).
You cannot filter integer textures, but you can still have texture coordinate wrap behavior. Since said textures are integer and glTexParameteriv (...) normalizes the color values it is passed, an extra function had to be created to keep the color data integer.
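The conversion quoted from the documentation, f = (2c + 1)/(2^b − 1), is exactly this normalization applied to a signed integer component with b bits; a small C sketch shows why it would mangle genuine integer border colors:

```c
/* The normalization glTexParameteriv applies to a signed b-bit
 * component c: f = (2c + 1) / (2^b - 1), mapping the integer range
 * onto [-1, 1]. */
static float snorm_to_float(int c, int bits)
{
    return (2.0f * (float)c + 1.0f) / (float)((1 << bits) - 1);
}
```

For b = 8, the integer 127 becomes 1.0 and -128 becomes -1.0; an integer border color of, say, 100 would come out as roughly 0.788 rather than staying 100, which is why the unmodified glTexParameterIiv path exists.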
You will find this same sort of thing with glVertexAttribIPointer (...) and so forth; adding support for integer data (as opposed to simply converting integer data to floating-point) to the GL pipeline required a lot of new commands.
I am currently playing with Dart and especially dart:typed_data. I stumbled across a class where I have no idea what its purpose/specialty is. I speak of Uint8ClampedList. The difference from Uint8List in the documentation is the sentence
Indexed store clamps the value to range 0..0xFF.
What does that sentence actually mean? Why does this class exist? I am really curious.
"Clamping" means that values below 0 become 0 and values above 0xff become 0xff when stored into the Uint8ClampedList. Any value in the range 0..0xff can be stored into the list without change.
This differs from other typed lists where the value is instead just truncated to the low 8 (or 16 or 32) bits.
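The two store behaviors can be sketched in C (names are illustrative, not Dart's API):

```c
#include <stdint.h>

/* Uint8ClampedList-style store: out-of-range values saturate. */
static uint8_t store_clamped(int value)
{
    if (value < 0)   return 0;
    if (value > 255) return 255;
    return (uint8_t)value;
}

/* Uint8List-style store: only the low 8 bits are kept. */
static uint8_t store_truncated(int value)
{
    return (uint8_t)(value & 0xFF);
}
```

Storing 300 thus yields 255 in the clamped variant but 44 (300 & 0xFF) in the truncating one.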
The clamped list (and the name itself) mirrors the Uint8ClampedArray of JavaScript.
One usage of clamping that I have seen is for RGB(A) color images, where over-saturated colors (e.g., an R value > 255) are capped at the maximum value instead of wrapping around and becoming dark. It allows you to make some transformations on the values without having to care about handling overflow. See the Uint8ClampedArray specification - it was introduced to have an array type matching the behavior of an existing canvas type.