Making a NaN on purpose in WebGL - glsl

I have a GLSL shader that's supposed to output NaNs when a condition is met. I'm having trouble actually making that happen.
Basically I want to do this:
float result = condition ? NaN : whatever;
But GLSL doesn't seem to have a constant for NaN, so that doesn't compile. How do I make a NaN?
I tried making the constant myself:
float NaN = 0.0/0.0; // doesn't work
That works on one of the machines I tested, but not on another. Also it causes warnings when compiling the shader.
Given that the obvious computation didn't work on one of the machines I tried, I get the feeling that doing this correctly is quite tricky and involves knowing a lot of real-world facts about the inconsistencies between various types of GPUs.

Don't use NaNs here.
Section 2.3.4.1 from the OpenGL ES 3.2 Spec states that
The special values Inf and −Inf encode values with magnitudes too large to be represented; the special value NaN encodes “Not A Number” values resulting from undefined arithmetic operations such as 0/0. Implementations are permitted, but not required, to support Inf's and NaN's in their floating-point computations.
So it really seems to depend on the implementation. You should be outputting another value instead of NaN.

Pass it in as a uniform
Instead of trying to make the NaN in GLSL, make it in JavaScript and then pass it in:
// GLSL: declare the uniform in the shader
uniform float u_NaN;

// JavaScript: look up the location and pass NaN in
const loc = gl.getUniformLocation(program, "u_NaN");
gl.uniform1f(loc, NaN);

Fool the Optimizer
It seems like the issue is the shader compiler performing an incorrect optimization. Basically, it replaces a NaN expression with 0.0. I have no idea why it would do that... but it does. Maybe the spec allows for undefined behavior?
Based on that assumption, I tried making an obfuscated method that produces a NaN:
float makeNaN(float nonneg) {
    // sqrt of a negative argument is undefined in GLSL and yields NaN on implementations that support it
    return sqrt(-nonneg - 1.0);
}
...
float NaN = makeNaN(some_variable_I_know_isnt_negative);
The idea is that the optimizer isn't clever enough to see through this.
And, on the test machine that was failing, this works! I also tried simplifying the function to just return sqrt(-1.0), but that brought back the failure (further reinforcing my belief that the optimizer is at fault).
This is a workaround, not a solution.
A sufficiently clever optimizer could see through the obfuscation and start breaking things again.
I only tested it on a couple of machines, and this is clearly something that varies a lot between implementations.

The Unity GLSL compiler will convert 0.0f/0.0f to intBitsToFloat(int(0xFFC00000u)). Since intBitsToFloat is only available from OpenGL ES 3.0 onwards, this is a solution that works in WebGL2 but not in WebGL1.
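For reference, a minimal GLSL ES 3.00 sketch of that bit-pattern trick (WebGL2 only; the helper name is just an illustration, not from the comment above):

float makeNaN() {
    // Requires #version 300 es: intBitsToFloat does not exist in GLSL ES 1.00 (WebGL1).
    // 0xFFC00000 is a quiet-NaN bit pattern; reinterpret those bits as a float.
    return intBitsToFloat(int(0xFFC00000u));
}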

Related

C++ gcc: does the associative-math flag disable float NaN values?

I'm working with statistics functions over a lot of float data. I want them to run faster, but -Ofast disables NaN handling (unless I add -fno-finite-math-only back), which is not acceptable in my case.
In this case, is it safe to turn on only -fassociative-math? I think this flag allows things like vectorized sums over an array, even if the array contains NaNs.
From the docs:
NOTE: re-ordering may change the sign of zero as well as ignore NaNs
So if you want correct handling of NaNs, you should not use -fassociative-math.
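As a side note on why the finite-math assumption matters for this question, here is a small hypothetical sketch (the function is illustrative, not from the question):

#include <cmath>

// With -ffinite-math-only (implied by -Ofast) the compiler is allowed to assume
// std::isnan() is always false, so a check like this may be compiled away entirely.
// With only -fassociative-math, NaNs are still representable, but the re-ordering
// it permits may, per the GCC docs quoted above, ignore NaNs.
bool containsNaN(const float* data, int n) {
    for (int i = 0; i < n; ++i)
        if (std::isnan(data[i]))
            return true;
    return false;
}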

Adding double precision values yields different results between separate programs in C++

I have a question about floating point addition. I understand how compilers and processor architecture can lead to differences in floating point results. I have seen many questions on here similar to mine, but they all have some variation such as a different compiler, different code, or a different machine. However, I am running into an issue when adding doubles in the exact same way in two different programs calling the identical function with the same arguments, and it is leading to different results. Both programs are compiled on the same machine with the same compiler/flags. The code looks similar to this:
double function(double tx, double ty, double tz) {
    double answer;
    double x, y;
    x = y = answer = 0;
    x = tx - ty;
    y = ty - tz;
    answer = (tx + ty + tz) * (x * y);
    return answer;
}
The values of tx, ty, tz are on the order of [10e-15, 10e-30]. Obviously this is a very simplified version of the functions I am actually using, but is it possible for two programs, running identical floating point arithmetic (not just the same function, the exact same code), on the same machine, with the same compiler/flags, to get different results from the function?
Some possibilities:
1. The source code of function is identical in the two programs, but it appears with different context, resulting in the compiler compiling it in different ways. For example, the compiler might inline it in one place and not another, and inlining might lead to some expression reduction due to combination with other expressions at the point of the inlined call, and hence different arithmetic is performed. (To test this, move function to a separate source file, compile it separately, and link it with a linker without cross-module optimization. Also, try compiling with optimization disabled.)
2. You think there are identical inputs to function because they appear the same when printed or viewed in the debugger, but they are actually different due to small differences in the low digits that are not printed. (To test this, print the full values using the hexadecimal floating-point format. To do that, insert std::hexfloat into the output stream, followed by floating-point values. Alternately, use a C printf with the %a format. There is a sketch of this after the list.)
3. Something else in the programs changes floating-point state, such as the rounding mode.
4. You think you have used an identical compiler, identical sources, identical compilation switches, and so on, but actually have not.
David Schwartz notes that floating-point values can change when they are stored, as occurs when they are simply spilled to the stack. This occurs because some processors and C++ implementations may store floating-point values with extended precision in registers but less precision in memory. Technically, this fits into either 1. (different computation nominally inside function) or 2. (different values passed to function), but it is insidious enough to warrant separate mention.
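A minimal sketch of the hexadecimal printing suggested in item 2 (plain standard C++, nothing from the question's code base assumed):

#include <cstdio>
#include <iostream>

int main() {
    double x = 0.1;
    // std::hexfloat shows the exact stored value, not a rounded decimal rendering.
    std::cout << std::hexfloat << x << '\n';  // e.g. 0x1.999999999999ap-4
    std::printf("%a\n", x);                   // C-style equivalent
}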
Well, the answer is quite easy. If your computer behaves deterministically, it will always return the same results for the same input. That's the basic idea behind programming languages so far. (Unless we are talking about quantum computers, of course.)
So the question reduces to whether you really have the same input.
Although the above function looks strictly functional, there are often hidden inputs that are not that obvious. E.g. you might adjust the rounding mode of your FPU before calling the function, or you might set up different exception behaviour. In both cases the function may behave differently for certain inputs.
So even if your computer isn't non-deterministic (i.e. buggy), the above function might return different results, although that is not very likely.
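For illustration, a small sketch of such a hidden input, assuming the platform supports switching the rounding mode (strictly speaking, compilers also want FENV_ACCESS enabled for this to be well-defined):

#include <cfenv>
#include <cstdio>

int main() {
    volatile double a = 1.0, b = 3.0;  // volatile prevents compile-time folding
    std::fesetround(FE_DOWNWARD);
    double down = a / b;               // rounded towards -infinity
    std::fesetround(FE_UPWARD);
    double up = a / b;                 // rounded towards +infinity
    std::printf("%.17g\n%.17g\n", down, up);  // differ in the last bit
}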

Initializing floats with integer immediate values & warnings

In my code, I initialize a lot of floats with 0, 1 and 2 values (or other small ints). While GCC produces no warnings for this, MSVC does. So I replaced all 0's by 0.f, 1's by 1.f, etc... Also initializing a float with 0.5 issues a warning, and I replaced it by 0.5f.
While I fully understand that doing float f=someInt or float f=someDouble should produce a warning as in some cases precision is lost, the compiler should be smart enough to know that 0, 1, 2 and 0.5 are exact float values. And my code is much less readable like that...
Is MSVC not following some standard? Should I let it complain, or make my code less readable?
Thanks!
[...] the compiler should be smart enough to know that 0, 1, 2 and 0.5 are exact float values.
That may be the case, but do you really want the compiler to use that knowledge to suppress warnings? Consider the following code snippet:
double fun()
{
    float calculated = UNIVERSAL_BASE_VALUE;
    // Do some calculations.
    return calculated;
}
Suppose UNIVERSAL_BASE_VALUE is a constant defined in a header file somewhere. And maybe its type is double. But maybe its value is 0.5, which is an exact float value, so the compiler could use its knowledge to suppress a warning in this case.
Now fast-forward a few years. This fun function has not been touched in the interim, but businesses change, and someone wants to try changing the definition of UNIVERSAL_BASE_VALUE from 0.5 to 0.51. Suddenly there is a compiler warning for that function that has been stable for years. Why is this? There was no logical change, just a small data change. Yet that data change gave UNIVERSAL_BASE_VALUE a value that cannot be exactly represented in a float. The compiler no longer stays quiet about the conversion. After investigating, it is discovered that the type of calculated had been wrong for all those years, causing fun() to return imprecise results. Time to blame the compiler for being too smart? :)
Note that you get a similar situation if you replace UNIVERSAL_BASE_VALUE with a literal 0.5. It just makes the ending less dramatic while the overall point still holds: being smart could let a bug slip through.
Compiler warnings are intended to alert you to potential bugs. This is not an exact science, as programmer intent can be hard to deduce, especially when coding styles vary. There is no comprehensive standard covering all warnings a compiler may choose to emit. When false positives arise, it is up to the programmers to make a judgement call for their specific case. I can think of four basic approaches to choose between.
Turn off a warning (because it doesn't really help your code).
Accept that warnings are generated when compiling (not a good choice).
Change the coding style to accommodate the warnings (time consuming).
Whine to the compiler's developers (err… ask nicely for changes; don't whine).
Do not choose option 2. This would mean accumulating warnings that humans learn to ignore. Once you accumulate enough "accepted" warnings, it becomes difficult to spot other warnings that pop up. They get lost in the crowd, hence fail to achieve their intended purpose.
Note that compilers tend to support suppressing certain warnings for just certain lines of certain files. This gives a compromise between options 1 and 2 that might be acceptable.
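With MSVC, for example, that per-line suppression looks roughly like the sketch below (4305 is the truncation warning MSVC typically reports for initializing a float from a double value such as 0.5; substitute whatever number your build actually emits):

// Suppress the double-to-float truncation warning for the next line only (MSVC-specific).
#pragma warning(suppress : 4305)
float calculated = UNIVERSAL_BASE_VALUE;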
In your case, you would need to evaluate this warning for your code base. If it provides value by spotting bugs, you should go with what you call "less readable". (It's not really less readable, but it does take time to get used to it.) If the warning does not provide enough value to warrant a style change, turn the warning off.
I suggest you write the code to be warning free.
If that makes it hard to read, this points to a problem somewhere else, like an unhealthy mix of float and double, which easily leads to a loss of precision or arithmetically unstable results.
Once upon a time I had a program which crashed with a coredump when it finished. The compiler gave some warnings because I had an unhealthy mix of char* and char[].
When I fixed these warnings, the program was suddenly stable (no more memory corruption).
So, turn on all warnings and change the code to compile warning free.
The compiler just wants to help you!

OpenGL ES 2 glGetActiveAttrib and non-floats

I'm porting an engine from DX9/10/11 over to OpenGL ES 2. I'm having a bit of a problem with glGetActiveAttrib though.
According to the docs the type returned can only be one of the following:
The symbolic constants GL_FLOAT, GL_FLOAT_VEC2, GL_FLOAT_VEC3,
GL_FLOAT_VEC4, GL_FLOAT_MAT2, GL_FLOAT_MAT3, or GL_FLOAT_MAT4 may be
returned.
This seems to imply that you cannot have an integer vertex attribute? Am I missing something? Does this really mean you HAVE to implement everything as floats? Does this mean I can't implement a colour as 4 byte values?
If so, this seems very strange as this would be a horrific waste of memory ... if not, can someone explain where I'm going wrong?
Cheers!
Attributes must be declared as floats in the GLSL ES shader, but you can pass them SHORTs or any of the other supported data types; the conversion will happen automatically.
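For instance, a colour can stay as 4 unsigned bytes per vertex in the buffer and still feed a vec4 attribute. A rough ES 2.0 sketch (the function and parameter names are placeholders, not from the question):

#include <GLES2/gl2.h>

// Feed a vec4 colour attribute from 4 unsigned bytes per vertex.
// normalized = GL_TRUE makes the bytes arrive in the shader as floats in [0, 1].
void setColourAttribute(GLuint location, GLsizei stride, const void* offset) {
    glVertexAttribPointer(location, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, offset);
    glEnableVertexAttribArray(location);
}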

How to store doubles in memory

Recently I changed some code
double d0, d1;
// ... assign things to d0/d1 ...
double result = f(d0, d1);
to
double d[2];
// ... assign things to d[0]/d[1]
double result = f(d[0], d[1]);
I did not change any of the assignments to d, nor the calculations in f, nor anything else apart from the fact that the doubles are now stored in a fixed-length array.
However when compiling in release mode, with optimizations on, result changed.
My question is, why, and what should I know about how I should store doubles? Is one way more efficient, or better, than the other? Are there memory alignment issues? I'm looking for any information that would help me understand what's going on.
EDIT: I will try to get some code demonstrating the problem, however this is quite hard as the process that these numbers go through is huge (a lot of maths, numerical solvers, etc.).
However, there is no change when compiled in Debug. I will double-check this to make sure, but it is almost certain: the double values are identical in Debug between version 1 and version 2.
Comparing Debug to Release, results have never ever been the same between the two compilation modes, for various optimization reasons.
You probably have a 'fast math' compiler switch turned on, or are doing something in the "assign things" step (which we can't see) that allows the compiler to legally reorder calculations. Even though the sequences are equivalent, it's likely the optimizer is treating them differently, so you end up with slightly different code generation. If it's reordered, you end up with slight differences in the least significant bits. Such is life with floating point.
You can prevent this by not using 'fast math' (if that's turned on), or by forcing the ordering through the way you construct the formulas and intermediate values. Even that is hard (impossible?) to guarantee. The question is really "Why is the compiler generating different code for arrays vs. numbered variables?", but that's basically an analysis of the code generator.
No, these are equivalent; you have something else wrong.
Check the /fp:precise flag (or its equivalent): the processor's floating-point hardware can run in a higher-accuracy or a higher-speed mode, and it may have a different default in an optimized build.
With regard to floating-point semantics, these are equivalent. However, it is conceivable that the compiler might decide to generate slightly different code sequences for the two, and that could result in differences in the result.
Can you post a complete code example that illustrates the difference? Without that to go on, anything anyone posts as an answer is just speculation.
To your concerns: memory alignment cannot affect the value of a double, and a compiler should be able to generate equivalent code for either example, so you don't need to worry that you're doing something wrong (at least, not in the limited example you posted).
The first way is more efficient, in a very theoretical way. It gives the compiler slightly more leeway in assigning stack slots and registers. In the second example, the compiler has to pick 2 consecutive slots - except of course if the compiler is smart enough to realize that you'd never notice.
It's quite possible that the double[2] causes the array to be allocated as two adjacent stack slots where it wasn't before, and that in turn can cause code reordering to improve memory access efficiency. IEEE 754 floating point math doesn't obey the regular math rules; for example, a+b+c may not equal c+b+a, as the sketch below shows.
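A tiny self-contained illustration of that reordering effect (plain C++, not taken from the question's code):

#include <cstdio>

int main() {
    float a = 1.0f, b = 1e8f, c = -1e8f;
    // Left to right, (a + b) rounds back to 1e8f because float cannot represent 1e8 + 1,
    // so the 1.0f is lost; reordering the same three terms keeps it.
    float leftToRight = (a + b) + c;  // 0.0f
    float reordered   = (c + b) + a;  // 1.0f
    std::printf("%g vs %g\n", leftToRight, reordered);
}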