Finding the maximum of a floating point counter - c++

My apologies if this has been asked before, but I cannot find it.
I was wondering if there is a way to calculate the point at which a single-precision floating-point number used as a counter will reach a 'maximum' (the point at which adding another value no longer changes it, due to loss of precision).
For example, if I continuously add 0.1f to a float I will eventually reach a point where the value does not change:
const float INCREMENT = 0.1f;
float value = INCREMENT;
float prevVal = 0.0f;

do {
    prevVal = value;
    value += INCREMENT;
} while (value != prevVal);

cout << value << endl;
On GCC this outputs 2.09715e+06
Is there a way to compute this mathematically for different values of INCREMENT? I believe it should in theory be when the exponent portion of the float requires a shift beyond 23 bits, resulting in losing the mantissa and simply adding 0.

Given some positive y used as an increment, the smallest X for which adding y does not produce a result greater than X is the least power of 2 not less than y divided by half the “epsilon” of the floating-point format. It can be calculated by:
Float Y = y*2/std::numeric_limits<Float>::epsilon();
int e;
std::frexp(Y, &e);
Float X = std::ldexp(.5, e);
if (X < Y) X *= 2;
A proof follows. I assume IEEE-754 binary floating-point arithmetic using round-to-nearest-ties-to-even.
When two numbers are added in IEEE-754 floating-point arithmetic, the result is the exact mathematical result rounded to the nearest representable value in a selected direction.
A note about notation: expressions written as source code refer to floating-point values and operations; other text is mathematical. So, mathematically, x+y is the exact sum of x and y; in source code, x is the value stored in the floating-point format and x+y is the result of adding x and y with a floating-point operation. Also, I will use Float for the floating-point type in C++.
Given a floating-point number x, consider adding a positive value y using floating-point arithmetic, x+y. Under what conditions will the result exceed x?
Let x1 be the next value greater than x representable in the floating-point format, and let xm be the midpoint between x and x1. If the mathematical value of x+y is less than xm, then the floating-point calculation x+y rounds down, so it produces x. If x+y is greater than xm, either it rounds up and produces x1, or it produces some greater number because y is large enough to move the sum beyond x1. If x+y equals xm, the result is whichever of x or x1 has an even low digit. For reasons we will see, this is always x in the situations relevant to this question, so the calculation rounds down.
Therefore, x+y produces a result greater than x if and only if x+y exceeds xm, meaning that y exceeds half the distance from x to x1. Note that the distance from x to x1 is the value of 1 in the low digit of the significand of x.
In a binary floating-point format with p digits in its significand, the position value of the low digit is 2^(1−p) times the position value of the high digit. For example, if x is 2^e, the highest bit in its significand represents 2^e, and the lowest bit represents 2^(e+1−p).
The question asks, given a y, what is the least x for which x+y does not produce a result greater than x? It is the least x for which y does not exceed half the value of the low digit of the significand of x.
Let 2^e be the position value of the high bit of the significand of x. Then y ≤ ½·2^(e+1−p) = 2^(e−p), so y·2^p ≤ 2^e.
Therefore, given some positive y, the least x for which x+y does not produce a result greater than x has its leading bit, 2^e, equal to or exceeding y·2^p. And in fact it must be exactly 2^e, because all other floating-point numbers whose leading bit has position value 2^e have other bits set in their significands, so they are greater; 2^e is the least number whose leading bit represents 2^e.
Therefore, x is the least power of two that equals or exceeds y·2^p.
In C++, std::numeric_limits<Float>::epsilon() (from the <limits> header) is the step from 1 to the next representable value, meaning it is 2^(1−p). So y·2^p equals y*2/std::numeric_limits<Float>::epsilon(). (This operation is exact unless it overflows to ∞.)
Let’s assign this to a variable:
Float Y = y*2/std::numeric_limits<Float>::epsilon();
We can find the position value represented by the highest bit of Y’s significand by using frexp (from the <cmath> header) to extract the exponent from the floating-point representation of Y and ldexp (also <cmath>) to apply that exponent to a new significand (.5 because of the scale that frexp and ldexp use):
int e;
std::frexp(Y, &e);
Float X = std::ldexp(.5, e);
Then X is a power of two, and it is less than or equal to Y. It is in fact the greatest power of two not greater than Y, since the next greater power of 2, 2X, is greater than Y. However, we want the least power of two not less than Y. We can find this with:
if (X < Y) X *= 2;
The resulting X is the number sought by the question.
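As a quick check (my own sketch, not part of the original answer), the steps above can be wrapped in a small function; with Float = float and y = 0.1f it prints 2097152, matching the 2.09715e+06 observed in the question:

#include <cmath>
#include <iostream>
#include <limits>

// Least power of two X such that X + y (rounded to nearest) no longer exceeds X.
template <typename Float>
Float counterLimit(Float y)
{
    Float Y = y * 2 / std::numeric_limits<Float>::epsilon();
    int e;
    std::frexp(Y, &e);                    // Y = m * 2^e with m in [0.5, 1)
    Float X = std::ldexp(Float(0.5), e);  // greatest power of two not greater than Y
    if (X < Y) X *= 2;                    // round up to the least power of two >= Y
    return X;
}

int main()
{
    std::cout << counterLimit(0.1f) << '\n';  // prints 2.09715e+06 (2097152)
}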

Marek's Answer is pretty close, and a decent way to find it using a program (that is more efficient than the one I originally posted). However, I don't necessarily need the answer in a program form, just a mathematical one.
From what I can tell, the answer comes down to the exponent of the delta used and the number of mantissa bits. We first need to round the delta up to the next power of 2, which is slightly fiddly: if the mantissa is 0 we do nothing, otherwise we add 1 to the exponent (and clear the mantissa). So, assuming we now have the delta as a power of 2, represented as 1.0 x 2^exp, and a mantissa of N bits, the maximum value is 1.0 x 2^(N + 1 + exp). Note that FLT_EPSILON in C is equal to 1.0 x 2^-N, so we can also find this by dividing our rounded-up power of 2 by FLT_EPSILON and doubling the result.
For a delta of 0.1, the next power of 2 is 0.125, or 1.0 x 2^-3. Therefore we want 1.0 x 2^(23 + 1 + (-3)), or 1.0 x 2^21, which is equal to 2097152.
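To illustrate (my own sketch, not part of the original post), the same computation written out in C++ with FLT_EPSILON; it agrees with Marek's answer below:

#include <cfloat>    // FLT_EPSILON
#include <cmath>     // std::ilogb, std::ldexp
#include <iostream>

int main()
{
    float delta = 0.1f;

    // Round delta up to the next power of 2 (1.0 x 2^exp).
    float p2 = std::ldexp(1.0f, std::ilogb(delta));  // largest power of 2 <= delta: 0.0625
    if (p2 != delta)
        p2 *= 2.0f;                                  // 0.125 = 1.0 x 2^-3

    // Divide the rounded-up power of 2 by FLT_EPSILON (= 2^-23) and double:
    // 2 * 2^-3 / 2^-23 = 2^21 = 2097152.
    std::cout << 2.0f * p2 / FLT_EPSILON << '\n';    // prints 2.09715e+06
}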

Yes, it is possible.
There is std::numeric_limits<T>::epsilon(), which is defined as the difference between 1.0 and the next value representable by the floating-point type T.
Using this you can calculate the limit for any number.
In C there is DBL_EPSILON.
So in your case this goes like this:
#include <cmath>        // std::ilogb, std::ldexp
#include <limits>       // std::numeric_limits
#include <type_traits>  // std::is_floating_point_v

template <class T>
auto maximumWhenAdding(T delta) -> T
{
    static_assert(std::is_floating_point_v<T>, "Works only for floating points.");

    int power2 = std::ilogb(delta);
    T roundedDelta = std::ldexp(T{1.0}, power2);  // largest power of 2 <= delta
    if (roundedDelta != delta) {
        roundedDelta *= 2;                        // round up to the next power of 2
    }
    return 2 * roundedDelta / std::numeric_limits<T>::epsilon();
}
live example C++
Note that in the live example, adding delta to maxForDelta fails to change it, but subtracting delta still changes it, so this is exactly the point you are looking for.
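A quick way to exercise this (my own sketch; it assumes maximumWhenAdding from above is in scope, and maxForDelta simply mirrors the name used in the live example):

#include <iostream>

int main()
{
    float maxForDelta = maximumWhenAdding(0.1f);
    std::cout << maxForDelta << '\n';                          // 2.09715e+06
    std::cout << (maxForDelta + 0.1f == maxForDelta) << '\n';  // 1: adding no longer changes it
    std::cout << (maxForDelta - 0.1f == maxForDelta) << '\n';  // 0: subtracting still does
}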

Related

Fast inverse square root using fixed point instead of floating point

I am trying to implement Fast Inverse Square Root for a fixed point number, but I'm not getting anywhere.
I am trying to follow exactly the same principle as the article, except that instead of writing the number in the floating-point format x = (-1) ^ s * (1 + M) * 2 ^ (E-127), I am using the format x = M * 2 ^ -16, i.e. a 32-bit fixed-point number with 16 integer bits and 16 fractional bits.
The problem is that I cannot find the value of the "magic constant". According to my calculations, it doesn’t exist, but I’m not a mathematician and I think I’m doing everything wrong.
To solve Y = 1 / sqrt (x), I used the following reasoning (I don't know if it is correct).
In the original code, the starting guess Y0 for the Newton iteration is given by:
i = 0x5f3759df - (i >> 1);
Which means that we will have as a result a floating point number given by:
y0 = (1 + R2 - M / 2) * 2 ^ (R1 - E / 2);
This is because the operation >> divides exponent and mantissa by 2, and then we perform a subtraction of the numbers as integers.
Following the steps shown in the article, I set the format of x to:
x = M * 2 ^ -16
In an attempt to perform the same logic, I try to define Y0 for:
Y0 = (R2 - M / 2) * 2 ^ (R1 - (-16/2));
I'm trying to find a number, which can minimize the error given by:
error = (Y - Y0) / Y
Regardless of the value of R1, I can do shift operations to correct the exponent of my final result, so that I end up with the correct result in fixed point.
Where am I wrong?
It can't be done.
The fast inverse square root trick relies on the floating-point representation, which has already split the number into a power of two (the exponent) and a significand.
It can be done.
With the same tricks as are used for floating point, it is possible to convert your fixed-point value into the form 2^exp * x. Given uint32_t a, something like uint8_t exp = bias - count_leading_zeros(a); uint32_t b = a << exp; (where count_leading_zeros is, e.g., GCC's __builtin_clz), with the constants (and the domain of a) chosen so carefully that there will be no underflows or overflows.
Thus you will actually have a custom floating-point representation, tailored for this specific purpose: it omits the sign bit at least, and it can use the best possible number of bits for the exponent, which might as well be 8.
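A minimal sketch of that normalization step (my own illustration; the Q16.16 interpretation and the use of __builtin_clz are assumptions, not something given in the answer):

#include <cstdint>
#include <iostream>

// Split a Q16.16 fixed-point value (x = a * 2^-16) into a normalized
// mantissa with the top bit set and a small exponent, so that x = m * 2^e.
struct Normalized {
    std::uint32_t m;  // top bit set: m is in [2^31, 2^32)
    int e;            // x == m * 2^e
};

Normalized normalize_q16_16(std::uint32_t a)  // requires a != 0
{
    int lz = __builtin_clz(a);  // leading-zero count, 0..31
    Normalized n;
    n.m = a << lz;              // shift so the highest bit is set
    n.e = -16 - lz;             // x = (a * 2^-16) = (a << lz) * 2^(-16 - lz)
    return n;
}

int main()
{
    // 2.5 in Q16.16 is 2.5 * 65536 = 163840.
    Normalized n = normalize_q16_16(163840);
    std::cout << n.m << " * 2^" << n.e << '\n';  // 2684354560 * 2^-30 == 2.5
}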

How does Cpp work with large numbers in calculations?

I have code that numerically computes the integral of a function over a given interval, using the Trapezoidal Rule (see the formula in Trapezoid method). For the function sin(x) on the interval [-pi/2.0, pi/2.0] the integral is expected to be zero.
In this case I take the number of partitions 'n' equal to 4. The problem is that with pi given to 20 decimal places the result is zero, with 14 decimal places it is 8.72e-17, with 11 decimal places it is zero again, with 8 decimal places it is 8.72e-17, and with 3 decimal places it is zero. In other words, the integral is zero or a number near zero for different approximations of pi, but it doesn't follow a clear trend.
I would appreciate your help in understanding why this happens. (I did run it in Dev-C++).
#include <iostream>
#include <math.h>
using namespace std;

#define pi 3.14159265358979323846
//Pi: 3.14159265358979323846

double func(double x){
    return sin(x);
}

int main() {
    double x0 = -pi/2.0, xf = pi/2.0;
    int n = 4;
    double delta_x = (xf-x0)/(n*1.0);
    double sum = (func(x0)+func(xf))/2.0;
    double integral;
    for (int k = 1; k<n; k++){
        // cout<<"func: "<<func(x0+(k*delta_x))<<" "<<"last sum: "<<sum<<endl;
        sum = sum + func(x0+(k*delta_x));
        // cout<<"func + last sum= "<<sum<<endl;
    }
    integral = delta_x*sum;
    cout<<"The value for the integral is: "<<integral<<endl;
    return 0;
}
OP is integrating y=sin(x) from -a to +a. The various tests use different values of a, all near pi/2.
The approach uses a linear summation of values going from near -1.0, through 0, and up to near 1.0.
This summation is sensitive to rounding error in the last terms, because the exact mathematical sum is 0.0. Since the start/end value a varies, the error varies.
A more stable result would be had by adding the extreme sin(x(k)) values first, e.g. sum += sin(x(k=1)), then sum += sin(x(k=3)), then sum += sin(x(k=2)), rather than k=1,2,3 in order. In particular, the term x(k=3) as formed is likely a bit off from being the exact negative of the earlier x(k=1) term, which further compounds the issue.
Welcome to the world of numerical analysis.
The problem would still exist if the code used all float or all long double, just to different degrees.
The problem is not due to using an inexact value of pi (an exact value is impossible in FP, as pi is irrational and all finite FP values are rational).
Much of it is due to the formation of x. You could try the line below to form the x values symmetrically about 0.0, and then compare (exactly) the x generated this way with the x generated the original way.
x = (x0 + xf)/2 + ((k - n/2)*delta_x)
Print out the exact values computed for deeper understanding.
printf("x:%a y:%a\n", x0+(k*delta_x), func(x0+(k*delta_x)));
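A short sketch of that comparison (my own code, not from the answer), forming x both ways and printing the exact values with %a:

#include <cstdio>

int main()
{
    const double pi = 3.14159265358979323846;
    const double x0 = -pi / 2.0, xf = pi / 2.0;
    const int n = 4;
    const double delta_x = (xf - x0) / n;

    for (int k = 1; k < n; k++) {
        double x_orig = x0 + k * delta_x;                       // as in the question
        double x_sym  = (x0 + xf) / 2 + (k - n / 2) * delta_x;  // formed symmetrically about 0.0
        std::printf("k=%d  orig: %a  sym: %a\n", k, x_orig, x_sym);
    }
}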

understanding std::fmod and std::remainder

Could someone please explain how the functions std::fmod and std::remainder work? In the case of std::fmod, can someone explain the steps that show how:
std::fmod(+5.1, +3.0) = 2.1
Same thing goes for std::remainder which can produce negative results.
std::remainder(+5.1, +3.0) = -0.9
std::remainder(-5.1, +3.0) = 0.9
As the reference states for std::fmod:
The floating-point remainder of the division operation x/y calculated by this function is exactly the value x - n*y, where n is x/y with its fractional part truncated.
The returned value has the same sign as x and is less than y in magnitude.
So, taking the example in the question, with x = +5.1 and y = +3.0:
x/y is 5.1/3.0 = 1.7; truncating the fractional part gives 1, so n is 1. fmod then yields x - 1*y = 5.1 - 1*3.0 = 5.1 - 3.0 = 2.1.
And the reference states for std::remainder:
The IEEE floating-point remainder of the division operation x/y calculated by this function is exactly the value x - n*y, where the value n is the integral value nearest the exact value x/y. When |n-x/y| = ½, the value n is chosen to be even.
So, taking the example in the question, with x = +5.1 and y = +3.0:
The integral value nearest to x/y (1.7) is 2, so n is 2. remainder then yields x - 2*y = 5.1 - 2*3.0 = 5.1 - 6.0 = -0.9.
But with x = -5.1 and y = +3.0:
The integral value nearest to x/y (-1.7) is -2, so n is -2. remainder then yields x - (-2)*y = -5.1 + 2*3.0 = -5.1 + 6.0 = +0.9.
The reference also states that: In contrast to std::fmod(), the returned value is not guaranteed to have the same sign as x.
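A tiny check of those values (my own sketch, not part of the answer):

#include <cmath>
#include <iostream>

int main()
{
    std::cout << std::fmod(+5.1, +3.0) << '\n';       // 2.1
    std::cout << std::remainder(+5.1, +3.0) << '\n';  // -0.9
    std::cout << std::remainder(-5.1, +3.0) << '\n';  // 0.9
}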
For those who may have a small difficulty understanding the good example by P.W., here is a slightly less mathematical approach.
The fmod() function tells you how much remains after dividing the numerator by the denominator as many whole times as possible.
The remainder() function tells you how far the numerator is from the nearest value that the denominator divides evenly.
Examples:
fmod(10,3.5) = 3.
3.5 can fit twice into 10 (2*3.5 = 7), leaving a remainder of 3.
remainder(10,3.5) = -0.5.
3.5 cannot fit evenly into 10, but it can fit evenly into 7 (2*3.5) and 10.5 (3*3.5).
10.5 is closer to 10 than 7.
How far away is 10 from 10.5?
It is -0.5 away from 10.5.

The result of own double precision cos() implemention in a shader is NaN, but works well on the CPU. What is going wrong?

As I said, I want to implement my own double-precision cos() function in a GLSL compute shader, because there is a built-in version only for float.
This is my code:
double faculty[41]; // values are calculated at the beginning of main()

double myCOS(double x)
{
    double sum, tempExp, sign;
    sum = 1.0;
    tempExp = 1.0;
    sign = -1.0;

    for(int i = 1; i <= 30; i++)
    {
        tempExp *= x;
        if(i % 2 == 0){
            sum = sum + (sign * (tempExp / faculty[i]));
            sign *= -1.0;
        }
    }
    return sum;
}
The result of this code is that sum turns out to be NaN on the shader, but on the CPU the algorithm works fine.
I tried to debug this code too and I got the following information:
faculty[i] is positive and not zero for all entries
tempExp is positive in each step
none of the other variables are NaN during each step
the first time sum is NaN is at the step with i=4
And now my question: what exactly can go wrong if every variable is a number and nothing is divided by zero, especially when the algorithm works on the CPU?
Let me guess:
First you determined the problem is in the loop, and you use only the following operations: +, *, /.
The rules for generating NaN from these operations are:
The divisions 0/0 and ±∞/±∞
The multiplications 0×±∞ and ±∞×0
The additions ∞ + (−∞), (−∞) + ∞ and equivalent subtractions
You ruled out the possibility for 0/0 and ±∞/±∞ by stating that faculty[] is correctly initialized.
The variable sign is always 1.0 or -1.0 so it cannot generate the NaN through the * operation.
What remains is the + operation, if tempExp ever becomes ±∞.
So probably x is too large on entry to your function: tempExp then overflows to ±∞, which makes sum become ±∞ too. At a following iteration you trigger the NaN-generating operation ∞ + (−∞), because one side of the addition is multiplied by sign, and sign switches between positive and negative each time a term is added.
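Here is a small CPU-side sketch (my own illustration, in C++ rather than GLSL, with a made-up factorial table and input) showing that mechanism: a large x drives tempExp to infinity, and shortly afterwards the sum hits inf + (-inf) = NaN:

#include <cmath>
#include <cstdio>

int main()
{
    // Illustrative factorial table (faculty[i] = i!).
    double faculty[31];
    faculty[0] = 1.0;
    for (int i = 1; i <= 30; i++)
        faculty[i] = faculty[i - 1] * i;

    double x = 1.0e80;  // deliberately far outside any sensible range for the series
    double sum = 1.0, tempExp = 1.0, sign = -1.0;

    for (int i = 1; i <= 30; i++)
    {
        tempExp *= x;  // overflows to +inf once x^i exceeds DBL_MAX
        if (i % 2 == 0) {
            sum = sum + sign * (tempExp / faculty[i]);
            sign *= -1.0;
            std::printf("i=%2d  tempExp=%g  sum=%g\n", i, tempExp, sum);
            if (std::isnan(sum))
                break;  // inf + (-inf) has occurred
        }
    }
}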
You're approximating cos(x) with a series around 0.0, so you should use the properties of the cos() function to reduce your input to a value near 0.0, ideally in the range [0, pi/4]: for instance, remove multiples of 2*pi, and get the values of cos() on [pi/4, pi/2] by computing sin() around 0.0, and so on.
What can go dramatically wrong is a loss of precision. cos(x) usually is implemented by range reduction followed by a dedicated implementation for the range [0, pi/2]. Range reduction uses cos(x+2*pi) = cos(x). But this range reduction isn't perfect. For starters, pi cannot be exactly represented in finite math.
Now what happens if you try something as absurd as cos(1<<30) ? It's quite possible that the range reduction algorithm introduces an error in x that's larger than 2*pi, in which case the outcome is meaningless. Returning NaN in such cases is reasonable.
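For illustration only (my own sketch, not the answer's code): a naive range reduction with std::remainder and a double-precision value of 2*pi. This is exactly the step where the error described above creeps in, and it grows with the magnitude of the argument:

#include <cmath>
#include <cstdio>

int main()
{
    const double two_pi = 2.0 * 3.14159265358979323846;

    for (double x : {1.5, 100.0, double(1 << 30)}) {
        // Reduce x into [-pi, pi] using the rounded two_pi. The reduction itself is
        // exact, but two_pi differs from the true 2*pi, so the error in r is roughly
        // (x / two_pi) * (rounding error of two_pi) and grows with |x|.
        double r = std::remainder(x, two_pi);
        std::printf("x=%g  r=%.17g  cos(x)=%.17g  cos(r)=%.17g\n",
                    x, r, std::cos(x), std::cos(r));
    }
}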

Is x/a the same as x*(1/a) for floats?

With float a = ...; and float inva = 1/a; is x / a the same as x * inva?
And what is with this case:
unsigned i = ...;
float v1 = static_cast<float>(i) / 4294967295.0f;
float scl = 1.0f / 4294967295.0f;
float v2 = static_cast<float>(i) * scl;
Is v1 equal to v2 for all unsigned integers?
is v1 equal to v2 for all unsigned integers?
Yes, because the float value 4294967295.0f is a power of two: 4294967295 (2^32 - 1) is not representable as a float, and the literal rounds to 4294967296.0f = 2^32. Division and multiplication by the reciprocal are equivalent when the divisor is a power of two (assuming the computation of the reciprocal does not overflow or underflow to zero).
Division and multiplication by the reciprocal are not equivalent in general, only in the particular case of powers of two. The reason is that for (almost all) powers of two y, the computation of 1 / y is exact, so that x * (1 / y) only rounds once, just like x / y only rounds once.
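A small experiment (my own sketch) that contrasts a power-of-two divisor with a non-power-of-two one; for the latter, x / a and x * (1 / a) can differ in the last bit for some inputs:

#include <cstdint>
#include <iostream>

int main()
{
    // 4294967295.0f rounds to 2^32, a power of two: no mismatches expected.
    // 10.0f is not a power of two: occasional last-bit differences are expected.
    for (float a : {4294967295.0f, 10.0f}) {
        float inva = 1.0f / a;
        std::uint32_t mismatches = 0;
        for (std::uint32_t i = 0; i < 1000000; ++i) {
            float x = static_cast<float>(i);
            if (x / a != x * inva)
                ++mismatches;
        }
        std::cout << "a=" << a << "  mismatches=" << mismatches << '\n';
    }
}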
No, the result will not always be the same. The way you group the operands in floating point multiplication, or division in this case, has an effect on the numerical accuracy of the answer. Thus, the product a*(1/b) might differ from a/b. Check the wikipedia article http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems.