Precision issues when converting a decimal number to its rational equivalent - c++

I have a problem converting a double (say N) to p/q (rational) form. My strategy is the following:
Multiply double N by a large number say $k = 10^{10}$
then p = N*k and q = k
Take gcd(p,q) and reduce: p = p/gcd(p,q) and q = q/gcd(p,q)
When N = 8.2, the answer is correct if we solve it with pen and paper, but since 8.2 is stored in N (a double) as roughly 8.19999999, the rational-form conversion goes wrong.
I tried doing it another way (I used a large number 10^k instead of 100):
if(abs(y*100 - round(y*100)) < 0.000001) y = round(y*100)/100
But this approach also doesn't give right representation all the time.
Is there any way I could carry out an exact conversion from double to p/q?

Floating point arithmetic is very difficult. As has been mentioned in the comments, part of the difficulty is that you need to represent your numbers in binary.
For example, the number 0.125 can be represented exactly in binary:
0.125 = 2^-3 = 0b0.001
But the number 0.12 cannot.
To 11 significant figures:
0.12 = 0b0.00011110101
If this is converted back to a decimal then the error becomes obvious:
0b0.00011110101 = 0.11962890625
So if you write:
double a = 0.2;
What the machine actually does is find the closest binary representation of 0.2 that it can hold within a double data type. This is an approximation since as we saw above, 0.2 cannot be exactly represented in binary.
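For example, printing the stored value with more digits than a double actually holds exposes the approximation (a minimal sketch; the digits shown assume IEEE-754 doubles):
#include <cstdio>

int main() {
    double a = 0.2;
    // Print more digits than a double can represent to expose the stored approximation.
    printf("%.25f\n", a);  // prints 0.2000000000000000111022302
}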
One possible approach is to define an 'epsilon' which determines how close your number can be to the nearest representable binary floating point.
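For instance, a minimal sketch of that epsilon approach (the scale factor k = 10^6 and the tolerance 1e-6 are illustrative choices, not canonical values):
#include <cmath>
#include <cstdint>
#include <iostream>
#include <numeric>

int main() {
    double n = 8.2;
    const double k = 1e6;     // illustrative scale factor
    const double eps = 1e-6;  // illustrative tolerance

    double scaled = n * k;
    // Snap to the nearest integer only when we are within eps of it.
    if (std::fabs(scaled - std::round(scaled)) < eps)
        scaled = std::round(scaled);

    std::int64_t p = static_cast<std::int64_t>(scaled);
    std::int64_t q = static_cast<std::int64_t>(k);
    std::int64_t g = std::gcd(p, q);             // C++17, <numeric>
    std::cout << p / g << "/" << q / g << "\n";  // prints 41/5
}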
Here is a good article on floating points:
https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/

have problem of converting a double (say N) to p/q form
... when N = 8.2
A typical double cannot encode 8.2 exactly. Instead the closest representable double is about
8.19999999999999928945726423989981412887573...
8.20000000000000106581410364015027880668640... // next closest
When code does
double N = 8.2;
It will be the 8.19999999999999928945726423989981412887573... that is converted into rational form.
Converting a double to p/q form:
Multiply double N by a large number say $k = 10^{10}$
This may overflow the double. The first step should be to determine whether the double is large, in which case it is a whole number.
Do not multiply by some power of 10, as double certainly uses a binary encoding. Multiplication by 10, 100, etc. may introduce round-off error.
C implementations of double overwhelmingly use a binary encoding, so that FLT_RADIX == 2.
Then every finite double x has a significand that is a fraction of some integer over some power of 2: a binary fraction of DBL_MANT_DIG digits (@Richard Critten). This is often 53 binary digits.
Determine the exponent of the double. If large enough or x == 0.0, the double is a whole number.
Otherwise, scale the numerator and denominator by a power of 2 determined by DBL_MANT_DIG and the exponent. While the numerator is even, halve both the numerator and denominator. As the denominator is a power of 2, no other prime values need to be considered for simplification.
#include <float.h>
#include <math.h>
#include <stdio.h>

void form_ratio(double x) {
    double numerator = x;
    double denominator = 1.0;
    if (isfinite(numerator) && x != 0.0) {
        int expo;
        frexp(numerator, &expo);
        if (expo < DBL_MANT_DIG) {
            expo = DBL_MANT_DIG - expo;
            numerator = ldexp(numerator, expo);
            denominator = ldexp(1.0, expo);
            while (fmod(numerator, 2.0) == 0.0 && denominator > 1.0) {
                numerator /= 2.0;
                denominator /= 2.0;
            }
        }
    }
    int pre = DBL_DECIMAL_DIG;
    printf("%.*g --> %.*g/%.*g\n", pre, x, pre, numerator, pre, denominator);
}

int main(void) {
    form_ratio(123456789012.0);
    form_ratio(42.0);
    form_ratio(1.0 / 7);
    form_ratio(867.5309);
}
Output
123456789012 --> 123456789012/1
42 --> 42/1
0.14285714285714285 --> 2573485501354569/18014398509481984
867.53089999999997 --> 3815441248019913/4398046511104


Loss of precision when casting float to double

I guess I'm hitting a precision issue in my C++ program, and I don't understand why I'm getting different results in my values.
res equals 1321.0000001192093 if I write:
float sy = -0.207010582f;
double res = -1512.*((double)sy - (2. / 3.));
but res2 equals 1320.9999999839999 if I write:
double res2 = -1512.*(-0.207010582 - (2. / 3.));
Why is even syd different from syd2 when I write this:
double syd = -0.207010582f;
double syd2 = -0.207010582000000000;
Can somebody give me a hand casting my float to a double properly, and help me understand what's going on?
-0.207010582f is a decimal floating-point literal. But your computer doesn't use decimal floating point, it uses binary floating point. So the value of that literal will be rounded to float precision.
Similarly, -0.207010582 is rounded to double precision. While that's closer, it still is not equal to -0.207010582 decimal.
Since double has more precision than float, you will not lose precision by casting from float to double. Any rounding will have happened earlier.
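A small sketch illustrating this "rounded earlier" effect on the question's syd and syd2 (the printed digits assume IEEE-754):
#include <cstdio>

int main() {
    double syd  = -0.207010582f;  // rounded to float first, then widened exactly
    double syd2 = -0.207010582;   // rounded directly to double
    // The two roundings land on different values.
    printf("%.17g\n%.17g\n", syd, syd2);
    // prints -0.20701058208942413 and -0.207010582 on a typical IEEE-754 system
}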
Single-Precision
As others have said, float sy = -0.207010582f; initializes a single-precision (32-bit) floating point variable from a single-precision floating point literal.
This will be treated (in storage and calculations) as the nearest representable number in that format. This number is -0.20701058208942413330078125
Your code is then effectively float sy = -0.20701058208942413330078125;
You can confirm that this is the nearest representable value by looking at the adjacent single-precision floating point numbers.
-0.20701059699058532714843750 // std::nextafter( sy, std::numeric_limits<float>::lowest() )
-0.20701058208942413330078125 // sy
-0.20701056718826293945312500 // std::nextafter( sy, std::numeric_limits<float>::max() )
Double-Precision
Exactly the same occurs with double-precision floating point numbers, it's just that their increased resolution means the differences are smaller.
e.g. double dy = -0.207010582; actually represents the value -0.20701058199999999853702092877938412129878997802734375
Similarly, the adjacent values that can be represented are -
-0.2070105820000000262925965444082976318895816802978515625 // std::nextafter( dy, std::numeric_limits<double>::lowest() )
-0.2070105819999999985370209287793841212987899780273437500 // dy
-0.2070105819999999707814453131504706107079982757568359375 // std::nextafter( dy, std::numeric_limits<double>::max() )
Single to Double Conversion
All single precision floating point values are exactly representable in double-precision. Hence, nothing is lost in conversions from single to double precision.
All the above assumes IEEE754 floating-point representation.
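A sketch of how the neighbor values quoted above can be printed (assuming IEEE-754 and the std::nextafter calls shown in the comments):
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    float sy = -0.207010582f;
    // Print the stored float and its two representable neighbors.
    std::cout << std::setprecision(26)
              << std::nextafter(sy, std::numeric_limits<float>::lowest()) << "\n"
              << sy << "\n"
              << std::nextafter(sy, std::numeric_limits<float>::max()) << "\n";
}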

How does Cpp work with large numbers in calculations?

I have code that numerically solves the integral of a function over a given interval using the trapezoidal rule (see the formula in the Trapezoid method). For the function sin(x) on the interval [-pi/2.0, pi/2.0], the integral is expected to be zero.
In this case, I take the number of partitions n equal to 4. The problem is that with pi to 20 decimal places the result is zero, with 14 decimal places it is 8.72e-17, with 11 decimal places it is zero, with 8 decimal places it is 8.72e-17, and with 3 decimal places it is zero. That is, the integral is zero or near zero for different approximations of pi, but with no clear trend.
I would appreciate your help in understanding why this happens. (I did run it in Dev-C++).
#include <iostream>
#include <math.h>
using namespace std;

#define pi 3.14159265358979323846
//Pi: 3.14159265358979323846

double func(double x){
    return sin(x);
}

int main() {
    double x0 = -pi/2.0, xf = pi/2.0;
    int n = 4;
    double delta_x = (xf-x0)/(n*1.0);
    double sum = (func(x0)+func(xf))/2.0;
    double integral;
    for (int k = 1; k<n; k++){
        // cout<<"func: "<<func(x0+(k*delta_x))<<" "<<"last sum: "<<sum<<endl;
        sum = sum + func(x0+(k*delta_x));
        // cout<<"func + last sum= "<<sum<<endl;
    }
    integral = delta_x*sum;
    cout<<"The value for the integral is: "<<integral<<endl;
    return 0;
}
OP is integrating y=sin(x) from -a to +a. The various tests use different values of a, all near pi/2.
The approach uses a linear summation of values starting near -1.0, passing through 0 and ending near 1.0.
This summation is sensitive to calculation error in the last terms, as the final mathematical sum is expected to be 0.0. Since the start/end a varies, the error varies.
A more stable result would be had by adding the extreme sin(x(k)) values first: e.g. sum += sin(x(k=1)), then sum += sin(x(k=3)), then sum += sin(x(k=2)), rather than k = 1, 2, 3. In particular, the formation of the term x(k=3) is likely a bit off from the exact negative of its earlier counterpart x(k=1), further compounding the issue.
Welcome to the world of numerical analysis.
The problem would exist if the code used all float or all long double, just to different degrees.
The problem is not due to using an inexact value of pi (an exact value is impossible in floating point, as pi is irrational and all finite floating-point values are rational).
Much of it is due to the formation of x. You could try the below to form x symmetrically about 0.0, and compare the exact x generated this way against the x generated the original way:
x = (x0+xf)/2 + ((k - n/2)*delta_x)
Print out the exact values computed for deeper understanding.
printf("x:%a y:%a\n", x0+(k*delta_x), func(x0+(k*delta_x)));

Finding the maximum of a floating point counter

My apologies if this has been asked before, but I cannot find it.
I was wondering if there is a way to calculate the point at which a single precision floating point number that is used as a counter will reach a 'maximum' (the point at which it is no longer able to add another value due to loss of precision).
For example, if I continuously add 0.1f to a float I will eventually reach a point where the value does not change:
const float INCREMENT = 0.1f;
float value = INCREMENT;
float prevVal = 0.0f;

do {
    prevVal = value;
    value += INCREMENT;
} while (value != prevVal);

cout << value << endl;
On GCC this outputs 2.09715e+06
Is there a way to compute this mathematically for different values of INCREMENT? I believe it should in theory be when the exponent portion of the float requires a shift beyond 23 bits, resulting in losing the mantissa and simply adding 0.
Given some positive y used as an increment, the smallest X for which adding y does not produce a result greater than X is the least power of 2 not less than y divided by half the “epsilon” of the floating-point format. It can be calculated by:
Float Y = y*2/std::numeric_limits<Float>::epsilon();
int e;
std::frexp(Y, &e);
Float X = std::ldexp(.5, e);
if (X < Y) X *= 2;
A proof follows. I assume IEEE-754 binary floating-point arithmetic using round-to-nearest-ties-to-even.
When two numbers are added in IEEE-754 floating-point arithmetic, the result is the exact mathematical result rounded to the nearest representable value in a selected direction.
A note about notation: Text in source code format represents floating-point values and operations. Other text is mathematical. So x+y is the exact mathematical sum of x and y, x is x in floating-point format, and x+y is the result of adding x and y in a floating-point operation. Also, I will use Float for the floating-point type in C++.
Given a floating-point number x, consider adding a positive value y using floating-point arithmetic, x+y. Under what conditions will the result exceed x?
Let x1 be the next value greater than x representable in the floating-point format, and let xm be the midpoint between x and x1. If the mathematical value of x+y is less than xm, then the floating-point calculation x+y rounds down, so it produces x. If x+y is greater than xm, either it rounds up and produces x1, or it produces some greater number because y is large enough to move the sum beyond x1. If x+y equals xm, the result is whichever of x or x1 has an even low digit. For reasons we will see, this is always x in the situations relevant to this question, so the calculation rounds down.
Therefore, x+y produces a result greater than x if and only if x+y exceeds xm, meaning that y exceeds half the distance from x to x1. Note that the distance from x to x1 is the value of 1 in the low digit of the significand of x.
In a binary floating-point format with p digits in its significand, the position value of the low digit is 2^(1−p) times the position value of the high digit. For example, if x is 2^e, the highest bit in its significand represents 2^e, and the lowest bit represents 2^(e+1−p).
The question asks, given a y, what is the least x for which x+y does not produce a result greater than x? It is the least x for which y does not exceed half the value of the low digit of the significand of x.
Let 2^e be the position value of the high bit of the significand of x. Then y ≤ ½·2^(e+1−p) = 2^(e−p), so y·2^p ≤ 2^e.
Therefore, given some positive y, the least x for which x+y does not produce a result greater than x has its leading bit, 2^e, equal to or exceeding y·2^p. And in fact it must be exactly 2^e because all other floating-point numbers whose leading bit has position value 2^e have other bits set in their significands, so they are greater. 2^e is the least number for which the leading bit represents 2^e.
Therefore, x is the least power of two that equals or exceeds y·2^p.
In C++, std::numeric_limits<Float>::epsilon() (from the <limits> header) is the step from 1 to the next representable value, meaning it is 2^(1−p). So y·2^p equals y*2/std::numeric_limits<Float>::epsilon(). (This operation is exact unless it overflows to ∞.)
Let’s assign this to a variable:
Float Y = y*2/std::numeric_limits<Float>::epsilon();
We can find the position value represented by the highest bit of Y’s significand by using frexp (from the <cmath> header) to extract the exponent from the floating-point representation of Y and ldexp (also <cmath>) to apply that exponent to a new significand (.5 because of the scale that frexp and ldexp use):
int e;
std::frexp(Y, &e);
Float X = std::ldexp(.5, e);
Then X is a power of two, and it is less than or equal to Y. It is in fact the greatest power of two not greater than Y, since the next greater power of 2, 2X, is greater than Y. However, we want the least power of two not less than Y. We can find this with:
if (X < Y) X *= 2;
The resulting X is the number sought by the question.
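As a concrete check, instantiating these steps with Float = float and the question's increment y = 0.1f (a sketch assuming IEEE-754):
#include <cmath>
#include <iostream>
#include <limits>

int main() {
    using Float = float;
    Float y = 0.1f;
    Float Y = y * 2 / std::numeric_limits<Float>::epsilon();
    int e;
    std::frexp(Y, &e);
    Float X = std::ldexp(Float(.5), e);
    if (X < Y) X *= 2;
    std::cout << X << "\n";  // prints 2.09715e+06, matching the loop in the question
}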
Marek's Answer is pretty close, and a decent way to find it using a program (that is more efficient than the one I originally posted). However, I don't necessarily need the answer in a program form, just a mathematical one.
From what I can tell, the answer comes down to the exponent of the delta used, and the number of mantissa bits. We need to round to the nearest power of 2, which is kind of complicated. Basically if the mantissa is 0, we do nothing, otherwise we add 1 to the exponent. So, assuming we now have the delta as a power of 2, represented as 1.0 x 2^exp, and a mantissa of N bits, the maximum value is 1.0 x 2^(N + exp). Note that FLT_EPSILON in C is equal to 1.0 x 2^-N. So we can also find this by dividing our nearest power of 2 by FLT_EPSILON.
For a delta of 0.1, the nearest power of 2 is 0.125, or 1.0 x 2^-3. Therefore we want 1.0 x 2^(23 + (-3)), or 1.0 x 2^21, which is equal to 2097152.
Yes, it is possible.
There is std::numeric_limits<T>::epsilon(), which gives the smallest value that can increase the value 1.0.
Using this you can calculate this limit for any number.
In C there is DBL_EPSILON
So in your case this goes like this:
#include <cmath>
#include <limits>
#include <type_traits>

template<class T>
auto maximumWhenAdding(T delta) -> T
{
    static_assert(std::is_floating_point_v<T>, "Works only for floating points.");
    int power2 = std::ilogb(delta);
    // round delta up to the nearest power of two
    T roundedDelta = std::ldexp(T{ 1.0 }, power2);
    if (roundedDelta != delta) {
        roundedDelta *= 2;
    }
    return 2 * roundedDelta / std::numeric_limits<T>::epsilon();
}
live example C++
Note that in the live test examples, adding delta fails to increase maxForDelta, but subtracting it still changes the value, so this is exactly the limit you need.

C program outputting different values with different numbers? [duplicate]

See the program below
#include <stdio.h>

int main()
{
    float x = 0.1;
    if (x == 0.1)
        printf("IF");
    else if (x == 0.1f)
        printf("ELSE IF");
    else
        printf("ELSE");
}
And another program here
#include <stdio.h>

int main()
{
    float x = 0.5;
    if (x == 0.5)
        printf("IF");
    else if (x == 0.5f)
        printf("ELSE IF");
    else
        printf("ELSE");
}
From both programs we expect similar results, because nothing structural has changed between them: everything is the same, and the comparison terms were changed correspondingly.
BUT the two programs above produce different results:
1st Program
ELSE
2nd Program
IF
Why are these two programs behaving differently?
The behavior of these two programs will vary between computers and operating systems - you are testing for exact equality of floats.
In memory, floats are stored as a string of bits in binary - e.g. 0.1 in binary (0.1b) represents 0.5 in decimal (0.5d).
Similarly,
Binary | Decimal
0.1 | 2^-1 = 1/2
0.01 | 2^-2 = 1/4
0.001 | 2^-3 = 1/8
0.11 | 2^-1 + 2^-2 = 3/4
The problem is that some decimals don't have nice floating point representations.
0.1d = 0.0001100110011001100110011...
which is infinitely long.
So, 0.5 is really nice in binary
0.5d = 0.1000000000000000...b
but 0.1 is really nasty
0.1d = 0.00011001100110011...
Now depending on your compiler, it may evaluate 0.1f at higher precision (e.g. as a double), which keeps more of the infinite sequence 0.0001100110011001100110011001100110011...
so it is not equal to the float version, which truncates the sequence much earlier.
On the other hand, 0.5f is the same regardless of how many decimal places are stored, since it has all zeroes after the first place.
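A short sketch of these two approximations of 0.1 (the printed digits assume IEEE-754):
#include <cstdio>

int main() {
    float  xf = 0.1f;
    double xd = 0.1;
    printf("float  0.1f: %.30f\n", (double)xf); // 0.100000001490116119384765625000
    printf("double 0.1 : %.30f\n", xd);         // 0.100000000000000005551115123126
    printf("0.5f == 0.5: %d\n", 0.5f == 0.5);   // 1, since 0.5 is exact in binary
}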
The accepted way to compare floats or doubles in C++ or C is to #define a very small number (I like to call it EPS, short for EPSILON) and replace
float a = 0.1f;
if (a == 0.1f) {
    printf("IF\n");
} else {
    printf("ELSE\n");
}
with
#include <math.h>
#define EPS 0.0000001f

float a = 0.1f;
if (fabs(a - 0.1f) < EPS) {
    printf("IF\n");
} else {
    printf("ELSE\n");
}
Effectively, this tests if a is 'close enough' to 0.1f instead of exact equality. For 99% of applications, this approach works just fine, but for super-sensitive calculations some stranger tricks are needed that involve using long double, or defining a custom data type.
You are using two data types: in if (x == 0.1), the literal 0.1 is automatically a double, while x is a float. These types differ in how they store the value. 0.1 is not 0.1f: as a double it is approximately 0.1000000000000000055511151231258, and as a float approximately 0.100000001490116.

Why am I getting a different result from std::fmod and std::remainder

In the below example app I calculate the floating point remainder from dividing 953 by 0.1, using std::fmod
What I was expecting is that, since 953.0 / 0.1 == 9530, std::fmod(953, 0.1) == 0
I'm getting 0.1 - why is this the case?
Note that with std::remainder I get the correct result.
That is:
std::fmod (953, 0.1) == 0.1 // unexpected
std::remainder(953, 0.1) == 0 // expected
Difference between the two functions:
According to cppreference.com
std::fmod calculates the following:
exactly the value x - n*y, where n is x/y with its fractional part truncated
std::remainder calculates the following:
exactly the value x - n*y, where n is the integral value nearest the exact value x/y
Given my inputs I would expect both functions to have the same output. Why is this not the case?
Exemplar app:
#include <iostream>
#include <cmath>

bool is_zero(double in)
{
    return std::fabs(in) < 0.0000001;
}

int main()
{
    double numerator = 953;
    double denominator = 0.1;
    double quotient = numerator / denominator;
    double fmod = std::fmod (numerator, denominator);
    double rem = std::remainder(numerator, denominator);

    if (is_zero(fmod))
        fmod = 0;
    if (is_zero(rem))
        rem = 0;

    std::cout << "quotient: " << quotient << ", fmod: " << fmod << ", rem: " << rem << std::endl;
    return 0;
}
Output:
quotient: 9530, fmod: 0.1, rem: 0
Because they are different functions.
std::remainder(x, y) calculates IEEE remainder which is x - (round(x/y)*y) where round is rounding half to even (so in particular round(1.0/2.0) == 0)
std::fmod(x, y) calculates x - trunc(x/y)*y. When you divide 953 by 0.1 you may get a number slightly smaller than 9530, so truncation gives 9529. So as a result you get 953.0 - 952.9 = 0.1
Welcome to floating point math. Here's what happens: One tenth cannot be represented exactly in binary, just as one third cannot be represented exactly in decimal. As a result, the division produces a result slightly below 9530. The floor operation produces the integer 9529 instead of 9530. And then this leaves 0.1 left over.
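A sketch that makes the mechanism visible (the values assume IEEE-754 doubles; the quotient, its truncation, and its rounding tell the whole story):
#include <cmath>
#include <cstdio>

int main() {
    double x = 953, y = 0.1;  // y is actually slightly above 0.1
    double q = x / y;
    printf("x/y        = %.17g\n", q);           // slightly below 9530
    printf("trunc(x/y) = %g\n", std::trunc(q));  // 9529 -> the n used by fmod
    printf("round(x/y) = %g\n", std::round(q));  // 9530 -> the n used by remainder
    printf("fmod       = %.17g\n", std::fmod(x, y));       // close to 0.1
    printf("remainder  = %.17g\n", std::remainder(x, y));  // tiny value near zero
}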