Do we need an epsilon value for less-than or greater-than comparison of float values? [duplicate] - c++

I have gone through different threads about comparing float values with less-than or greater-than rather than equality, but it is still not clear to me: do we need epsilon-value logic to compare floats with less-than or greater-than as well?
For example:
float a, b;
if (a < b) // is this the correct way to compare two float values, or do we need an epsilon value for the less-than comparison?
{
}
if (a > b) // is this the correct way to compare two float values for the greater-than comparison?
{
}
I know that for comparing floats for equality, we need some epsilon value:
bool AreSame(double a, double b)
{
return fabs(a - b) < EPSILON;
}
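For a concrete illustration of why the equality check needs a tolerance, here is a minimal, self-contained demonstration (the EPSILON value here is only an example; it must be chosen per application):
#include <cmath>
#include <cstdio>

const double EPSILON = 1e-9; // example value only

bool AreSame(double a, double b)
{
    return std::fabs(a - b) < EPSILON;
}

int main()
{
    double x = 0.1 + 0.2; // actually 0.30000000000000004...
    double y = 0.3;
    std::printf("%d\n", x == y);        // 0: plain == is defeated by rounding
    std::printf("%d\n", AreSame(x, y)); // 1: the epsilon test treats them as equal
    std::printf("%d\n", x > y);         // 1: note that > already "decides" here
}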

It really depends on what should happen when both values are close enough to be seen as equal, meaning fabs(a - b) < EPSILON. In some use cases (for example, computing statistics), it is not very important whether the comparison of two close values yields equality or not.
If it matters, you should first determine the uncertainty of the values. It really depends on the use case (where the input values come from and how they are processed); two values differing by less than that uncertainty should then be considered equal. But that equality is no longer a true mathematical equivalence relation: you can easily imagine how to build a chain of close values between two truly different values. In mathematical terms, the relation is not transitive (or is only "almost transitive", in everyday language).
I am sorry, but as soon as you have to process approximations there cannot be a single precise and consistent way: you have to think about the real-world use case to determine how you should handle the approximation.

When you are working with floats, it is inevitable that you will run into precision errors.
To mitigate this, when checking two floats for equality we often check whether their difference is small enough.
For less-than and greater-than, however, there is no way to tell with full certainty which float is larger. The best approach (presumably for your intentions) is to first check whether the two floats are the same, using the AreSame function. If so, return false (since a = b implies that a < b and a > b are both false).
Otherwise, return the value of a < b or a > b as appropriate.
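As a minimal sketch of that approach (reusing the asker's AreSame above, whose EPSILON is application dependent):
// Hypothetical helpers built on AreSame from the question.
bool LessThan(double a, double b)
{
    return !AreSame(a, b) && (a < b);
}
bool GreaterThan(double a, double b)
{
    return !AreSame(a, b) && (a > b);
}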

The answer is application dependent.
If you are sure that a and b are sufficiently different that numerical errors will not reverse the order, then a < b is good enough.
But if a and b are dangerously close, you might require a < b + EPSILON. In such a case, it should be clear to you that < and ≤ are not distinguishable.
Needless to say, EPSILON should be chosen with the greatest care (which is often pretty difficult).

It ultimately depends on your application, but I would say generally no.
The problem, very simplified, is that if you calculate (1/3) * 3 and get the answer 0.999999, you want that to compare equal to 1. This is why we use epsilon values for equality comparisons (and the epsilon should be chosen according to the application and the expected precision).
On the other hand, if you want to sort a list of floats then by default the 0.999999 value will sort before 1. But then again what would the correct behavior be? If they both are sorted as 1, then it will be somewhat random which one is actually sorted first (depending on the initial order of the list and the sorting algorithm you use).
The problem with floating point numbers is not that they are "random" and that it is impossible to predict their exact values. The problem is that base-10 fractions don't translate cleanly into base-2 fractions, and that non-repeating decimals in one system can translate into repeating ones in the other - which then results in rounding errors when truncated to a finite number of digits. We use epsilon values for equality comparisons to handle the rounding errors that arise from these back-and-forth translations.
But do be aware that the nice relations that ==, < and <= have for integers, don't always translate over to floating points exactly because of the epsilons involved. Example:
a = x
b = a + epsilon/2
c = b + epsilon/2
d = c + epsilon/2
Now: a == b, b == c, c == d, BUT a != d, a < d. In fact, you can continue the sequence keeping num(n) == num(n+1) and at the same time get an arbitrarily large difference between a and the last number in the sequence.
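A small sketch of that chain (AreSame and EPSILON are illustrative, as earlier in this thread):
#include <cmath>
#include <cstdio>

const double EPSILON = 1e-9; // illustrative value

bool AreSame(double a, double b) { return std::fabs(a - b) < EPSILON; }

int main()
{
    double first = 1.0;
    double prev = first;
    double cur = first;
    for (int i = 0; i < 10; ++i) {
        cur = prev + EPSILON / 2;
        // every neighbouring pair compares "equal"...
        std::printf("step %d: AreSame(prev, cur) = %d\n", i, AreSame(prev, cur));
        prev = cur;
    }
    // ...yet the endpoints do not: the relation is not transitive
    std::printf("AreSame(first, cur) = %d, first < cur = %d\n",
                AreSame(first, cur), first < cur);
}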

As others have stated, there will always be precision errors when dealing with floats.
Thus, you should use an epsilon value even for less-than / greater-than comparisons.
We know that for a to be less than b, a must first be different from b. Checking this is a simple not-equals, which uses the epsilon.
Then, once you know that a != b, the operator < is sufficient.

Related

Calculating positive non-integer power of negative base

To my knowledge
(-1)^1.8 = [(-1)^18]^0.1 = [1]^0.1 = 1
Hope I am not making a silly mistake.
std::pow(-1, 1.8) results in nan. Also, according to this link:
If base is finite and negative and exp is finite and non-integer, a domain error occurs and a range error may occur.
Is there a workaround to calculate the above operation with C++?
std::pow from <cmath> is for real numbers. The exponentiation (power) function of real numbers is not defined for negative bases.
Wikipedia says:
Real exponents with negative bases
Neither the logarithm method nor the rational exponent method can be used to define b^r as a real number for a negative real number b and an arbitrary real number r. Indeed, e^r is positive for every real number r, so ln(b) is not defined as a real number for b ≤ 0.
The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^r has a unique continuous extension from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined.
For example, consider b = −1. The nth root of −1 is −1 for every odd natural number n. So if n is an odd positive integer, (−1)^(m/n) = −1 if m is odd, and (−1)^(m/n) = 1 if m is even. Thus the set of rational numbers q for which (−1)^q = 1 is dense in the rational numbers, as is the set of q for which (−1)^q = −1. This means that the function (−1)^q is not continuous at any rational number q where it is defined.
On the other hand, arbitrary complex powers of negative numbers b can be defined by choosing a complex logarithm of b.
Powers of complex numbers
Complex powers of positive reals are defined via e^x as in the section Complex exponents with positive real bases above [omitted from this quote]. These are continuous functions.
Trying to extend these functions to the general case of noninteger powers of complex numbers that are not positive reals leads to difficulties. Either we define discontinuous functions or multivalued functions. Neither of these options is entirely satisfactory.
The rational power of a complex number must be the solution to an algebraic equation. Therefore, it always has a finite number of possible values. For example, w = z^(1/2) must be a solution to the equation w^2 = z. But if w is a solution, then so is −w, because (−1)^2 = 1. A unique but somewhat arbitrary solution called the principal value can be chosen using a general rule which also applies for nonrational powers.
Complex powers and logarithms are more naturally handled as single valued functions on a Riemann surface. Single valued versions are defined by choosing a sheet. The value has a discontinuity along a branch cut. Choosing one out of many solutions as the principal value leaves us with functions that are not continuous, and the usual rules for manipulating powers can lead us astray.
So, before calculating the result, you must first choose what you are calculating. The C++ standard library has in <complex> a function template std::complex<T> pow(const complex<T>& x, const T& y), which is specified (via the definition of cpow in the C standard) to calculate:
The cpow functions compute the complex power function x^y, with a branch cut for the first parameter along the negative real axis.
For (-1)^1.8, the result would be e^(-iπ/5) ≈ 0.809017 − 0.587785i.
This is not what you expected as a result. There is no exponentiation function in the C++ standard library that would calculate the result that you want.
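For illustration, a minimal sketch using the <complex> overload mentioned above; the printed value is the principal-branch result, not the real 1 hoped for in the question:
#include <complex>
#include <iostream>

int main()
{
    std::complex<double> base(-1.0, 0.0);
    std::complex<double> result = std::pow(base, 1.8);
    std::cout << result << '\n'; // prints roughly (0.809017,-0.587785)
}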

Given (a, b) compute the maximum value of k such that a^{1/k} and b^{1/k} are whole numbers

I'm writing a program that tries to find the minimum value of k > 1 such that the kth roots of a and b (which are both given) are whole numbers.
Here's a snippet of my code, which I've commented for clarification.
#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    // Declare the variables a and b.
    double a;
    double b;
    // Read in variables a and b.
    while (cin >> a >> b) {
        int k = 2;
        // We require the kth root of a and b to both be whole numbers.
        // "while a^{1/k} and b^{1/k} are not both whole numbers..."
        while ((fmod(pow(a, 1.0/k), 1) != 0) || (fmod(pow(b, 1.0/k), 1) != 0)) {
            k++;
        }
        // ... use k here ...
    }
}
Pretty much, I read in (a, b), and I start from k = 2 and increment k until the kth roots of a and b are both congruent to 0 mod 1 (meaning that they are divisible by 1 and thus whole numbers).
But, the loop runs infinitely. I've tried researching, and I think it might have to do with precision error; however, I'm not too sure.
Another approach I've tried is changing the loop condition to check whether the floor of a^{1/k} equals a^{1/k} itself. But again, this runs infinitely, likely due to precision error.
Does anyone know how I can fix this issue?
EDIT: for example, when (a, b) = (216, 125), I want to have k = 3 because 216^(1/3) and 125^(1/3) are both integers (namely, 6 and 5).
That is not a programming problem but a mathematical one:
if a is a real number, k a positive integer, and a^(1./k) an integer, then a is an integer. (Otherwise the aim is to toy with approximation error.)
So the fastest approach may be to first check that a and b are integers, then compute a prime decomposition a = p0^e0 * p1^e1 * ..., where the pi are distinct primes.
Notice that, for a^(1/k) to be an integer, each ei must also be divisible by k. In other words, k must be a common divisor of the ei. The same must be true for the prime-factorization exponents of b if b^(1/k) is to be an integer.
Thus the largest k is the greatest common divisor of all the ei of both a and b.
With your approach you will have problems with large numbers. IEEE 754 binary64 floating point (the case of double on x86) has 53 significant bits. That means that every double larger than 2^53 is an integer.
The function pow(x, 1./k) can return the same value for two different x, so with your approach you will necessarily get false answers. For example, the numbers 5^5*2^90 and 3^5*2^120 are exactly representable as double, and the result of the algorithm is k=5. You may find this value of k with these numbers, but you will also find k=5 for 5^5*2^90 - 2^49 and 3^5*2^120, because pow(5^5*2^90 - 2^49, 1./5) == pow(5^5*2^90, 1./5). Demo here.
On the other hand, as there are only 53 significant bits, prime decomposition of such integers is trivial.
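A minimal sketch of that factorization approach (assuming a and b are positive integers that fit in 64 bits; std::gcd requires C++17):
#include <cstdint>
#include <iostream>
#include <numeric> // std::gcd

// Fold the exponents of n's prime factorization into g via gcd.
uint64_t gcd_of_exponents(uint64_t n, uint64_t g)
{
    for (uint64_t p = 2; p * p <= n; ++p) {
        if (n % p == 0) {
            uint64_t e = 0;
            while (n % p == 0) { n /= p; ++e; }
            g = std::gcd(g, e);
        }
    }
    if (n > 1) g = std::gcd(g, uint64_t{1}); // leftover prime factor has exponent 1
    return g;
}

int main()
{
    uint64_t a, b;
    while (std::cin >> a >> b) {
        uint64_t g = 0; // gcd(0, x) == x
        g = gcd_of_exponents(a, g);
        g = gcd_of_exponents(b, g);
        std::cout << g << '\n'; // the largest k; no k > 1 exists when g == 1
    }
}
For (a, b) = (216, 125) this prints 3, matching the example in the question.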
Floating numbers are not mathematical real numbers. The computation is "approximate". See http://floating-point-gui.de/
You could replace the test fmod(pow(a, 1.0/k), 1) != 0 with something like fabs(pow(a, 1.0/k) - round(pow(a, 1.0/k))) > 0.0000001, which also catches results whose fractional part lands just below 1 (e.g. a computed root of 4.9999996). And play with various such ε instead of 0.0000001; see also std::numeric_limits<double>::epsilon(), but use it carefully, since pow might give some error in its computations, and 1.0/k also injects imprecision - the details are very complex, dive into the IEEE 754 specification.
Of course, you could (and probably should) define your bool almost_equal(double x, double y) function (and use it instead of ==, and use its negation instead of !=).
As a rule of thumb, never test floating point numbers for equality (i.e. with ==); instead, test whether the distance between them is small enough: replace a test like x == y (respectively x != y) with fabs(x - y) < EPSILON (respectively fabs(x - y) > EPSILON), where EPSILON is a small positive number - so equality becomes "a small enough distance" and inequality "a large enough distance".
And avoid floating point in integer problems.
Actually, predicting or estimating floating point accuracy is very difficult. You might want to consider tools like CADNA. My colleague Franck Védrine is an expert on static program analyzers to estimate numerical errors (see e.g. his TERATEC 2017 presentation on Fluctuat). It is a difficult research topic, see also D.Monniaux's paper the pitfalls of verifying floating-point computations etc.
And floating point errors have, in some cases, cost human lives (or billions of dollars). Search the web for details. There are cases where all the digits of a computed number are wrong (because errors may accumulate, and the final result was obtained by combining thousands of operations)! There is some indirect relationship with chaos theory, because many programs can exhibit numerical instability.
As others have mentioned, comparing floating point values for equality is problematic. If you find a way to work directly with integers, you can avoid this problem. One way to do so is to raise integers to the k power instead of taking the kth root. The details are left as an exercise for the reader.
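One possible shape of that exercise, as a hedged sketch (the names and bounds are illustrative; assumes k >= 1 and n < UINT64_MAX): binary-search the candidate kth root and raise it back to the kth power, guarding against overflow.
#include <cstdint>

// Compute base^exp, but report anything exceeding cap as cap + 1
// (assumes cap < UINT64_MAX so cap + 1 does not wrap).
uint64_t ipow_capped(uint64_t base, unsigned exp, uint64_t cap)
{
    uint64_t r = 1;
    while (exp--) {
        if (base != 0 && r > cap / base) return cap + 1; // would exceed cap
        r *= base;
    }
    return r;
}

// True when n == root^k for some integer root, found by binary search.
bool is_perfect_kth_power(uint64_t n, unsigned k)
{
    uint64_t lo = 0, hi = n;
    while (lo <= hi) {
        uint64_t mid = lo + (hi - lo) / 2;
        uint64_t p = ipow_capped(mid, k, n);
        if (p == n) return true;
        if (p < n) lo = mid + 1;
        else hi = mid - 1;
    }
    return false;
}
With this, is_perfect_kth_power(216, 3) and is_perfect_kth_power(125, 3) are both true, with no floating point involved.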

Is rounding a correct way of making float/double comparisons?

With this question as a base, it is well known that we should not apply the equality comparison operator to floating point variables, due to numeric errors (this is not bound to any particular programming language):
bool CompareDoubles1 (double A, double B)
{
return A == B;
}
The above code is not right.
My questions are:
Is it right to round both numbers and then compare them?
Is it more efficient?
For instance:
bool CompareDoubles1 (double A, double B)
{
    double a = round(A, 4);
    double b = round(B, 4);
    return a == b;
}
Is it correct?
EDIT
I'm considering that round is a method that takes a double (the number) and an int (the precision):
double round(double number, int precision);
EDIT
I think a better idea of what I mean with this question is expressed by this compare method:
bool CompareDoubles1 (double A, double B, int precision)
{
    // precision could be the error expected when rounding
    double a = round(A, precision);
    double b = round(B, precision);
    return a == b;
}
Usually, if you really have to compare floating values, you'd specify a tolerance:
bool CompareDoubles1 (double A, double B, double tolerance)
{
return std::abs(A - B) < tolerance;
}
Choosing an appropriate tolerance will depend on the nature of the values and the calculations that produce them.
Rounding is not appropriate: two very close values, which you'd want to compare equal, might round in different directions and appear unequal. For example, when rounding to the nearest integer, 0.3 and 0.4 would compare equal, but 0.499999 and 0.500001 wouldn't.
A common comparison for doubles is implemented as
bool CompareDoubles2 (double A, double B)
{
return std::abs(A - B) < 1e-6; // small magic constant here
}
It is clearly not as efficient as the check A == B, because it involves more steps, namely subtraction, calling std::abs and finally comparison with a constant.
The same argument about efficiency holds for your proposed solution:
bool CompareDoubles1 (double A, double B)
{
double a = round(A,4); // the magic constant hides in the 4
double b = round(B,4); // and here again
return a == b;
}
Again, this won't be as efficient as direct comparison, but -- again -- it doesn't even try to do the same.
Whether CompareDoubles2 or CompareDoubles1 is faster depends on your machine and the choice of magic constants. Just measure it. You need to make sure to supply matching magic constants, otherwise you are checking for equality with a different trust region which yields different results.
I think comparing the difference with a fixed tolerance is a bad idea.
Say what happens if you set the tolerance to 1e-6, but the two numbers you compare are
1.11e-9 and 1.19e-9?
These would be considered equal, even though they differ after the second significant digit. This may not be what you want.
I think a better way to do the comparison is
equal = ( fabs(A - B) <= tol*max(fabs(A), fabs(B)) )
Note, the <= (and not <), because the above must also work for 0==0. If you set tol=1e-14, two numbers will be considered equal when they are equal up to 14 significant digits.
Sidenote: When you want to test if a number is zero, then the above test might not be ideal and then one indeed should use an absolute threshold.
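Putting the above together, a minimal sketch (both tolerance values are illustrative):
#include <algorithm>
#include <cmath>

// rel_tol handles the general case; abs_tol guards comparisons near zero,
// as the sidenote suggests.
bool nearly_equal(double a, double b,
                  double rel_tol = 1e-14, double abs_tol = 0.0)
{
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= std::max(abs_tol, rel_tol * scale);
}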
If the round function used in your example means rounding to the 4th decimal digit, this is not correct at all. For example, if A and B are 0.000003 and 0.000004, they would both be rounded to 0.0 and would therefore compare equal.
A general-purpose comparison function must not work with a constant tolerance but with a relative one. But this is all explained in the post you cite in your question.
There is no 'correct' way to compare floating point values (even f == 0.0 might be correct). Different comparisons may be suitable. Have a look at http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
Similar to other posts, but introducing scale-invariance: If you are doing something like adding two sets of numbers together and then you want to know if the two set sums are equal, you can take the absolute value of the log-ratio (difference of logarithms) and test to see if this is less than your prescribed tolerance. That way, e.g. if you multiply all your numbers by 10 or 100 in summation calculations, it won't affect the result about whether the answers are equal or not. You should have a separate test to determine if two numbers are equal because they are close enough to 0.
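A minimal sketch of that log-ratio test, assuming both inputs are strictly positive (zero or negative values need the separate near-zero handling mentioned above):
#include <cmath>

// Multiplying a and b by the same factor leaves log(a) - log(b) unchanged,
// so the test is scale-invariant. The tolerance is illustrative.
bool log_ratio_equal(double a, double b, double tol = 1e-12)
{
    return std::fabs(std::log(a) - std::log(b)) < tol;
}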

Floating Point, how much can I trust less than / greater than comparisons?

Let's say I have two floating point numbers, and I want to compare them. If one is greater than the other, the program should take one fork. If the opposite is true, it should take another path. And it should do the same thing, if the value being compared is nudged very slightly in a direction that should still make it compare true.
It's a difficult question to phrase, so I wrote this to demonstrate it -
float a = random();
float b = random(); // always returns a number (no infinity or NaNs)
if (a < b) {
    if (!(a < b + FLOAT_EPSILON)) launchTheMissiles();
    buildHospitals();
} else if (a >= b) {
    if (!(a >= b - FLOAT_EPSILON)) launchTheMissiles();
    buildOrphanages();
} else {
    launchTheMissiles(); // This should never be called, in any branch
}
Given this code, is launchTheMissiles() guaranteed to never be called?
If you can guarantee that a and b are not NaNs or infinities, then you can just do:
if (a<b) {
…
} else {
…
}
The set of all floating point values except for infinities and NaNs comprises a total ordering (with a glitch for the two representations of zero, but that shouldn't matter for you), which is not unlike working with the normal set of integers - the only difference is that the magnitude of the intervals between subsequent values is not constant, as it is with integers.
In fact, the IEEE 754 has been designed so that comparisons of non-NaN non-infinity values of the same sign can be done with the same operations as normal integers (again, with a glitch with zero). So, in this specific case, you can think of these numbers as of “better integers”.
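A small sketch of that integer-ordering property for finite, positive floats (the memcpy is the well-defined way to inspect the bit pattern):
#include <cstdint>
#include <cstdio>
#include <cstring>

// Copy the bit pattern of a float into an unsigned integer.
uint32_t bits_of(float f)
{
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

int main()
{
    float a = 1.5f, b = 2.25f; // finite, positive values
    std::printf("%d %d\n", a < b, bits_of(a) < bits_of(b)); // prints: 1 1
}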
Short answer: it is guaranteed never to be called.
If a < b, then a will always be less than b plus a positive amount, however small. In that case, testing whether a is less than b + that amount will be true.
The third case won't get reached.
Tests for inequality are exact, as are tests for equality. People get confused because they don't realize that the values they are working with might not be exactly what they think they are. So, yes, the comment on the final function call is correct. That branch will never be taken.
The IEEE 754 (floating point) standard states that addition or subtraction can result in a positive or negative infinity, so b + FLOAT_EPSILON and b - FLOAT_EPSILON can result in positive or negative infinity if b is FLT_MAX or -FLT_MAX. The floating point standard also states that infinity compares as you would expect, with FLT_MAX < +infinity returning true and -FLT_MAX > -infinity returning true as well.
For a closer look at the floating point format and precision issues from a practical standpoint, I recommend taking a look at Christer Ericson's book Real Time Collision Detection or Bruce Dawson's blog posts on the subject, the latest of which (with a nice table of contents!) is at http://randomascii.wordpress.com/2013/02/07/float-precision-revisited-nine-digit-float-portability/.
What about a less-than check with an epsilon window? If a is less than b, then a cannot be equal to b.
#include <cmath>
#include <limits>
/**
 * checks whether a == b with epsilon window
 */
template <typename T>
bool eq(T a, T b) {
    T e = std::numeric_limits<T>::epsilon();
    return std::fabs(a - b) <= e;
}
/**
 * checks whether a < b with epsilon window
 */
template <typename T>
bool lt(T a, T b) {
    if (!eq(a, b)) { // if a < b then a != b
        return a < b;
    }
    return false;
}
/**
 * checks whether a <= b with epsilon window
 */
template <typename T>
bool lte(T a, T b) {
    if (eq(a, b)) {
        return true;
    }
    return a < b;
}

When comparing for equality is it okay to use `==`?

When comparing for equality is it okay to use ==?
For example:
int a = 3;
int b = 4;
If checking for equality should you use:
if (a == b)
{
. . .
}
Would the situation change if floating point numbers were used?
'==' is perfectly good for integer values.
You should not compare floats for equality; use a tolerance approach:
if (fabs(a - b) < tolerance)
{
// a and b are equal to within tolerance
}
Re floating points: yes. Don't use == for floats (or know EXACTLY what you're doing if you do). Rather use something like
if (fabs(a - b) < SOME_DELTA) {
...
}
EDIT: changed abs() to fabs()
Doing < and > comparisons doesn't really help you with rounding errors. Use the solution given by Mark Shearar. Direct equality comparisons for floats are not always bad, though. You can use them if some specific value (e.g. 0.0 or 1.0) is directly assigned to a variable, to check if the variable still has that value. It is only after calculations where the rounding errors screw up equality checks.
Notice that comparing a NaN value to anything (also another NaN) with <, >, <=, >= or == returns false. != returns true.
In many classes, operator== is typically implemented as (!(a < b || b < a)), so you should go ahead and use ==. Except for floats, as Mitch Wheat said above.
While comparing ints, use ==. Using "<" and ">" together to check equality on an int results in slower code, because it takes two comparisons instead of one, taking twice as long. (Although the compiler will probably fix it for you, you should not get used to writing bad code.)
Remember, early optimization is bad, but early inefficient code is just as bad.
EDIT: Fixed some English...
For integers, == does just what you expect. If they're equal, they're equal.
For floats, it's another story. Operations produce imprecise results and errors accumulate. You need to be a little fuzzy when dealing with numbers. I use
if ( std::abs( a - b ) < std::abs( a ) * ( std::numeric_limits<float_t>::epsilon() * error_margin ) )
where float_t is a typedef; this gives me as much precision as possible (assuming error_margin was calculated correctly) and allows easy adjustment to another type.
Furthermore, some floating-point values are not numbers: there's infinity, minus infinity, and of course not-a-number. == does funny things with those. Infinity equals infinity, but not-a-number does not equal not-a-number.
Finally, there are positive and negative zero, which are distinct but equal to each other! To separate them, you need to do something like check whether the inverse is positive or negative infinity. (Just make sure you won't get a divide-by-zero exception.)
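A quick sketch of that signed-zero behaviour (assuming IEC 60559 semantics, where 1.0 divided by a signed zero yields a signed infinity; std::signbit is the simpler way to distinguish the zeros in practice):
#include <cmath>
#include <cstdio>

int main()
{
    double pz = 0.0;
    double nz = -0.0;
    std::printf("%d\n", pz == nz);              // 1: the two zeros compare equal
    std::printf("%g %g\n", 1.0 / pz, 1.0 / nz); // inf -inf
    std::printf("%d %d\n", std::signbit(pz), std::signbit(nz)); // 0 1
}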
So, unless you have a more specific question, I hope that handles it…