Sorting a list containing NaNs

If I have a list of floating-point numbers containing positive infinity, negative infinity, other random decimal numbers, and one NaN, where should the NaN be after the list has been sorted? I'm using bubble sort, if that helps.

In order to sort you need a consistent order, which means, for example, making an ordering rule for NaN.
Fortunately, the work has already been done in Java. java.lang.Double is Comparable, and its compareTo uses extended rules including "Double.NaN is considered by this method to be equal to itself and greater than all other double values (including Double.POSITIVE_INFINITY)."
It also has a compare method that compares two double primitives using those extended rules, rather than the <= etc. rules.
If you are programming in Java you can use this directly in your sort. If you are using float rather than double, see the corresponding method in java.lang.Float. If you are programming in another language, you can still read and copy the rules from Java, and use them in your comparison function.
If you use this in your sort you should expect NaN to be at the very end of the sorted list, after all finite values and positive infinity.
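If you want the same behaviour outside Java, here is a minimal C++ sketch of a comparison function with the same essential rule (every NaN sorts after every other value, including positive infinity); it is an illustration only, and it ignores Java's additional -0.0 < +0.0 rule:
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch: every NaN sorts after every non-NaN value, and all NaNs are
// treated as equivalent to each other. This is a valid strict weak order.
bool nan_last_less(double a, double b) {
    if (std::isnan(a)) return false;   // NaN is never "less than" anything
    if (std::isnan(b)) return true;    // any non-NaN is less than NaN
    return a < b;                      // ordinary ordering otherwise
}

int main() {
    std::vector<double> v{1.5, NAN, -INFINITY, INFINITY, 0.0};
    std::sort(v.begin(), v.end(), nan_last_less);
    // v is now: -inf, 0, 1.5, +inf, nan
}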

You cannot sort a list of floating-point values including NaN using <= as the comparison, because <= is not an order on floating-point values that include NaN: it is not reflexive (NaN <= NaN would have to hold for <= to be reflexive, and it doesn't).
You are breaking the prerequisites of the sorting algorithm. Anything can happen.

The NaN would be put at the beginning or the end of the sorted array depending on the implementation of Bubble Sort in question.
It depends entirely on how you define your ordering criterion.

Related

How do sorting algorithms sort containers and ranges of floats?

Since comparing floats is evil, if I have a container of floats and I sort it using a standard library sorting algorithm like std::sort, how does the algorithm sort them?
std::vector<float> vf{2.4f, 1.05f, 1.05f, 2.39f};
std::sort( vf.begin(), vf.end() );
So does the algorithm compare 1.05f and 1.05f?
Does it internally use something like std::fabs( 1.05f - 1.05f ) < 0.1?
Does this also apply to containers of doubles? Thank you!
So does the algorithm compare 1.05f and 1.05f?
Yes
Does it internally use something like std::fabs( 1.05f - 1.05f ) < 0.1?
No, it uses operator<, e.g. 1.05f < 1.05f. It never needs to compare for equality, so a comparison using an epsilon value is not needed.
Does this also apply to containers of doubles?
Yes, it applies to containers of any type (unless you provide your own comparison function to std::sort)
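A short illustration of both points: the default std::sort uses operator< only, and a caller can substitute any strict-weak-order comparison (the descending lambda below is just an example):
#include <algorithm>
#include <vector>

int main() {
    std::vector<float> vf{2.4f, 1.05f, 1.05f, 2.39f};

    // Default: elements are ordered with operator< only; no epsilon is involved.
    std::sort(vf.begin(), vf.end());                      // 1.05, 1.05, 2.39, 2.4

    // Optional: supply your own strict-weak-order comparison instead.
    std::sort(vf.begin(), vf.end(),
              [](float a, float b) { return a > b; });    // descending
}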
Since comparing floats is evil…
That is a myth and is false. Related to it is the myth that floating-point numbers approximate real numbers.
<, <=, >, and >= work fine for sorting numbers, and == and != also correctly compare whether two numbers are equal. <, <=, >, and >= will not serve when sorting data containing NaNs (the “Not a Number” datum).
Per the IEEE 754 Standard for Floating-Point Arithmetic and other common specifications of floating-point formats, any floating-point representation ±F·b^e (a sign, a significand F, a base b, and an exponent e) represents one number exactly. Even +∞ and −∞ are considered to be exact. It is the operations in floating-point arithmetic, not the numbers, that are specified to approximate real-number arithmetic. When a computational operation is performed, its result is the real-number result rounded to a representable number according to a chosen rounding rule, except that operations with domain errors may produce a NaN. (Round-to-nearest-ties-to-even is the most common rule, and there are several others.)
Thus, when you add or subtract numbers, multiply or divide numbers, take square roots, or convert numbers from one base to another (such as decimal character input to internal floating-point format), there may be a rounding error.
Some operations have no error. Comparison operations have no error. <, <=, >, >=, ==, and != always produce the true mathematical result, true or false, with no error. And therefore they may be used for sorting numbers. (With NaNs, <, <=, >, >=, and == always produce false, and != always produces true, because a NaN is never less than, equal to, or greater than a number, nor is a NaN less than, equal to, or greater than another NaN. Those are the correct results, but it means these comparisons are not useful for sorting data containing NaNs.)
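A small illustration of the distinction, assuming ordinary IEEE-754 doubles: the conversions and the addition round, but the comparison itself reports the exact truth about the values that were computed:
#include <iostream>

int main() {
    // The conversions of "0.1", "0.2", and "0.3" to double and the addition
    // each round to the nearest representable value...
    double sum = 0.1 + 0.2;

    // ...but the comparison itself is exact: it reports, without error,
    // whether the two computed doubles are equal. Here they are not.
    std::cout << std::boolalpha << (sum == 0.3) << '\n';   // prints "false"

    // Sorting with < is equally exact: the computed values are ordered
    // correctly, whatever rounding produced them.
}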
Understanding this distinction, that numbers are exact and operations may approximate, is essential for analyzing, designing, and proving algorithms involving floating-point arithmetic.
Of course, it may happen that in an array of numbers, the numbers contain errors from earlier operations: They differ from the numbers that would have been obtained by using real-number arithmetic. In this case, the numbers will be sorted into order based on their actual computed values, not on the values you would ideally like them to have. This is not a barrier to std::sort correctly sorting the numbers.
To sort data including NaNs, you need a total order predicate that reports whether one datum is earlier than another in the desired sort order, such as one that reports a NaN is later than any non-NaN. IEEE-754 defines a total order predicate, but I cannot speak to its availability in C++. (std::less does not seem to provide it, based on a quick test I tried.)
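For what it's worth, if a C++20 standard library is available, std::strong_order (from <compare>) is specified to be consistent with the IEEE-754 totalOrder predicate for floating-point types. A small sketch, assuming C++20 support:
#include <algorithm>
#include <cmath>
#include <compare>   // std::strong_order (C++20)
#include <vector>

int main() {
    std::vector<double> v{0.5, NAN, -0.0, +0.0, -INFINITY, INFINITY};

    // For floating-point types, std::strong_order yields a std::strong_ordering
    // consistent with IEEE-754 totalOrder:
    // -NaN < -inf < ... < -0.0 < +0.0 < ... < +inf < +NaN.
    std::sort(v.begin(), v.end(),
              [](double a, double b) { return std::strong_order(a, b) < 0; });
}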

Is `std::atof` guaranteed to produce identical output when given identical string input?

I'm reading double values from a file as strings and parsing them with std::atof. Afterwards, I'm using the values as keys in an unordered map. It seems to be working correctly, but is it guaranteed to work in 100% of cases?
I'm asking because it's extremely hard to reproduce an identical double value once you do any arithmetic operations with it.
Is std::atof guaranteed to produce exactly the same double value if given the same string value multiple times?
You can round trip a number with DBL_DIG significant digits or fewer via a std::string. Typically DBL_DIG is 15 but that depends on the floating point scheme used on your platform.
That's not quite the same as what you are asking. For example, it's possible on some platforms to change the floating point rounding mode at runtime, so you could end up with different results even during the execution of a program. Then you have signed zeros, subnormal numbers, and NaN (in its various guises) to worry about.
There are just too many pitfalls. I would not be comfortable using floating point types as map keys. It would be far, far better to use the std::string as the key in your map.
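A sketch of that suggestion, with illustrative names: since the values arrive as text anyway, the text itself can be the key, and the parsed double is kept only for arithmetic.
#include <cstdlib>
#include <string>
#include <unordered_map>

int main() {
    std::unordered_map<std::string, int> counts;   // key is the original text

    std::string token = "3.14159";                 // as read from the file
    double value = std::atof(token.c_str());       // still usable as a number

    ++counts[token];   // lookup/insert is exact: it is plain string equality
    (void)value;
    // Caveat: textually different spellings of the same number ("1.0", "1.00",
    // "1e0") become distinct keys; normalize the text first if that matters.
}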

Comparing double in C++, peer review

I have always had the problem of comparing double values for equality. There are functions around like some fuzzy_compare(double a, double b), but often enough I did not manage to find them in time. So I thought about building a wrapper class for double just for the comparison operator:
typedef union {
    uint64_t i;
    double d;
} number64;

bool Double::operator==(const double value) const {
    number64 a, b;
    a.d = this->value;
    b.d = value;
    // Different sign bits: unequal, unless both are zero (+0.0 vs -0.0).
    if ((a.i & 0x8000000000000000) != (b.i & 0x8000000000000000)) {
        if ((a.i & 0x7FFFFFFFFFFFFFFF) == 0 && (b.i & 0x7FFFFFFFFFFFFFFF) == 0)
            return true;
        return false;
    }
    // Different exponents: unequal.
    if ((a.i & 0x7FF0000000000000) != (b.i & 0x7FF0000000000000))
        return false;
    // Same sign and exponent: compare the mantissas.
    uint64_t diff = ((a.i & 0x000FFFFFFFFFFFFF) - (b.i & 0x000FFFFFFFFFFFFF)) & 0x000FFFFFFFFFFFFF;
    return diff < 2; // 2 here is kind of some epsilon, but integer and independent of value range
}
The idea behind it is:
First, compare the signs. If they differ, the numbers are different, except when all other bits are zero: that is comparing +0.0 with -0.0, which should be equal. Next, compare the exponents. If these differ, the numbers are different. Last, compare the mantissas. If the difference is small enough, the values are equal.
It seems to work, but just to be sure, I'd like a peer review. It could well be that I overlooked something.
And yes, this wrapper class needs all the operator overloading stuff. I skipped that because they're all trivial. The equality operator is the main purpose of this wrapper class.
This code has several problems:
Small values on different sides of zero always compare unequal, no matter how (not) far apart.
More importantly, -0.0 compares unequal with +epsilon but +0.0 compares equal with +epsilon (for some epsilon). That's really bad.
What about NaNs?
Values with different exponents compare unequal, even if one floating point "step" apart (e.g. the double before 1 compares unequal to 1, but the one after 1 compares equal...).
The last point could ironically be fixed by not distinguishing between exponent and mantissa: The binary representations of all positive floats are exactly in the order of their magnitude!
It appears that you want to just check whether two floats are a certain number of "steps" apart. If so, this boost function might help. But I would also question whether that's actually reasonable:
Should the smallest positive non-denormal compare equal to zero? There are still many (denormal) floats between them. I doubt this is what you want.
If you operate on values that are expected to be of magnitude 1e16, then 1 should compare equal to 0, even though half of all positive doubles are between 0 and 1.
It is usually most practical to use a relative + absolute epsilon. But I think it will be most worthwhile to check out this article, which discusses the topic of comparing floats more extensively than I could fit into this answer:
https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
To cite its conclusion:
Know what you’re doing
There is no silver bullet. You have to choose wisely.
If you are comparing against zero, then relative epsilons and ULPs based comparisons are usually meaningless. You’ll need to use an absolute epsilon, whose value might be some small multiple of FLT_EPSILON and the inputs to your calculation. Maybe.
If you are comparing against a non-zero number then relative epsilons or ULPs based comparisons are probably what you want. You’ll probably want some small multiple of FLT_EPSILON for your relative epsilon, or some small number of ULPs. An absolute epsilon could be used if you knew exactly what number you were comparing against.
If you are comparing two arbitrary numbers that could be zero or non-zero then you need the kitchen sink. Good luck and God speed.
Above all you need to understand what you are calculating, how stable the algorithms are, and what you should do if the error is larger than expected. Floating-point math can be stunningly accurate but you also need to understand what it is that you are actually calculating.
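As a concrete illustration of the "relative + absolute epsilon" approach mentioned above, here is a minimal sketch; the tolerance values are placeholders that would have to be chosen per application:
#include <algorithm>
#include <cmath>

// Sketch: treat a and b as "nearly equal" if they are within an absolute
// tolerance (useful near zero) or within a relative tolerance of the larger
// magnitude. Both tolerances are application-specific placeholders.
bool nearly_equal(double a, double b,
                  double abs_eps = 1e-12, double rel_eps = 1e-9) {
    const double diff = std::fabs(a - b);
    if (diff <= abs_eps)
        return true;                                // absolute test, near zero
    const double scale = std::max(std::fabs(a), std::fabs(b));
    return diff <= rel_eps * scale;                 // relative test otherwise
}
Note that such a predicate is not transitive, so it cannot serve as an equality or ordering criterion for sets and maps; it is only a pairwise check.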
You store into one union member and then read from another. That causes an aliasing problem (undefined behaviour), because the C++ language requires that objects of different types do not alias.
There are a few ways to remove the undefined behaviour:
Get rid of the union and just memcpy the double into a uint64_t. The portable way (see the sketch after this list).
Mark union member i type with [[gnu::may_alias]].
Insert a compiler memory barrier between storing into union member d and reading from member i.
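A minimal sketch of the memcpy option, assuming a 64-bit double:
#include <cstdint>
#include <cstring>

// Copy the object representation of the double into an integer instead of
// reading an inactive union member.
std::uint64_t bits_of(double d) {
    static_assert(sizeof(std::uint64_t) == sizeof(double), "assumes 64-bit double");
    std::uint64_t i;
    std::memcpy(&i, &d, sizeof i);   // well-defined; compilers optimize this away
    return i;
}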
Frame the question this way:
We have two numbers, a and b, that have been computed with floating-point arithmetic.
If they had been computed exactly with real-number mathematics, we would have two (possibly different) values; call them a' and b'.
We want to compare a and b and get an answer that tells us whether a' equals b'.
In other words, you are trying to correct for errors that occurred while computing a and b. In general, that is impossible, of course, because we do not know what a' and b' are. We only have the approximations a and b.
The code you propose falls back to another strategy:
If a and b are close to each other, we will accept that a' equals b'. (In other words: If a is close to b, it is possible that a' equals b', and the differences we see are only due to calculation errors, so we will accept that a' equals b' without further evidence.)
There are two problems with this strategy:
This strategy will incorrectly accept that a' equals b' even when that is not true, just because a and b are close.
We need to decide how close to require a and b to be.
Your code attempts to address the latter: It is establishing some tests about whether a and b are close enough. As others have pointed out, it is severely flawed:
It treats numbers as different if they have different signs, but floating-point arithmetic can cause a to be negative even when a' is positive, and vice versa.
It treats numbers as different if they have different exponents, but floating-point arithmetic can cause a to have a different exponent from a'.
It treats numbers as different if they differ by more than a fixed number of ULP (units of least precision), but floating-point arithmetic can, in general, cause a to differ from a' by any amount.
It assumes an IEEE-754 format and needlessly uses aliasing with behavior not defined by the C++ standard.
The approach is fundamentally flawed because it needlessly fiddles with the floating-point representation. The actual way to determine from a and b whether a' and b' might be equal is to figure out, given a and b, what sets of values a' and b' might have and whether those sets have any value in common.
In other words, given a, the value of a' might lie in some interval, (a - e_al, a + e_ar) (that is, all the numbers from a minus some error on the left to a plus some error on the right), and, given b, the value of b' might lie in some interval, (b - e_bl, b + e_br). If so, what you want to test is not some property of the floating-point representations but whether the two intervals (a - e_al, a + e_ar) and (b - e_bl, b + e_br) overlap.
To do that, you need to know, or at least have bounds on, the errors e_al, e_ar, e_bl, and e_br. But those errors are not fixed by the floating-point format. They are not 2 ULP or 1 ULP or any number of ULP scaled by the exponent. They depend on how a and b were computed. In general, the errors can range from 0 to infinity, and they can also be NaN.
So, to test whether a' and b' might be equal, you need to analyze the floating-point arithmetic errors that could have occurred. In general, this is difficult. There is an entire field of mathematics for it: numerical analysis.
If you have computed bounds on the errors, then you can just compare the intervals using ordinary arithmetic. There is no need to take apart the floating-point representation and work with the bits. Just use the normal add, subtract, and comparison operations.
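A minimal sketch of that interval test; the error bounds e_al, e_ar, e_bl, and e_br are placeholders that must come from your own analysis of how a and b were computed:
// Once you have bounds on the errors, "a' and b' might be equal" reduces
// to an interval-overlap test in ordinary arithmetic.
bool might_be_equal(double a, double e_al, double e_ar,
                    double b, double e_bl, double e_br) {
    // Do (a - e_al, a + e_ar) and (b - e_bl, b + e_br) overlap?
    // (Bounds treated as inclusive for simplicity.)
    return a - e_al <= b + e_br && b - e_bl <= a + e_ar;
}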
(The problem is actually more complicated than I allowed above. Given a computed value a, the potential values of a' do not always lie in a single interval. They could be an arbitrary set of points.)
As I have written previously, there is no general solution for comparing numbers containing arithmetic errors.
Once you figure out error bounds and write a test that returns true if a' and b' might be equal, you still have the problem that the test also accepts false positives: It will return true in some cases where a' and b' are not equal. In other words, you have just replaced a program that is wrong because it rejects equality even though a' and b' would be equal with a program that is wrong in other cases because it accepts equality where a' and b' are not equal. This is another reason there is no general solution: In some applications, accepting as equal numbers that are not equal is okay, at least for some situations. In other applications, that is not okay, and using a test like this will break the program.

Ways around using a double as a key in a std set/map

The problem of using doubles as keys in maps/sets is floating point precision.
Some people have suggested adding an epsilon in your compare function, but that means your keys will no longer fulfil the necessary strict weak ordering criterion. This means that you will get a different set/map depending on the order of inserting your elements.
In the case where you want to aggregate/combine/merge data based on double values, and are willing to allow a certain level of rounding/epsilon (clearly, you'll have to), is the following solution a good idea?
Convert all the doubles (which we intend to use as keys) into integers by multiplying them by a precision factor (e.g. 1e8) and rounding to the nearest integer ((int)(i + 0.5) if i > 0), then create a set/map that keys off these integers. When extracting the final values of the keys, divide the ints by the precision factor to get the double value back (albeit rounded).
"Convert all the doubles (where we intended as keys) into integers by multiplying them by the precision factor (e.g. 1e8) and rounding to the nearest integer (int)i+0.5(if i>0), then create a set/map that keys off these integers. When extracting the final values of the keys, divide the ints by the precision factor to get the double value back (albeit rounded)."
I would recommend using integer-type keys (e.g. long long) for the map in the first place, and converting them back to a double representation by dividing by a fixed precision factor.
But that depends on whether you can apply fixed-point math to your actual use case. If you need to cover a wide range of value magnitudes (e.g. from ±1e-7 to ±1e7), such an approach won't work.
"Convert all the doubles (which we intend to use as keys) into integers by multiplying them by a precision factor (e.g. 1e8) and rounding to the nearest integer ((int)(i + 0.5) if i > 0), then create a set/map that keys off these integers. When extracting the final values of the keys, divide the ints by the precision factor to get the double value back (albeit rounded)."
Instead of dividing by the precision factor to get the doubles back, simply store the double together with the associated value in a struct, and put that struct in the dictionary as the "value" for that integer key. That way, the original double value is still around and can be used for calculations. Just not for the key search.
If, however, you can live with slightly rounded values (due to the fact that you simply divide an integer by the precision factor), your suggested approach is already good enough.
As the other answer says, it very much depends on the range of the values. If some are extremely huge and others are extremely small, then your approach to get integer keys won't work. If they are only a few digits apart, then it might.
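A sketch combining the two suggestions above (the 1e8 scale factor is an arbitrary placeholder, and the approach still breaks down for magnitudes large enough to overflow the integer key):
#include <cmath>
#include <cstdint>
#include <map>

// Key the map on a quantized integer, and keep the original double in the
// mapped value so it is still available for calculations.
struct Entry {
    double original_key;   // kept for later calculations
    double payload;        // whatever is being aggregated
};

std::int64_t quantize(double d, double scale = 1e8) {
    return std::llround(d * scale);   // rounds to nearest, handles negatives too
}

int main() {
    std::map<std::int64_t, Entry> table;
    double k = 0.300000001, v = 42.0;
    table[quantize(k)] = Entry{k, v};   // 0.300000001 and 0.3 share a key here
}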

Hashing floating point values

Recently, I was curious how hash algorithms for floating points worked, so I looked at the source code for boost::hash_value. It turns out to be fairly complicated. The actual implementation loops over each digit in the radix and accumulates a hash value. Compared to the integer hash functions, it's much more involved.
My question is: why should a floating-point hash algorithm be any more complicated? Why not just hash the binary representation of the floating point value as if it was an integer?
Like:
std::size_t hash_value(float f)
{
    return hash_value(*(reinterpret_cast<int*>(&f)));
}
I realize that float is not guaranteed to be the same size as int on all systems, but that sort of thing could be handled with a few template meta-programs to deduce an integral type that is the same size as float. So what is the advantage of introducing an entirely different hash function that specifically operates on floating point types?
Take a look at https://svn.boost.org/trac/boost/ticket/4038
In essence it boils down to two things:
Portability: when you take the binary representation of a float, then on some platforms it could be possible that a float with the same value has multiple representations in binary. I don't know whether there actually is a platform where such an issue exists, but with the complication of denormalized numbers, I'm not sure whether this might actually happen.
The second issue is the one you mentioned: it might be that sizeof(float) does not equal sizeof(int).
I did not find anyone claiming that the boost hash actually produces fewer collisions. I assume that separating the mantissa from the exponent might help, but the link above does not suggest that this was the driving design decision.
One reason not to just use the bit pattern is that some different bit patterns must be considered equal and must therefore have the same hash code (a sketch of how to handle this follows the list), namely:
positive and negative zero
possibly denormalized numbers (I don't think this can occur with IEEE 754, but C allows other float representations)
possibly NaNs (there are many, at least in IEEE 754; it actually requires NaN patterns to compare unequal to themselves, which arguably means they cannot be meaningfully used in a hashtable)
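A minimal sketch of that idea for double (the float case is analogous), assuming 64-bit IEEE-754 and using memcpy rather than a cast to avoid the aliasing problems discussed earlier; only -0.0/+0.0 is canonicalized, since NaN keys are unusable in a hash table anyway, and the mixing step is arbitrary:
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hash the bit pattern, but first canonicalize values whose distinct bit
// patterns must compare equal (here, -0.0 and +0.0).
std::size_t hash_double_bits(double d) {
    if (d == 0.0) d = +0.0;           // folds -0.0 onto +0.0 (they compare equal)
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    return static_cast<std::size_t>(bits ^ (bits >> 32));   // cheap 64->size_t mix
}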
Why are you wanting to hash floating point values? For the same reason that comparing floating point values for equality has a number of pitfalls, hashing them can have similar (negative) consequences.
However, given that you really do want to do this, I suspect that the boost algorithm is complicated because, when you take denormalized numbers into account, different bit patterns can represent the same number (and should probably have the same hash). In IEEE 754 there are also both positive and negative 0 values that compare equal but have different bit patterns.
This probably wouldn't come up in the hashing if it hadn't already come up elsewhere in your algorithm, but you still need to be careful with signaling NaN values.
Additionally, what would be the meaning of hashing ±infinity and/or NaN? Specifically, NaN can have many representations; should they all result in the same hash? Infinity has just two representations, so it seems like it would work out OK.
I imagine it's so that two machines with incompatible floating-point formats hash to the same value.