C++ - Odd Reciprocal Inequivalence

I've come across a surprising oddity with floating point reciprocals that only seems to occur sometimes.
Why is it that, at unpredictable times, given two floats,
float a = ..., b = ...;
testing their equivalence at one time shows they're equal,
cout << (a == b ? "T" : "F") << endl; // prints T
yet, when adjusting the same line to test the equality of the reciprocals, and running the program no differently, they are suddenly not equal:
cout << (1/a == 1/b ? "T" : "F") << endl; // prints F
Here, a and b are not NaN, they're neither -INF nor +INF, and they're also not 0 (they're typically in the 3000 range with decimal values). I also noticed that, when compiling and running with both cout expressions present, they then somehow both print T.
Why would this be the case? I am very familiar with floating point numbers and related precision issues, but I would expect an operation on the same value to generate the same result. Or can division at times be a guesstimation/approximation instruction on certain CPUs?
Any clarification would be very appreciated.
EDIT:
As a side note, since I am starting to think this is something my compiler is doing, I am using MinGW32 GCC version 4.8.1. For targeting C++14 I'm using the flag -std=c++1y, as this version doesn't seem to support the -std=c++14 flag.
EDIT
Could this be a compiler error? I've determined that this is an issue in the GCC 4.8.1 compiler that occurs only when compiling with optimizations (-O2 or -O3).
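As a workaround sketch (assuming the excess x87 register precision discussed in the answers further down is the culprit; the values here are hypothetical stand-ins for whatever the program computes), forcing each reciprocal through a named, volatile double makes the compiler round both sides to 64 bits before comparing; building with -ffloat-store or -mfpmath=sse has a similar effect globally:
#include <iostream>

int main() {
    float a = 3163.25f, b = 3163.25f;   // hypothetical values in the ~3000 range
    // volatile forces each reciprocal through memory, truncating any
    // 80-bit x87 intermediate to a 64-bit double before the comparison.
    volatile double ra = 1 / a;
    volatile double rb = 1 / b;
    std::cout << (ra == rb ? "T" : "F") << std::endl;
    return 0;
}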

Related

sine result depends on C++ compiler used

I use the two following C++ compilers:
cl.exe : Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24210 for x86
g++ : g++ (Ubuntu 5.2.1-22ubuntu2) 5.2.1 20151010
When using the built-in sine function, I get different results. This is not critical, but sometimes the differences are too significant for my use. Here is an example with a 'hard-coded' value:
printf("%f\n", sin(5451939907183506432.0));
Result with cl.exe:
0.528463
Result with g++:
0.522491
I know that g++'s result is more accurate and that I could use an additional library to get this same result, but that's not my point here. I would really like to understand what happens here: why is cl.exe that wrong?
Funny thing, if I apply a modulo of (2 * pi) on the param, then I get the same result as g++...
[EDIT] Just because my example looks crazy for some of you: this is a part of a pseudorandom number generator. It is not important to know if the result of the sine is accurate or not: we just need it to give some result.
You have a 19-digit literal, but a double usually has 15-17 digits of precision. As a result, you can get a small relative error when converting to double, but an absolute error that is large enough to matter in the context of a sine calculation.
Actually, different implementations of the standard library have differences in treating such large numbers. For example, in my environment, if we execute
std::cout << std::fixed << 5451939907183506432.0;
g++ result would be 5451939907183506432.000000
cl result would be 5451939907183506400.000000
The difference is because versions of cl earlier than 19 have a formatting algorithm that uses only a limited number of digits and fills the remaining decimal places with zero.
Furthermore, let's look at this code:
double a[1000];
for (int i = 0; i < 1000; ++i) {
    a[i] = sin(5451939907183506432.0);
}
double d = sin(5451939907183506432.0);
cout << a[500] << endl;
cout << d << endl;
When executed with my x86 VC++ compiler the output is:
0.522491
0.528463
It appears that when filling the array sin is compiled to the call of __vdecl_sin2, and when there is a single operation, it is compiled to the call of __libm_sse2_sin_precise (with /fp:precise).
In my opinion, your number is too large for sin calculation to expect the same behavior from different compilers and to expect the correct behavior in general.
I think Sam's comment is closest to the mark. Whereas you're using a recentish version of GCC/glibc, which implements sin() in software (calculated at compile time for the literal in question), cl.exe for x86 likely uses the fsin instruction. The latter can be very imprecise, as described in the Random ASCII blog post, "Intel Underestimates Error Bounds by 1.3 quintillion".
Part of the problem with your example in particular is that Intel uses an imprecise approximation of pi when doing range reduction:
When doing range reduction from double-precision (53-bit mantissa) pi the results will have about 13 bits of precision (66 minus 53), for an error of up to 2^40 ULPs (53 minus 13).
According to cppreference:
The result may have little or no significance if the magnitude of arg is large
(until C++11)
It's possible that this is the cause of the problem, in which case you will want to manually do the modulo so that arg is not large.
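A sketch of that manual reduction (the hard-coded two_pi constant is only as accurate as a double allows, and the reduced argument still carries the rounding error of the original literal, so this mainly makes the two compilers agree rather than making the sine "correct"):
#include <cstdio>
#include <cmath>

int main() {
    double x = 5451939907183506432.0;
    const double two_pi = 6.28318530717958647692;  // 2*pi, rounded to double
    // Reduce into [0, 2*pi) first, so neither compiler has to range-reduce
    // a huge argument itself.
    double reduced = std::fmod(x, two_pi);
    std::printf("%f\n", std::sin(reduced));
    return 0;
}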

std::isinf does not work with -ffast-math. how to check for infinity

Sample code:
#include <iostream>
#include <cmath>
#include <stdint.h>
using namespace std;
static bool my_isnan(double val) {
    union { double f; uint64_t x; } u = { val };
    return (u.x << 1) > (0x7ff0000000000000u << 1);
}

int main() {
    cout << std::isinf(std::log(0.0)) << endl;
    cout << std::isnan(std::sqrt(-1.0)) << endl;
    cout << my_isnan(std::sqrt(-1.0)) << endl;
    cout << __isnan(std::sqrt(-1.0)) << endl;
    return 0;
}
Online compiler.
With -ffast-math, that code prints "0, 0, 1, 1" -- without, it prints "1, 1, 1, 1".
Is that correct? I thought that std::isinf/std::isnan should still work with -ffast-math in these cases.
Also, how can I check for infinity/NaN with -ffast-math? You can see the my_isnan doing this, and it actually works, but that solution is of course very architecture dependent. Also, why does my_isnan work here and std::isnan does not? What about __isnan and __isinf. Do they always work?
With -ffast-math, what is the result of std::sqrt(-1.0) and std::log(0.0). Does it become undefined, or should it be NaN / -Inf?
Related discussions: (GCC) [Bug libstdc++/50724] New: isnan broken by -ffinite-math-only in g++, (Mozilla) Bug 416287 - performance improvement opportunity with isNaN
Note that -ffast-math may make the compiler ignore/violate IEEE specifications, see http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/Optimize-Options.html#Optimize-Options :
This option is not turned on by any -O option besides -Ofast since it
can result in incorrect output for programs that depend on an exact
implementation of IEEE or ISO rules/specifications for math functions.
It may, however, yield faster code for programs that do not require
the guarantees of these specifications.
Thus, using -ffast-math you are not guaranteed to see infinity where you should.
In particular, -ffast-math turns on -ffinite-math-only, see http://gcc.gnu.org/wiki/FloatingPointMath which means (from http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/Optimize-Options.html#Optimize-Options )
[...] optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs
This means, by enabling the -ffast-math you make a promise to the compiler that your code will never use infinity or NaN, which in turn allows the compiler to optimize the code by, e.g., replacing any calls to isinf or isnan by the constant false (and further optimize from there). If you break your promise to the compiler, the compiler is not required to create correct programs.
Thus the answer is quite simple: if your code may have infinities or NaN (which is strongly implied by the fact that you use isinf and isnan), you cannot enable -ffast-math, because otherwise you might get incorrect code.
Your implementation of my_isnan works (on some systems) because it directly checks the binary representation of the floating point number. Of course, the processor still might do (some) actual calculations (depending on which optimizations the compiler does), and thus actual NaNs might appear in memory and you can check their binary representation, but as explained above, std::isnan might have been replaced by the constant false. It might equally well happen that the compiler replaces, e.g., sqrt, by some version that doesn't even produce a NaN for input -1. In order to see which optimisations your compiler does, compile to assembler and look at that code.
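As an aside, here is a sketch of the same bit-pattern test written with std::memcpy instead of a union; memcpy-based type punning is well-defined in C++, whereas reading the inactive member of a union (as my_isnan does) technically is not. It is, of course, just as dependent on the IEEE binary64 representation:
#include <cstdint>
#include <cstring>

static bool my_isnan_bits(double val) {
    std::uint64_t bits;
    std::memcpy(&bits, &val, sizeof bits);   // copy the raw representation
    // Shift out the sign bit; anything above the infinity pattern is a NaN.
    return (bits << 1) > (0x7ff0000000000000ull << 1);
}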
To make a (not completely unrelated) analogy, if you're telling your compiler your code is in C++ you can not expect it to compile C code correctly and vice-versa (there are actual examples for this, e.g. Can code that is valid in both C and C++ produce different behavior when compiled in each language? ).
It is a bad idea to enable -ffast-math and use my_isnan, because this will make everything very machine- and compiler-dependent: you don't know what optimizations the compiler does overall, so there might be other hidden problems related to the fact that you are using non-finite maths but telling the compiler otherwise.
A simple fix is to use -ffast-math -fno-finite-math-only which would still give some optimizations.
It also might be that your code looks something like this:
1. filter out all infinities and NaNs
2. do some finite maths on the filtered values (by this I mean maths that is guaranteed to never create infinities or NaNs; this has to be very, very carefully checked)
In this case, you could split up your code and either use the optimize #pragma or the optimize __attribute__ to turn -ffast-math (respectively -ffinite-math-only and -fno-finite-math-only) on and off selectively for the given pieces of code (however, I remember there being some trouble with some versions of GCC related to this), or just split your code into separate files and compile them with different flags. Of course, this also works in more general settings if you can isolate the parts where infinities and NaNs might occur. If you cannot isolate these parts, this is a strong indication that you cannot use -ffinite-math-only for this code.
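A sketch of the per-function approach, assuming GCC's optimize attribute behaves as documented (as noted above, it has been unreliable in some GCC versions, so verify the generated code):
#include <cmath>

// Ask GCC to compile just this function without -ffinite-math-only,
// so isnan/isinf are not folded to a constant false here even if the
// rest of the translation unit is built with -ffast-math.
__attribute__((optimize("no-finite-math-only")))
bool is_usable(double x) {
    return !std::isnan(x) && !std::isinf(x);
}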
Finally, it's important to understand that -ffast-math is not a harmless optimization that simply makes your program faster. It does not only affect the performance of your code but also its correctness (and this on top of all the issues surrounding floating point numbers already, if I remember right William Kahan has a collection of horror stories on his homepage, see also What every programmer should know about floating point arithmetic). In short, you might get faster code, but also wrong or unexpected results (see below for an example). Hence, you should only use such optimizations when you really know what you are doing and you have made absolutely sure, that either
the optimizations don't affect the correctness of that particular code, or
the errors introduced by the optimization are not critical to the code.
Program code can actually behave quite differently depending on whether this optimization is used or not. In particular it can behave wrong (or at least very contrary to your expectations) when optimizations such as -ffast-math are enabled. Take the following program for example:
#include <iostream>
#include <limits>
int main() {
    double d = 1.0;
    double max = std::numeric_limits<double>::max();
    d /= max;
    d *= max;
    std::cout << d << std::endl;
    return 0;
}
will produce output 1 as expected when compiled without any optimization flag, but using -ffast-math, it will output 0.

Transfer programs from one architecture to another

Let me warn you right away that this is a difficult task.
Here is a test case. It is the result of reducing a large problem down to a bug we ran into at work. The construct __attribute__((noinline)) forbids the compiler from inlining the function (so that optimizations do not collapse the interesting situation); it is the easiest way to guarantee that optimization does not destroy what we want to observe.
#include <stdio.h>
double d = 5436277361664796672.000000;
long long ll = 5436277361664796253LL;
int __attribute__((noinline))
func1 (void)
{
    double d1 = (double)ll;
    if (d > d1)
        return 1;
    else
        return 0;
}

int __attribute__((noinline))
func2 (void)
{
    if (d > (double)ll)
        return 1;
    else
        return 0;
}

int
main (void)
{
    printf ("%d %d\n", func1(), func2());
    return 0;
}
I ran this test on Intel and SPARC, using gcc both with and without optimizations, and obtained the following results:
sparc: "gcc" printed "0 0"
sparc: "gcc -O2" printed "0 0"
intel: "gcc" printed "0 1"
intel: "gcc -O2" printed "1 1"
What is the cause of the differences? In any case, for analyzing the situation it would be useful to be able to reproduce it all yourself, but of course almost nobody has the chance to run this code on SPARC. Instead of SPARC you can try running it under Windows with the Microsoft or Borland C compiler. I do not know what results they would give, but in any case something would fail to match something else (because we already see three different results).
Edit 1
__attribute__((noinline)) is a gcc compiler extension (I forgot to mention that), so Visual Studio cannot compile it.
I note that the constants are written with 19 significant figures, which is more precision than an IEEE double can represent (roughly 15 to 17 significant figures). So not every value in this range can be held exactly; in particular, ll = 5436277361664796253 is not exactly representable as a double.
The two constants only differ from the 17th significant digit onwards, so you are in the region where the inaccuracies in the double are of the same magnitude as the difference between these two numbers. Hence the comparison cannot be relied upon.
I do not know if the C++ standard specifies what happens when an over-precise string is converted to a double, but I would not be surprised if the exact result was either undefined or implementation-dependent.
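A quick way to check what actually ends up in the double (a sketch; it assumes an IEEE binary64 double, where adjacent representable values in this range are 1024 apart):
#include <stdio.h>

int main (void)
{
    long long ll = 5436277361664796253LL;
    printf ("%.0f\n", 5436277361664796672.0);   /* the double constant as stored */
    printf ("%.0f\n", (double)ll);              /* ll rounded to the nearest double */
    /* If both lines print the same number, the two operands of 'd > (double)ll'
       are equal once the conversion really happens at 64-bit precision. */
    return 0;
}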
It seems the problem is solved. In general, everything above is written correctly, but it is actually the SPARC version that works correctly: according to the standard, the int64 -> float64 conversion must lose precision, whereas in the Intel code the conversion is effectively int64 -> float80 and no precision is lost. That is, the Intel code works with higher accuracy, but this contradicts the standard.
Perhaps it is some sort of convention for the Intel platform that it is permissible by default to work this way. Surely there are options with which the code runs in strict accordance with the standard (but becomes slower).
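A sketch of how to force the standard-conforming behaviour on x86 without changing compilers (assuming the excess-precision explanation above; GCC's -ffloat-store, or -mfpmath=sse on hardware with SSE2, has the same effect for the whole translation unit):
int __attribute__((noinline))
func2_stored (void)
{
    /* A volatile double forces the converted value out of the x87 register
       and rounds it to 64 bits before the comparison. */
    volatile double d1 = (double)ll;
    return (d > d1) ? 1 : 0;
}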

Can two doubles be equal and not equal at the same time?

I have a very strange bug in my program. I was not able to isolate the error in reproducible code, but at a certain place in my code there is:
double distance, criticalDistance;
...
if (distance > criticalDistance)
{
    std::cout << "first branch" << std::endl;
}
if (distance == criticalDistance)
{
    std::cout << "second branch" << std::endl;
}
In debug build everything is fine. Only one branch gets executed.
But in release build all hell breaks loose and sometimes both branches get executed.
This is very strange, since if I add the else conditional:
if (distance > criticalDistance)
{
    std::cout << "first branch" << std::endl;
}
else if (distance == criticalDistance)
{
    std::cout << "second branch" << std::endl;
}
This does not happen.
Please, what can be the cause of this? I am using gcc 4.8.1 on Ubuntu 13.10 on a 32 bit computer.
EDIT1:
I am using the compiler flags
-std=gnu++11
-gdwarf-3
EDIT2:
I do not think this is caused by a memory leak. I analyzed both release and debug builds with the valgrind memory analyzer, with tracking of uninitialized memory and detection of self-modifying code, and I found no errors.
EDIT3:
Changing the declaration to
volatile double distance, criticalDistance;
makes the problem go away. Does this confirm woolstar's answer? Is this a compiler bug?
EDIT4:
Using the gcc option -ffloat-store also fixes the problem. If I understand this correctly, this is caused by gcc.
if (distance > criticalDistance)
// true
if (distance == criticalDistance)
// also true
I have seen this behavior before in my own code. It is due to the mismatch between the standard 64-bit value stored in memory and the 80-bit internal values that Intel processors use for floating point calculation.
Basically, when truncated to 64 bits, your values are equal, but when tested at 80 bit values, one is slightly larger than the other. In DEBUG mode, the values are always stored to memory and then reloaded so they are always truncated. In optimized mode, the compiler reuses the value in the floating point register and it doesn't get truncated.
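A small illustration of that mismatch (a sketch that assumes long double is the x87 80-bit type, as it is with GCC on x86; on compilers where long double equals double it will not show the effect):
#include <iostream>

int main() {
    long double extended = 1.0L / 3.0L;       // kept at 80-bit precision
    double      stored   = (double)extended;  // rounded to 64 bits, as if spilled to memory

    // The extended value compares greater than its own 64-bit truncation...
    std::cout << (extended > stored) << std::endl;
    // ...yet once both sides are rounded to 64 bits they compare equal.
    std::cout << (stored == (double)extended) << std::endl;
    return 0;
}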
Please, what can be the cause of this?
Undefined behavior, aka. bugs in your code.
There is no IEEE floating point value which exhibits this behavior. So what's happening is that you are doing something wrong, which violates an assumption made by your compiler.
When optimizing your code, the compiler assumes that your code can be described by the C++ standard. If you do anything that is left undefined by the C++ standard, then these assumptions are violated, resulting in "weird" execution. It could be something "simple" like an uninitialized variable or a buffer overrun resulting in parts of the stack or heap being overwritten with garbage data, or it could be something more subtle, where you rely on a specific ordering between two operations, which is not guaranteed by the standard.
That is probably why you were not able to reproduce the problem in a small test case (the smaller test code does not contain the erroneous code), and why you only see the error in optimized builds.
Of course, it is also possible that you've stumbled across a compiler bug, but a bug in your code is quite a bit more likely. :)
And best of all, it means that we don't really have a chance to debug the problem from the code snippet you've shown. We can say "the code shouldn't behave like that", but that's about all.
You are not initializing your doubles; are you sure that they always get a value?
I have found that uninitialized variables in debug builds are always 0, but in release they can be pretty much anything.

Understanding floating point variables and operators in c++ (Also a possible book error)

I am working through a beginning C++ class and my book(Starting Out with C++ Early Objects 7th edition) has a very poor example of how to check the value of a floating point variable.
The book example in question (filename pr4-04.cpp):
// This program demonstrates how to safely test a floating-point number
// to see if it is, for all practical purposes, equal to some value.
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    double result = .666667 * 6.0;
    // 2/3 of 6 should be 4 and, if you print result, 4 is displayed.
    cout << "result = " << result << endl;
    // However, internally result is NOT precisely equal to 4.
    // So test to see if it is "close" to 4.
    if (abs(result - 4.0 < .0001))
        cout << "result DOES equal 4!" << endl;
    else
        cout << "result DOES NOT equal 4!" << endl;
    return 0;
}
And I use g++ in Ubuntu to compile my code like this:
g++ pr4-04.cpp -o pr4-04 && ./pr4-04
And I get this error:
error: call of overloaded ‘abs(bool)’ is ambiguous
I am able to fix this by changing abs() to fabs(), but this is still super confusing! Why is the book giving us things which won't compile, or is this just me? Why does the cout of 'result' give 4 instead of 4.000002? Why does this value seem to change when it is used in the if{} statement?
I get that we can't just use == to check for equivalence, but why do I need to use the absolute value? I get the same answer whether or not I use it. So what is the point?
Not to mention, this seems like a very poor way to check for floating point equivalence. Is there a better way to do this? This topic seems awfully important.
I found this topic here on stackoverflow, but their solution:
fabs(f1 - f2) < precision-requirement
fabs(f1 - f2) < max(fabs(f1), fabs(f2)) * percentage-precision-requirement
Doesn't make much sense to me in the context of my 4 chapters worth of C++ experience. I would greatly appreciate some help. Our book has given me a whopping 6 sentences of text to explain all of this.
Edit: As suggested by some I tried to find an errata page, but after 30mins of searching the textbook, internet, and my course website I was only able to find this downloadable zip file, which required a login -_-
I also copied the code perfectly. That was not MY typo, I copied it directly from a CD with the code on it. It is also typed that way in the book.
if (abs(result - 4.0 < .0001))
The parentheses are wrong; you probably mean: if (abs(result - 4.0) < .0001).
As to why it did not compile, the standard determines in §26.8p8 that
In addition to the double versions of the math functions in <cmath>, C++ adds float and long double overloaded versions of these functions, with the same semantics.
The expression (result-4.0 < .0001) yields a bool, and there is no overload of abs that takes a bool argument, but there are multiple versions of abs for which the argument is implicitly convertible from bool. The compiler does not find one of the conversion sequences better than the rest and bails out with the ambiguity error.
The problem is clearly the line
if (abs(result - 4.0 < .0001))
which should be written as
if (abs(result - 4.0) < .0001)
I would assume that this is a simple typo. Report the error to the author of the book!
BTW, the original code does compile on my system without any problem, giving the expected result! That is, even if the author tested the code he may not have noticed that it is problematic!
Also answering the question on why abs() is needed: some decimal numbers are rounded to a floating point value which is slightly smaller than the expected result, while others are rounded to a value which is slightly bigger. In which direction the values are rounded (if at all: some decimal numbers can be represented exactly using binary floating point) is somewhat hard to predict. Thus, the result may be slightly bigger or slightly smaller than the expectation, and the difference may accordingly be positive or negative, which is why you take the absolute value before comparing against the tolerance.
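A sketch of a helper combining the two formulas quoted in the question (the default tolerances are arbitrary placeholders, not recommendations; pick them for your problem):
#include <cmath>
#include <algorithm>

// Combined absolute/relative comparison: the absolute term handles values
// near zero, the relative term scales with the magnitude of the operands.
bool nearly_equal(double f1, double f2,
                  double abs_tol = 1e-12, double rel_tol = 1e-9)
{
    double diff = std::fabs(f1 - f2);
    if (diff <= abs_tol)
        return true;
    return diff <= std::max(std::fabs(f1), std::fabs(f2)) * rel_tol;
}
With such a helper, the book's test would read if (nearly_equal(result, 4.0)) rather than the mis-parenthesized abs(result - 4.0 < .0001).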