-Ofast produces incorrect code while using long double - c++

#include <cstdio>

int main(void)
{
    int val = 500;
    printf("%d\n", (int)((long double)val / 500));
    printf("%d\n", (int)((long double)500 / 500));
}
Obviously it should output 1 1. But if you compile it with -Ofast, it outputs 0 1. Why?
And if you change 500 to other values (such as 400) and compile with -Ofast, it will still output 1 1.
Compiler explorer with -Ofast: https://gcc.godbolt.org/z/YkX7fB
It seems this part of the GCC documentation explains the problem:

-Ofast
Disregard strict standards compliance. -Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on -ffast-math, -fallow-store-data-races and the Fortran-specific [...]
-ffast-math
Sets the options -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans, -fcx-limited-range and -fexcess-precision=fast.
This option causes the preprocessor macro __FAST_MATH__ to be defined.
This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
Conclusion: Don't use -ffast-math unless you are willing to get surprises like the one you've gotten now.

With -Ofast, -ffast-math is enabled, which can cause some operations to be calculated in a different and faster way. In your case, (long double)val / 500 can be calculated as (long double)val * (1.0L / 500). This can be seen in the generated assembly when you compare -O2 and -Ofast for the following function:
long double f(long double a)
{
    return a / 500.0L;
}
The assembly generated with -O2 contains an fdiv instruction, while the assembly generated with -Ofast contains an fmul instruction; see https://gcc.godbolt.org/z/58VHxb.
Next, 1/500, that is, 0.002, is not exactly representable as a long double. Therefore some rounding occurs, and in your case this rounding happens to be downward. This can be checked with the following expression:
500.0L * (1.0L / 500.0L) < 1.0L
which evaluates to true: https://gcc.godbolt.org/z/zMcjxJ. So the multiplier that is actually stored is 0.002 minus some very small delta.
Finally, the result of the multiplication is 500 * (0.002 - delta) = 1 - some small value. When this value is converted to int, it is truncated, so the result in int is 0.
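You can reproduce the effect without -Ofast by applying the reciprocal transformation by hand (a minimal sketch; whether the reciprocal rounds down depends on the platform's long double format):

#include <cstdio>

int main()
{
    long double recip = 1.0L / 500.0L;   // rounded: not exactly 0.002
    long double prod  = 500.0L * recip;  // just below 1.0 if recip rounded down
    std::printf("%d\n", prod < 1.0L);    // prints 1 when rounding was downward
    std::printf("%d\n", (int)prod);      // truncation then yields 0
}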

Even though the shown program snippet exposes a 'problem', it is the wrong way to work with floating point numbers anyway.
You are, more or less, asking the program whether a floating point number has an 'exact value' - in this case '1'. Or, to be more precise: whether a value that lies 'around' 1 - exactly at the dividing line between the two answers - is '< 1' or '>= 1'. But as others have already written (and as can easily be found on Wikipedia, ...), floating point numbers have limited precision, so such deviations can and will happen.
So, coming to a conclusion: you should always round when converting floating point values to integers, i.e. use '(int) round (floating_point_value)'.
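Applied to the question's example, that looks like this (a minimal sketch; whether the unrounded version prints 0 depends on the optimization flags discussed above):

#include <cmath>
#include <cstdio>

int main()
{
    int val = 500;
    long double q = (long double)val / 500;   // may be just below 1.0 with -Ofast
    std::printf("%d\n", (int)q);              // truncates: can print 0
    std::printf("%d\n", (int)std::round(q));  // rounds to nearest: prints 1
}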
PS. Contrary to what others may say or recommend - I don't see any problem with -ffast-math calculations at all. The only 'problem' would be when (bitwise) comparing the results of some program after running it on different computers.
I do all my (scientific) calculations with -ffast-math (actually -Ofast), and that has never been a problem so far - since I expect floating point numbers to have some rounding errors (which is true regardless of whether -ffast-math is used) - but that's all, as far as I know. Since I typically use 64-bit floating point (double), the calculations are precise to around 15 to 17 decimal digits, and only the last (few) of them are affected by these inaccuracies - still giving me lots of 'accurate' digits - say, more than 13, depending on how complicated my calculations are.

Related

Slight acos precision difference between Clang and Visual C++

I have some cross platform code I'm working with. On the Mac it's compiled with Clang, on Windows it's compiled with Visual C++.
There is a calculation that can be sensitive, and there was a difference between Mac and Windows that was triggering asserts. It turns out there is a difference between the acos results, but I'm not clear why.
On both platforms, the input to acos is exactly -1.0f. In Visual C++, acos(-1.0f) is 3.14159274. That's the value of pi as a float, which is what I'd expect.
But on macOS:
float value = acos(-1.0f);
...evaluates to 3.1415925. That's just enough of an accuracy difference to trigger issues in the code. acos is an operation that can be imprecise with float - I understand that. And different compilers can have different implementations of acos. I'm just unclear why Clang seems to have issues with such a simple acos result while Visual C++ doesn't. A float is capable of representing 3.14159274, but that's not the result I'm getting.
It is possible to get an accurate/Visual C++ aligned value out of Xcode's version of Clang with:
float value = (float)acos((double)-1.0f);
So I can fix the issue by moving to higher precision and then casting the value back down to float to preserve the same rounding as Windows. I'm just looking for a justification as to why the extra precision is necessary when the VC++ compiler doesn't seem to have a precision issue. It could be differences between the Clang/Xcode/VC++ math libraries as well. I just assumed that acos(-1.0) would be more settled across compilers. I couldn't find any difference in rounding modes (even though rounding should not be necessary), and fresh projects in Xcode and Visual Studio show the same difference. Both machines are Intel.
If you look at the binary representation of these floating point values, you can see that mac/clang's value A is the next representable floating-point number below windows/msvc's value B:
A 3.14159250 0x40490FDA
B 3.14159274 0x40490FDB
Whilst B is closest to the true value of π, it is actually greater than π as #njuffa points out in their comment.
Reading the specification, it looks like acosf is supposed to return a value in the closed range [0,π]. Technically A meets this criteria whilst B doesn't.
In summary -
A is the closest value to, but less than, π
B is the closest value to π
The difference between these may be the result of a deliberate decision by the respective standard library implementors.
I'd also observe that both values are true inverses of cosf as both cosf(A) and cosf(B) equal -1.0f.
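You can check both claims yourself (a minimal sketch assuming IEEE-754 floats; bits_of is just a hypothetical helper that prints the bit pattern, alongside the cosf round-trip):

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Reinterpret a float's bits as an unsigned integer (safe type-punning).
static std::uint32_t bits_of(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

int main()
{
    float a = 3.1415925f;   // mac/clang's value A
    float b = 3.14159274f;  // windows/msvc's value B
    std::printf("A %.8f 0x%08X cos=%f\n", a, (unsigned)bits_of(a), std::cos(a));
    std::printf("B %.8f 0x%08X cos=%f\n", b, (unsigned)bits_of(b), std::cos(b));
}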
Generally speaking, though, it is unwise to rely on exact bit-level accuracy with any floating point calculations. If you are not already aware of it, the document What Every Computer Scientist Should Know About Floating-Point Arithmetic explains why.
Edit: I was curious and found what might be the relevant Apple source code here.
Return value:
...
Otherwise:
...
Returns a value in [0, pi] (C 7.12.4.1 3). Note that
this prohibits returning a correctly rounded value for acosf(-1),
since pi rounded to a float lies outside that interval.

Truncate Floats and Doubles after user defined points in X87 and SSE FPUs

I have made a function g that is able to approximate a function to a certain degree; this function gives accurate results up to 5 decimals (1.23456xxxxxxxxxxxx, where the x positions are just rounding errors / junk).
To avoid spreading error to other computations that will use the results of g, I would like to just set all the x positions to zero - better yet, set to 0 everything after the 5th decimal.
I haven't found anything in the X87 and SSE literature that lets me play with IEEE 754 bits or their representation the way I would like to.
There is an old reference to the FISTP instruction for X87 that is apparently mirrored in the SSE world by FISTTP, with the benefit that FISTTP doesn't necessarily modify the control word and is therefore faster.
I have noticed that FISTTP was called "chopping mode", but in more modern literature it is just "rounding toward zero" or "truncate", and this confuses me, because "chopping" means removing something altogether, whereas "rounding toward zero" doesn't necessarily mean the same thing to me.
I don't need to round to zero; I only need to preserve up to 5 decimals as the last step in my function before storing the result in memory. How do I do this in X87 (scalar FPU) and SSE (vector FPU)?
As several people commented, rounding earlier doesn't help the final result be more accurate. If you want to read more about floating point comparisons and weirdness / gotchas, I highly recommend Bruce Dawson's series of articles on floating point. Here's a quote from the one with the index:
We’ve finally reached the point in this series that I’ve been waiting
for. In this post I am going to share the most crucial piece of
floating-point math knowledge that I have. Here it is:
[Floating-point] math is hard.
You just won’t believe how vastly, hugely, mind-bogglingly hard it is.
I mean, you may think it’s difficult to calculate when trains from
Chicago and Los Angeles will collide, but that’s just peanuts to
floating-point math.
(Bonus points if you recognize that last paragraph as a paraphrase of a famous line about space.)
How you could actually implement your bad idea:
There aren't any machine instructions or C standard library functions to truncate or round to anything other than integer.
Note that there are machine instructions (and C functions) that round a double to nearest (representable) integer without converting it to intmax_t or anything, just double->double. So no round-trip through a fixed-width 2's complement integer.
So to use them, you could scale your float up by some factor, round to nearest integer, then scale back down (like chux's round()-based function, but I'd recommend C99 double rint(double) instead of round(); round has weird rounding semantics that don't match any of the available rounding modes on x86, so it compiles to worse code).
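That idea looks like this (a sketch; the helper name chop5 is mine, and it assumes x * 100000.0 doesn't overflow - see the guarded version further down):

#include <cmath>

// Keep roughly 5 decimal digits: scale up, round to nearest integer in the
// current rounding mode, scale back down.
double chop5(double x)
{
    return std::rint(x * 100000.0) / 100000.0;
}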
The x86 asm instructions you keep mentioning are nothing special, and don't do anything that you can't ask the compiler to do with pure C.
FISTP (Float Integer STore and Pop the x87 stack) is one way for a compiler or asm programmer to implement long lrint(double) or (int)nearbyint(double). Some compilers make better code for one or the other. It rounds using the current x87 rounding mode (default: round to nearest), which is the same semantics as those ISO C standard functions.
FISTTP (Float Integer STore with Truncation and Pop the x87 stack) is part of SSE3, even though it operates on the x87 stack. It lets compilers avoid setting the rounding mode to truncation (round-towards-zero) to implement the C truncation semantics of (long)x, and then restoring the old rounding mode.
This is what the "not modify the control word" stuff is about. Neither instruction does that, but to implement (int)x without FISTTP, the compiler has to use other instructions to modify and restore the rounding mode around a FIST instruction. Or just use SSE2 CVTTSD2SI to convert a double in an xmm register with truncation, instead of an FP value on the legacy x87 stack.
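In C terms, the two instructions correspond roughly to the following (a sketch with hypothetical function names; the actual instruction selection depends on the target and compiler flags):

#include <cmath>

// Rounds using the current rounding mode; compilers can emit x87 FISTP
// or SSE2 CVTSD2SI for this.
long round_in_current_mode(double x) { return std::lrint(x); }

// C cast semantics, truncation toward zero; compilers can emit FISTTP
// (with SSE3) or SSE2 CVTTSD2SI for this.
long truncate_toward_zero(double x) { return (long)x; }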
Since FISTTP is only available with SSE3, you'd only use it for long double, or in 32-bit code that had FP values in x87 registers anyway because of the crusty old 32-bit ABI which returns FP values on the x87 stack.
PS. if you didn't recognize Bruce's HHGTG reference, the original is:
Space is big. Really big. You just won’t believe how vastly hugely
mindbogglingly big it is. I mean you may think it’s a long way down
the road to the chemist’s, but that’s just peanuts to space.
how do I do this in X87 ( scalar FPU ) and SSE ( vector FPU ) ?
The following uses neither X87 nor SSE. I've included it as a community reference for general purpose code. If anything, it can be used to test an X87 solution.
Any "chopping" of the result of g() will at least marginally increase error, hopefully tolerably so, as OP said "To avoid spreading error to other computations ...".
It is unclear if OP wants "accurate results up to 5 decimals" to reflect absolute precision (+/- 0.000005) or relative precision (+/- 0.000005 * result). I will assume "absolute precision".
Since float and double are most often binary floating point types, any "chop" will produce the FP number nearest to a multiple of 0.00001.
Text method:

#include <float.h>  /* DBL_MAX_10_EXP */
#include <stdio.h>  /* sprintf */
#include <stdlib.h> /* atof */

// - x xxx...xxx . xxxxx \0
char buf[1 + 1 + DBL_MAX_10_EXP + 1 + 5 + 1];
sprintf(buf, "%.5f", x);
x = atof(buf);
round() / rint() method:

#include <float.h> /* DBL_MAX */
#include <math.h>  /* fabs, rint */

#define SCALE 100000.0

if (fabs(x) < DBL_MAX / SCALE) {
    x = x * SCALE;
    x = rint(x) / SCALE;
}
Direct bit manipulation of x. Simply zero select bits in the significand.
TBD code.

c++ sqrt guaranteed precision, upper/lower bound

I have to check an inequality containing square roots. To avoid incorrect results due to floating point inaccuracy and rounding, I use std::nextafter() to get an upper/lower bound:
#include <cfloat> // DBL_MAX
#include <cmath> // std::nextafter, std::sqrt
double x = 42.0; //just an example number
double y = std::nextafter(std::sqrt(x), DBL_MAX);
a) Is y*y >= x guaranteed using GCC compiler?
b) Will this work for other operations like + - * / or even std::cos() and std::acos()?
c) Are there better ways to get upper/lower bounds?
Update:
I read this is not guaranteed by the C++ Standard, but should work according to IEEE-754. Will this work with the GCC compiler?
In general, floating point operations will incur some ULP error. IEEE 754 requires that results for most operations be correct to within 0.5 ULP, but errors can accumulate, which means a result may not be within one ULP of the exact result. There are limits to precision as well, so depending on the magnitudes of the values you combine, you may lose low-order digits entirely. Transcendental functions are also somewhat notorious for introducing error into calculations.
However, if you're using GNU glibc, sqrt will be correct to within 0.5 ULP (correctly rounded), so your specific example would work (neglecting NaN, +/-0, +/-Inf). Although, it's probably better to define some epsilon as your error tolerance and use that as your bound. For example,
bool gt(double a, double b, double eps) {
    return (a > b - eps);
}
Depending on the level of precision you need in calculations, you also may want to use long double instead.
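For example, applied to the question's sqrt bound (the 1e-12 tolerance is purely illustrative; a real eps has to come from your own error analysis):

#include <cfloat>
#include <cmath>
#include <cstdio>

bool gt(double a, double b, double eps) {
    return (a > b - eps);
}

int main()
{
    double x = 42.0;
    double y = std::nextafter(std::sqrt(x), DBL_MAX);
    std::printf("%d\n", gt(y * y, x, 1e-12));  // 1: the bound holds within eps
}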
So, to answer your questions...
a) Is y*y >= x guaranteed using GCC compiler?
Assuming you use GNU glibc or SSE2 intrinsics, yes.
b) Will this work for other operations like + - * / or even std::cos() and std::acos()?
Assuming you use GNU glibc and one operation, yes. Although some transcendentals are not guaranteed correctly rounded.
c) Are there better ways to get upper/lower bounds?
You need to know what your error tolerance in calculations is, and use that as an epsilon (which may be larger than one ULP).
For GCC this page suggests that it will work if you use the GCC builtin sqrt function __builtin_sqrt.
Additionally, this behavior will depend on how you compile your code and the machine it runs on:
If the processor supports SSE2 then you should compile your code with the flags -mfpmath=sse -msse2 to ensure that all floating point operations are done using the SSE registers.
If the processor doesn't support SSE2 then you should use the long double type for the floating point values and compile with the flag -ffloat-store to force GCC to not use registers to store floating point values (you'll have a performance penalty for doing this)
Concerning
c) Are there better ways to get upper/lower bounds?
Another way is to use a different rounding mode, i.e. FE_UPWARD or FE_DOWNWARD, instead of the default FE_TONEAREST. See https://stackoverflow.com/a/6867722. This may be slower, but it gives a better upper/lower bound.
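A minimal sketch of that approach (it assumes the library sqrt honors the dynamic rounding mode, which the hardware sqrt instruction does; with GCC you'd also want -frounding-math, since GCC does not fully honor #pragma STDC FENV_ACCESS):

#include <cfenv>
#include <cmath>
#include <cstdio>

int main()
{
    double x = 42.0;
    int old = std::fegetround();

    std::fesetround(FE_DOWNWARD);
    double lo = std::sqrt(x);   // lower bound: rounded toward -infinity

    std::fesetround(FE_UPWARD);
    double hi = std::sqrt(x);   // upper bound: rounded toward +infinity

    std::fesetround(old);       // restore the caller's rounding mode
    std::printf("lo=%.17g hi=%.17g\n", lo, hi);
}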

Differences in rounded result when calling pow()

OK, I know that there have been many questions about the pow function and casting its result to int, but I couldn't find an answer to this somewhat specific question.
OK, this is the C code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main()
{
    int i = 5;
    int j = 2;
    double d1 = pow(i, j);
    double d2 = pow(5, 2);
    int i1 = (int)d1;
    int i2 = (int)d2;
    int i3 = (int)pow(i, j);
    int i4 = (int)pow(5, 2);
    printf("%d %d %d %d", i1, i2, i3, i4);
    return 0;
}
And this is the output: "25 25 24 25". Notice that only in the third case, where the arguments to pow are not literals, do we get the wrong result, probably caused by rounding errors. The same thing happens without the explicit cast. Could somebody explain what happens in these four cases?
I'm using CodeBlocks on Windows 7, with the MinGW gcc compiler that came with it.
The result of the pow operation is 25.0000 plus or minus some bit of rounding error. If the rounding error is positive or zero, 25 will result from the conversion to an integer. If the rounding error is negative, 24 will result. Both answers are correct.
What is most likely happening internally is that in one case a higher-precision, 80-bit FPU value is being used directly and in the other case, the result is being written from the FPU to memory (as a 64-bit double) and then read back in (converting it to a slightly different 80-bit value). This can make a microscopic difference in the final result, which is all it takes to change a 25.0000000001 to a 24.999999997
Another possibility is that your compiler recognizes the constants passed to pow and does the calculation itself, substituting the result for the call to pow. Your compiler may use an internal arbitrary-precision math library or it may just use one that's different.
This is caused by a combination of two problems:
The implementation of pow you are using is not high quality. Floating-point arithmetic is necessarily approximate in many cases, but good implementations take care to ensure that simple cases such as pow(5, 2) return exact results. The pow you are using is returning a result that is less than 25 by an amount greater than 0 but less than or equal to 2^-49. For example, it might be returning 25 - 2^-50.
The C implementation you are using sometimes uses a 64-bit floating-point format and sometimes uses an 80-bit floating-point format. As long as the number is kept in the 80-bit format, it retains the complete value that pow returned. If you convert this value to an integer, it produces 24, because the value is less than 25 and conversion to integer truncates; it does not round. When the number is converted to the 64-bit format, it is rounded. Converting between floating-point formats rounds, so the result is rounded to the nearest representable value, 25. After that, conversion to integer produces 25.
The compiler may switch formats whenever it is “convenient” in some sense. For example, there are a limited number of registers with the 80-bit format. When they are full, the compiler may convert some values to the 64-bit format and store them in memory. The compiler may also rearrange expressions or perform parts of them at compile-time instead of run-time, and these can affect the arithmetic performed and the format used.
It is troublesome when a C implementation mixes floating-point formats, because users generally cannot predict or control when the conversions between formats occur. This leads to results that are not easily reproducible and interferes with deriving or controlling numerical properties of software. C implementations can be designed to use a single format throughout and avoid some of these problems, but your C implementation is apparently not so designed.
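If you suspect this 80-bit/64-bit mixing on your implementation, one classic way to force the narrowing by hand is a volatile store (a sketch; it only changes anything on builds that keep excess x87 precision):

#include <cmath>
#include <cstdio>

int main()
{
    int i = 5, j = 2;
    volatile double d = std::pow(i, j);  // the store rounds to a 64-bit double
    std::printf("%d\n", (int)d);         // behaves like the d1/d2 cases: 25
}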
To add to the other answers here: just generally be very careful when working with floating point values.
I highly recommend reading this paper (even though it is a long read):
http://hal.archives-ouvertes.fr/docs/00/28/14/29/PDF/floating-point-article.pdf
Skip to section 3 for practical examples, but don't neglect the previous chapters!
I'm fairly sure this can be explained by "intermediate rounding" and the fact that pow is not simply looping j times multiplying by i, but calculating the result as exp(log(i)*j) in floating point. Intermediate rounding may well convert 24.999999999996 into 25.000000000 - even arbitrary storing and reloading of the value may cause differences in this sort of behaviour - so depending on how the code is generated, it may make a difference to the exact result.
And of course, in some cases, the compiler may even "know" what pow actually achieves, and replace the calculation with a constant result.
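Whatever the mechanism on a particular compiler, the defensive fix is the same one suggested for the first question: round to nearest before converting (a minimal sketch):

#include <cmath>
#include <cstdio>

int main()
{
    int i = 5, j = 2;
    int n = (int)std::llround(std::pow(i, j));  // round to nearest, then narrow
    std::printf("%d\n", n);                     // 25 even if pow returns 24.999...
}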

Computer precision: when do I need to worry about it?

In C++ programming, when do I need to worry about the precision issue? To take a small example (it might not be a perfect one though),
std::vector<double> first (50000, 0.0);
std::vector<double> second (first);
Could it be possible that second[619] = 0.00000000000000000000000000001234 (I mean a very small value)? Or SUM = second[0] + second[1] + ... + second[49999] => 1e-31? Or SUM = second[0] - second[1] - ... - second[49999] => -7.987654321e-12?
My questions:
Could there be some small disturbances in working with double type numbers?
What may cause these kinds of small disturbances, i.e. rounding errors becoming large? Could you please list them? How does one take precautions?
If there can be small disturbances in certain operations, does it then mean that after these operations, using if (SUM == 0) is dangerous? Should one then always use if (SUM < SMALL) instead, where SMALL is defined as a very small value, such as 1E-30?
Lastly, could the small disturbances result in a negative value? Because if that is possible, then I had better use if (abs(SUM) < SMALL) instead.
Any experiences?
This is a good reference document for floating point precision: What Every Computer Scientist Should Know About Floating-Point Arithmetic
One of the more important parts is the section on catastrophic cancellation:
Catastrophic cancellation occurs when the operands are subject to rounding errors. For example in the quadratic formula, the expression b² - 4ac occurs. The quantities b² and 4ac are subject to rounding errors since they are the results of floating-point multiplications. Suppose that they are rounded to the nearest floating-point number, and so are accurate to within .5 ulp. When they are subtracted, cancellation can cause many of the accurate digits to disappear, leaving behind mainly digits contaminated by rounding error. Hence the difference might have an error of many ulps. For example, consider b = 3.34, a = 1.22, and c = 2.28. The exact value of b² - 4ac is .0292. But b² rounds to 11.2 and 4ac rounds to 11.1, hence the final answer is .1, which is an error of 70 ulps, even though 11.2 - 11.1 is exactly equal to .1. The subtraction did not introduce any error, but rather exposed the error introduced in the earlier multiplications.
Benign cancellation occurs when subtracting exactly known quantities. If x and y have no rounding error, then by Theorem 2, if the subtraction is done with a guard digit, the difference x - y has a very small relative error (less than 2ε).
A formula that exhibits catastrophic cancellation can sometimes be rearranged to eliminate the problem. Again consider the quadratic formula.
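To make the quoted example concrete, here is a small demonstration (assuming IEEE-754 float/double; the float operands are rounded before the subtraction, so the cancellation exposes their rounding error):

#include <cstdio>

int main()
{
    // The quoted values: b = 3.34, a = 1.22, c = 2.28; exact b*b - 4ac = .0292.
    float  a = 1.22f, b = 3.34f, c = 2.28f;
    float  f = b * b - 4.0f * a * c;             // operands rounded to float first
    double d = 3.34 * 3.34 - 4.0 * 1.22 * 2.28;  // same expression in double

    // The two results agree only in their leading digits: the subtraction
    // cancelled the accurate digits and left the multiplication error behind.
    std::printf("float:  %.10f\ndouble: %.10f\n", f, d);
}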
For your specific example, 0 has an exact representation as a double, and adding exactly 0 to a double does not change its value.
Also, like any other values you put in variables, numbers that you initialize in the array are not going to mysteriously change. You only get rounding when the result of a calculation cannot be exactly represented as a floating point number.
To give a better opinion about "disturbances" I would need to know the kinds of calculations that your code performs.