GCC equivalent to VC's floating point model switch? - c++

Does GCC have an equivalent compiler switch to VC's floating point model switch (/fp)?
In particular, my application benefits from compiling with /fp:fast, and precision is not a big deal. How should I compile it with GCC?

Try -ffast-math. On GCC 4.4.1, this turns on:
-fno-math-errno - Don't set errno for single-instruction math functions.
-funsafe-math-optimizations - Assume arguments and results of math operations are valid, allowing transformations that may violate IEEE/ISO standards.
-ffinite-math-only - Assume arguments and results are finite (never NaN or ±Inf).
-fno-rounding-math - Enable optimizations that assume the default round-to-nearest rounding mode. This is already the default, but it could have been overridden by something else.
-fno-signaling-nans - Enable optimizations that can change the number of floating-point exceptions raised; also a default.
-fcx-limited-range - Assume range reduction is not needed for complex number division.
It also defines the __FAST_MATH__ preprocessor macro.
You could also enable these flags individually.
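
One caution worth adding (my illustration, not from the original answer): because -ffast-math implies -ffinite-math-only, checks such as std::isnan()/std::isinf() may be constant-folded to false. A minimal sketch of the pattern that silently breaks:

    #include <cmath>
    #include <cstdio>

    int main() {
        volatile double zero = 0.0;   // volatile defeats constant folding
        double x = zero / zero;       // produces a quiet NaN at runtime
        // Under -ffast-math the compiler may assume x is never NaN and
        // fold this test to false, so "NaN missed" is printed instead.
        if (std::isnan(x))
            std::puts("NaN detected");
        else
            std::puts("NaN missed");
        return 0;
    }

If you need such checks, re-enable them by adding -fno-finite-math-only after -ffast-math.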

Related

Fast floating point model broken on next-generation intel compiler

Description
I'm trying to switch from the classic Intel compiler in the Intel oneAPI toolkit to the next-generation DPC++/C++ compiler, but the default behaviour for handling floating-point operations appears broken or different: comparisons with infinity always evaluate to false in fast floating-point modes. The compiler warns about exactly this, and it is the behaviour I now experience with ICX but not with the classic compiler (for the same minimal set of compiler flags).
Minimally reproducible example
    #include <iostream>
    #include <cmath>

    int main()
    {
        double a = 1.0 / 0.0;
        if (std::isinf(a))
            std::cout << "is infinite";
        else
            std::cout << "is not infinite";
    }
Compiler Flags:
-O3 -Wall -fp-model=fast
ICC 2021.5.0 Output:
is infinite
(also tested on several older versions)
ICX 2022.0.0 Output:
is not infinite
(also tested on 2022.0.1)
Live demo on compiler-explorer:
https://godbolt.org/z/vzeYj1Wa3
By default -fp-model=fast is enabled on both compilers. If I manually specify -fp-model=precise I can recover the behaviour but not the performance.
Does anyone know of a potential solution to both maintain the previous behaviour & performance of the fast floating point model using the next-gen compiler?
If you add -fp-speculation=safe to -fp-model=fast, you will still get the warning that you shouldn't use -fp-model=fast if you want to check for infinity, but the condition will evaluate correctly: godbolt.
In the Intel Porting Guide for ICC Users to DPCPP or ICX it is stated that:
FP Strictness: Nothing stricter than the default is supported. There is no support for -fp-model strict, -fp-speculation=safe, #pragma fenv_access, etc. Implementing support for these is a work-in-progress in the open source community.
Even though it works for the current version of the tested compiler (icx 2022.0.0), there is a discrepancy: either the documentation is outdated (more probable), or this feature is working by accident (less probable).
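If the flag combination ever stops working, one possible fallback (a sketch of mine, not from the original thread) is to test for infinity at the bit level; integer comparisons are immune to finite-math assumptions:

    #include <cstdint>
    #include <cstring>

    // True if d is +Inf or -Inf: exponent bits all ones, mantissa zero.
    // memcpy is the well-defined way to inspect the IEEE 754 encoding.
    bool is_inf_bits(double d) {
        std::uint64_t bits;
        std::memcpy(&bits, &d, sizeof bits);
        return (bits & 0x7fffffffffffffffULL) == 0x7ff0000000000000ULL;
    }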

asin produces different answers on different platforms using Clang

    #include <cmath>
    #include <cstdio>

    int main() {
        float a = std::asin(-1.f);
        printf("%.10f\n", a);
        return 0;
    }
I ran the code above on multiple platforms using clang, g++ and Visual Studio. They all gave me the same answer: -1.5707963705
If I run it on macOS using clang it gives me -1.5707962513.
Clang on macOS is supposed to use libc++, but does macOS have its own implementation of libc++?
If I run clang --version I get:
Apple LLVM version 10.0.0 (clang-1000.11.45.5)
Target: x86_64-apple-darwin18.0.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
asin is implemented in libm, which is part of the standard C library, not the C++ standard library. (Technically, the C++ standard library includes the C library functions, but in practice both the GNU and the LLVM C++ library implementations rely on the underlying platform's math library.) The three platforms -- Linux, OS X and Windows -- each have their own implementation of the math library, so if the library function were being used, it could certainly be a different function on each, and the result might differ in the last bit position (which is what your test shows).
However, it is quite possible that the library function is never being called in all cases. This will depend on the compilers and the optimization options you pass them (and maybe some other options as well). Since the asin function is part of the standard library and thus has known behaviour, it is entirely legitimate for a compiler to compute the value of std::asin(-1.0F) at compile time, as it would for any other constant expression (like 1.0 + 1.0, which almost any compiler will constant-fold to 2.0 at compile time).
Since you don't mention what optimization settings you are using, it's hard to tell exactly what's going on, but I did a few tests with http://gcc.godbolt.org to get a basic idea:
GCC constant folds the call to asin without any optimisation flags, but it does not precompute the argument promotion in printf (which converts a to a double in order to pass it to printf) unless you specify at least -O1. (Tested with GCC 8.3).
Clang (7.0) calls the standard library function unless you specify at least -O2. However, if you explicitly call asinf, it constant folds at -O1. Go figure.
MSVC (v19.16) does not constant fold. It either calls a std::asin wrapper or directly calls asinf, depending on optimisation settings. I don't really understand what the wrapper does, and I didn't spend much time investigating.
Both GCC and Clang constant fold the expression to precisely the same binary value (0xBFF921FB60000000 as a double), which is the binary value -1.10010010000111111011011 (trailing zeros truncated).
Note that there is also a difference between the printf implementations on the three platforms (printf is also part of the platform C library). In theory, you could see a different decimal output from the same binary value, but since the argument to printf is promoted to double before printf is called and the promotion is precisely defined and not value-altering, it is extremely unlikely that this has any impact in this particular case.
As a side note, if you really care about the seventh decimal point, use double instead of float. Indeed, you should only use float in very specific applications in which precision is unimportant; the normal floating point type is double.
The mathematically exact value of asin(-1) would be -pi/2, which of course is irrational and not possible to represent exactly as a float. The binary digits of pi/2 start with
1.1001001000011111101101010100010001000010110100011000010001101..._2
Your first three libraries round this (correctly) to
1.10010010000111111011011_2 = 1.57079637050628662109375_10
On macOS it appears to get truncated to:
1.10010010000111111011010_2 = 1.57079625129699707031250_10
This is an error of less than 1 ULP (unit in the last place). This could be caused either by a different implementation, or your FPU is set to a different rounding mode, or perhaps in some cases the compiler computes the value at compile-time.
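To see which of the two candidates your toolchain actually produces (a diagnostic sketch added here, not part of the original answer), print the raw bits of the result:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        float a = std::asin(-1.0f);
        std::uint32_t bits;
        std::memcpy(&bits, &a, sizeof bits);  // inspect the IEEE 754 encoding
        // Correct rounding gives 0xBFC90FDB; the truncated macOS result
        // described above is 0xBFC90FDA.
        std::printf("%.10f (0x%08X)\n", a, bits);
        return 0;
    }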
I don't think the C++ standard gives any real guarantees on the accuracy of transcendental functions.
If you have code that really depends on platform/hardware-independent accuracy, I suggest using a library such as MPFR. Otherwise, just live with the difference, or have a look at the source of the asin implementation that is called in each case.

MSVC equivalent to GCC's -fno-finite-math-only?

On GCC, we enable -ffast-math to speed up floating-point calculations. But as we rely on proper behavior of NaN and Inf floating-point values, we also turn on -fno-finite-math-only, so that optimizations which assume values can't be NaN/Inf are disabled.
For MSVC, the "equivalent" to -ffast-math is apparently /fp:fast. However, like GCC's -ffast-math, it also includes the optimizations which assume that NaN/Inf aren't present. (Critically, it appears tests like std::isnan() aren't guaranteed to give "accurate" results.)
Is there an option for MSVC C++ compilation which allows you to take advantage of most of the /fp:fast optimizations, but still treats NaN and Inf values "properly"? (Or at the very least, guarantees that tests like std::isnan()/std::isinf() will detect NaN/Inf if they do happen to be generated.)
guarantees that tests like std::isnan()/std::isinf()
Unlike GCC, MSVC (CL RC 19) doesn't actually optimize out std::isnan under the /fp:fast setting.
Another alternative that will never be optimized out is to call the C99 isnan or the MSVC intrinsic _isnanf. Or roll your own NaN test against a known bit mask, which can be generated with std::numeric_limits<double>::quiet_NaN().
See:
https://godbolt.org/g/YdZJq5
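A sketch of the hand-rolled bit-mask test suggested above (my illustration; the helper name is mine): a double is NaN exactly when its exponent bits are all ones and its mantissa is nonzero, and integer comparisons cannot be folded away by /fp:fast:

    #include <cstdint>
    #include <cstring>

    bool is_nan_bits(double d) {
        std::uint64_t bits;
        std::memcpy(&bits, &d, sizeof bits);
        // Exponent field all ones and a nonzero mantissa means NaN.
        return (bits & 0x7ff0000000000000ULL) == 0x7ff0000000000000ULL
            && (bits & 0x000fffffffffffffULL) != 0;
    }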

Extent of G++ compiler optimization on non-commutative operations

I am concerned about the G++ optimizer's effect on arithmetic operations, specifically integer operations that are not necessarily commutative, e.g. * and /. This concern arose when I looked in gdb at a simple function that had been compiled with the -O3 flag set; it was all in all a better function, but its form was completely different than what it had been with no optimization: operations had been removed, and some had been relocated. Here is a simple function with which I will demonstrate the crux of my concern:
    int ClipLower(int num, int dig){
        int Mult10 = 1;
        while (dig != 0){
            Mult10 *= 10, dig--;
        }
        return ((num / Mult10) * Mult10);
    }
This function simply clips off the base-10 digits below digit 'dig'. My concern is: does the compiler consider things like the fact that math on integers is non-commutative? That is, will the compiler try to reduce (num / Mult10) * Mult10 into num * 1, and of course discard the one?
I am aware that volatile will avoid this situation, but I would still like my code optimized as much as possible. So in essence I am asking whether the GNU optimizer understands that integer math is non-commutative, and furthermore how much of a concern optimization-gone-awry really is.
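For concreteness (a worked example added here, not in the original question), truncation is exactly what stops that reduction from being legal:

    #include <cassert>

    int main() {
        // Integer division truncates toward zero, so in general
        // (num / m) * m != num:
        assert((56789 / 100) * 100 == 56700);  // not 56789
        return 0;
    }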
Also, here is the disassembly for the function at -O4; as you can see, the order of operations is fine:
    13    return ((num / Mult10) * Mult10);
        cltd
        idiv %ecx
        imul %ecx,%eax
        ret
Amusingly, the compiler generated a load of no-op instructions following the function, presumably as padding because it ended up so small.
Here is the list of flags that -O3 in g++ is equivalent to: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
Now if you look carefully, there is also -Ofast, which is defined as -O3 plus some extra options, notably -ffast-math. The description of -ffast-math reads:
This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
This is precisely to ensure that the default compiler flags do not violate rounding rules and other floating-point standard specifications.
There is also a related question on SO about why compilers don't optimize a*a*a*a*a*a to (a*a*a)^2; the answer is the same. (I cannot find the link atm =/)
Btw, Mult10 *= 10, dig--; are you trying to lose people following your code? =D
EDIT: Another by the way: going above -O3 has no effect, except that some people say you might overflow some internal variable. I didn't test the overflow, but I'm sure -O4 and -O100 are equivalent to -O3 at the time of writing this.
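To make the floating-point contrast concrete (an illustrative example added here): reassociation, which -ffast-math permits, can change results, which is why plain -O3 must not reorder these:

    #include <cstdio>

    int main() {
        // IEEE 754 addition is not associative, so without -ffast-math
        // the compiler may not regroup these sums:
        double left  = (0.1 + 0.2) + 0.3;  // 0.60000000000000009
        double right = 0.1 + (0.2 + 0.3);  // 0.59999999999999998
        std::printf("left  = %.17g\nright = %.17g\n", left, right);
        return 0;
    }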
Try it and look at the assembly.
Optimization should not affect output, only speed; rounding should be maintained. But bugs can occur, although much more rarely nowadays.
Issues are generally more likely with floating point: 2/7 with floats might vary slightly between platforms.
With ints it should always be 0, no matter what the optimization level, even if the result is then multiplied by 7.
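A minimal illustration of the integer case (my sketch):

    #include <iostream>

    int main() {
        int x = 2 / 7;               // integer division truncates: 0
        std::cout << x * 7 << '\n';  // prints 0 at every optimization
                                     // level, never 2
        return 0;
    }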

C++ handling of excess precision

I'm currently looking at code which does multi-precision floating-point arithmetic. To work correctly, that code requires values to be reduced to their final precision at well-defined points. So even if an intermediate result was computed to an 80 bit extended precision floating point register, at some point it has to be rounded to 64 bit double for subsequent operations.
The code uses a macro INEXACT to describe this requirement, but doesn't have a perfect definition. The gcc manual mentions -fexcess-precision=standard as a way to force well-defined precision for cast and assignment operations. However, it also writes:
‘-fexcess-precision=standard’ is not implemented for languages other than C
Now I'm thinking about porting those ideas to C++ (comments welcome if anyone knows an existing implementation). So it seems I can't use that switch for C++. But what is the g++ default behavior in absence of any switch? Are there more C++-like ways to control the handling of excess precision?
I guess that for my current use case, I'll probably use -mfpmath=sse in any case, which should not incur any excess precision as far as I know. But I'm still curious.
Are there more C++-like ways to control the handling of excess precision?
The C99 standard defines FLT_EVAL_METHOD, a compiler-set macro that describes how excess precision may be used in a C program (many C compilers still behave in a way that does not exactly conform to the most reasonable interpretation of the value of FLT_EVAL_METHOD that they define: older GCC versions generating 387 code, Clang when generating 387 code, …). Subtle points relating to the effects of FLT_EVAL_METHOD were clarified in the C11 standard.
Since the 2011 standard, C++ defers to C99 for the definition of FLT_EVAL_METHOD (header cfloat).
So GCC should simply allow -fexcess-precision=standard for C++, and hopefully it eventually will. The same semantics as that of C are already in the C++ standard, they only need to be implemented in C++ compilers.
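In the meantime, you can at least check what evaluation method your compiler claims to use (an illustrative snippet of mine; valid from C++11 on):

    #include <cfloat>
    #include <cstdio>

    int main() {
        //  0: operations evaluated in the range and precision of the type
        //  1: float and double operations evaluated in double
        //  2: all operations evaluated in long double (typical of 387 code)
        // -1: indeterminate
        std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);
        return 0;
    }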
I guess that for my current use case, I'll probably use -mfpmath=sse in any case, which should not incur any excess precision as far as I know.
That is the usual solution.
Be aware that C99 also defines FP_CONTRACT in math.h, which you may want to look at: it relates to the same problem of some expressions being computed at a higher precision, but striking from a completely different side (the modern fused multiply-add instruction instead of the old 387 instruction set). This pragma decides whether the compiler is allowed to replace source-level additions and multiplications with FMA instructions; the effect is that the multiplication is virtually computed at infinite precision, because this is how the instruction works, instead of being rounded to the precision of the type as it would be with separate multiplication and addition instructions. This pragma has apparently not been incorporated into the C++ standard (as far as I can see).
The default value for this option is implementation-defined and some people argue for the default to be to allow FMA instructions to be generated (for C compilers that otherwise define FLT_EVAL_METHOD as 0).
You should, in C, future-proof your code with:
    #include <math.h>
    #pragma STDC FP_CONTRACT OFF
And the equivalent incantation in C++ if your compiler documents one.
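To see what contraction changes (a demonstrative sketch of mine, using the standard std::fma), compare a separately rounded multiply-add with an explicitly fused one:

    #include <cmath>
    #include <cstdio>

    int main() {
        double eps = std::ldexp(1.0, -27);      // 2^-27
        double a = 1.0 + eps, b = 1.0 - eps;    // a*b = 1 - 2^-54 exactly
        // With contraction disabled (e.g. -ffp-contract=off), the separate
        // multiply-then-add rounds a*b to 1.0 first and prints 0; the fma
        // call keeps the exact product and prints about -5.55e-17.
        double separate = a * b - 1.0;
        double fused    = std::fma(a, b, -1.0);
        std::printf("%g\n%g\n", separate, fused);
        return 0;
    }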
what is the g++ default behavior in absence of any switch?
I am afraid that the answer to this question is that GCC's behavior, say, when generating 387 code, is nonsensical. See the description of the situation that motivated Joseph Myers to fix it for C. If g++ does not implement -fexcess-precision=standard, it probably means that 80-bit computations are randomly rounded to the precision of the type whenever the compiler happens to spill some floating-point registers to memory, leading the program below to print "foo" in some circumstances outside the programmer's control:
    if (x == 0.0) return;
    ... // code that does not modify x
    if (x == 0.0) printf("foo\n");
… because the code in the ellipsis caused x, which was held in an 80-bit floating-point register, to be spilled to a 64-bit slot on the stack.
But what is the g++ default behavior in absence of any switch?
I found one answer myself via an experiment, using the following code:
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char** argv) {
        double a = atof("1.2345678");
        double b = a * a;
        printf("%.20e\n", b - 1.52415765279683990130);
        return 0;
    }
If b is rounded (-fexcess-precision=standard), then the result is zero. Otherwise (-fexcess-precision=fast) it is something like 8e-17. Compiling with -mfpmath=387 -O3, I could reproduce both cases for gcc-4.8.2. For g++-4.8.2 I get an error for -fexcess-precision=standard if I try that, and without a flag I get the same behavior as -fexcess-precision=fast gives for C. Adding -std=c++11 does not help. So now the suspicion already voiced by Pascal is official: g++ does not necessarily round everywhere it should.