While answering a question where I suggested -ffast-math, a comment pointed out that it is dangerous.
My personal feeling is that outside scientific calculations, it is OK. I also assume that serious financial applications use fixed point instead of floating point.
Of course, if you want to use it in your project, the ultimate answer is to test it on your project and see how much it affects it. But I think a general answer can be given by people who have tried such optimizations and have experience with them:
Can ffast-math be used safely on a normal project?
Given that IEEE 754 floating point has rounding errors, the assumption is that you are already living with inexact calculations.
This answer was particularly illuminating on the fact that -ffast-math does much more than reorder operations in ways that produce slightly different results (it does not check for NaN or zero, and it disables signed zero, just to name a few), but I fail to see what the effects of these would ultimately be in real code.
I tried to think of typical uses of floating points, and this is what I came up with:
GUI (2D, 3D, physics engine, animations)
automation (e.g. car electronics)
robotics
industrial measurements (e.g. voltage)
and school projects, but those don't really matter here.
One of the especially dangerous things it does is imply -ffinite-math-only, which allows explicit NaN tests to pretend that no NaNs ever exist. That's bad news for any code that explicitly handles NaNs. It would try to test for NaN, but the test will lie through its teeth and claim that nothing is ever NaN, even when it is.
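To make the failure mode concrete, here is a minimal hedged sketch (the function and names are mine, not from the question); with -ffast-math, which implies -ffinite-math-only, GCC and Clang are allowed to assume the argument is never NaN and may fold the test away entirely:
#include <cmath>
#include <cstdio>

// With -ffast-math the compiler may treat std::isnan(x) as always false,
// so this guard can silently disappear from the generated code.
double safe_reciprocal(double x)
{
    if (std::isnan(x)) {                       // may be optimized to if (false)
        std::fprintf(stderr, "unexpected NaN input\n");
        return 0.0;
    }
    return 1.0 / x;
}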
This can have really obvious results, such as letting NaN bubble up to the user when previously they would have been filtered out at some point. That's bad of course, but probably you'll notice and fix it.
A more insidious problem arises when NaN checks were there for error checking, for something that really isn't supposed to ever be NaN. But perhaps through some bug, bad data, or through other effects of -ffast-math, it becomes NaN anyway. And now you're not checking for it, because by assumption nothing is ever NaN, so isnan is a synonym of false. Things will go wrong, spuriously and long after you've already shipped your software, and you will get an "impossible" error report - you did check for NaN, it's right there in the code, it cannot be failing! But it is, because someone someday added -ffast-math to the flags, maybe you even did it yourself, not knowing fully what it would do or having forgotten that you used a NaN check.
So then we might ask, is that normal? That's getting quite subjective, but I would not say that checking for NaN is especially abnormal. Going fully circular and asserting that it isn't normal because -ffast-math breaks it is probably a bad idea.
It does a lot of other scary things as well, as detailed in other answers.
I wouldn't go so far as to recommend avoiding this option, but I can recall one instance where unexpected floating-point behavior struck back.
The code contained an innocent-looking construct like this:
float X, XMin, Y;
if (X < XMin)
{
    Y = 1 / (XMin - X);
}
This sometimes raised a division-by-zero error, because when the comparison was carried out the full 80-bit representation (Intel x87 FPU) was used, while later, when the subtraction was performed, the values were truncated to their 32-bit representation and could be equal.
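One common workaround (a hedged sketch reusing the variables from the snippet above, not the author's actual fix) is to compute the difference once, store it in a variable of the declared type so that excess precision is dropped, and test the stored value that will actually be divided. Note that the store only discards excess precision if the compiler honors C99 assignment semantics (e.g. GCC with -fexcess-precision=standard or -ffloat-store):
float diff = XMin - X;   /* assignment should round the result to 32-bit float */
if (diff > 0.0f)         /* test the same value that will be divided            */
{
    Y = 1 / diff;        /* diff cannot be zero here                            */
}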
The short answer: No, you cannot safely use -ffast-math except on code designed to be used with it. There are all sorts of important constructs for which it generates completely wrong results. In particular, for arbitrarily large x, there are expressions with correct value x but which will evaluate to 0 with -ffast-math, or vice versa.
As a more relaxed rule, if you're certain the code you're compiling was written by someone who doesn't actually understand floating-point math, using -ffast-math probably won't make the results any more wrong (vs. the programmer's intent) than they already were. Such a programmer will not be performing intentional rounding or other operations that break badly, probably won't be using NaNs and infinities, etc. The most likely negative consequence is having computations that already had precision problems blow up and get worse. I would argue that this kind of code is already bad enough that you should not be using it in production to begin with, with or without -ffast-math.
From personal experience, I've had enough spurious bug reports from users trying to use -ffast-math (or even who have it buried in their default CFLAGS, ugh!) that I'm strongly leaning towards putting the following fragment in any code with floating-point math:
#ifdef __FAST_MATH__
#error "-ffast-math is broken, don't use it"
#endif
If you still want to use -ffast-math in production, you need to actually spend the effort (lots of code review hours) to determine if it's safe. Before doing that, you probably want to first measure whether there's any benefit that would be worth spending those hours, and the answer is likely no.
Update several years later: As it turns out, -ffast-math gives GCC license to make transformations that effectively introduce undefined behavior into your program, causing miscompilation with arbitrarily large fallout. See for example PR 93806 and related bugs. So really, no, it's never safe to use.
Given that IEEE 754 floating point has rounding errors, the assumption is that you are already living with inexact calculations.
The question you should answer is not whether the program expects inexact computations (it had better expect them, or it will break with or without -ffast-math), but whether the program expects approximations to be exactly those predicted by IEEE 754, and special values that behave exactly as predicted by IEEE 754 as well; or whether the program is designed to work fine with the weaker hypothesis that each operation introduces a small unpredictable relative error.
Many algorithms do not make use of special values (infinities, NaN) and are designed to work well in a computation model in which each operation introduces a small nondeterministic relative error. These algorithms work well with -ffast-math, because they do not use the hypothesis that the error of each operation is exactly the error predicted by IEEE 754. The algorithms also work fine when the rounding mode is other than the default round-to-nearest: the error in the end may be larger (or smaller), but an FPU in round-upwards mode also implements the computation model that these algorithms expect, so they work more or less identically well in these conditions.
Other algorithms (for instance Kahan summation, “double-double” libraries in which numbers are represented as the sum of two doubles) expect the rules to be respected to the letter, because they contain smart shortcuts based on subtle behaviors of IEEE 754 arithmetic. You can recognize these algorithms by the fact that they do not work when the rounding mode is other than expected either. I once asked a question about designing double-double operations that would work in all rounding modes (for library functions that may be pre-empted without a chance to restore the rounding mode): it is extra work, and these adapted implementations still do not work with -ffast-math.
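For illustration, here is a minimal Kahan summation sketch (my own, not taken from the answer) of the kind of code that relies on IEEE 754 rules to the letter; under -ffast-math the compiler may simplify (t - sum) - y to 0 and destroy the compensation:
#include <cstddef>

// Compensated (Kahan) summation: c accumulates the low-order bits lost by
// each addition. Algebraically c is always zero; numerically it is not,
// which is exactly the property that -ffast-math is allowed to ignore.
double kahan_sum(const double* x, std::size_t n)
{
    double sum = 0.0, c = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double y = x[i] - c;
        double t = sum + y;
        c = (t - sum) - y;   // lost low-order part of y
        sum = t;
    }
    return sum;
}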
Yes, you can use -ffast-math on normal projects, for an appropriate definition of "normal projects." That includes probably 95% of all programs written.
But then again, 95% of all programs written would not benefit much from -ffast-math either, because they don't do enough floating point math for it to be important.
Yes, they can be used safely, provided that you know what you are doing. This implies that you understand that they represent magnitudes, not exact values. This means:
You always do a sanity check on any external fp input (see the sketch after this list).
You never divide by 0.
You never check for equality, unless you know it is an integer with an absolute value below the max value of the mantissa.
etc.
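As a minimal sketch of the first rule (the names and the choice of a text input are my own assumptions, not part of the answer), external floating-point input can be validated before use:
#include <cmath>
#include <cstdlib>
#include <optional>
#include <string>

// Parse a double from untrusted text and reject anything that is not a
// finite number (NaN, infinities, trailing junk).
std::optional<double> parse_checked(const std::string& s)
{
    char* end = nullptr;
    double v = std::strtod(s.c_str(), &end);
    if (end == s.c_str() || *end != '\0') return std::nullopt;  // not a number
    if (!std::isfinite(v))                return std::nullopt;  // NaN or inf
    return v;
}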
In fact, I would argue the converse. Unless you are working in very specific applications where NaNs and denormals have meaning, or if you really need that tiny incremental bit of reproducibility, then -ffast-math should be on by default. That way, your unit tests have a better chance of flushing out errors. Basically, whenever you think fp calculations have either reproducibility or precision, even under IEEE, you are wrong.
Related
I'm trying to find a good use for changing rounding mode in IEEE-754 floating-point environment.
I'm looking mostly from perspective of C and C++ code. There, it can be done using
int fesetround(int round)
function from <fenv.h> (or <cfenv> in C++).
The best use I could find is setting the environment to FE_TONEAREST, if for any godforsaken reason it wasn't the default.
In this question someone suggested using it to get some implementation-dependent behavior of string formatting. I personally find this answer completely wrong, and I believe changing rounding modes around formatting functions can only result in unexpected, or simply wrong behavior.
Another thing that is changed by the rounding mode is the behavior of the functions nearbyint and rint. But why would you ever want to use them instead of dispatching to the floor, ceil, and trunc functions?
That leaves us with the only useful part being local difference in floating point arithmetic. Where could that be useful? I was trying to find some niche use for that, but so far I couldn't find any.
The rounding modes have little to do with the floor and ceil functions (the latter round to integers). I don't know about a trunc function.
Changing the rounding rule might be useful in some sophisticated cases where theoretical study shows that a particular rule has advantages over others or gives some desirable guarantee.
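One practical use that does come up (a hedged sketch of my own, anticipating a suggestion that appears in a later answer on this page) is re-running a computation under the four rounding modes to get a rough sense of its sensitivity to rounding:
#include <cfenv>
#include <cstdio>

#pragma STDC FENV_ACCESS ON   // standard C pragma; not every compiler honors it

// Toy computation under test: naive summation of 0.1 a million times.
static double compute()
{
    double s = 0.0;
    for (int i = 0; i < 1000000; ++i) s += 0.1;
    return s;
}

int main()
{
    // These macros are only defined where the target supports the mode.
    const int modes[] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO };
    for (int mode : modes) {
        std::fesetround(mode);
        std::printf("result: %.17g\n", compute());
    }
    std::fesetround(FE_TONEAREST);   // restore the default
    return 0;
}
If the printed results differ only in the last few digits, the computation is probably not very sensitive to rounding; if they differ wildly, you should start worrying (the same heuristic a later answer on this page recommends).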
I want to do some calculations in C++ using several scientific constants like:
effective mass of electron (m): 9.109e-31 kg
charge of electron: 1.602e-19 C
Boltzmann constant (k): 1.38e-23
Time: 8.92e-13
And I have calculations like sqrt((2kT)/m).
Is it safe to use double for these constants and for results?
Floating-point arithmetic and accuracy is a very tricky subject. You should absolutely read the floating-point-gui.de site.
Errors of many floating-point operations can accumulate to the point of giving meaningless results. Several catastrophic events (loss of life, billion-dollar crashes) have happened because of this. More will happen in the future.
There are some static source analyzers dedicated to detecting such problems, for example Fluctuat (by my CEA colleagues, several now at Ecole Polytechnique, Palaiseau, France) and others. But Rice's theorem applies, so the static analysis problem is unsolvable in general.
(Static analysis of floating-point accuracy can sometimes work in practice on small programs of a few thousand lines, but it does not scale well to large programs.)
There are also some programs instrumenting calculations, for example CADNA from LIP6 in Paris, France.
(but instrumentation may give a huge over-approximation of the error)
You could design your numerical algorithms to be less sensitive to floating point errors. This is very difficult (and you'll need years of work to acquire the relevant skills and expertise).
(you need numerical, mathematical, and computer science skills, at PhD level)
You could also use arbitrary-precision arithmetic, or extended-precision arithmetic (e.g. 128-bit floats or quad precision). This slows down the computations.
An important consideration is how much effort (time and money) you can allocate to hunting floating-point errors, and how much they matter to your particular problem. But there is No Silver Bullet, and the question of floating-point accuracy remains a very difficult issue (you could work your entire life on it).
PS. I am not a floating point expert. I just happen to know some.
With the particular example you gave (constants and calculations): YES
You didn't define 'safe' in your problem. I will assume that you want to keep the same number of correct significant digits.
doubles are correct to 15 significant digits
you have constants that have 4 significant digits
the operations involved are multiplication, division, and one square root
it doesn't seem that your results are going to the 'edge' cases of doubles (very small or very large exponent values, where the mantissa loses precision)
In this particular case, the result will be correct to 4 significant digits.
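To make that concrete, here is a small sketch of the calculation from the question, done in double and again in long double for comparison (the constant values are the ones given in the question; "T" is the asker's "Time" value):
#include <cmath>
#include <cstdio>

int main()
{
    const double m = 9.109e-31;   // effective electron mass [kg]
    const double k = 1.38e-23;    // Boltzmann constant
    const double T = 8.92e-13;    // "Time" value from the question

    double      r  = std::sqrt((2.0 * k * T) / m);
    long double rl = std::sqrt((2.0L * k * T) / m);

    // The two results agree to far more than the 4 significant digits
    // of the inputs.
    std::printf("double:      %.17g\n", r);
    std::printf("long double: %.17Lg\n", rl);
    return 0;
}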
In the general case, however, probably not (and this depends on your definition of 'safe', of course); you have to be careful.
This is a large and complicated subject. In particular, your result might not be correct to the same number of significant digits if you have:
a lot more operations,
subtractions of numbers close to each other,
other problematic operations
Obligatory reading: What Every Computer Scientist Should Know About Floating-Point Arithmetic
See the good answer by Basile Starynkevitch for other references.
Also, for complex calculations, it is relevant to have some notion of the condition number of a problem.
If you need a yes or no answer, No.
First, I'll just say that I know floating point calculations are approximate - subject to rounding errors - so normally you can't test for exact equality and expect it to work. However, floating point calculations are still deterministic - run the same code with the same inputs and you should get the same result. [see edit at end] I've just had a surprise where that didn't work out.
I'm writing a utility to extract some information from Photoshop PSD files. Paths containing cubic Bezier curves are part of that, and I needed to compute axis-aligned bounding boxes for Bezier curves. For the theory, I found A Primer on Bezier Curves via another SO question.
Quick summary of method...
Determine the derivative of the cubic bezier - a quadratic bezier.
Use the quadratic formula to solve the quadratic for each dimension, giving the parameter values for the stopping points (including maxima and minima) of the curve.
Evaluate the cubic bezier positions at those parameters (and the start and end points), expanding the bounding box.
Because I want the actual on-the-boundary points as well as the bounding box, make a second pass, computing positions and rejecting those points that aren't on the final bounding box.
The fourth step was the problem. Expanding the bounding box shouldn't change floating-point values - it just selects largest/smallest values and stores them. I recompute the same points using the same curve control points and the same parameter, but comparing for exact equality with the bounding box failed where it should pass.
Adding some debug output made it work. Removing the debug code but compiling for debug mode made it work. No possibility of memory corruption.
I figured that the stored value (in the form of the bounding box) was somehow lower precision than the newly recomputed position, and that seems to be correct. When I add debug code or compile in debug mode, some inlining doesn't happen, less optimization happens, and as a result the newly computed values get the same precision-loss that the stored bounding box boundary has.
In the end, I resolved the problem by using a volatile variable. The newly recomputed value gets written into the volatile variable, then read back out, thus forcing the compiler to compare a precision-reduced stored version with another precision-reduced stored version. That seems to work.
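For clarity, this is roughly what the trick looks like (a hedged sketch with hypothetical names, relying on the practical effect of volatile rather than on any guarantee in the standard):
// Writing through a volatile forces the value out of an 80-bit x87 register
// into a 64-bit memory slot, discarding excess precision, so both sides of a
// comparison have been rounded the same way. In practice, not by guarantee.
static double round_to_double(double x)
{
    volatile double v = x;   // store: excess precision is dropped here
    return v;                // load back the rounded value
}

// hypothetical usage:
//   if (round_to_double(point_x) == bounds_max_x) { /* point is on the edge */ }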
However, I really have no idea whether that should work - it was just an idea, and I know how sensitive things like that can be to compiler writers' interpretation of technicalities in the standard. I don't know precisely what relevant guarantees the standard makes, and I don't know if there's a conventional solution to this issue. I'm not even sure what inspired me to try volatile, which IIRC I've not used since I was working in pure C on embedded systems around 1996-ish.
Of course I could compute the set of position vectors once and store them ready for both the bounding rectangle and the filtering, but I'm curious about this specific issue. As you may guess, I don't do much floating point work, so I'm not that familiar with the issues.
So - is this use of volatile correct? Is it, as I suspect, unnecessarily inefficient? Is there an idiomatic way to force extra precision (beyond the limits of the type) to be discarded?
Also, is there a reason why comparing for exact equality doesn't force the precision of the type first? Is comparing for exact equality with the extra precision (presumably unreliably) preserved ever a useful thing to do?
[EDIT - following Hans Passant's first comment, I've given some more thought to what I said about inlining above. Clearly, two different calls to the same code can be optimized differently due to different inlining decisions. That's not just at one level - it can happen at any depth of inlining functions into a piece of code. That means the precision reduction could happen pretty much anywhere, which really means that even when the same source code is used with the same inputs it can give different results.
The FPU is deterministic, so presumably any particular target code is deterministic, but two different calls to the same function may not use the same target code for that function. Now I'm feeling a bit of an idiot, not seeing the implications of what I already figured out - oh well. If there's no better answer in the next few days, I'll add one myself.]
Adding some debug output made it work. Removing the debug code but compiling for debug mode made it work. No possibility of memory corruption.
You seem to be having the sort of trouble described at length by David Monniaux in the article “The pitfalls of verifying floating-point computations”:
The above examples indicate that common debugging practices that apparently should not change the computational semantics may actually alter the result of computations. Adding a logging statement in the middle of a computation may alter the scheduling of registers, […]
Note that most of his reproaches are for C compilers and that the situation—for C compilers—has improved since his article was published: although the article was written after C99, some of the liberties taken by C compilers are clearly disallowed by the C99 standard or are not clearly allowed or are allowed only within a clearly defined framework (look for occurrences of FLT_EVAL_METHOD and FP_CONTRACT in the C99 standard for details).
One way in which the situation has improved for C compilers is that GCC now implements clear, deterministic, standard-compliant semantics for floating-point even when extra precision is present. The relevant option is -fexcess-precision=standard, set by -std=c99.
WRT the question - "How to discard unwanted extra precision from floating point computations?" - the simple answer is "don't". Discarding the precision at one point isn't sufficient as extra precision can be lost or retained at any point in the computation, so apparent reproducibility is a fluke.
Although G++ uses the same back-end as GCC, and although the C++ standard defers to the C standard for the definition of math.h (which provides FLT_EVAL_METHOD), G++ unfortunately does not support -fexcess-precision=standard.
One solution is for you to move all the floating-point computations to C files that you would compile with -fexcess-precision=standard and link to the rest of the application. Another solution is to use -msse2 -mfpmath=sse instead to cause the compiler to emit SSE2 instructions that do not have the “excess precision” problem. The latter two options can be implied by -m64, since SSE2 predates the x86-64 instruction set and the “AMD64 ABI” already uses SSE2 registers for passing floating-point arguments.
In your case, as with most cases of floating-point comparison, testing for equality should be done with some tolerance (sometimes called epsilon). If you want to know if a point is on a line, you need to check not only if the distance between them is zero, but also if it is less than some tolerance. This will overcome the "excess precision" problem you're having; you still need to watch out for accumulated errors in complex calculations (which may require a larger tolerance, or careful algorithm design specific to FP math).
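A minimal sketch of such a tolerance-based comparison (the mix of absolute and relative tolerance, and the epsilon values, are assumptions; the right tolerance depends on the scale of your data):
#include <algorithm>
#include <cmath>

// True if a and b are equal to within an absolute or relative tolerance.
bool nearly_equal(double a, double b,
                  double rel_eps = 1e-9, double abs_eps = 1e-12)
{
    const double diff = std::fabs(a - b);
    return diff <= abs_eps ||
           diff <= rel_eps * std::max(std::fabs(a), std::fabs(b));
}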
There may be platform- and compiler-specific options to eliminate excess FP precision, but reliance on these is not very common, not portable, and perhaps not as efficient as other solutions.
The majority of scientific computing problems that we need to solve by implementing a particular algorithm in C/C++ demand accuracy that is much lower than double precision. For example, 1e-6 or 1e-7 accuracy covers 99% of the cases for ODE solvers or numerical integration. Even in the rare cases when we do need higher accuracy, usually the numerical method itself fails before we can dream of reaching an accuracy that is near double precision. Example: we can't expect 1e-16 accuracy from a simple Runge–Kutta method even when solving a standard nonstiff ordinary differential equation, because of roundoff errors. In this case, the double-precision requirement is analogous to asking for a better approximation of the wrong answer.
Aggressive floating-point optimizations therefore seem to be a win-win situation in most cases, because they make your code faster (a lot faster!) and they do not affect the target accuracy of your particular problem. That said, it seems remarkably difficult to make sure that a particular implementation/code is stable against fp optimizations. A classical (and somewhat disturbing) example: GSL, the GNU Scientific Library, is not only the standard numerical library in the market but also a very well written library (I can't imagine myself doing a better job). However, GSL is not stable against fp optimizations. In fact, if you compile GSL with the Intel compiler, for example, then its internal tests will fail unless you turn on the -fp-model strict flag, which turns off fp optimizations.
Thus, my question is: are there general guidelines for writing code that is stable against aggressive floating-point optimizations? Are these guidelines language (compiler) specific? If so, what are the C/C++ (gcc/icc) best practices?
Note 1: This question is not asking what are the fp optimizations flags in gcc/icc.
Note 2: This question is not asking about general guidelines for C/C++ optimization (like don't use virtual functions for small functions that are called a lot).
Note 3: This question is not asking the list of most standard fp optimizations (like x/x -> 1).
Note 4: I strongly believe this is NOT a subjective/off-topic question similar to the classical "The Coolest Server Names". If you disagree (because I am not providing a concrete example/code/problem), please flag it as community wiki. I am much more interested in the answer than in gaining a few status points (not that they are not important - you get the point!).
Compiler makers justify the -ffast-math kind of optimizations with the assertion that these optimizations' influence over numerically stable algorithms is minimal.
Therefore, if you want to write code that is robust against these optimizations, a sufficient condition is to write only numerically stable code.
Now your question may be, “How do I write numerically stable code?”. This is where your question may be a bit broad: there are entire books dedicated to the subject. The Wikipedia page I already linked to has a good example, and here is another good one. I could not recommend a book in particular, this is not my area of expertise.
Note 1: Numerical stability's desirability goes beyond compiler optimization. If you have choice, write numerically stable code even if you do not plan to use -ffast-math-style optimizations. Numerically unstable code may provide wrong results even when compiled with strict IEEE 754 floating-point semantics.
Note 2: you cannot expect external libraries to work when compiled with -ffast-math-style flags. These libraries, written by floating-point experts, may need to play subtle tricks with the properties of IEEE 754 computations. This kind of trick may be broken by -ffast-math optimizations, but such tricks improve performance more than you could expect the compiler to, even if you let it. For floating-point computations, an expert with domain knowledge beats the compiler every time. One example amongst many is the triple-double implementation found in CRlibm. This code breaks if it is not compiled with strict IEEE 754 semantics. Another, more elementary algorithm that compiler optimizations break is Kahan summation: when compiled with unsafe optimizations, c = (t - sum) - y is optimized to c = 0. This, of course, defeats the purpose of the algorithm completely.
I'm working on an application that does a lot of floating-point calculations. We use VC++ on Intel x86 with double precision floating-point values. We make claims that our calculations are accurate to n decimal digits (right now 7, but trying to claim 15).
We go to a lot of effort to validate our results against other sources when our results change slightly (due to code refactoring, cleanup, etc.). I know that many, many factors play into the overall precision, such as the FPU control state, the compiler/optimizer, the floating-point model, and the overall order of operations themselves (i.e., the algorithm itself), but given the inherent uncertainty in FP calculations (e.g., 0.1 cannot be represented), it seems invalid to claim any specific degree of precision for all calculations.
My question is this: is it valid to make any claims about the accuracy of FP calculations in general without doing any sort of analysis (such as interval analysis)? If so, what claims can be made and why?
EDIT:
So given that the input data is accurate to, say, n decimal places, can any guarantee be made about the result of any arbitrary calculations, given that double precision is being used? E.g., if the input data has 8 significant decimal digits, the output will have at least 5 significant decimal digits... ?
We are using math libraries and are unaware of any guarantees they may or may not make. The algorithms we use are not necessarily analyzed for precision in any way. But even given a specific algorithm, the implementation will affect the results (just changing the order of two addition operations, for example). Is there any inherent guarantee whatsoever when using, say, double precision?
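As a tiny illustration of how much the order of two additions can matter (my own example, not from the original question):
#include <cstdio>

int main()
{
    double a = 1e20, b = -1e20, c = 1.0;
    // Reassociating the additions changes the result: prints 1 and then 0.
    std::printf("%g\n", (a + b) + c);
    std::printf("%g\n", a + (b + c));
    return 0;
}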
ANOTHER EDIT:
We do empirically validate our results against other sources. So are we just getting lucky when we achieve, say, 10-digit accuracy?
As with all such questions, I have to just simply answer with the article What Every Computer Scientist Should Know About Floating-Point Arithmetic. It's absolutely indispensable for the type of work you are talking about.
Short answer: No.
Reason: Have you proved (yes proved) that you aren't losing any precision as you go along? Are you sure? Do you understand the intrinsic precision of any library functions you're using for transcendental functions? Have you computed the limits of additive errors? If you are using an iterative algorithm, do you know how well it has converged when you quit? This stuff is hard.
Unless your code uses only the basic operations specified in IEEE 754 (+, -, *, / and square root), you do not even know how much precision loss each call to library functions outside your control (trigonometric functions, exp/log, ...) introduces. Functions outside the basic 5 are not guaranteed to be, and usually are not, accurate to 1 ULP.
You can do empirical checks, but that's what they remain... empirical. Don't forget the part about there being no warranty in the EULA of your software!
If your software was safety-critical, and did not call library-implemented mathematical functions, you could consider http://www-list.cea.fr/labos/gb/LSL/fluctuat/index.html . But only critical software is worth the effort and has a chance to fit in the analysis constraints of this tool.
You seem, after your edit, mostly concerned about your compiler doing things behind your back. That is a natural fear to have (because, as with the mathematical functions, you are not in control). But it's rather unlikely to be the problem. Your compiler may compute with a higher precision than you asked for (80-bit extended precision when you asked for 64-bit doubles, or 64-bit doubles when you asked for 32-bit floats). This is allowed by the C99 standard. In round-to-nearest, this may introduce double-rounding errors. But it's only 1 ULP you are losing, and so infrequently that you needn't worry. This can cause puzzling behaviors, as in:
float x = 1.0;
float y = 7.0;
float z = x / y;
if (z == x / y)
    ...
else
    ...   /* the else branch is taken */
but you were looking for trouble when you used == between floating-point numbers.
When you have code that does cancelations on purpose, such as in Kahan's summation algorithm:
d = (a+b)-a-b;
and the compiler optimizes that into d = 0;, you have a problem. And yes, this optimization "as if float operations were associative" has been seen in general compilers. It is not allowed by C99. But the situation has gotten better, I think. Compiler authors have become more aware of the dangers of floating point and no longer try to optimize so aggressively. Plus, if you were doing this in your code, you would not be asking this question.
Given that your vendors of machines, compilers, run-time libraries, and operating systems don't make any such claim about floating-point accuracy, you should take that as a warning that your group should be leery of making claims that could come under harsh scrutiny if clients ever took you to court.
Without doing formal verification of the entire system, I would avoid such claims. I work on scientific software that has indirect human safety implications, so we have considered such things in the past, and we do not make these sorts of claims.
You could make generic claims about the precision of double-length floating-point calculations, but they would be basically worthless.
Ref: David Monniaux, "The pitfalls of verifying floating-point computations", ACM Transactions on Programming Languages and Systems 30(3), article 12, 2008.
No, you cannot make any such claim. If you wanted to do so, you would need to do the following:
Hire an expert in numerical computing to analyze your algorithms.
Either get your library and compiler vendors to open their sources to said expert for analysis, or get them to sign off on hard semantics and error bounds.
Double-precision floating-point typically carries about 15 digits of decimal accuracy, but there are far too many ways for some or all of that accuracy to be lost, that are far too subtle for a non-expert to diagnose, to make any claim like what you would like to claim.
There are relatively simpler ways to keep running error bounds that would let you make accuracy claims about any specific computation, but making claims about the accuracy of all computations performed with your software is a much taller order.
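For the simplest case, recursive summation, a running error bound can be carried along with the sum itself; the sketch below is my own first-order version of that idea (u is the unit roundoff for double), not a substitute for a proper analysis:
#include <cfloat>
#include <cmath>
#include <cstddef>

struct SumWithBound { double sum; double bound; };

// Naive summation plus a first-order running bound on the rounding error:
// each addition errs by at most u * |computed result|.
SumWithBound sum_with_running_bound(const double* x, std::size_t n)
{
    const double u = DBL_EPSILON / 2;   // unit roundoff
    double s = 0.0, e = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        s += x[i];
        e += u * std::fabs(s);
    }
    return { s, e };
}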
A double precision number on an Intel CPU has slightly better than 15 significant digits (decimal).
The potential error for a simple computation is in the ballpark of n/1.0e15, where n is the order of magnitude of the number(s) you are working with. I suspect that Intel has specs for the accuracy of CPU-based FP computations.
The potential error for library functions (like cos and log) is usually documented. If not, you can look at the source code (e.g. the GNU source) and calculate it.
You would calculate error bars for your calculations just as you would for manual calculations.
Once you do that, you may be able to reduce the error by judicious ordering of the computations.
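As one example of such ordering (a hedged sketch of a common heuristic, not a universal rule), summing values from smallest to largest magnitude typically loses less precision than summing them in arbitrary order:
#include <algorithm>
#include <cmath>
#include <vector>

double sum_small_to_large(std::vector<double> v)
{
    // Add the small values first so they are not swallowed by a large partial sum.
    std::sort(v.begin(), v.end(),
              [](double a, double b) { return std::fabs(a) < std::fabs(b); });
    double s = 0.0;
    for (double x : v) s += x;
    return s;
}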
Since you seem to be concerned about accuracy of arbitrary calculations, here is an approach you can try: run your code with different rounding modes for floating-point calculations. If the results are pretty close to each other, you are probably okay. If the results are not close, you need to start worrying.
The maximum difference in the results will give you a rough lower bound on the rounding error in the calculations.