C++ Floating point precision error compiler flag

In C++, is there a compiler flag or an option somewhere that makes two floats evaluate as equal if they are within the error of floating-point arithmetic?
It's annoying having to track down floating-point errors.
For example, a long time ago, when testing something where I knew what the value should be, I even overwrote the value right before the comparison and it still failed.
This is a very simplified version of what it looked like:
double x = 3;
if (x == 3)
    printf("x is 3");
else
    printf("x is not 3");
And that went into the else case and printed "x is not 3"
There has to be a way to handle this that doesn't mean I have to add handling to each floating point comparison.

If you use GCC and glibc you can include something like
#define _GNU_SOURCE 1
#include <fenv.h>

static void __attribute__((constructor)) trapfpe ()
{
    /* Enable some exceptions. At startup all exceptions are masked. */
    feenableexcept (FE_INEXACT);
}
in your project which will abort the program (with a core dump, if you have such enabled in your environment) when it hits one of the above FP exceptions.
That being said, I don't think FE_INEXACT is particularly useful in reality. A somewhat useful combination might be FE_INVALID|FE_DIVBYZERO|FE_OVERFLOW (but that's beside the question being asked).
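For what it's worth, I am not aware of any compiler flag that changes the meaning of == for floats; if trapping FP exceptions is too heavy-handed, the usual fallback is a small comparison helper with an explicit tolerance. A minimal sketch (the name almost_equal and the tolerances are my own choices, not anything standard):
#include <algorithm>
#include <cmath>
#include <cstdio>

// Treat a and b as equal if they differ by no more than a tolerance
// scaled to the magnitude of the operands (plus an absolute floor near zero).
static bool almost_equal(double a, double b,
                         double rel_tol = 1e-9, double abs_tol = 1e-12)
{
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= std::max(rel_tol * scale, abs_tol);
}

int main()
{
    double x = 0.1 + 0.2;                       // not exactly 0.3 in binary
    std::printf("%d\n", x == 0.3);              // 0 on typical IEEE-754 systems
    std::printf("%d\n", almost_equal(x, 0.3));  // 1
    return 0;
}
Picking the tolerances is the hard part; they depend on how much error your computation can actually accumulate.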

Related

Transfer programs from one architecture to another

Let me warn you right away that this is a difficult task.
Here is a test case. It came out of boiling a large problem down to a bug we ran into at work. The construct __attribute__((noinline)) forbids the compiler from inlining the function (so that optimization cannot collapse the interesting situation). It is the easiest way to guarantee that optimization will not destroy the case we want to examine.
#include <stdio.h>

double d = 5436277361664796672.000000;
long long ll = 5436277361664796253LL;

int __attribute__((noinline))
func1 (void)
{
  double d1 = (double)ll;
  if (d > d1)
    return 1;
  else
    return 0;
}

int __attribute__((noinline))
func2 (void)
{
  if (d > (double)ll)
    return 1;
  else
    return 0;
}

int
main (void)
{
  printf ("%d %d\n", func1(), func2());
  return 0;
}
I ran this test on Intel and SPARC, using gcc both with and without optimizations, and obtained the following results:
sparc: "gcc" printed "0 0"
sparc: "gcc -O2" printed "0 0"
intel: "gcc" printed "0 1"
intel: "gcc -O2" printed "1 1"
What is the cause of the differences? To analyse the situation it would be useful to be able to reproduce all of this yourself, but of course almost nobody has the possibility to run this code on SPARC. Instead of SPARC you can try running it under Windows with the Microsoft or Borland C compiler. I do not know what results they will give, but in any case something will not match something else (because we already see three different results).
Edit 1
__attribute__((noinline)) is a gcc compiler extension (I forgot to mention that), which is why Visual Studio cannot compile it.
I note that the double constant is declared with 19 significant figures, which is more precision than an IEEE double can represent (it holds 15 to 17 significant figures). So you cannot expect a constant written to that many digits to be stored exactly.
The two constants agree in their first 16 significant digits, so you are in the region where the inaccuracies of a double are of the same magnitude as the difference between the two numbers. Hence the comparison cannot be relied upon.
I do not know if the C++ standard specifies what happens when an over-precise string is converted to a double, but I would not be surprised if the exact result was either undefined or implementation-dependent.
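To see this concretely, you can print both values; a small sketch, assuming the usual IEEE-754 64-bit double:
#include <stdio.h>

int main (void)
{
  long long ll = 5436277361664796253LL;
  /* A double has a 53-bit significand; near 5.4e18 the representable
     values are 1024 apart, and the one nearest to ll is exactly the
     constant d used in the test above. */
  printf ("%.1f\n", (double)ll);            /* 5436277361664796672.0 */
  printf ("%.1f\n", 5436277361664796672.0); /* same value */
  return 0;
}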
That seems to settle the problem. Everything written above is correct in general, but it is actually the SPARC version that behaves correctly: according to the standard, the conversion int64 -> float64 must lose precision here, whereas in the Intel code the conversion is effectively int64 -> float80, where no loss occurs. That is, the Intel code works with higher accuracy, but that is in contradiction with the standard.
Perhaps there is some convention for the Intel platform that permits working this way by default. Surely there are options that make the code run in strict accordance with the standard (at the cost of speed).
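A suggestion of my own, not from the original discussion: on x86, gcc can be told to do its floating-point arithmetic in SSE registers instead of the 80-bit x87 stack, for example
gcc -mfpmath=sse -msse2 test.c
where test.c is the program above. With SSE math every intermediate value is a genuine 64-bit double, so the Intel results should then match the SPARC ones.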

Visual Studio C++ 2008 / 2010 - break on float NaN

Is there any way to set up Visual Studio (just upgraded from 2008 to 2010) to break, as if an assertion failed, whenever any floating point number becomes NaN, QNAN, INF, etc?
Up until now I have just been using the assert(x == x) trick, but I would rather have something implicit, so that I don't have to add assertions everywhere.
Quite surprised I can't find an answer to this via google. Some stuff about 'floating point exceptions', but I'm not sure if they are the same thing, and I've tried enabling them in Visual Studio, but the program doesn't break until something catastrophic happens because of the NaN later on in execution.
1) Go to project options and enable /fp:strict (C/C++ -> Code Generation -> Floating Point Model).
2) Use _controlfp to set the floating-point control word (see code below).
#include <float.h>
#include <math.h>

// Unmask every FP exception except "inexact" before main() runs.
unsigned int fp_control_state = _controlfp(_EM_INEXACT, _MCW_EM);

int main () {
    sqrtf(-1.0f);       // raises the invalid-operation exception
    double x = 0.0;
    double y = 1.0/x;   // raises the divide-by-zero exception
    return 0;
}
Try enabling fp exceptions
At least on x86, when you generate a NaN etc., one of the FPU status register bits is set. There is a way to configure it so that a hardware exception is thrown on the next FP operation, but that is not quite as soon as you hoped for. I can't recall the reference though.
I am not sure if this is possible the way you want it, but you could create a macro which wraps the code on the marked line into an assert, or which sets a breakpoint for it.
Hope this helps

How to trace a NaN in C++

I am going to do some math calculations using C++. The input floating point number is a valid number, but after the calculations the resulting value is NaN. I would like to trace the point where the NaN value appears (possibly using GDB), instead of inserting a lot of isNan() checks into the code. But I found that even code like this will not trigger an exception when a NaN value appears:
double dirty = 0.0;
double nanvalue = 0.0/dirty;
Could anyone suggest a method for tracing the NaN or turning a NaN into an exception?
Since you mention using gdb, here's a solution that works with gcc -- you want the
functions defined in fenv.h :
#define _GNU_SOURCE
#include <fenv.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double dirty = 0.0;

    feenableexcept(FE_ALL_EXCEPT & ~FE_INEXACT); // Enable all floating point exceptions but FE_INEXACT
    double nanval = 0.0/dirty;

    printf("Succeeded! dirty=%lf, nanval=%lf\n", dirty, nanval);
}
Running the above program produces the output "Floating point exception". Without
the call to feenableexcept, the "Succeeded!" message is printed.
If you were to write a signal handler for SIGFPE, that might be a good place to
set a breakpoint and get the traceback you want. (Disclaimer: haven't tried it!)
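For the traceback idea, here is a minimal sketch of such a handler (untested, as noted above); the handler simply aborts, so gdb or the core dump shows exactly where the exception fired:
#define _GNU_SOURCE
#include <fenv.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void fpe_handler(int sig)
{
    /* Set a breakpoint on this function in gdb, or let abort() dump core. */
    fprintf(stderr, "caught SIGFPE (%d)\n", sig);
    abort();
}

int main(void)
{
    feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);
    signal(SIGFPE, fpe_handler);

    volatile double dirty = 0.0;  /* volatile so the division is not folded away */
    double nanval = 0.0/dirty;    /* 0/0 raises FE_INVALID and delivers SIGFPE */
    printf("never reached: %lf\n", nanval);
    return 0;
}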
In Visual Studio you can use the _controlfp function to set the behavior of floating-point calculations (see http://msdn.microsoft.com/en-us/library/e9b52ceh(VS.80).aspx). Maybe there is a similar variant for your platform.
Some notes on floating point programming can be found on http://ds9a.nl/fp/ including the difference between 1/0 and 1.0/0 etc, and what a NaN is and how it acts.
One can enable so-called "signaling NaN". That should make it easily possible to make the debugger find the correct position.
Via google, I found this for enabling signaling NaNs in C++, no idea if it works:
std::numeric_limits<double>::signaling_NaN();
Usefulness of signaling NaN?

strange results with /fp:fast

We have some code that looks like this:
inline int calc_something(double x) {
    if (x > 0.0) {
        // do something
        return 1;
    } else {
        // do something else
        return 0;
    }
}
Unfortunately, when using the flag /fp:fast, we get calc_something(0)==1 so we are clearly taking the wrong code path. This only happens when we use the method at multiple points in our code with different parameters, so I think there is some fishy optimization going on here from the compiler (Microsoft Visual Studio 2008, SP1).
Also, the above problem goes away when we change the interface to
inline int calc_something(const double& x) {
But I have no idea why this fixes the strange behaviour. Can anyone explain this behaviour? If I cannot understand what's going on, we will have to remove the /fp:fast switch, but this would make our application quite a bit slower.
I'm not familiar enough with FPUs to comment with any certainty, but my guess would be that the compiler is letting an existing value, which it thinks should be equal to x, sit in on that comparison. Say you compute y = x + 20.0; y = y - 20.0; since y is already on the FP stack, rather than load x the compiler just compares against y. But due to rounding errors, y isn't quite 0.0 like it is supposed to be, and you get the odd results you see.
For a better explanation: Why is cos(x) != cos(y) even though x == y? from the C++FAQ lite. This is part of what I'm trying to get across, I just couldn't remember where exactly I had read it until just now.
Changing to a const reference fixes this because the compiler is worried about aliasing. It forces a load from x because it can't assume its value hasn't changed at some point after creating y, and since x is actually exactly 0.0 [which is representable in every floating point format I'm familiar with] the rounding errors vanish.
I'm pretty sure MS provides a pragma that allows you to set the FP flags on a per-function basis. Or you could move this routine to a separate file and give that file custom flags. Either way, it could prevent your whole program from suffering just to keep that one routine happy.
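The pragma I have in mind is, I believe, float_control; something along these lines, though do verify it against the MSVC documentation for your compiler version:
#pragma float_control(precise, on, push)
// Functions defined here are compiled with precise FP semantics
// even when the rest of the project uses /fp:fast.
inline int calc_something(double x) {
    if (x > 0.0) {
        return 1;
    } else {
        return 0;
    }
}
#pragma float_control(pop)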
What are the results of calc_something(0L), or calc_something(0.0f)? It could be linked to the size of the types before casting: an int is 4 bytes, a double is 8.
Have you tried looking at the assembly code to see how the aforementioned conversion is done?
Googling for 'fp fast', I found this post [social.msdn.microsoft.com]
As I've said in another question, compilers suck at generating floating point code. The article Dennis links to explains the problems well. Here's another: an MSDN article.
If the performance of the code is important, you can easily[1] out-perform the compiler by writing your own assembler code. If your algorithm is vectorisable then you can make use of SIMD too (with a slight loss of precision though).
[1] Assuming you understand the way the FPU works.
inline int calc_something(double x) will (probably) use an 80 bits register. inline int calc_something(const double& x) would store the double in memory, where it takes 64 bits. That at least explains the difference between the two.
However, I find your test quite fishy to begin with. The results of calc_something are extremely sensitive to rounding of its input. Your FP algorithms should be robust to rounding. calc_something(1.0-(1.0/3.0)*3) should be the same as calc_something(0.0).
I think the behavior is correct.
You should never compare floating point numbers to a precision finer than the holding type can represent.
A value that results from a computation with zero may compare equal to, greater than, or less than another zero.
See http://floating-point-gui.de/

How to detect an overflow in C++?

I just wonder if there is some convenient way to detect whether overflow happens to any variable of any default data type used in a C++ program during runtime. By convenient, I mean no need to write code that checks, every time a value changes, whether each variable is still in the range of its data type. Or, if this is impossible to achieve, how would you do it?
For example,
float f1=FLT_MAX+1;
cout << f1 << endl;
doesn't give any error or warning, either when compiled with "gcc -W -Wall" or when run.
Thanks and regards!
Consider using Boost's numeric conversion library, which gives you negative_overflow and positive_overflow exceptions (see the examples in its documentation).
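A sketch of what that looks like with boost::numeric_cast (assuming Boost is available; positive_overflow and negative_overflow both derive from bad_numeric_cast):
#include <boost/numeric/conversion/cast.hpp>
#include <iostream>

int main()
{
    try {
        double big = 1e300;
        // 1e300 is far outside float's range, so the checked cast throws.
        float f = boost::numeric_cast<float>(big);
        std::cout << f << std::endl;
    } catch (const boost::numeric::positive_overflow& e) {
        std::cout << "overflow: " << e.what() << std::endl;
    }
    return 0;
}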
Your example doesn't actually overflow in the default floating-point environment on an IEEE-754-compliant system.
On such a system, where float is 32 bit binary floating point, FLT_MAX is 0x1.fffffep127 in C99 hexadecimal floating point notation. Writing it out as an integer in hex, it looks like this:
0xffffff00000000000000000000000000
Adding one (without rounding, as though the values were arbitrary precision integers), gives:
0xffffff00000000000000000000000001
But in the default floating-point environment on an IEEE-754 compliant system, any value between
0xfffffe80000000000000000000000000
and
0xffffff80000000000000000000000000
(which includes the value you have specified) is rounded to FLT_MAX. No overflow occurs.
Compounding the matter, your expression (FLT_MAX + 1) is likely to be evaluated at compile time, not runtime, since it has no side effects visible to your program.
In situations where I need to detect overflow, I use SafeInt<T>. It's a cross platform solution which throws an exception in overflow situations.
SafeInt<float> f1 = FLT_MAX;
f1 += 1; // throws
It is available on codeplex
http://www.codeplex.com/SafeInt/
Back in the old days when I was developing C++ (199x) we used a tool called Purify. Back then it was a tool that instrumented the object code and logged everything 'bad' during a test run.
I did a quick google and I'm not quite sure if it still exists.
As far as I know, several open-source tools exist nowadays that do more or less the same thing.
Check out Electric Fence and Valgrind.
Clang provides -fsanitize=signed-integer-overflow and -fsanitize=unsigned-integer-overflow.
http://clang.llvm.org/docs/UsersManual.html#controlling-code-generation
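For example, compiling a deliberately overflowing program with the sanitizer enabled (overflow.cpp is just a placeholder name) makes the overflow visible at runtime:
// clang++ -fsanitize=signed-integer-overflow overflow.cpp
#include <climits>
#include <cstdio>

int main()
{
    int i = INT_MAX;
    i += 1;                  // UBSan reports: signed integer overflow
    std::printf("%d\n", i);
    return 0;
}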