How to trace a NaN in C++

I am doing some math calculations in C++. The input floating point numbers are valid, but after the calculations the resulting value is NaN. I would like to trace the point where the NaN first appears (possibly using GDB), instead of inserting a lot of isnan() calls into the code. But I found that even code like this will not trigger an exception when a NaN value appears:
double dirty = 0.0;
double nanvalue = 0.0/dirty;
Could anyone suggest a method for tracing the NaN or turning a NaN into an exception?

Since you mention using gdb, here's a solution that works with gcc: you want the functions defined in fenv.h:
#define _GNU_SOURCE
#include <fenv.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    double dirty = 0.0;

    // Enable all floating point exceptions except FE_INEXACT
    feenableexcept(FE_ALL_EXCEPT & ~FE_INEXACT);

    double nanval = 0.0 / dirty;
    printf("Succeeded! dirty=%lf, nanval=%lf\n", dirty, nanval);
    return 0;
}
Running the above program produces the output "Floating point exception". Without
the call to feenableexcept, the "Succeeded!" message is printed.
If you were to write a signal handler for SIGFPE, that might be a good place to
set a breakpoint and get the traceback you want. (Disclaimer: haven't tried it!)
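Along those lines, here is an untested sketch (the handler name and the abort-on-catch behaviour are my assumptions, not something from the answer above):

#define _GNU_SOURCE
#include <fenv.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Breakpoint target: in gdb, "break fpe_handler" then "backtrace"
   points at the operation that raised the exception. */
static void fpe_handler(int sig)
{
    fprintf(stderr, "caught SIGFPE (%d)\n", sig);
    abort(); /* returning from a SIGFPE handler is undefined behaviour */
}

int main(void)
{
    signal(SIGFPE, fpe_handler);
    feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);

    volatile double dirty = 0.0;
    double nanval = 0.0 / dirty; /* FE_INVALID -> SIGFPE -> fpe_handler */
    printf("never reached: %lf\n", nanval);
    return 0;
}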

In Visual Studio you can use the _controlfp function to set the behavior of floating-point calculations (see http://msdn.microsoft.com/en-us/library/e9b52ceh(VS.80).aspx). Maybe there is a similar variant for your platform.

Some notes on floating point programming can be found at http://ds9a.nl/fp/, including the difference between 1/0 and 1.0/0, etc., and what a NaN is and how it acts.

One can enable so-called "signaling NaNs". That should make it easy for the debugger to find the position where the NaN first appears.
Via Google, I found this for obtaining a signaling NaN in C++ (note the required template argument), no idea if it works:
std::numeric_limits<double>::signaling_NaN();
Usefulness of signaling NaN?
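For what it's worth, a minimal sketch combining that with feenableexcept from the answer above (assuming GCC/glibc; whether the arithmetic actually traps is implementation-dependent):

#define _GNU_SOURCE
#include <fenv.h>
#include <limits>
#include <cstdio>

int main()
{
    feenableexcept(FE_INVALID); // glibc-specific: trap on invalid operations

    double snan = std::numeric_limits<double>::signaling_NaN();
    double result = snan + 1.0; // arithmetic on a signaling NaN raises FE_INVALID
    std::printf("%f\n", result); // not reached if the trap fired
    return 0;
}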

Related

C++ Floating point precision error compiler flag

In C++, is there a compiler flag or an option somewhere that makes it so that if two floats are within the error of the floating point arithmetic, they evaluate as equal?
It's annoying having to track down floating point errors.
For example, a long time ago, when testing something where I knew what the value was, I even overwrote the value right before the comparison and it still failed.
This is a very simplified version of what it looked like:
double x = 3;
if (x == 3)
    printf("x is 3");
else
    printf("x is not 3");
And that went into the else case and printed "x is not 3".
There has to be a way to handle this that doesn't mean I have to add handling to each floating point comparison.
If you use GCC and glibc you can include something like
#define _GNU_SOURCE 1
#include <fenv.h>

static void __attribute__((constructor)) trapfpe()
{
    /* Enable some exceptions. At startup all exceptions are masked. */
    feenableexcept(FE_INEXACT);
}
in your project, which will abort the program (with a core dump, if core dumps are enabled in your environment) when it hits one of the enabled FP exceptions.
That being said, I don't think FE_INEXACT is particularly useful in reality. A somewhat useful combination might be FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW (but that's beside the question being asked).
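As for the comparison itself: there is no compiler flag that makes == tolerant. The usual approach is a small helper function; here is a sketch with illustrative (not canonical) tolerances:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical helper, not a standard facility: combines a relative
// tolerance (for large magnitudes) with an absolute one (near zero).
static bool almost_equal(double a, double b,
                         double rel = 1e-9, double abs_tol = 1e-12)
{
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= std::max(abs_tol, rel * scale);
}

int main()
{
    double x = 0.1 + 0.2; // actually 0.30000000000000004...
    std::printf("%s\n", almost_equal(x, 0.3) ? "x is 0.3" : "x is not 0.3");
    return 0;
}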

C++11 round off error using pow() and std::complex

Running the following
#include <iostream>
#include <complex>

int main()
{
    std::complex<double> i(0, 1);
    std::complex<double> comp = pow(i, 2);
    std::cout << comp << std::endl;
    return 0;
}
gives me the expected result (-1,0) without C++11. However, compiling with C++11 gives the highly annoying (-1,1.22461e-016).
What to do, and what is best practice?
Of course this can be fixed manually by flooring etc., but I would appreciate knowing the proper way to address the problem.
SYSTEM: Win8.1, using Desktop Qt 5.1.1 (Qt Creator) with MinGW 4.8 32 bit. Using C++11 by adding the flag QMAKE_CXXFLAGS += -std=c++11 in the Qt Creator .pro file.
In C++11 we have a few new overloads of pow(std::complex). GCC has two nonstandard overloads on top of that, one for raising to an int and one for raising to an unsigned int.
One of the new standard overloads (namely std::complex</*Promoted*/> pow(const std::complex<T> &, const U &)) causes an ambiguity with the non-standard ones when calling pow(i, 2). GCC's solution is to #ifdef the non-standard overloads out in the presence of C++11, so you go from calling the specialized function (which uses successive squaring) to the generic method (which uses pow(double,double) and std::polar).
You need to get into a different mode of thinking when you are using floating point numbers. Floating point numbers are APPROXIMATIONS of real numbers.
1.22461e-016 is
0.0000000000000000122461
An engineer would say that IS zero. You will always get such variations (unless you stick to operations on sums of powers of 2 within the same general range).
A value as simple as 0.1 cannot be represented exactly with floating point numbers.
The general problem you present has two parts:
1. Dealing with floating point numbers in processing.
2. Displaying floating point numbers.
For the processing, I would wager that doing
comp = i * i;
would give you what you want. pow(x, y) for complex arguments is computed as something like
exp(y * log(x))
which is where the rounding error creeps in. For output, switch to using a fixed (F) format, which displays the tiny imaginary part as zero.
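A sketch of that suggestion next to the original pow call (the exact residue printed by the pow line depends on your standard library):

#include <complex>
#include <iostream>

int main()
{
    std::complex<double> i(0, 1);

    std::complex<double> via_pow = std::pow(i, 2); // generic path: may leave a tiny imaginary residue
    std::complex<double> via_mul = i * i;          // plain complex multiply: exactly (-1,0)

    std::cout << via_pow << '\n' << via_mul << '\n';
    return 0;
}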

atan2f precision on xcode

I have this very simple code:
#include <cstdio>
#include <cmath>

int main(int argc, const char * argv[])
{
    printf("%2.21f", atan2f(0.f, -1.f));
    return 0;
}
with the following output on Intel CPUs:
Visual Studio 2010: 3.141592741012573200000
GCC 4.8.1 : 3.141592741012573242188
Xcode 5 : 3.141592502593994140625
After reading Apple's manual pages for atan2f, I expect the printed value to be near 3.14159265359, as they say it returns +pi for special values like the one I'm using here. As you can see, the difference between the value returned on Xcode and the expected value is quite big.
Is this a known issue? If yes, is there any workaround?
A single-precision floating point number has only about 7 digits of decimal precision. Your test value of 3.14159265359 has 12. If you want better precision, use double or long double and atan2 or atan2l to match.
Likely the reason you're getting "better" results from VS and GCC is that the compiler is noticing your function has constant arguments and is precalculating the result at higher-than-single precision. Check the generated code for proof.
The knee-jerk workaround is to use atan2. Casting that down to float gave me 3.141592741012573242188 just like your GCC 4.8.1 test.
I would assume atan2f arrives at its answer by some means that doesn't preserve the full precision a float could hold, which means computing at higher precision and narrowing the result is the smarter way to go.
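A sketch of that workaround, computing in double and only narrowing at the end:

#include <cstdio>
#include <cmath>

int main()
{
    double d = atan2(0.0, -1.0);     // full double precision
    float f = static_cast<float>(d); // narrow only when a float is required

    std::printf("double: %2.21f\n", d);
    std::printf("float : %2.21f\n", f); // promoted back to double by printf
    return 0;
}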

Visual Studio C++ 2008 / 2010 - break on float NaN

Is there any way to set up Visual Studio (just upgraded from 2008 to 2010) to break, as if an assertion failed, whenever any floating point number becomes NaN, QNAN, INF, etc?
Up until now I have just been using the assert(x == x) trick, but I would rather have something implicit, so that I don't have to add assertions everywhere.
Quite surprised I can't find an answer to this via Google. Some results mention 'floating point exceptions', but I'm not sure if they are the same thing, and I've tried enabling them in Visual Studio, but the program doesn't break until something catastrophic happens later in execution because of the NaN.
1) Go to project options and enable /fp:strict (C/C++ -> Code Generation -> Floating Point Model).
2) Use _controlfp to set the floating-point control word (see code below).
#include <float.h>
#include <math.h>

int main()
{
    // Unmask all floating-point exceptions except "inexact",
    // which would fire on nearly every operation.
    unsigned int fp_control_state = _controlfp(_EM_INEXACT, _MCW_EM);

    sqrtf(-1.0f);       // floating point exception (invalid operation)
    double x = 0.0;
    double y = 1.0 / x; // floating point exception (divide by zero)
    return 0;
}
Try enabling FP exceptions.
At least on x86, when you generate a NaN etc., one of the FPU status register bits is set. There's a way to set things up so that a hardware exception is thrown when the next FP operation occurs, but that's not quite as soon as you hoped for. I can't recall the reference, though.
I am not sure if this is possible exactly the way you want it, but you could create a macro which wraps the offending expression in an assert, or which sets a breakpoint for it; a sketch follows.
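A sketch of such a macro (CHECK_FP and check_fp are made-up names; this gives an assertion failure rather than a debugger break, using the MSVC _finite check):

#include <cassert>
#include <float.h>

// Asserts the wrapped result is a finite number; _finite returns 0
// for NaN and for the infinities.
inline double check_fp(double v)
{
    assert(_finite(v));
    return v;
}
#define CHECK_FP(expr) check_fp(expr)

double divide(double a, double b)
{
    return CHECK_FP(a / b); // assertion fires when b == 0.0 yields inf or NaN
}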
Hope this helps

How to detect an overflow in C++?

I just wonder if there is some convenient way to detect, at runtime, whether overflow happens to any variable of any default data type in a C++ program. By convenient, I mean no need to write code that follows each variable and checks it against the range of its data type every time its value changes. Or, if that is impossible to achieve, how would you do it?
For example,
float f1 = FLT_MAX + 1;
cout << f1 << endl;
doesn't give any error or warning, either when compiling with "gcc -W -Wall" or at runtime.
Thanks and regards!
Consider using Boost's numeric conversion library, which gives you negative_overflow and positive_overflow exceptions (see the library's examples).
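A sketch of what that looks like with boost::numeric_cast (the values are illustrative):

#include <boost/numeric/conversion/cast.hpp>
#include <iostream>

int main()
{
    try {
        double big = 1e300;                        // far beyond FLT_MAX
        float f = boost::numeric_cast<float>(big); // range-checked conversion
        std::cout << f << std::endl;
    } catch (const boost::numeric::positive_overflow& e) {
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}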
Your example doesn't actually overflow in the default floating-point environment on an IEEE-754 compliant system.
On such a system, where float is 32 bit binary floating point, FLT_MAX is 0x1.fffffep127 in C99 hexadecimal floating point notation. Writing it out as an integer in hex, it looks like this:
0xffffff00000000000000000000000000
Adding one (without rounding, as though the values were arbitrary precision integers), gives:
0xffffff00000000000000000000000001
But in the default floating-point environment on an IEEE-754 compliant system, any value between
0xfffffe80000000000000000000000000
and
0xffffff80000000000000000000000000
(which includes the value you have specified) is rounded to FLT_MAX. No overflow occurs.
Compounding the matter, your expression (FLT_MAX + 1) is likely to be evaluated at compile time, not runtime, since it has no side effects visible to your program.
In situations where I need to detect overflow, I use SafeInt<T>. It's a cross-platform solution which throws an exception in overflow situations.
SafeInt<float> f1 = FLT_MAX;
f1 += 1; // throws
It is available on CodePlex: http://www.codeplex.com/SafeInt/
Back in the old days when I was developing C++ (199x), we used a tool called Purify. Back then it was a tool that instrumented the object code and logged everything 'bad' during a test run.
I did a quick Google search and I'm not quite sure if it still exists.
As far as I know, several open source tools exist nowadays that do more or less the same.
Check out Electric Fence and Valgrind.
Clang provides -fsanitize=signed-integer-overflow and -fsanitize=unsigned-integer-overflow.
http://clang.llvm.org/docs/UsersManual.html#controlling-code-generation
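Note that these catch integer overflow, not the floating-point example above. A minimal usage sketch (the compile command is how I'd invoke it; check the manual for your clang version):

// Build: clang++ -fsanitize=signed-integer-overflow overflow.cpp
#include <climits>
#include <cstdio>

int main()
{
    int x = INT_MAX;
    x += 1; // sanitizer reports a runtime error: signed integer overflow
    std::printf("%d\n", x);
    return 0;
}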