C++: how to check for -1.#IND in an array

I have a float array that stores values calculated by some functions. However, when I retrieve values from the array, some of them are -1.#IND, which I guess is a float error of some sort.
So here's my little question: how do I use an if statement to check whether the float array contains a -1.#IND value, so I can do something with it?
Thanks

if (a != a)
This is only true if a is a NaN.
Oh, and also in cmath there is an isnan() function (std::isnan since C++11).
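For instance, a scan of the array with std::isnan might look like this (a minimal sketch; first_nan is a hypothetical helper name, not part of any library):

```cpp
#include <cmath>    // std::isnan (C++11)
#include <cstddef>

// Hypothetical helper: return the index of the first NaN in the array,
// or n if every element is a valid number.
std::size_t first_nan(const float* a, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (std::isnan(a[i]))   // true only for NaN values like -1.#IND
            return i;
    return n;
}
```

The same loop with `a[i] != a[i]` as the test would also work on IEEE 754 hardware, but std::isnan states the intent more clearly.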

-1.#IND is a NaN code (not a number): the value is undefined or unrepresentable. So your numerical algorithm might have an issue if it's producing NaN values. Check that floating point exceptions are turned on, since NaNs can result from things like division-by-zero errors, and run the program in debug mode, stepping through to see when, how and why it occurs.
Note that a direct equality comparison won't detect a NaN: IEEE 754 defines every comparison involving a NaN as false, even NaN == NaN, and there are many NaN bit patterns anyway.
Check IEEE 754, and also check that your compiler is using IEEE 754 floats.
Hope this helps.

-1.#IND looks like the "indefinite" value, which will come up if you do things like try to calculate 0/0.
Other values you might encounter are positive and negative infinity.
To filter out these special values, use functions like _finite, finitef or _fpclass.

One way to check whether a floating point value is an ordinary, valid number is std::isnormal(), though beware that zero and subnormal values are valid yet not "normal"; std::isfinite() is often the better test.
If the number is not normal, you can use std::fpclassify() to figure out exactly which case it is.
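To illustrate the pitfall (is_usable is a hypothetical name for this sketch):

```cpp
#include <cmath>

// std::isnormal(0.0) is false, because zero is not a "normal" number,
// yet zero is a perfectly valid value. std::isfinite is usually the
// better "is this a usable number" test: it is true for zero,
// subnormals and normals, and false for NaN and the infinities.
bool is_usable(double x) {
    return std::isfinite(x);
}
```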

Related

c++ Floating point subtraction error and absolute values

The way I understand it: when subtracting two numbers with double precision in C++, each is first decomposed into a significand starting with one, times 2 to the power of the exponent. One can then lose precision if the subtracted numbers have the same exponent and share many leading digits of the significand. To test this for my code I wrote the following "safe addition" function:
double Sadd(double d1, double d2, int& report, double prec) {
    int exp1, exp2;
    double man1 = frexp(d1, &exp1), man2 = frexp(d2, &exp2);
    if (d1 * d2 < 0) {            // opposite signs: effectively a subtraction
        if (exp1 == exp2) {
            if (abs(man1 + man2) < prec) {
                cout << "Floating point error" << endl;
                report = 0;
            }
        }
    }
    return d1 + d2;
}
However, testing this I noticed something strange: the actual error (not whether the function reports one, but the error resulting from the computation) seems to depend on the absolute values of the subtracted numbers, and not just on how many digits of the significand agree...
For example, using 1e-11 as the precision prec and subtracting the following numbers:
1) 9.8989898989898-9.8989898989897: The function reports error and I get the highly incorrect value 9.9475983006414e-14
2) 98989898989898-98989898989897: The function reports error but I get the correct value 1
Obviously I have misunderstood something. Any ideas?
If you subtract two floating-point values that are nearly equal, the result will mostly reflect noise in the low bits. Nearly equal here is more than just same exponent and almost the same digits. For example, 1.0001 and 1.0000 are nearly equal, and subtracting them could be caught by a test like this. But 1.0000 and 0.9999 differ by exactly the same amount, and would not be caught by a test like this.
Further, this is not a safe addition function. Rather, it's a post-hoc check for a design/coding error. If you're subtracting two values that are so close together that noise matters, you've made a mistake. Fix the mistake. I'm not objecting to using something like this as a debugging aid, but please call it something that implies that that's what it is, rather than suggesting that there's something inherently dangerous about floating-point addition. Further, putting the check inside the addition function seems excessive: an assert that the two values won't cause problems, followed by a plain old floating-point addition, would probably be better. After all, most of the additions in your code won't lead to problems, and you'd better know where the problem spots are; put asserts in the problem spots.
+1 to Pete Becker's answer.
Note that the problem of a degenerate result can also occur with exp1 != exp2.
For example, if you subtract
1.0-0.99999999999999
So,
bool degenerated =
       (exp1 == exp2     && fabs(d1 + d2)   < prec)
    || (exp1 == exp2 - 1 && fabs(d1 + 2*d2) < prec)
    || (exp1 == exp2 + 1 && fabs(2*d1 + d2) < prec);
You can omit the check for d1*d2<0, or keep it to avoid the whole test otherwise...
If you want to also handle loss of precision with degenerated denormalized floats, that'll be a bit more involved (it's as if the significand had less bits).
It's quite easy to prove that for IEEE 754 floating-point arithmetic, if x/2 <= y <= 2x then calculating x - y is an exact operation: the exact result is returned with no rounding error at all (this is known as Sterbenz's lemma).
And if the result of an addition or subtraction is a denormalised number, then the result is always exact.
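The two subtractions from the question illustrate both points; here is a small sketch, assuming IEEE 754 doubles:

```cpp
// 98989898989898 and 98989898989897 are integers below 2^53, so both
// are represented exactly, and since they satisfy x/2 <= y <= 2x the
// subtraction is exact: the result is exactly 1. The decimal pair
// 9.8989898989898 and 9.8989898989897 is not exactly representable,
// so their difference is dominated by representation noise.
double exact_diff() { return 98989898989898.0 - 98989898989897.0; }
double noisy_diff() { return 9.8989898989898 - 9.8989898989897; }
```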

Can you get a "nan" from overflow in C++?

I'm writing a program that uses a very long recursion (about 50,000) and some very large vectors (also 50,000 in length of type double) to store the result of each recursion before averaging them. At the end of the program, I expect to get a number output.
However, some of the results I got were "nan". The mysterious thing is, if I reduce the number of recursions the program works just fine. So I'm guessing this might have something to do with the size of the vector. My question is: if you get an overflow in a very long vector (or, say, array), what will be the effect? Will you get a "nan" like in my case?
Another mysterious thing about my program is that I have tried some even larger recursions (100,000), and the output was normal. But when I changed a parameter value so that each number stored in the vector becomes larger (although they are still of type double), the output becomes "nan". Does the maximum capacity of a vector depend on the size of the numbers it stores?
You didn't tell us what your recursion is, but it is fairly easy to generate NaNs with a long sequence of operations if you are using square root, pow, inverse sine, or inverse cosine.
Suppose your calculation produces a quantity, call it x, that is supposed to be the sine of some angle θ, and suppose the underlying math dictates that x must always be between -1 and 1, inclusive. You calculate θ by taking the inverse sine of x.
Here's the problem: arithmetic done on a computer is only an approximation of the arithmetic of the real numbers. Addition and multiplication with IEEE floating point numbers are not associative. You might well get a value of 1.0000000000000002 for x instead of 1. Take the inverse sine of this value and you get a NaN.
A standard trick is to protect against those near misses that result from numerical errors. Don't use the built-in asin, acos, sqrt, and pow. Use wrappers that protects against things like asin(1.0000000000000002) and sqrt(-1e-16). Make the former pi/2 rather than NaN, and make the latter zero. This is admittedly a kludge, and doing this can get you in trouble. What if the problem is that your calculations are formulated incorrectly? It's legitimate to treat 1.0000000000000002 as 1, but it's best not to treat a value of 100 as if it were 1. A value of 100 to your asin wrapper is best treated by throwing an exception rather than truncating to 1.
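A sketch of such a wrapper (safe_asin and the tolerance are hypothetical choices, not a standard API):

```cpp
#include <cmath>
#include <stdexcept>

// Clamp tiny overshoots past [-1, 1] back into range, on the theory
// that they are rounding noise, but treat a large overshoot as a
// genuine bug and throw instead of silently truncating.
double safe_asin(double x, double tol = 1e-9) {
    if (x > 1.0 + tol || x < -1.0 - tol)
        throw std::domain_error("asin argument far outside [-1, 1]");
    if (x > 1.0)  x = 1.0;   // e.g. 1.0000000000000002 -> 1.0
    if (x < -1.0) x = -1.0;
    return std::asin(x);     // now guaranteed a real result, pi/2 at worst
}
```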
There's one other problem with infinities and NaNs: They propagate. An Inf or NaN in one single computation quickly becomes an Inf or a NaN in hundreds, then thousands of values. I usually make the floating point machinery raise a floating point exception on obtaining an Inf or NaN instead of continuing on. (Note well: Floating point exceptions are not C++ exceptions.) When you do this, your program will bomb unless you have a signal handler in place. That's not necessarily a bad thing. You can run the program in the debugger and find exactly where the problem arose. Without these floating point exceptions it is very hard to find the source of the problem.
It depends on the exact nature of your computations. If you just add up numbers that aren't NaN, the result shouldn't be NaN either. It might be +infinity, though.
But you will get NaN if e.g. some part of your computation yields +infinity, another -infinity, and you later add those two results.
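For example (a minimal sketch, assuming IEEE 754 doubles):

```cpp
#include <cmath>
#include <limits>

// Overflow first produces infinities; adding infinities of opposite
// sign then yields NaN. This is how summing ordinary numbers can end
// up "not a number".
bool overflow_gives_nan() {
    double big = std::numeric_limits<double>::max();
    double pos = big * 2.0;         // overflows to +infinity
    double neg = -big * 2.0;        // overflows to -infinity
    return std::isnan(pos + neg);   // +inf + (-inf) is NaN
}
```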
Assuming that your architecture conforms to IEEE 754, http://en.wikipedia.org/wiki/NaN#Creation lists the situations in which arithmetic operations return NaN.

What does -1.#IND00 mean? [duplicate]

I'm messing around with some C code using floats, and I'm getting 1.#INF00, -1.#IND00 and -1.#IND when I try to print floats to the screen. What do those values mean?
I believe that 1.#INF00 means positive infinity, but what about -1.#IND00 and -1.#IND? I also sometimes saw this value: 1.$NaN, which is Not a Number. What causes those strange values, and how can they help me with debugging?
I'm using MinGW, which I believe uses the IEEE 754 representation for floating point numbers.
Can someone list all those invalid values and what they mean?
From IEEE floating-point exceptions in C++:
This page will answer the following questions.
My program just printed out 1.#IND or 1.#INF (on Windows) or nan or inf (on Linux). What happened?
How can I tell if a number is really a number and not a NaN or an infinity?
How can I find out more details at runtime about kinds of NaNs and infinities?
Do you have any sample code to show how this works?
Where can I learn more?
These questions have to do with floating point exceptions. If you get some strange non-numeric output where you're expecting a number, you've either exceeded the finite limits of floating point arithmetic or you've asked for some result that is undefined. To keep things simple, I'll stick to working with the double floating point type. Similar remarks hold for float types.
Debugging 1.#IND, 1.#INF, nan, and inf
If your operation would generate a larger positive number than could be stored in a double, the operation will return 1.#INF on Windows or inf on Linux. Similarly your code will return -1.#INF or -inf if the result would be a negative number too large to store in a double. Dividing a positive number by zero produces a positive infinity and dividing a negative number by zero produces a negative infinity. Example code at the end of this page will demonstrate some operations that produce infinities.
Some operations don't make mathematical sense, such as taking the square root of a negative number. (Yes, this operation makes sense in the context of complex numbers, but a double represents a real number and so there is no double to represent the result.) The same is true for logarithms of negative numbers. Both sqrt(-1.0) and log(-1.0) would return a NaN, the generic term for a "number" that is "not a number". Windows displays a NaN as -1.#IND ("IND" for "indeterminate") while Linux displays nan. Other operations that would return a NaN include 0/0, 0*∞, and ∞/∞. See the sample code below for examples.
In short, if you get 1.#INF or inf, look for overflow or division by zero. If you get 1.#IND or nan, look for illegal operations. Maybe you simply have a bug. If it's more subtle and you have something that is difficult to compute, see Avoiding Overflow, Underflow, and Loss of Precision. That article gives tricks for computing results that have intermediate steps overflow if computed directly.
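A minimal sketch of operations that produce these values (assuming IEEE 754 doubles; the zero parameter keeps the compiler from folding the divisions away at compile time):

```cpp
#include <cmath>

double pos_inf(double zero)       { return  1.0 / zero; }    // +inf (1.#INF)
double neg_inf(double zero)       { return -1.0 / zero; }    // -inf (-1.#INF)
double indeterminate(double zero) { return zero / zero; }    // NaN  (1.#IND)
double sqrt_of_negative()         { return std::sqrt(-1.0); } // NaN
```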
For anyone wondering about the difference between -1.#IND00 and -1.#IND (which the question specifically asked, and none of the answers address):
-1.#IND00
This specifically means a non-zero number divided by zero, e.g. 3.14 / 0
-1.#IND (a synonym for NaN)
This means one of four things:
1) sqrt or log of a negative number
2) operations where both variables are 0 or infinity, e.g. 0 / 0
3) operations where at least one variable is already NaN, e.g. NaN * 5
4) out of range trig, e.g. arcsin(2)
Your question "what are they" is already answered above.
As far as debugging (your second question) though, and in developing libraries where you want to check for special input values, you may find the following functions useful in Windows C++:
_isnan(), _isfinite(), and _fpclass()
On Linux/Unix you should find isnan(), isfinite(), isnormal(), isinf(), fpclassify() useful (and you may need to link with libm by using the compiler flag -lm).
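A portable sketch using std::fpclassify (classify is a hypothetical helper name for this example):

```cpp
#include <cmath>

// Map the std::fpclassify result to a readable label.
const char* classify(double x) {
    switch (std::fpclassify(x)) {
        case FP_NAN:       return "NaN";
        case FP_INFINITE:  return "infinite";
        case FP_ZERO:      return "zero";
        case FP_SUBNORMAL: return "subnormal";
        default:           return "normal";  // FP_NORMAL
    }
}
```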
For those of you in a .NET environment the following can be a handy way to filter non-numbers out (this example is in VB.NET, but it's probably similar in C#):
If Double.IsNaN(MyVariableName) Then
MyVariableName = 0 ' Or whatever you want to do here to "correct" the situation
End If
If you try to use a variable that has a NaN value you will get the following error:
Value was either too large or too small for a Decimal.

Rounding doubles - .5 - sprintf

I'm using the following code for rounding to 2dp:
sprintf(temp,"%.2f",coef[i]); //coef[i] returns a double
It successfully rounds 6.666 to 6.67, but it doesn't work properly when rounding
5.555. It returns 5.55, whereas it should (at least in my opinion) return 5.56.
How can I get it to round up when the next digit is 5? i.e. return 5.56.
edit: I now realise that this is happening because when I enter 5.555 with cin it gets
saved as 5.554999997.
I'm going to try rounding in two stages: first to 3dp and then to 2dp. Any other (more elegant) ideas?
It seems you have to use the math library's round function to get correct rounding:
printf("%.2f %.2f\n", 5.555, round(5.555 * 100.)/100.);
This gives the following output on my machine:
5.55 5.56
The number 5.555 cannot be represented as an exact number in IEEE754. Printing out the constant 5.555 with "%.50f" results in:
5.55499999999999971578290569595992565155029300000000
so it will be rounded down. Try using this instead:
printf ("%.2f\n",x+0.0005);
although you need to be careful with numbers that can be represented exactly, since they'll be wrongly rounded up by this expression.
You need to understand the limitations of floating point representations. If it's important that you get accuracy, you can use (or code) a BCD or other decimal class that doesn't have the shortcoming of IEEE754 representation.
How about this for another possible solution:
printf("%.2f", _nextafter(n, n*2));
The idea is to increase the number away from zero (the n*2 gets the sign right) by the smallest possible amount representable by floating point math.
Eg:
double n=5.555;
printf("%.2f\n", n);
printf("%.2f\n", _nextafter(n, n*2));
printf("%.20f\n", n);
printf("%.20f\n", _nextafter(n, n*2));
With MSVC yields:
5.55
5.56
5.55499999999999970000
5.55500000000000060000
This question is tagged C++, so I'll proceed under that assumption. Note that the C++ streams also round to the precision you give them, just like the printf family: all you have to do is set the precision, and the streams library will round for you. I'm just throwing that out there in case you don't already have a reason not to use streams.
You could also do this (saves multiply/divide):
printf("%.2f\n", coef[i] + 0.00049999);
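Putting the direct and round-first approaches side by side (a sketch; format_both is a hypothetical helper, and the behavior described assumes IEEE 754 doubles with correctly rounded printf):

```cpp
#include <cmath>
#include <cstdio>

// Format x to two decimals, once directly and once after rounding at
// the second decimal with std::round. For 5.555 the direct version
// gives "5.55" (the stored value is 5.55499..., which correctly rounds
// down), while rounding first gives "5.56" (5.555 * 100.0 lands on
// exactly 555.5, which std::round takes to 556).
void format_both(double x, char* naive, char* rounded, int n) {
    std::snprintf(naive,   n, "%.2f", x);
    std::snprintf(rounded, n, "%.2f", std::round(x * 100.0) / 100.0);
}
```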
