check NaN number [duplicate] - c++

This question already has answers here:
Checking if a double (or float) is NaN in C++
(21 answers)
Closed 1 year ago.
Is it possible to check if a number is NaN or not?

Yes, by using the fact that a NaN is not equal to any other value, including itself.
That makes sense when you think about what NaN means: it marks a result that can't be represented by any "normal" floating point value.
So, if you have two values that are each "not a number", you can hardly consider them equal; there is no actual number behind either one to compare. IEEE 754 therefore defines every ordered comparison involving NaN as false, and in particular NaN == NaN is false :-)
You can either look for a function (macro actually) like isnan (in math.h for C and cmath for C++) or just use the property that a NaN value is not equal to itself with something like:
if (myFloat != myFloat) { ... }
If, for some bizarre reason, your C implementation has no isnan (it should, since the standard mandates it), you can code your own, something like:
int isnan_float (float f) { return (f != f); }

Under Linux/gcc, there's isnan(double), conforming to BSD4.3.
C99 provides fpclassify(x) and isnan(x).
(But C++ standards/compilers don't necessarily include C99 functionality.)
There ought to be some way with std::numeric_limits<>... Checking...
Doh. I should have known... This question has been answered before...
Checking if a double (or float) is NaN in C++
Using NaN in C++?
http://bytes.com/topic/c/answers/588254-how-check-double-inf-nan

You are looking for null, but that is only useful for pointers. A number can't be null itself: it either has a known value that you put there, or random data from whatever was in that memory before.

Related

In Haskell, [0.1..1] returns [0.1,1.1]. Why? [duplicate]

This question already has answers here:
Haskell ranges and floats
(2 answers)
Closed 1 year ago.
I am looking at the list [0.1..1] that Haskell returns, and I do not understand why it is [0.1,1.1]. Can anyone explain this to me, please?
TL;DR: don't use [x..y] on floating point numbers. You might get unexpected results.
On floating point numbers, there's no sane semantics for [x..y]. For instance, one might argue that the semantics should be [x,x+1,x+2,...,x+n] where x+n is the largest value of that form which is <=y. However, this does not account for floating point rounding errors. It is possible that x+n produces a slightly larger value than the exact y, making the list shorter than expected. Hence this semantics makes the value of length [x..y] rather unpredictable.
Haskell tries to mitigate this issue, by allowing an error up to 0.5. The rationale is as follows: when x+n is closer to y than to y+1, it should be regarded as some value in the interval [x..y] which got rounded to something larger. Arguable, but this is how Haskell works.
In enumerations like [x,y .. z] with an explicit stepping (e.g. [0.0,5.0 .. 1000.0]) Haskell instead allows an error of (y-x)/2 (2.5 in the example). The rationale is the same: we include those points which are closer to 1000 than to 1000+5.
You can find all the gory details in the Haskell Report which defines the semantics of Haskell. This part is also relevant.
This is generally seen by Haskellers as a small wart in the language. Some argue that we should not have mandated Enum Float and Enum Double. Removing those instances would effectively prohibit the troublesome cases like [1.0 .. 5.0] or the much worse [1.0 .. 5.5] (which is again numerically unstable).

Comparisons involving literals safe? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
Consider the code:
#define LITERAL 1.0
int main()
{
    double x = LITERAL;
    if (x == LITERAL) return 1;
    else return 0;
}
Is this guaranteed to return 1 for any numerical double value we set LITERAL (not just 1.0 but any other double literal)?
EDIT: Why was the question closed because of "missing details"? It is a well defined C/C++ question and got a very good answer. There are no more details required, it is a general question about how these languages work.
First, you have to assume an implementation that's (attempting to be) conforming to Annex F, since otherwise all bets are off; without Annex F (IEEE floating point) C allows all floating point results to be arbitrarily bogus.
Then, according to the language spec, depending on your C implementation's definition of FLT_EVAL_METHOD, yes or no.
If the value is 0 or 1, then yes. The literal is interpreted as double, and the double object stores that value faithfully, and the equality operator yields 1 (true), reflecting that.
If the value is 2, then only if the literal is the exact decimal representation of a representable double, or is expressed with sufficient precision that it differs from one only beyond the precision of long double. Otherwise (for example, if it's something like 0.1), the literal is interpreted with excess precision in long double format, and the initialization/assignment to a double object truncates it to the nominal double precision. Then the equality comparison is guaranteed to yield 0 (false). You can see this in action on Compiler Explorer (note: remove the volatile and you can see it optimized to a constant return 0).
To make matters more complicated, GCC does this wrong by default unless you use -std=c.. or -fexcess-precision=standard, and always does it wrong in C++ mode, and clang/LLVM always do it wrong. So on a target with excess precision (32-bit x86 or m68k, the only real-world-relevant targets with FLT_EVAL_METHOD not 0 or 1) horrible things happen. For a peek into how bad they get, see GCC issue 93806 and (recursively) all of the "See Also" related issues.
So for practical purposes, yes, for everything but 32-bit x86 and m68k, and in a correct C implementation no (but maybe yes, because your compiler is probably broken) for them.

How to ensure the function return consistent floating point values in C/C++? [duplicate]

This question already has answers here:
How deterministic is floating point inaccuracy?
(10 answers)
Closed 9 years ago.
How to ensure a function returns consistent floating point values in C/C++?
I mean: if a and b are of floating point type, and I write a polynomial function (which takes a floating point argument and returns a floating point result), let's call it polyfun(), can the compiler ensure that:
if a==b, then polyfun(a)==polyfun(b), i.e. that the order of math ops/rounding is consistent at runtime?
Reproducible results are not guaranteed by the language standards. Generally, a C implementation is permitted to evaluate a floating-point expression with greater precision than the nominal type. It may do so in unpredictable ways, such as inlining a function call in one place and not another or inlining a function call in two places but, in one place, narrowing the result to the nominal type to save it on the stack before later retrieving it to compare it to another value.
Some additional information is in this question, and there are likely other duplicates as well.
Methods of dealing with this vary by language and by implementation (particularly the compiler), so you might get additional information if you specify what C or C++ implementation you are using, including the details of the target system, and if you search for related questions.
Instead of polyfun(a)==polyfun(b), try fabs(polyfun(a) - polyfun(b)) < 1e-6, or 1e-12, or whatever you find suitably appropriate for "nearness"... (Yeah, cumulative floating point errors can still bite you.)

Why is double not allowed as a non-type template parameter? [duplicate]

This question already has answers here:
Why can't I use float value as a template parameter?
(11 answers)
Closed 9 years ago.
In 2003 - yes, 2003 - Vandevoorde and Josuttis wrote this in their book "C++ Templates" (p. 40):
Not being able to use floating-point literals (and simple constant floating-point expressions) as template arguments has historical reasons. Because there are no serious technical challenges, this may be supported in future versions of C++.
But this still doesn't work, even under C++11:
template<double D> //error
void foo() {}
Why was this not added?
I had always assumed it had to do with matching implementations against each other. Like are these two instances the same or different:
template class foo<10./3.>
template class foo<1./3 * 10.>
They may not generate the same double-precision representation, so the compiler might think of them as different classes. Then you couldn't assign them to each other, etc.
Let's look at the following code:
template<double D> int f() {
    static int i = 0;
    ++i;
    return i;
}
...
#define D1 ...
#define D2 ...
cout << f<D1>() << endl;        // returns 1
cout << f<D1-D2+D2>() << endl;  // may return 1 or 2, depending on many things
See, D1-D2+D2 may be equal to D1 for some values but not equal for others.
More importantly - they may be equal or not depending on the rounding settings
And finally, they may be equal or not depending on compilers / architectures / many other things.
The point is, floating point operations are not well enough defined for template use (they are well defined, but there is a lot of possible variance depending on various options)
When using floating point numbers, there are lots of problems with rounding and equality.
From the point of view of the standardization committee, you need to ensure that two programs behave the same across compilers. Therefore you would need to specify very precisely what the result of each floating point operation is. Probably they felt that the IEEE-754 norm is not precise enough...
So this is not a question of whether or not it is implementable, but of what precise behavior we want to have.
Note however that constexpr accepts floating point values. This is usually enough for compile-time computation.

Storing erroneous value in integer [duplicate]

This question already has answers here:
Can an integer be NaN in C++?
(5 answers)
Closed 9 years ago.
I have a complete design of my developing software in C++. I really do not want to change the structure.
However, I sometimes get erroneous outputs to store in an integer variable. The output is not a number at all; it is NaN. But I do not want to add another variable just to check whether my integer variable is erroneous.
Is there any way to store a thing such as NaN in an integer variable?
It's not magic, it's information theory basics. An int is something that stores values in the range [INT_MIN, INT_MAX]. That is all it can do, no less, no more.
You have constrained yourself to just the int, leaving the only option of reserving some value as your indicator (a sentinel). If that is not good enough, you must reconsider the constraint.
No, there is no value that you can store in an integral type which can represent a NaN.
If you need to store this value, you are going to have to reconsider your design. This doesn't necessarily mean adding a new variable, but you might change an existing one. For example, the int variable where you currently store this value which can be NaN could be changed to something like boost::optional<int>. That way, it could be unset if the value was NaN, or set otherwise.