This question already has answers here:
What do 1.#INF00, -1.#IND00 and -1.#IND mean?
(4 answers)
Closed 9 years ago.
I have been running into a problem where some of my doubles have been assigned the value -1.#IND, and I have absolutely no idea what it means or how exactly to catch it.
Any help on the issue would be much appreciated.
Kind Regards,
Alex
-1.#IND is the negative indefinite NaN.
http://blogs.msdn.com/b/oldnewthing/archive/2013/02/21/10395734.aspx
-1.#IND, according to this article, is the Indefinite NaN, a special type of quiet NaN generated under specific conditions. If you perform an invalid arithmetic operation, such as adding positive infinity to negative infinity or taking the square root of a negative number, the IEEE standard requires that the result be a quiet NaN, but it doesn't specify which quiet NaN exactly. Different floating-point processor manufacturers chose different paths. The term "Indefinite NaN" refers to this special quiet NaN, whatever the processor ends up choosing it to be.
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 3 years ago.
Can somebody give me an in-depth explanation of what's going on?
The system uses approximation, right? (Correct me if I'm wrong.)
I would like to know how the computer behaves in these kinds of situations. Thank you.
Numbers in computers are stored with only so many bits of precision. A float in C++ is typically 4 bytes; 32 bits can't store that many 9s of precision, so the compiler rounds to the precision the type can hold.
Basically, a float gives you roughly 7 significant decimal digits (a double gives about 15-16), and you have many more 9s than that.
This question already has answers here:
Is There a Better Double-Precision Assignment in Fortran 90?
(2 answers)
Closed 5 years ago.
When I run the code below I get an output of 6378136.5 instead of 6378136.3
PROGRAM test
implicit none
real*8 radius
radius = 6378136.3
print*,radius
END
I have read this other link (Precision problems of real numbers in Fortran), but it doesn't explain how to fix the problem.
The reason this is happening is not because the variable you are using lacks precision, but because you initialized the value using a single precision number.
Take a look at this answer for a good explanation, and an elegant solution to your problem for any larger programs.
If you just want to solve it quickly, then you only have to change one line:
radius = 6378136.3d0
Though this will still give you a value of 6378136.2999999998 because of floating point precision.
This question already has answers here:
Why are the return values of these doubles -1.#IND?
(3 answers)
Closed 8 years ago.
I couldn't find it via Google, search here, or on Microsoft's help pages...
After some extensive calculations, when outputting my doubles via std::cout, the console sometimes prints:
-1.#IND
There are no modifications (like precision, etc.) to the cout stream. I assume the program wants to tell me about some sort of error, but I can't figure it out.
It doesn't happen often, but with a low frequency (it is a genetic algorithm, so I have an output after every generation, and in about every 5th to 10th generation this seems to happen...).
For information, I'm using Visual Studio Pro 2013.
Windows displays NaN as -1.#IND. NaN is the result of a mathematical operation that has no well-defined value. For example, 0.0 / 0.0 or sqrt(-1.0) will return NaN. I can't really help further without more details about the underlying operation. Hopefully this is enough to point you in the right direction, though.
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
I heard that C/C++ has a problem with the handling of floating-point numbers.
I've implemented a simple program to try it out. It is a change machine: the user enters the amount to charge and the amount paid, and the program calculates the number of coins of each type to give as change.
Here is the code: Link to my google drive folder with the code
The thing is, when you insert a non-integer value, the program enters a loop and never ends.
I've printed the contents of the variables to find out what's going on and, somehow, a two-decimal value such as 0.10 changes to 0.0999998.
The remaining change to be processed is then never 0, and the program enters an infinite loop.
I've heard that this is due to the machine representation of floating-point numbers. I've seen the same on both Windows and Linux, and also when programming it in Java, but I don't remember having the same issue in Pascal.
Now the question is: what is the best workaround for this?
I've thought that one possible solution is using a fixed-point representation, via external libraries such as http://www.trenki.net/content/view/17/1/ or http://www.codef00.com/code/Fixed.h . Another may be to use an arbitrary-precision arithmetic library such as GMP.
Neither C nor C++ has a problem with floating-point values. You, as the programmer, are trusted to use floating point appropriately in any language that supports it.
While integer variables can store neither fractions nor out-of-bounds values, floating point can only store a specific subset of fractions. A high-quality floating-point implementation also gives tight guarantees for the accuracy of its calculations.
Floating-point numbers cannot represent arbitrary rational numbers, which would need unbounded space to store reliably.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
If I understand IEEE floating point correctly, it is unable to accurately represent some values. It is accurate only in limited cases, and pretty much every floating-point operation adds to the accumulated approximation error. Another downside: the "minimum step" grows with the exponent.
Wouldn't it be better to offer some more concrete representation?
For example, use 20 bits for the fractional part, but instead of all 2^20 values use only 1,000,000 of them, giving a smallest possible representation/resolution of exactly one millionth, and use the other 44 bits for the integer part, giving quite a range. This way "floating-point" numbers could be calculated using integer arithmetic, which may even end up faster. For multiplication, addition, and subtraction there is no accumulation of approximations; the only possible loss is during division.
This concept rests on the fact that powers of two are not optimal for representing decimal numbers: e.g. 1 does not divide that well into 1024 parts, but it divides evenly into 1000. Technically, this is omitting to make use of the full precision, but I can think of plenty of cases where LESS can be MORE.
Naturally, this approach loses both range and precision, but in all cases where such extremes are not required, it sounds like a good idea.
What you describe is fixed-point arithmetic. Now, it's not necessarily about better or worse; each representation has advantages and disadvantages that often make one more suitable than the other for a specific purpose. For example:
Fixed-point arithmetic does not introduce rounding errors for operations like addition and subtraction, which makes it suitable for financial calculations. You certainly don't want to store money as floating-point values.
Speculation: arguably, fixed-point arithmetic is simpler to implement, which probably leads to smaller, more efficient circuits.
Floating-point representation covers an extremely large range: it can store really big numbers (~10^40 for a 32-bit float, ~10^308 for a 64-bit one) and really small positive ones (~10^-320) at the expense of precision, while a fixed-point representation is linearly limited by its size.
Floating-point precision is not distributed uniformly across the representable range. Instead, most of the representable numbers lie close to 0. That makes floating point very accurate in the range we operate in most often.
You said it yourself:
Technically, this is omitting to make use of the full precision, but I can think of plenty of cases where LESS can be MORE
Exactly, and that's the whole point. Depending on the problem at hand, a choice must be made. There is no one-size-fits-all representation; it's always a tradeoff.