What does floating point error -1.#J mean? - c++

Recently, sometimes (rarely) when we export data from our application, the export log contains float values that look like "-1.#J". I haven't been able to reproduce it so I don't know what the float looks like in binary, or how Visual Studio displays it.
I tried looking at the source code for printf, but didn't find anything (not 100% sure I looked at the right version though...).
I've tried googling, but Google seems to throw away any # characters, and I can't find any list of float errors.

It can be either negative infinity or NaN (not a number). With that precision specifier, printf's output does not differentiate between them.
I tried the following code in Visual Studio 2008:
double a = 0.0;
printf("%.3g\n", 1.0 / a); // +inf
printf("%.3g\n", -1.0 / a); // -inf
printf("%.3g\n", a / a); // NaN
which results in the following output:
1.#J
-1.#J
-1.#J
Removing the .3 precision specifier gives:
1.#INF
-1.#INF
-1.#IND
so it's clear that 0/0 gives NaN and -1/0 gives negative infinity. (NaN, -inf and +inf are the only "erroneous" floating point values, if I recall correctly.)
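If you need to tell the three cases apart in code rather than in a log, the standard classification functions do distinguish them even when the printed form is ambiguous. A minimal sketch (in Python for brevity; C++ has the equivalent std::isnan/std::isinf in &lt;cmath&gt;):

```python
import math

def classify(x):
    """Label the IEEE special values that printf may render ambiguously."""
    if math.isnan(x):
        return "NaN"    # e.g. the result of 0.0/0.0
    if math.isinf(x):
        return "+inf" if x > 0 else "-inf"
    return "finite"

print(classify(math.inf))    # the 1.0 / a case above
print(classify(-math.inf))   # the -1.0 / a case
print(classify(math.nan))    # the a / a case
```

NaN is also the only value that compares unequal to itself, so `x != x` is a classic portable NaN test.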

Related

QDoubleSpinBox does not allow values less than 1

QDoubleSpinBox does not allow values less than 1: for example, decimal values with precision greater than 1, or values in the positive range 0.00 - 0.99. There is no problem setting its value to 1.1, 1.11, or 1,04, but not 0.5: it always rounds up to 1.
I have tried setting the range to negative values with precision, explicitly setting the number of decimals and the minimum value of the widget, but to no avail.
You can have a look at the Spin Boxes Example (accessible through QtCreator/Welcome/Examples or https://doc.qt.io/qt-5/qtwidgets-widgets-spinboxes-example.html).
You can have float or double precision rounding issues if you get your values from a calculation with not enough precision in memory.
You can also force the locale to accept a dot as the decimal separator, as you seem to mix comma and dot: add QLocale::setDefault(QLocale::C); at the beginning of your program. You can also create a custom double validator that accepts both dot and comma by inheriting QDoubleValidator.
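To illustrate the decimal-separator point outside of Qt (a Python sketch of the underlying parsing behaviour; parse_decimal is a hypothetical helper, not a Qt API): with a dot-based locale, only the dot is a valid separator, so comma-formatted input like 1,04 fails to parse.

```python
def parse_decimal(text):
    """Hypothetical helper: accept both '.' and ',' as decimal separator."""
    return float(text.replace(",", "."))

print(float("1.04"))           # dot form parses fine
print(parse_decimal("1,04"))   # comma form works after normalising

try:
    float("1,04")              # comma form alone is rejected
except ValueError as err:
    print("rejected:", err)
```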

C++ Xtensor increase floating point significant numbers

I am building a neural network and using xtensor for array multiplication in the feed-forward step. The network takes in xt::xarray&lt;double&gt; and outputs a decimal number between 0 and 1. I have been given a sheet of expected outputs. When I compare my output with the provided sheet, I find that all the results differ after exactly 7 digits. For example, if the required value is 0.1234567890123456, I get values like 0.1234567993344660 or 0.1234567221155667: the first 7 digits match and the rest is garbage.
I know I cannot get that exact number 0.1234567890123456 due to floating point math. But how can I debug this / increase precision to get closer to the required number? Thanks.
Update:
xt::xarray<double> Layer::call(xt::xarray<double> input)
{
    return xt::linalg::dot(input, this->weight) + this->bias;
}
As for the code, I am simply calling this call method a number of times, where weight and bias are xt::xarray&lt;double&gt; arrays.
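Agreement up to exactly 7 significant digits is the classic signature of single precision: a 32-bit float carries roughly 7.2 decimal digits. A sketch in Python (using struct to emulate the round-trip; that some value in the pipeline passed through single precision is only a guess):

```python
import struct

def to_float32(x):
    """Round a double to the nearest 32-bit float and back."""
    return struct.unpack("f", struct.pack("f", x))[0]

reference = 0.1234567890123456   # the value from the question
seen = to_float32(reference)
print(seen)  # matches the reference for ~7 digits, then diverges
```

If your numbers show this pattern, look for a float (rather than double) somewhere: the weight initialisation, the file the reference sheet was written from, or an intermediate buffer.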

Why are the outputs strange when using REAL values as do-loop control variables in Fortran?

When I code in Fortran, I find that if I use a REAL value as the control variable of a do loop, the outputs are strange. For example:
do i=0.1,1.0,0.1
write (13,"(F15.6)") i
end do
The outputs are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0. But when I set the start value to 0.6:
do i=0.6,1.0,0.1
write (13,"(F15.6)") i
end do
the outputs are 0.6, 0.7, 0.8, 0.9, and 1.0 is not printed. Why does this happen?
This is basically a rounding issue; see Precision problems of real numbers in Fortran and the link that follows from there. Comparing two floating point numbers is tricky, and it does not play well with do loops.
You should print more decimal digits, not just 6:
0.1000000015
0.2000000030
0.3000000119
0.4000000060
0.5000000000
0.6000000238
0.7000000477
0.8000000715
0.9000000954
1.0000001192
As you can see, the values are not exact.
In Fortran, the number of iterations is computed before the loop starts. When you compute the loop trip count yourself:
write (*,"(F15.10)") (1.0 - 0.1) / 0.1
write (*,"(F15.10)") (1.0 - 0.6) / 0.1
you will get:
9.0000000000
3.9999997616
so the latter one will be iterated only four times (3 + 1 = 4; so i = 0.6, 0.7, 0.8 and 0.9), because the count is truncated from 3.999... to 3.
Real loop counters were deleted from Fortran for good reasons; don't use them. Rounding is one of the problems. The compiler should warn you:
Warning: Deleted feature: Start expression in DO loop at (1) must be
integer
Also, naming a real variable as i should be a crime.
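The trip count above can be reproduced outside Fortran. A Python sketch that emulates default REAL (single precision) arithmetic with struct, applying the standard's count formula INT((m2 - m1 + m3) / m3) with every intermediate rounded to single precision (how a given compiler actually evaluates this may differ):

```python
import struct

def f32(x):
    """Round to the nearest single-precision value, like Fortran's default REAL."""
    return struct.unpack("f", struct.pack("f", x))[0]

def trip_count(m1, m2, m3):
    """Iteration count of `do i = m1, m2, m3` with REAL operands."""
    span = f32(f32(m2) - f32(m1))        # each operation rounds to REAL
    numerator = f32(span + f32(m3))
    return int(f32(numerator / f32(m3)))

print(trip_count(0.1, 1.0, 0.1))  # 10 -> i = 0.1, 0.2, ..., 1.0
print(trip_count(0.6, 1.0, 0.1))  # 4  -> i = 0.6, 0.7, 0.8, 0.9
```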

Sort behaves strangely for negative numbers with absolute value below one

Could someone explain to me why, in Vim,
:% !sort -ngk1
applied to
-1.3
0.002
0.1
-0.0021
0.2
-0.1
-0.15
gives:
-1.3
-0.0021
-0.1
-0.15
0.002
0.1
0.2
? How can I change this? Or is this a real bug in sort?
I could post many examples where the output is even more confusing (e.g. with mixed signs). It seems that these errors only occur for values below one. Thanks!
For me, both sort -nk1 and sort -gk1 give the correct order (sort 8.20 complains that options '-gn' are incompatible when both are given). (Also, this probably has nothing to do with Vim, as you're invoking the external sort command.)
My best guess is that you're using a locale with a different decimal point (e.g. in German, it's 0,42 instead of 0.42). Try:
$ LC_ALL=en_US.UTF-8 sort -nk1 file
It appears that the -n and -g options are incompatible. Try instead
:% !sort -nk1
This seems to do what you want.
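For comparison, parsing the values as floats and sorting on those is locale-independent; a Python sketch with the lines from the question gives the expected order:

```python
lines = ["-1.3", "0.002", "0.1", "-0.0021", "0.2", "-0.1", "-0.15"]

# Sorting on the parsed float value sidesteps LC_NUMERIC entirely,
# unlike `sort -n`, whose idea of a decimal point is locale-dependent.
ordered = sorted(lines, key=float)
print("\n".join(ordered))
# -1.3, -0.15, -0.1, -0.0021, 0.002, 0.1, 0.2
```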

How to correctly add floating numbers in Python?

I am trying to add the value 0.2 to a constant x, where x = 8, in a loop that runs 100 times. Following is the code:
>>> x = 8
>>> for i in range(100):
...     x += 0.2
...
>>> x
but every time I get a different answer and the calculation is always incorrect. I read about Floating Point Arithmetic: Issues and Limitations, but there should be some way around this. Can I use doubles (if they exist)? I am using Python 2.7.
UPDATE:
import time

x = 1386919679
while(1):
    x += 0.02
    print "xx %0.9f" % x
    b = round(x, 2)
    print "bb %0.9f" % b
    time.sleep(1)
output
xx 1386933518.586801529
bb 1386933518.589999914
xx 1386933518.606801510
bb 1386933518.609999895
xx 1386933518.626801491
bb 1386933518.630000114
Desired output
I want correct output. I know that if I just write print x it will be accurate, but my application requires that I print results with 9 digits of precision. I am a newbie, so please be kind.
You can use double-precision floating point, sure. You're already using it by default.
As for a way around it:
x += 0.2 * 100
I know that sounds facile, but the solution to floating point imprecision is not setting FLOATING_POINT_IMPRECISION = False. This is a fundamental limitation of the representation, and has no general solution, only specific ones (and patterns which apply to groups of specific situations).
There's also a rational number type, fractions.Fraction, which can exactly store 0.2, but it's not worth considering for most real-world use cases.
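As a concrete sketch of the exact-arithmetic alternative: the standard library's decimal.Decimal stores 0.2 exactly when built from a string, so the loop from the question lands on exactly 28 and survives the required 9-digit formatting:

```python
from decimal import Decimal

x = Decimal("8")
for _ in range(100):
    x += Decimal("0.2")     # built from a string, so exactly two tenths

print(x)              # 28.0, exactly
print("%0.9f" % x)    # 28.000000000
```

fractions.Fraction("0.2") likewise stores exactly 1/5; Decimal is usually the better fit when the goal is fixed-point printing.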