I am trying to add 0.2 to a variable x, starting from x = 8, in a loop that runs 100 times. Here is the code:
>>> x = 8
>>> for i in range(100):
...     x += 0.2
...
>>> x
but I keep getting a different answer than expected, and the calculation is always slightly incorrect. I read about floating point arithmetic issues and limitations, but there should be some way around this. Can I use doubles (if they exist)? I am using Python 2.7.
UPDATE:
import time

x = 1386919679
while True:
    x += 0.02
    print "xx %0.9f" % x
    b = round(x, 2)
    print "bb %0.9f" % b
    time.sleep(1)
Output:
xx 1386933518.586801529
bb 1386933518.589999914
xx 1386933518.606801510
bb 1386933518.609999895
xx 1386933518.626801491
bb 1386933518.630000114
Desired output
I want correct output. I know that if I just write print x it will display accurately, but my application requires printing results to 9 decimal places. I am a newbie, so please be kind.
You can use double-precision floating point, sure. You're already using it by default.
As for a way around it: replace the hundred accumulating additions (each of which rounds) with a single multiplication, which rounds only once:
x += 0.2 * 100
I know that sounds facile, but the solution to floating point imprecision is not setting FLOATING_POINT_IMPRECISION = False. This is a fundamental limitation of the representation, and has no general solution, only specific ones (and patterns which apply to groups of specific situations).
There's also a rational number type, fractions.Fraction, which can store 0.2 exactly, but it's not worth considering for most real-world use cases.
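If you do need exact results, though, the standard library's fractions.Fraction and decimal.Decimal avoid binary rounding entirely. A minimal Python 2.7 sketch (the variable names are just for illustration):
from fractions import Fraction
from decimal import Decimal

x = Fraction(8)
for i in range(100):
    x += Fraction(1, 5)   # 0.2 stored exactly as the rational 1/5
print x                   # 28, exactly

t = Decimal("1386919679")
t += Decimal("0.02")      # a string literal keeps the decimal value exact
print t                   # 1386919679.02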
Related
I am building a neural network and using xtensor for the array multiplication in the feed-forward pass. The network takes in xt::xarray<double> and outputs a decimal number between 0 and 1. I have been given a sheet of expected outputs. When I compare my output with the provided sheet, I find that all the results differ after exactly 7 digits. For example, if the required value is 0.1234567890123456, I get 16-digit values that agree only in the first 7 digits, such as 0.1234567993344660 or 0.1234567221155667.
I know I cannot get exactly 0.1234567890123456 because of floating point math, but how can I debug this or increase the precision to get closer to the required number? Thanks.
Update:
xt::xarray<double> Layer::call(xt::xarray<double> input)
{
    return xt::linalg::dot(input, this->weight) + this->bias;
}
For the forward pass I am simply calling this call method a number of times, where weight and bias are xt::xarray<double> arrays.
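As a rough illustration of why the low-order digits drift (a Python sketch with made-up data, not the xtensor code): every product and every addition in a dot product rounds to 53 bits, so even summing the same products in a different order can change the trailing digits, and those roundings accumulate across layers.
import random

random.seed(1)
w = [random.uniform(-1, 1) for _ in range(1000)]
v = [random.uniform(-1, 1) for _ in range(1000)]

forward = sum(wi * vi for wi, vi in zip(w, v))
backward = sum(wi * vi for wi, vi in zip(w[::-1], v[::-1]))

print(repr(forward))    # the same products summed in opposite orders
print(repr(backward))   # typically disagree in the last few digits
print(abs(forward - backward) / abs(forward))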
I need to solve the differential equation y' = 6e^(2x - y).
I am trying to do that in SymPy with dsolve().
sol = dsolve(Derivative(f(x), x) - 6 *(e**(2*x-f(x))), f(x))
But I always get the error:
expecting ints or fractions, got 7.38905609893065022723042746058 and 6
What is the problem?
Where did you get e from? It seems you used math.exp(1) or something similar, getting a floating point value that the symbolic package cannot treat correctly.
Using sympy.exp instead works perfectly; even defining e = sympy.exp(1) is correctly recognized. Both give the result:
Eq(f(x), log(C1 + 3*exp(2*x)))
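For reference, a complete runnable version of the fix:
from sympy import Function, Derivative, dsolve, exp, symbols

x = symbols('x')
f = Function('f')

# use the symbolic exp from sympy rather than a numeric e = math.exp(1)
sol = dsolve(Derivative(f(x), x) - 6*exp(2*x - f(x)), f(x))
print(sol)   # Eq(f(x), log(C1 + 3*exp(2*x)))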
When coding in Fortran, I find that if I use a REAL value as the control variable of a do-loop, the outputs are strange. For example:
do i=0.1,1.0,0.1
   write (13,"(F15.6)") i
end do
The outputs are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0. But when I set the start value to 0.6:
do i=0.6,1.0,0.1
   write (13,"(F15.6)") i
end do
the outputs are 0.6, 0.7, 0.8, 0.9, and 1.0 is never printed. Why does this happen?
This is basically a rounding issue; see Precision problems of real numbers in Fortran and the link that follows from there. Comparing two floating point numbers is tricky, and that does not play well with do loops.
You should print more decimal digits, not just 6:
0.1000000015
0.2000000030
0.3000000119
0.4000000060
0.5000000000
0.6000000238
0.7000000477
0.8000000715
0.9000000954
1.0000001192
The values are not precise.
In Fortran the number of iterations is computed before the loop starts. When you compute the loop trip count yourself:
write (*,"(F15.10)") (1.0 - 0.1) / 0.1
write (*,"(F15.10)") (1.0 - 0.6) / 0.1
you will get:
9.0000000000
3.9999997616
so the latter one will be iterated only four times (3 + 1 = 4; so i = 0.6, 0.7, 0.8 and 0.9), because the count is truncated from 3.999... to 3.
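You can mimic the single-precision arithmetic in Python with numpy.float32 to watch the truncation happen (a rough illustration; the Fortran standard computes the trip count as MAX(INT((last - first + step) / step), 0)):
import numpy as np

first, last, step = np.float32(0.6), np.float32(1.0), np.float32(0.1)
print((last - first) / step)                 # about 3.9999998, not 4.0

count = max(int((last - first + step) / step), 0)
print(count)                                 # 4 iterations, so 1.0 is never reached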
Real loop counters were deleted from Fortran for good reasons; don't use them. Rounding is one of the problems. The compiler should warn you:
Warning: Deleted feature: Start expression in DO loop at (1) must be integer
Also, naming a real variable as i should be a crime.
import random

a = (random.random(), random.random())
print(a)
print(a[0])
The result is:
(0.4817527913069962, 0.7017598562799067)
0.481752791307
What extra is happening when printing a tuple (the behavior is similar for a list)? Why are there extra fraction digits?
Thanks a lot.
BTW, this is Python 2.7.
What you are seeing is the difference between the formatting choices made by str(float) and repr(float). In Python 2.x, str(float) returns 12 significant digits, while repr(float) returns enough digits (up to 17) to identify the value exactly. Printing a float formats it with str(), which accounts for the 12 digits you see for a[0]. But when the printed value is a tuple or list, the string formatting logic formats each element with repr().
The output of repr(float) must convert back to the original value, and 17 significant digits are always enough to guarantee that. Python 2.7 and 3.1+ use a more sophisticated algorithm that returns the shortest string that still round-trips to the original value, which is why your tuple shows 16 digits rather than 17. Since repr(float) now frequently returns a friendlier-looking result, str(float) was changed in Python 3 to be the same as repr(float).
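You can see the two formattings side by side in Python 2.7, using the value from your question:
x = 0.4817527913069962

print str(x)                # 0.481752791307     (12 significant digits)
print repr(x)               # 0.4817527913069962 (round-trips exactly)
print float(repr(x)) == x   # True
print (x,)                  # containers format their elements with repr()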
I have problems working with large numbers and long decimal numbers. Others have addressed such issues using precisionEvaluate, but I could not get consistent results with that function.
Example with this code:
<cfset n = 0.000000000009>
<cfoutput>#precisionEvaluate(n)#</cfoutput> // this will produce "9E-12"
<cfoutput>#precisionEvaluate("n")#</cfoutput> // this will produce "0.000000000009"
According to the Adobe documentation, quoting the expression is not recommended (due to processing inefficiency) and should produce the same result; however, that is not the case in the code above.
Further trials with inconsistent results:
<cfset n = 0.000000000009>
<cfset r = 12567.8903>
<cfoutput>#precisionEvaluate(r * n)#</cfoutput> // this will produce "1.131110127E-7"
<cfoutput>#precisionEvaluate("r * n")#</cfoutput> // this will produce "1.131110127E-7", same as above
<cfoutput>#precisionEvaluate(r / n)#</cfoutput> // this will produce "1396432255555555.55555555555555555556"
<cfoutput>#precisionEvaluate("r / n")#</cfoutput> // this will produce "1396432255555555.55555555555555555556", same as above
Has anybody run into a similar problem? What is a practical solution to address the inconsistency?
I have tried the val() function, which does not help because it is limited to short numbers, and the numberFormat() function, which is awkward because we have to pass the number of decimals to format the value properly.
When it comes to numbers, do not always believe what you see on the screen. That is just a "human friendly" representation of the number. In your case, the actual results (or numbers) are consistent; it is just a matter of how those numbers are presented.
PrecisionEvaluate returns a java.math.BigDecimal object. In order to display the number represented by that object inside <cfoutput>, CF invokes the object's toString() method. Per the API, toString() may use scientific notation to represent the value. That explains why it is used for some of your values, but not others. (Though with or without the exponent, it still represents the same number). However, if you prefer to exclude the exponent, just use BigDecimal.toPlainString() instead:
toPlainString() - Returns a string representation of this BigDecimal without an exponent field....
Example:
<cfscript>
    n = 0.000000000009;
    r = 12567.8903;
    result = precisionEvaluate(r * n);
    WriteOutput( result.getClass().name );
    WriteOutput("<br />result.toString() =" & result.toString());
    WriteOutput("<br />result.toPlainString() =" & result.toPlainString());
</cfscript>
Result:
java.math.BigDecimal
result.toString() =1.131110127E-7
result.toPlainString() =0.0000001131110127