SymPy precision using rationals

In SymPy I am using large rationals, e.g.
xa2=2869041017039531/549755813888
xtotatives=5221
print(Rational(2869041017039531/549755813888)*5221)
For a larger set of similar fractions I get unexpected output when I check the number of distinct fractions. Is there a way to increase the precision in SymPy for rationals using simple code like this?
Edit1:
import math
import numpy as np
from collections import Counter
from sympy import Symbol, Rational, fraction
#from decimal import *
#getcontext().prec = 10000 #digits of precision for decimal
A060753=[1, 2, 3, 15, 35, 77, 1001]
A038110=[1, 1, 1, 4, 8, 16, 192]
totatives=[]
#primesList=[2,3,5,7]
#primesList=[2,3,5,7,11]
primesList=[2,3,5,7,11,13]
primeProduct=math.prod(primesList)
nToUse=len(primesList)
valuesToCheck=range(1,primeProduct)
for n in valuesToCheck:
    if math.gcd(n, primeProduct)==1 and n<primeProduct:
        totatives.append(n)
print(len(totatives))
correctOutput=[]
incorrectOutput=[]
a2=[]
k=0
totativeCount=len(totatives)
while k<totativeCount:
    #if k%round(totativeCount/50)==1:
    #    print(f"loop {k+1} of {totativeCount}")
    a2.append(A060753[nToUse]/A038110[nToUse]*(k+1))
    correctOutput.append(A060753[nToUse]*(k+1)-A038110[nToUse]*totatives[k])
    incorrectOutput.append((Rational(a2[k])-Rational(totatives[k]))*A038110[nToUse])
    k+=1
print(f"count of distinct correct: {len(np.unique(correctOutput))}")
print(f"sum(correctOutput): {sum(correctOutput)}")
print(f"count of distinct incorrect: {len(np.unique(incorrectOutput))}")
print(f"sum(incorrectOutput): {float(sum(incorrectOutput))}")
Output:
count of distinct correct: 2422
sum(correctOutput): 2882880
count of distinct incorrect: 3408
sum(incorrectOutput): 2882880.000000864
Cheers,
Jamie

SymPy doesn't use a global precision like decimal. Every SymPy Float object stores its own precision (defaulting to 15 digits).
If you want to use rational numbers, your best bet is to avoid floats entirely. You can do that by making sure your integers are SymPy integers, for example by calling sympify() on values that start out as Python integers, like
A060753 = sympify([1, 2, 3, 15, 35, 77, 1001])
A038110 = sympify([1, 1, 1, 4, 8, 16, 192])
And also by using sympy.prod and sympy.gcd instead of math.prod and math.gcd. This avoids the gotcha where dividing two Python int objects produces a float, which loses information about the exact rational number it represents.
If you do this, the results will be rational numbers. You can then convert these to floats with as many digits as you want with number.evalf()
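Putting the pieces together, a minimal sketch using the lists from the question (the specific prints are only there to illustrate the gotcha and the fix):
from sympy import sympify, Rational, prod, gcd

A060753 = sympify([1, 2, 3, 15, 35, 77, 1001])
A038110 = sympify([1, 1, 1, 4, 8, 16, 192])

# Dividing two Python ints produces a float, so the Rational built from it
# is the exact binary value of that float, not the intended ratio:
print(Rational(1001/192) == Rational(1001, 192))       # False
# Dividing SymPy Integers stays exact:
print(A060753[6]/A038110[6] == Rational(1001, 192))    # True

# sympy.prod and sympy.gcd keep the intermediate values inside SymPy too:
primesList = sympify([2, 3, 5, 7, 11, 13])
primeProduct = prod(primesList)      # SymPy Integer 30030
print(gcd(primeProduct, 17))         # 1

# Exact rationals can be printed to any number of digits afterwards:
print(Rational(1001, 192).evalf(30))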

Related

How to round specific parts of symbolic expression using sympy?

I'm new to python and sympy and am a little lost. What's the easiest way to round all of the numbers except 0.268994781998603, 0.525103332486078, and 0.2357023740927390 in equations that look like this:
0.268994781998603*x**0.24883285 + 0.525103332486078*exp(-Abs(2.011218*x - 1.101318)) + 0.2357023740927390*x**0.25234357
Would it have to do with using srepr?
Ultimately, I'd like to round the exponents 0.24883285 and 0.25234357 to .25 so sympy will combine those respective terms when using sympify.
Thanks!
It looks like what you want to do is keep the high precision Floats but round the lower precision ones. You can discriminate based on the associated precision. I defined 'eq' to be the equation you gave above:
>>> for i in sorted(eq.atoms(Float)):
... print(i._prec, i)
...
27 -1.101318
53 0.235702374092739
30 0.24883285
30 0.25234357
53 0.268994781998603
53 0.525103332486078
27 2.011218
So let's get the lower precision floats in a list:
>>> lp = [i for i in eq.atoms(Float) if i._prec <= 30]
And let's define a replacement dictionary that rounds to two decimal places:
>>> reps = {k: k.round(2) for k in lp}
And now use it to replace those Floats in eq
>>> eq.subs(reps)
0.504697156091342*x**0.25 + 0.525103332486078*exp(-Abs(2.01*x - 1.1))
The exponents, now being the same, caused the two terms to join.
If you rounded at two significant figures you would get:
>>> reps = {k: k.n(2) for k in lp}
>>> eq.subs(reps)
0.268994781998603*x**0.25 + 0.235702374092739*x**0.25 + 0.525103332486078*exp(-Abs(2.0*x - 1.1))
The terms don't join because these 2-sig-fig values are not exactly the same. Conversion to a string and re-sympification will work, however. (But I would stick to the round version.)
>>> eq2 = _
>>> from sympy import S
>>> S(str(eq2))
0.504697156091342*x**0.25 + 0.525103332486078*exp(-Abs(2.0*x - 1.1))
To just replace Floats in a given region of the expression there are lots of ways to pick apart the expression: coefficients of Mul, constant terms of Add, etc. In the comments below you say that you want to make the change in sin, sign, exp and exponents (Pow), so something like this can work:
>>> from sympy import sin, sign, exp, Pow
>>> eq.replace(
... lambda x: isinstance(x, (sin, sign, exp, Pow)),
... lambda x: x.xreplace(dict([(i,i.round(2)) for i in x.atoms(Float)])))
0.504697156091342*x**0.25 + 0.525103332486078*exp(-Abs(2.01*x - 1.1))
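For reference, a minimal sketch of how the session above can be reproduced, assuming eq is parsed from the expression string in the question (each Float then carries a precision derived from its number of digits):
from sympy import sympify, Float

eq = sympify("0.268994781998603*x**0.24883285"
             " + 0.525103332486078*exp(-Abs(2.011218*x - 1.101318))"
             " + 0.2357023740927390*x**0.25234357")

# Floats parsed from shorter strings carry fewer bits (_prec), which is what
# the answer uses to tell the low-precision constants apart:
for f in sorted(eq.atoms(Float)):
    print(f._prec, f)

lp = [f for f in eq.atoms(Float) if f._prec <= 30]
print(eq.subs({k: k.round(2) for k in lp}))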

Handle float values beyond the scope of numpy

I have a part of an integral quad function that does
x * math.exp(a * b)
where a and b are large values, e.g. a = 13.03 and b = 95.632154355654.
This gave me a math range error (OverflowError).
Is there any exponential function that can handle extremely large values? I tried using
numpy.exp(a * b)
But this returned inf. Are there any other alternatives?
Try the decimal module. It handles math very well with large numbers or when you need lots of decimal precision.
import decimal as d
a = 13.03
b = 95.632154355654
print(d.Decimal(a * b).exp())
For me this didn't raise an error, and printed 1.474672519395501705817002084E+541
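If you also need to control the number of significant digits, the context precision can be raised, and building the Decimals from strings avoids the binary rounding in a * b; a minimal sketch along the same lines:
from decimal import Decimal, getcontext

getcontext().prec = 50                  # significant digits for Decimal arithmetic
a = Decimal("13.03")                    # strings avoid binary-float rounding of the inputs
b = Decimal("95.632154355654")
print((a * b).exp())                    # still finite, no OverflowError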

Why isn't to!int() working properly?

Why does this assertion fail?
import std.conv;
void main()
{
    auto y = 0.6, delta = 0.1;
    auto r = to!int(y/delta);
    assert(r == 6);
}
r's value should be 6, and yet it's 5. Why?
This is probably because 0.6 can't be represented purely in a floating point number. You write 0.6, but that's not exactly what you get - you get something like 0.599999999. When you divide that by 0.1, you get something like 5.99999999, which converts to an integer of 5 (by rounding down).
Examples in other languages:
C#: Why is (double)0.6f > (double)(6/10f)?
Java: Can someone please explain me that in java why 0.6 is < 0.6f but 0.7 is >= 0.7f
Computers represent floating point numbers in binary. The decimal numbers 0.6 and 0.1 do not have an exact binary representation, and the number of bits used to represent them is finite. As a result there is truncation, whose effect is seen during the division: the result is not exactly 6.00000000, but perhaps 5.99999999, which is then truncated to 5.
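The same effect is easy to reproduce in Python (the exact trailing digits may vary with how the platform formats doubles, but the truncation is the point):
# 0.6 and 0.1 are each rounded to the nearest binary double, so their
# quotient lands just below 6 and truncates to 5:
print(0.6 / 0.1)         # 5.999999999999999
print(int(0.6 / 0.1))    # 5
print(round(0.6 / 0.1))  # 6 -- rounding instead of truncating avoids the surprise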

Can't divide a smaller number by a larger number in Python

In Python, I cannot divide 5 by 22. When I try this, it gives me zero, even when I use float?!
>>> print float(5/22)
0.0
It's a problem with order of operations. What's happening is this:
* First, Python evaluates 5/22. Since 5 and 22 are integers, it performs integer division, rounding down; the result is 0.
* Next you convert that result to a float, so float(0) gives 0.0.
What you want to do is force one (or both) operands to floats before dividing, e.g.
print 5.0/22 (if you know the numbers ahead of time)
print float(x)/22 (if you need to work with a variable integer x)
Right now you're casting the result of integer division (5/22) to float. 5/22 in integer division is 0, so you'll be getting 0 from that. You need to call float(5)/22.
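For reference, this applies to Python 2, where / between two ints is integer division; a small sketch of the module-wide alternative (Python 3's / already behaves this way):
# Python 2 only: make / perform true division throughout the module
from __future__ import division
print 5 / 22     # about 0.227 instead of 0
print 5 // 22    # 0 -- floor division is still available when you want it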

Fortran format statement with highest precision in the system

Someone wanting less precision would write
999 format ('The answer is x = ', F8.3)
Others wanting higher output precision may write
999 format ('The answer is x = ', F18.12)
Thus it totally depends on what the user desires. What is the format statement that exactly matches the precision used in the calculation?
(Note, this may vary from system to system)
It is a difficult question because you request "the precision of the calculation", which depends on so many factors. For example: if I solve f(x)=0 via Newton's method to a tolerance of 1E-6, would you want a format with seven digits?
On the other hand, if you mean the "highest precision attainable by the type" (e.g., double or single precision), then you can simply find the corresponding epsilon (machine epsilon) and use that to build the format. If epsilon is 1E-15, then you can use a format with no more than 16 digits.
In Fortran you can use the EPSILON(X) function to get this number (the answer will depend on the type of X), then take the floor of the absolute value of the base-10 logarithm of epsilon and make that the number of decimals in your float representation.
For example, if epsilon is 1E-12, the log is -12, the abs is 12, and the floor is 12, so you want a format like F15.12 (12 decimals + 1 point + the zero + the sign = 15 places).
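As a rough cross-check of that arithmetic from Python (assuming IEEE double precision, where epsilon is about 2.2E-16):
import math
import sys

eps = sys.float_info.epsilon                      # ~2.22e-16 for IEEE doubles
decimals = int(math.floor(abs(math.log10(eps))))  # 15
width = decimals + 3                              # decimals + point + leading zero + sign
print("F%d.%d" % (width, decimals))               # F18.15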
The problem with floating point numbers is that there is no precision as such: only significant digits.
For instance, if you are calculating longitudes in real*4, near the UK you'd be accurate to 6 decimal places, but if you were in Colorado Springs, it would only be accurate to 4 decimal places. It would not make any sense to print the number in F format; it is just rubbish after the 4th decimal place.
If you wish to print to maximum precision, print in E format. Since it is always n.nn..nEnn, you get all the significant digits.
Edit - user4050's query
Try the following example
program main
    real intpart, multiplier
    integer ii
    multiplier = 1
    do ii = 1, 6
        intpart = 9.87654321
        intpart = intpart * multiplier
        print '(F15.7, E15.7, G15.8)', intpart, intpart, intpart
        multiplier = multiplier * 10
    end do
    stop
end program
What you will get is something like
9.8765430 0.9876543E+01 9.8765430
98.7654266 0.9876543E+02 98.765427
987.6542969 0.9876543E+03 987.65430
9876.5429688 0.9876543E+04 9876.5430
98765.4296875 0.9876543E+05 98765.430
987654.3125000 0.9876543E+06 987654.31
Notice that the precision changes as the number gets bigger because a single-precision float only has about 7 significant figures.
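Those roughly 7 significant figures follow from single precision's machine epsilon, which can be inspected from Python as well, for example (numpy is assumed here purely for its float32 type):
import numpy as np

info = np.finfo(np.float32)
print(info.eps)                 # ~1.19e-07, i.e. about 7 significant decimal digits
print(info.precision)           # 6 -- the approximate number of reliable decimal digits
print(np.float32(9.87654321))   # the nearest float32, good to about 7 digits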