How to restrict division in SymPy (polynomial)?

I have the following expression:
from sympy import *

X = Symbol('X')
expression = X**2 * (X - 1) * (X * (2*X*(X - 2) + 1) + 1) / 12
n, d = fraction(expression)
n = n.expand(basic=True)
print(n / d)
I am getting following result:
X**6/6 - X**5/2 + 5*X**4/12 - X**2/12
My expected result is
(2*X**6 - 6*X**5 + 5*X**4 - X**2)/12
Is there a way to do this in SymPy, or do I need to write a custom function to handle it?

SymPy represents division as multiplication by a power with exponent -1: a/b is stored as Mul(a, Pow(b, -1)). To represent the division without evaluating it, construct the Mul with evaluate=False:
print(Mul(n, Pow(d, -1), evaluate=False))
returns
(2*X**6 - 6*X**5 + 5*X**4 - X**2)/12
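To see this representation directly, srepr prints the full expression tree (a small illustration, not part of the original answer):

from sympy import Symbol, srepr

X, Y = Symbol('X'), Symbol('Y')
print(srepr(X / Y))   # Mul(Symbol('X'), Pow(Symbol('Y'), Integer(-1)))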

Related

Fast inverse square root using fixed point instead of floating point

I am trying to implement Fast Inverse Square Root for a fixed point number, but I'm not getting anywhere.
I am trying to follow exactly the same principle as in the article, except that instead of writing the number in the floating-point format x = (-1)^s * (1 + M) * 2^(E-127), I am using the format x = M * 2^-16, i.e. a 32-bit fixed-point number with 16 integer bits and 16 fractional bits.
The problem is that I cannot find the value of the "magic constant". According to my calculations, it doesn’t exist, but I’m not a mathematician and I think I’m doing everything wrong.
To solve Y = 1/sqrt(x), I used the following reasoning (I don't know if it is correct).
In the original code, the initial guess Y0 for the Newton approximation is given by:
i = 0x5f3759df - (i >> 1);
Which means that we will have as a result a floating point number given by:
y0 = (1 + R2 - M / 2) * 2 ^ (R1 - E / 2);
This is because the operation >> divides exponent and mantissa by 2, and then we perform a subtraction of the numbers as integers.
Following the steps shown in the article, I set the format of x to:
x = M * 2 ^ -16
In an attempt to perform the same logic, I try to define Y0 for:
Y0 = (R2 - M / 2) * 2 ^ (R1 - (-16/2));
I'm trying to find a number that minimizes the error given by:
error = (Y - Y0) / Y
Regardless of the value of R1, I can use shift operations to correct the exponent of my final result, obtaining the correct fixed-point value.
Where am I wrong?
It can't be done.
The fast inverse sqrt works because of the floating-point representation, which has already split the number into a power of two (the exponent) and the significand.
It can be done.
With the same tricks as for floating point, it's possible to convert your fixed-point number into the form 2^exp * x. Given uint32_t a, compute uint8_t exp = bias - __builtin_clz(a); uint32_t b = a << exp;, with the constants (and the domain of a) chosen so carefully that there are no underflows or overflows.
Thus you will actually have a custom floating-point representation, tailored for this specific purpose, omitting the sign bit at least and having the best possible number of bits for the exponent, which might as well be 8.
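To illustrate the decomposition (a Python sketch, not part of the original answer; a real implementation would keep the significand as a shifted integer):

def split_q16(a):
    # decompose a Q16.16 integer a (value x = a * 2**-16) as x = m * 2**e, m in [1, 2)
    n = a.bit_length()
    e = n - 1 - 16              # exponent of x relative to the binary point
    m = a / (1 << (n - 1))      # significand in [1, 2), kept as a float here for clarity
    return e, m

a = int(2.5 * 2**16)            # x = 2.5 in Q16.16
e, m = split_q16(a)
print(e, m, m * 2**e)           # -> 1 1.25 2.5
# 1/sqrt(x) = 2**(-e/2) / sqrt(m); halving the exponent is a shift (after making e even),
# and approximating 1/sqrt(m) on [1, 2) is exactly what the float magic constant does.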

How to make sympy simplify a radical expression equaling zero

The three (real) roots of the polynomial x^3 - 3x + 1 sum up to 0. But sympy does not seem to be able to simplify this sum of roots:
>>> from sympy import *
>>> from sympy.abc import x
>>> rr = real_roots(x**3 -3*x + 1)
>>> sum(rr)
CRootOf(x**3 - 3*x + 1, 0) + CRootOf(x**3 - 3*x + 1, 1) + CRootOf(x**3 - 3*x + 1, 2)
The functions simplify and radsimp cannot simplify this expression. The minimal polynomial, however, is computed correctly:
>>> minimal_polynomial(sum(rr))
_x
From this we can conclude that the sum is 0. Is there a direct way of making sympy simplify this sum of roots?
The following function computes the rational number equal to an algebraic term if possible:
import sympy as sp

# try to simplify an algebraic term to a rational number
def try_simplify_to_rational(expr):
    try:
        float(expr)  # does the expression evaluate to a real number?
        minPoly = sp.poly(sp.minimal_polynomial(expr))
        print('minimal polynomial:', minPoly)
        if len(minPoly.monoms()) == 1:  # minPoly == x
            return 0
        if minPoly.degree() == 1:  # minPoly == a*x + b
            a, b = minPoly.coeffs()
            return sp.Rational(-b, a)
    except TypeError:
        pass  # expression does not evaluate to a real number
    except sp.polys.polyerrors.NotAlgebraic:
        pass  # expression does not evaluate to an algebraic number
    except Exception as exc:
        print('unexpected exception:', str(exc))
    print('simplification to rational number not successful')
    return expr  # simplification not successful
See the working example:
x = sp.symbols('x')
rr = sp.real_roots(x**3 - 3*x + 1)
# sum of roots equals (-1)*coefficient of x^2, here 0
print(sp.simplify(sum(rr)))
print(try_simplify_to_rational(sum(rr))) # -> 0
A more elaborate function computing also simple radical expressions is proposed in sympy issue #19726.

Numerically calculate combinations of factorials and polynomials

I am trying to write a short C++ routine to calculate the following function F(i,j,z) for given integers j > i (typically they lie between 0 and 100) and a complex number z (bounded by |z| < 100), where L are the associated Laguerre polynomials. (The formula was given as an image; judging from the rest of the question, it combines a prefactor built from exp(-|z|^2/2), sqrt(i!/j!) and a power of z with an associated Laguerre polynomial evaluated at |z|^2.)
The issue is that I want this function to be callable from within a CUDA kernel (i.e. with a __device__ attribute). Standard library/Boost/etc. functions are therefore out of the question, unless they are simple enough to re-implement on my own - this especially applies to the Laguerre polynomials, which exist in Boost and C++17. Even if I manage to wrap a standard function for Laguerre polynomials, I still have a similar pre-factor to calculate, of the form (z^j/j!).
Question: How can I do a relatively simple implementation of such a function, without introducing significant numerical instability?
My idea so far is to calculate L and its pre-factor independently. The pre-factor I will calculate by looping from 1 to j-i and accumulating the product (z/1) * (z/2) * ... * (z/(j-i)) = z^(j-i)/(j-i)!, as sketched below. I will then calculate the remaining factor exp(-|z|^2/2) * (j-i)! * sqrt(i!/j!) (either in a similar way, or through the Gamma function, which is implemented in CUDA math). The idea is then to find a minimal algorithm to calculate the associated Laguerre polynomial, unless I manage to wrap an implementation from e.g. Boost or GNU C++.
Edit/side note: The expression for F actually blows up numerically for some values of i and j. It was derived incorrectly in the source I took it from; the indices of the associated Laguerre polynomial should instead be L_i^(j-i). That does not invalidate the approaches suggested in the answers/comments.
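The incremental product described in the question can be sketched in a few lines (Python for illustration; the function name is mine):

def prefactor(z, n):
    # z**n / n!, built as (z/1) * (z/2) * ... * (z/n), so intermediates
    # stay near the final magnitude instead of overflowing
    acc = complex(1.0)
    for k in range(1, n + 1):
        acc *= z / k
    return acc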
I recommend finding a recurrence relation for the coefficients of the Laguerre Polynomial:
C(k+1) = g(k)C(k)
g(k) = C(k+1) / C(k)
g(k) = -z * (j - k) / ((j - i + k + 1) * (k + 1)) //Verify this yourself :)
This allows you to avoid most of the factorials when computing the polynomial.
After that I would follow Severin's idea of doing the calculations in logarithms,
so as not to overflow the double floating-point range:
log(F) = log(sqrt(i!/j!)) - |z|^2 + (j-i) * log(-z) + log(L(|z|^2))
log(L) = log((2*j - i)!) + log(sum) // where the summation is computed using the recurrence relation above
and using the fact that:
log(a!) = sum(k=1..a, log(k))
and also:
log(z) = log(|z|) + I * arg(z) for complex z
log(-z) = log(|z|) + I * arg(-z)
where arg(-z) = arg(z) ± pi, with the sign chosen so the angle stays in (-pi, pi]
for the log(sqrt(i!/j!)) part I would do (assuming that j >= i):
log(sqrt(i!/j!))
= 0.5 * (log(i!) - log(j!))
= -0.5 * sum(k=i+1..j, log(k))
I haven't tried this out, so there could be small mistakes here and there. This answer is more about the technique than about being a copy-paste-ready solution.
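Not from the original answer, but a small Python sketch of that coefficient recurrence (written for a generic L_n^(alpha)(x); in the notation above, n = j and alpha = j - i), checked against the explicit series:

import math

def laguerre_recurrence(n, alpha, x):
    # each term follows from the previous one:
    # term_{k+1} = term_k * -(n - k) * x / ((alpha + k + 1) * (k + 1))
    term = float(math.comb(n + alpha, n))   # k = 0 term: binom(n + alpha, n)
    total = term
    for k in range(n):
        term *= -(n - k) * x / ((alpha + k + 1) * (k + 1))
        total += term
    return total

def laguerre_direct(n, alpha, x):
    # explicit series: sum_k (-1)^k * binom(n + alpha, n - k) * x^k / k!
    return sum((-1)**k * math.comb(n + alpha, n - k) * x**k / math.factorial(k)
               for k in range(n + 1))

print(laguerre_recurrence(5, 2, 1.3))   # both print the same value
print(laguerre_direct(5, 2, 1.3))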
Well, what you should do is work in logarithms.
Assuming the natural logarithm,
q = log(z^j/j!) = log(z^j) - log(j!) = j*log(z) - log(Gamma(j+1))
The first term is simple; the second term is the standard C++ function lgamma(x) (or you could use GSL).
Compute the value of q and return cexp(q).
You could fold the exponential factor into q in this method as well.
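The same idea as a Python sketch (lgamma and cexp have direct equivalents in the math and cmath modules; the function name is mine):

import cmath
import math

def power_over_factorial(z, j):
    # q = j*log(z) - log(j!), evaluated in log space to avoid huge intermediates
    q = j * cmath.log(z) - math.lgamma(j + 1)
    return cmath.exp(q)

print(power_over_factorial(3 + 4j, 80))   # z**80/80! without forming either factor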

sympy dsolve returns incorrect answer

I'm using sympy.dsolve to solve a simple ODE for a decay chain.
The answer I get for different decay rates (e.g. lambda_1 > lambda_2) is wrong. After substituting C1=0, I get a simple exponential
-N_0*lambda_1*exp(-lambda_1*t)/(lambda_1 - lambda_2)
instead of the correct answer which has:
(exp(-lambda_1*t)-exp(-lambda_2*t)).
What am I doing wrong?
Here is my code
import sympy as sp

sp.var('lambda_1 lambda_2 t')
sp.var('N_0')
daughter = sp.Function('N2', positive=True)(t)
stage1 = N_0 * sp.exp(-lambda_1 * t)
eq = sp.Eq(daughter.diff(t), stage1 * lambda_1 - daughter * lambda_2)
sp.dsolve(eq, daughter)
Your differential equation is (using shorter variable identifiers)
y' = A*N0*exp(-A*t) - B*y
Apply the integrating factor exp(B*t) to get the equivalent
(exp(B*t)*y(t))' = A*N0*exp((B-A)*t)
Integrate to get
exp(B*t)*y(t) = A*N0*exp((B-A)*t)/(B-A) + C
y(t) = A*N0*exp(-A*t)/(B-A) + C*exp(-B*t)
which is exactly what the solver computed.
Did you plug in C1 = 0 expecting to get a solution such that y(0) = 0? That's not how it works: C1 is an arbitrary constant in the formula, and setting it to 0 is no guarantee that the expression will evaluate to 0 when t = 0.
Here is a correct approach, step by step
sol1 = sp.dsolve(eq, daughter)
This returns a Piecewise, because it is not known whether the two lambdas are equal:
Eq(N2(t), (C1 + N_0*lambda_1*Piecewise((t, Eq(lambda_2, lambda_1)), (-exp(lambda_2*t)/(lambda_1*exp(lambda_1*t) - lambda_2*exp(lambda_1*t)), True)))*exp(-lambda_2*t))
We can clarify that they are not:
sol2 = sol1.subs(sp.Eq(lambda_2, lambda_1), False)
getting
Eq(N2(t), (C1 - N_0*lambda_1*exp(lambda_2*t)/(lambda_1*exp(lambda_1*t) - lambda_2*exp(lambda_1*t)))*exp(-lambda_2*t))
Next, we need C1 such that the right hand side turns into 0 when t = 0. So, take the right hand side, plug 0 for t, solve for C1:
C1 = sp.Symbol('C1')
C1val = sp.solve(sol2.rhs.subs(t, 0), C1, dict=True)[0][C1]
(It's not necessary to include dict=True but I like it, because it enforces uniform output of solve: it's an array of dictionaries.)
By the way, C1val is now N_0*lambda_1/(lambda_1 - lambda_2). Put it in:
sol3 = sol2.subs(C1, C1val).simplify()
and there you have it:
Eq(N2(t), N_0*lambda_1*(exp(lambda_1*t) - exp(lambda_2*t))*exp(-t*(lambda_1 + lambda_2))/(lambda_1 - lambda_2))
The expression is equivalent to N_0*lambda_1*(exp(-lambda_2*t) - exp(-lambda_1*t))/(lambda_1 - lambda_2), although SymPy seems loath to combine the exponentials here.
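For completeness, here is the whole procedure as one runnable script (a consolidation of the steps above, reusing the question's setup):

import sympy as sp

sp.var('lambda_1 lambda_2 t')
sp.var('N_0')
daughter = sp.Function('N2', positive=True)(t)
stage1 = N_0 * sp.exp(-lambda_1 * t)
eq = sp.Eq(daughter.diff(t), stage1 * lambda_1 - daughter * lambda_2)

sol1 = sp.dsolve(eq, daughter)                       # general solution (a Piecewise)
sol2 = sol1.subs(sp.Eq(lambda_2, lambda_1), False)   # assume the rates differ
C1 = sp.Symbol('C1')
C1val = sp.solve(sol2.rhs.subs(t, 0), C1, dict=True)[0][C1]
sol3 = sol2.subs(C1, C1val).simplify()               # enforce N2(0) = 0
print(sol3)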

Program to evaluate the polynomial ax^3 + bx^2 + cx + d with a minimum number of operations for given values of a, b, c and d

Please suggest a subroutine to evaluate the polynomial ax^3 + bx^2 + cx + d with the minimum number of operations for given values of a, b, c and d.
If using the bisection method to find its roots, is there any way to guess the bracketing limits dynamically?
The fastest method to evaluate f(x) = ax³ + bx² + cx + d is the Horner scheme, which uses parentheses to transform the expression to
f(x) = ((a*x+b)*x+c)*x+d
For finding roots, note that at x = -R and x = +R, where
R = 1 + max(abs(b), abs(c), abs(d))/abs(a)
(a Cauchy-type bound, outside which the leading term dominates), the polynomial has non-zero values of opposite sign. Use bisection, or better, regula falsi with the Illinois anti-stalling modification, as sketched below.
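Not from the original answer, but a Python sketch of both suggestions (Horner evaluation plus an Illinois-modified regula falsi on [-R, R]; names and the tolerance are illustrative):

def horner(a, b, c, d, x):
    # evaluate ax^3 + bx^2 + cx + d with 3 multiplications and 3 additions
    return ((a * x + b) * x + c) * x + d

def cubic_root(a, b, c, d, tol=1e-12, max_iter=200):
    R = 1 + max(abs(b), abs(c), abs(d)) / abs(a)    # all real roots lie in [-R, R]
    lo, hi = -R, R
    flo, fhi = horner(a, b, c, d, lo), horner(a, b, c, d, hi)
    side = 0
    x = lo
    for _ in range(max_iter):
        x = (lo * fhi - hi * flo) / (fhi - flo)     # false-position step
        fx = horner(a, b, c, d, x)
        if abs(fx) <= tol:
            break
        if fx * flo < 0:                            # root now bracketed in [lo, x]
            hi, fhi = x, fx
            if side == -1:                          # lo retained twice in a row:
                flo *= 0.5                          # halve its value (Illinois)
            side = -1
        else:                                       # root bracketed in [x, hi]
            lo, flo = x, fx
            if side == 1:
                fhi *= 0.5
            side = 1
    return x

print(cubic_root(1, -6, 11, -6))   # x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3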