I am trying to use SymPy to apply a standard engineering method of simplifying an equation, where you know one variable is much larger or smaller than another. For instance, given the equation
C1*R2*s+C1*R2+R1+R2
and knowing that
R1 >> R2
the equation can be simplified to
C1*R2*s+C1*R2+R1.
What you'd normally do by hand is divide by R1, giving
C1*R2*s/R1+C1*R2/R1+1+R2/R1
then anywhere you see R2/R1 by itself you can set it to zero and multiply by R1 again. I've not been able to figure out how this would be done in SymPy. Obviously it's easy to do the division step, but I haven't been able to figure out how to do the search-and-replace step: just using subs gives you
R1
which isn't the right answer. factor, expand, and collect don't seem to get me anywhere.
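For concreteness, here is a minimal reproduction of that failing attempt (my reconstruction of the steps described above); subs treats R2/R1 as a factor pattern, so every term containing it collapses to zero:
import sympy as sp

C1, R1, R2 = sp.symbols('C1 R1 R2', real=True)
s = sp.symbols('s')
expr = C1*R2*s + C1*R2 + R1 + R2
# subs matches R2/R1 inside each product, so all the divided terms vanish:
print(R1 * (expr/R1).expand().subs(R2/R1, 0))  # prints R1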
Using replace instead of subs works here: a plain (Wild-free) expression given to replace only matches an exact standalone subexpression, so only the lone R2/R1 term is set to zero.
import sympy as sp

C1, C2, R1, R2 = sp.symbols('C1, C2, R1, R2', real=True)
s = sp.symbols('s')
expr = C1*R2*s+C1*R2+R1+R2
print('original expression:', expr)
expr_approx = (R1 * ((expr/R1).expand().replace(R2/R1,0))).simplify()
print('approximate expression:', expr_approx)
original expression: C1*R2*s + C1*R2 + R1 + R2
approximate expression: C1*R2*s + C1*R2 + R1
I am not practiced in SymPy manipulation. I need to find the roots of a particular polynomial:
-4*x**(11/2) - 24*x**(9/2) - 16*x**(7/2) + 2*x**(5/2) + 16*x**5 + 23*x**4 + 5*x**3 - x**2
I verified that it has 2 real solutions, and I found one of them with the SymPy function
nsolve(mypoly, x, 1).
Why doesn't that call find the other one?
How can I proceed to find ALL roots?
To my knowledge, nsolve looks in the proximity of the provided initial guess to find one root for each equation.
I would plot the expression to find suitable initial guesses:
from sympy import *
from sympy.plotting import PlotGrid

x = symbols('x')
expr = -4*x**(S(11)/2) - 24*x**(S(9)/2) - 16*x**(S(7)/2) + 2*x**(S(5)/2) + 16*x**5 + 23*x**4 + 5*x**3 - x**2
p1 = plot(expr, (x, 0, 0.5), adaptive=False, n=1000, ylim=(-0.01, 0.05), show=False)
p2 = plot(expr, (x, 0, 5), adaptive=False, n=1000, ylim=(-200, 200), show=False)
PlotGrid(1, 2, p1, p2)
Now, we can do:
nsolve(expr, x, 0.2)
# out: 0.169003536680445
nsolve(expr, x, 4)
# out: 4.28968831654177
EDIT: to find all roots (even the complex ones), we can:
1. compute the derivative of the expression;
2. convert both the expression and the derivative to numerical functions with SymPy's lambdify;
3. visually inspect the expression in the complex plane to determine good initial values for the root-finding algorithm. I'm going to use this plotting module, SymPy Plotting Backend, which exposes a very handy function, plot_complex, to generate domain-coloring plots. In particular, I will plot alternating black and white stripes corresponding to the modulus;
4. use SciPy's newton method to compute the actual roots. EDIT: I just discovered that nsolve works too :)
# step 1 and 2
f = lambdify(x, expr)
f_der = lambdify(x, expr.diff(x))
# step 3
from spb import plot_complex
r = (x, -1 - 0.8j, 4.5 + 0.8j)  # region: (variable, bottom-left corner, top-right corner)
w = r[2].real - r[1].real
h = r[2].imag - r[1].imag
# number of discretization points; watch out for memory usage
n1 = 1500
n2 = int(h / w * n1)
plot_complex(expr, r, {"interpolation": "spline36"}, grid=False, coloring="e", n1=n1, n2=n2, size=(10, 5))
In the above picture we see circular stripes getting bigger and deforming. The center of each of these circular stripes represents a pole or a zero. But this is an easy case: there are no poles. So, from the above picture we count 7 zeros. We already know 3: the two computed above and the value 0. Let's find the others:
from scipy.optimize import newton
r1 = newton(f, x0=-0.9+0.1j, fprime=f_der)
r2 = newton(f, x0=-0.9-0.1j, fprime=f_der)
r3 = newton(f, x0=0.6+0.6j, fprime=f_der)
r4 = newton(f, x0=0.6-0.6j, fprime=f_der)
for r in (r1, r2, r3, r4):
    print(r, ": is it a zero?", expr.subs(x, r).evalf())
# out:
# (-0.9202719950522663+0.09010409402273806j) : is it a zero? -8.21787666002984e-15 + 2.06697764417957e-15*I
# (-0.9202719950522663-0.09010409402273806j) : is it a zero? -8.21787666002984e-15 - 2.06697764417957e-15*I
# (0.6323265751497729+0.6785871500619469j) : is it a zero? -2.2103533615688e-15 - 2.77549897301442e-15*I
# (0.6323265751497729-0.6785871500619469j) : is it a zero? -2.2103533615688e-15 + 2.77549897301442e-15*I
As you can see, inserting those values into the original expression gives results very close to zero. It is perfectly normal to see this kind of numerical error.
I just discovered that you can also use nsolve instead of newton to compute complex roots. This makes steps 1 and 2 unnecessary.
nsolve(expr, x, -0.9+0.1j)
# out: -0.920271995052266 + 0.0901040940227375*I
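For completeness, looping nsolve over the four complex initial guesses found above recovers all the complex roots in one go (a small sketch):
for x0 in (-0.9 + 0.1j, -0.9 - 0.1j, 0.6 + 0.6j, 0.6 - 0.6j):
    print(nsolve(expr, x, x0))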
I'm trying to use SymPy to generate equations for non-linear least-squares fitting. My goal is to make this quite complex, but for the moment here's a simple case (but not too simple!). It's basically fitting a two-dimensional sinusoid to data. Here's the SymPy code:
from sympy import *
S, l, m = symbols('S l m', real=True)
u, v = symbols('u v', real=True)
Vobs = symbols('Vobs', complex=True)
Vres = Vobs - S * exp(- 1j * 2 * pi * (u*l+v*m))
J=Vres*conjugate(Vres)
axes = [S, l, m]
grad = derive_by_array(J, axes)
hess = derive_by_array(grad, axes)
One element of the grad term looks like:
- 2.0*I*pi*S*u*(-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs)*exp(2.0*I*pi*(l*u + m*v)) + 2.0*I*pi*S*u*(-S*exp(2.0*I*pi*(l*u + m*v)) + conjugate(Vobs))*exp(-2.0*I*pi*(l*u + m*v))
What I'd like is to replace the expanded term (-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs) by Vres and contract the two conjugate terms into the more compact equivalent:
4.0*pi*S*u*im(Vres*exp(2.0*I*pi*(l*u + m*v)))
I cannot see how to do this with SymPy. This problem is bad for the first derivative (grad) but gets really out of hand with the second derivative (hess).
First of all, let's not use 1j in SymPy: it's a floating-point constant, and floats are bad for symbolic math. SymPy's imaginary unit is I. So,
Vres = Vobs - S * exp(- I * 2 * pi * (u*l+v*m))
To replace the expression Vres by a symbol, we first need to create such a symbol. I'm going to call it Vres0, but its name will be Vres, so it prints as "Vres" in formulas.
Vres0 = symbols('Vres')
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
The conjugate-substitute-conjugate back is needed because subs doesn't quite recognize the possibility of replacing the conjugate of an expression with the conjugate of the symbol.
Now g1 is
-2*I*pi*S*Vres*u*exp(2*I*pi*(l*u + m*v)) + 2*I*pi*S*u*exp(-2*I*pi*(l*u + m*v))*conjugate(Vres)
and we want to fold the sum of conjugate terms. I use a custom transformation rule for this: the rule fold_conjugates applies to every sum (Add) of two terms (len(f.args) == 2) where the second is a conjugate of the first (f.args[1] == f.args[0].conjugate()). The transformation it performs is to replace the sum by twice the real part of the first argument (2*re(f.args[0])). Like so:
from sympy.core.rules import Transform
fold_conjugates = Transform(lambda f: 2*re(f.args[0]),
                            lambda f: isinstance(f, Add) and len(f.args) == 2
                                      and f.args[1] == f.args[0].conjugate())
g = g1.xreplace(fold_conjugates)
Final result: 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v))).
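The same substitute-and-fold pipeline can be applied componentwise to the whole gradient, and in the same way to hess (a sketch reusing Vres, Vres0 and fold_conjugates from above, assuming grad has been recomputed with the I-based Vres; the helper name compact is mine):
def compact(expr):
    # replace Vres (and its conjugate) by the symbol, then fold conjugate pairs
    folded = expr.subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
    return folded.xreplace(fold_conjugates)

grad_compact = [compact(g) for g in grad]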
I'm using sympy.dsolve to solve a simple ODE for a decay chain.
The answer I get for different decay rates (e.g. lambda_1 > lambda_2) is wrong. After substituting C1=0, I get a simple exponential
-N_0*lambda_1*exp(-lambda_1*t)/(lambda_1 - lambda_2)
instead of the correct answer which has:
(exp(-lambda_1*t)-exp(-lambda_2*t)).
What am I doing wrong?
Here is my code:
import sympy as sp

sp.var('lambda_1 lambda_2 t')
sp.var('N_0')
daughter = sp.Function('N2', positive=True)(t)
stage1 = N_0*sp.exp(-lambda_1*t)
eq = sp.Eq(daughter.diff(t), stage1*lambda_1 - daughter*lambda_2)
sp.dsolve(eq, daughter)
Your differential equation is (using shorter variable identifiers)
y' = A*N0*exp(-A*t) - B*y
Apply the integrating factor exp(B*t) to get the equivalent
(exp(B*t)*y(t))' = A*N0*exp((B-A)*t)
Integrate to get
exp(B*t)*y(t) = A*N0*exp((B-A)*t)/(B-A) + C
y(t) = A*N0*exp(-A*t)/(B-A) + C*exp(-B*t)
which is exactly what the solver computed.
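A quick symbolic check of that formula (a sketch, with A, B, N0, C standing in for lambda_1, lambda_2, N_0, C1):
import sympy as sp

t, A, B, N0, C = sp.symbols('t A B N0 C')
y = A*N0*sp.exp(-A*t)/(B - A) + C*sp.exp(-B*t)
# the residual of y' = A*N0*exp(-A*t) - B*y should simplify to zero
print(sp.simplify(y.diff(t) - (A*N0*sp.exp(-A*t) - B*y)))  # -> 0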
Did you plug in C1 = 0, expecting to get a solution such that y(0) = 0? That's not how it works. C1 is an arbitrary constant in the formula; setting it to 0 is no guarantee that the expression will evaluate to 0 when t = 0. Indeed, setting C = 0 in the formula above gives y(0) = A*N0/(B-A), which is not zero in general.
Here is a correct approach, step by step.
sol1 = sp.dsolve(eq, daughter)
This returns a Piecewise because it's not known whether the two lambdas are equal:
Eq(N2(t), (C1 + N_0*lambda_1*Piecewise((t, Eq(lambda_2, lambda_1)), (-exp(lambda_2*t)/(lambda_1*exp(lambda_1*t) - lambda_2*exp(lambda_1*t)), True)))*exp(-lambda_2*t))
We can clarify that they are not:
sol2 = sol1.subs(sp.Eq(lambda_2, lambda_1), False)
getting
Eq(N2(t), (C1 - N_0*lambda_1*exp(lambda_2*t)/(lambda_1*exp(lambda_1*t) - lambda_2*exp(lambda_1*t)))*exp(-lambda_2*t))
Next, we need C1 such that the right hand side turns into 0 when t = 0. So, take the right hand side, plug 0 for t, solve for C1:
C1 = sp.Symbol('C1')
C1val = sp.solve(sol2.rhs.subs(t, 0), C1, dict=True)[0][C1]
(It's not necessary to include dict=True, but I like it because it enforces uniform output of solve: a list of dictionaries.)
By the way, C1val is now N_0*lambda_1/(lambda_1 - lambda_2). Put it in:
sol3 = sol2.subs(C1, C1val).simplify()
and there you have it:
Eq(N2(t), N_0*lambda_1*(exp(lambda_1*t) - exp(lambda_2*t))*exp(-t*(lambda_1 + lambda_2))/(lambda_1 - lambda_2))
The expression is equivalent to N_0*lambda_1*(exp(-lambda_2*t) - exp(-lambda_1*t))/(lambda_1 - lambda_2), although SymPy seems loath to combine the exponentials here.
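One way to confirm that equivalence (a quick check):
alt = N_0*lambda_1*(sp.exp(-lambda_2*t) - sp.exp(-lambda_1*t))/(lambda_1 - lambda_2)
print(sp.simplify(sol3.rhs - alt))  # -> 0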
Motivation. It is well known that the generating function for the Catalan numbers satisfies a quadratic equation. I would like to obtain the first several coefficients of a function implicitly defined by an algebraic equation (not necessarily a quadratic one!).
Example.
import sympy as sp
sp.init_printing() # math as latex
from IPython.display import display
z = sp.Symbol('z')
F = sp.Function('F')(z)
equation = 1 + z * F**2 - F
display(equation)
solution = sp.solve(equation, F)[0]
display(solution)
display(sp.series(solution))
Question. The approach where we explicitly solve the equation and then expand the solution as a power series works only for low-degree equations. How can I obtain the first coefficients of the formal power series for more complicated algebraic equations?
Related: since the algebraic and differential frameworks may behave differently, I posted another question, Sympy: how to solve differential equation in formal power series?
I don't know a built-in way, but plugging in a polynomial for F and equating the coefficients works well enough. One should not try to find all the coefficients at once from a large nonlinear system, though; that will give SymPy trouble. I take an iterative approach: first equate the free term to zero and solve for c0, then equate the next coefficient to zero and solve for c1, and so on.
This assumes a regular algebraic equation, in which the coefficient of z**k in the equation involves the k-th Taylor coefficient of F, and does not involve higher-order coefficients.
from sympy import *
z = Symbol('z')
d = 10 # how many coefficients to find
c = list(symbols('c:{}'.format(d))) # undetermined coefficients
for k in range(d):
    F = sum([c[n]*z**n for n in range(k+1)])  # truncated series, up to z**k inclusive
    equation = 1 + z * F**2 - F
    coeff_eqn = Poly(series(equation, z, n=k+1).removeO(), z).coeff_monomial(z**k)
    c[k] = solve(coeff_eqn, c[k])[0]
sol = sum([c[n]*z**n for n in range(d)]) # solution
print(series(sol + z**d, z, n=d)) # add z**d to get SymPy to print as a series
This prints
1 + z + 2*z**2 + 5*z**3 + 14*z**4 + 42*z**5 + 132*z**6 + 429*z**7 + 1430*z**8 + 4862*z**9 + O(z**10)
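As a sanity check, these are indeed the Catalan numbers; the known closed form of their generating function yields the same coefficients:
catalan_gf = (1 - sqrt(1 - 4*z))/(2*z)
print(series(catalan_gf, z, n=10))  # same coefficients as above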
Please suggest a subroutine to evaluate the polynomial ax³ + bx² + cx + d with a minimum number of operations for given values of a, b, c, and d.
If using the bisection method, is there any way to guess the bracketing limit values dynamically?
The fastest method to evaluate f(x) = ax³ + bx² + cx + d is the Horner scheme, which uses parentheses to transform the expression to
f(x) = ((a*x+b)*x+c)*x+d
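In code, that is three multiplications and three additions (a minimal sketch in Python; the function name is mine):
def horner_cubic(a, b, c, d, x):
    # evaluates a*x**3 + b*x**2 + c*x + d with 3 multiplications and 3 additions
    return ((a*x + b)*x + c)*x + d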
For finding roots note that at x=-R and x=+R where
R = 1+max(abs(b), abs(c), abs(d))/abs(a)
the polynomial will have non-zero values of opposite sign (this R is the Cauchy bound on the magnitude of the roots). Use bisection or, better, regula falsi with the Illinois anti-stalling modification, as sketched below.
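Here is a sketch of that root search in Python, combining the bound R with the Illinois variant of regula falsi (function and variable names are my own):
def cubic_bracket(a, b, c, d):
    # Cauchy bound: every real root lies in [-R, R]; for a cubic,
    # the values at -R and +R therefore have opposite signs
    return 1 + max(abs(b), abs(c), abs(d)) / abs(a)

def illinois(f, lo, hi, tol=1e-12, max_iter=100):
    # regula falsi with the Illinois anti-stalling modification;
    # assumes f(lo) and f(hi) have opposite signs
    flo, fhi = f(lo), f(hi)
    for _ in range(max_iter):
        x = hi - fhi*(hi - lo)/(fhi - flo)  # false-position point
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fhi*fx < 0:
            lo, flo = hi, fhi               # sign change: move the bracket
        else:
            flo /= 2                        # stalled endpoint: halve its value
        hi, fhi = x, fx
    return hi

a, b, c, d = 1, -6, 11, -6                  # (x-1)*(x-2)*(x-3), for example
f = lambda x: ((a*x + b)*x + c)*x + d
R = cubic_bracket(a, b, c, d)
print(illinois(f, -R, R))                   # finds one real root in [-R, R]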