SymPy's solve() command for equations != 0

As I've read in the SymPy docs, the solve() command expects the equation it is given to be equal to zero.
The equations I would like to solve are not in that form (and solving them is in fact my reason for using a library like SymPy), so is there a way to get around this?

What the docs are saying is that if you do something like
>>> solve(x**2 - 1, x)
Then solve is implicitly assuming that x**2 - 1 is equal to 0. If you wanted to solve x**2 - 1 = 2, then you could either subtract 2 from both sides, to get
>>> solve(x**2 - 1 - 2, x)
or you could use the Eq() class
>>> solve(Eq(x**2 - 1, 2), x)
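For a concrete check, here is a minimal self-contained sketch (assuming only a plain Symbol x) showing that both forms give the same answer:
from sympy import symbols, solve, Eq

x = symbols('x')
print(solve(x**2 - 1 - 2, x))     # [-sqrt(3), sqrt(3)]
print(solve(Eq(x**2 - 1, 2), x))  # [-sqrt(3), sqrt(3)]
Internally, solve effectively treats Eq(lhs, rhs) as lhs - rhs, so the two calls are equivalent.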

Related

Evaluating the solution pairs of the sympy function

I have a nonlinear function of two variables and I want to solve the equation, but the solution itself is an equation. How do I evaluate the function at a certain point?
CODE:
import sympy as sp
sp.init_printing()
x1,x2,y1,y2 = sp.symbols('x1,x2,y1,y2')
x1,y1=-2,3
f = sp.Eq((x1-x2)**2 + (y1-y2)**2,1)
a = sp.solve([f],(x2,y2))
Now I want a few solution pairs from the result 'a'.
Thanks in advance
:)
You have a single equation in two unknowns: pick a value for one and solve for the other. Here, we pick values for y2, solve for x2, and pair each solution with the chosen value. There is one solution each for y2 = 2 and y2 = 4, and two solutions when y2 = 3:
>>> [(j,i) for i in range(2,5) for j in sp.solve(f.subs(y2,i),x2)]
[(-2, 2), (-3, 3), (-1, 3), (-2, 4)]
Realizing that the equation represents a circle centered at (-2, 3) also permits one to use SymPy's Circle to give you an arbitrary Point in terms of a parameter:
>>> from sympy import Circle, pi
>>> from sympy.abc import t
>>> Circle((-2,3),1).arbitrary_point(t)
Point2D(cos(t) - 2, sin(t) + 3)
Substitute a value for t to get the corresponding point
>>> _.subs(t,pi)
Point2D(-3, 3)
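If you want several solution pairs at once, you can substitute a few values of t into that parametrized point; a small sketch along the same lines:
from sympy import Circle, pi
from sympy.abc import t

p = Circle((-2, 3), 1).arbitrary_point(t)
print([p.subs(t, v) for v in (0, pi/2, pi, 3*pi/2)])
# [Point2D(-1, 3), Point2D(-2, 4), Point2D(-3, 3), Point2D(-2, 2)]
Each Point2D is an (x2, y2) pair satisfying the original equation.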

How to use SymPy or another library to get a numerical solution

I was trying to solve two equations for two unknown symbols 'Diff' and 'OHs'. The equations are shown below:
x = (8.67839580228369e-26*Diff + 7.245e-10*OHs**3
     + 1.24402291559836e-10*OHs**2
     + OHs*(-2.38073807380738e-19*Diff - 2.8607855978291e-18)
     - 1.01141409254177e-29)
J = -0.00435840284294195*Diff**0.666666666666667*(1 + 3.64525434266056e-7/OHs) - 1
solution = sym.nsolve((x, J), (OHs, Diff), (0.000001, 0.000001))
print(solution)
is this the correct way to solve for the two unknowns?
Thanks for your help :)
Note: I edited your equation per Vialfont's comments.
I would say it is a possible way but you could do better by noticing that the J equation can be solved easily for OHs and substituted into the x equation. This will then be much easier for nsolve to solve:
>>> osol = solve(J, OHs)[0] # there is only one solution
>>> eq = x.subs(OHs,osol)
>>> dsol = nsolve(eq, 1e-5)
>>> eq.subs(Diff,dsol) # verify
4.20389539297445e-45
>>> osol.subs(Diff,dsol), dsol
(-2.08893263437131e-12, 4.76768525775849e-5)
But this is still pretty ill behaved in terms of scaling, so proceed with caution. I would suggest writing Diff**Rational(2, 3) instead of Diff**0.666666666666667. Or better yet, let Diff be y**3 so you are working with a polynomial in y:
>>> y = var('y', positive=True)
>>> yx=x.subs(Diff,y**3)
>>> yJ=J.subs(Diff,y**3)
>>> yosol=solve(yJ,OHs)[0]
>>> yeq = yx.subs(OHs, yosol)
Now, the solutions of yeq will be where its numerator is zero, so find the real roots of that:
>>> ysol = real_roots(yeq.as_numer_denom()[0])
>>> len(ysol)
1
>>> ysol[0].n()
0.0362606728976173
>>> yosol.subs(y,_)
-2.08893263437131e-12
That is consistent with our previous solution, and this time the solutions in ysol were exact (given the limitations of the coefficients). So if your OHs solution should be positive, check your numbers and equations.
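For reference, the first approach (solve J for OHs, substitute, then nsolve for Diff) can be collected into one self-contained script; this is only a sketch that reuses the question's coefficients verbatim, with illustrative variable names:
from sympy import symbols, solve, nsolve

Diff, OHs = symbols('Diff OHs')

x = (8.67839580228369e-26*Diff + 7.245e-10*OHs**3
     + 1.24402291559836e-10*OHs**2
     + OHs*(-2.38073807380738e-19*Diff - 2.8607855978291e-18)
     - 1.01141409254177e-29)
J = -0.00435840284294195*Diff**0.666666666666667*(1 + 3.64525434266056e-7/OHs) - 1

# solve J for OHs (it has a single solution), substitute into x,
# and numerically solve the remaining one-variable equation for Diff
osol = solve(J, OHs)[0]
eq = x.subs(OHs, osol)
dsol = nsolve(eq, Diff, 1e-5)
print(dsol, osol.subs(Diff, dsol))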
Your expressions do not meet SymPy requirements, including the exponential expressions. Maybe it is easier to start with a simpler system with two unknowns and only a square, such as:
from sympy.abc import a,b,x, y
from sympy import solve,exp
eq1= a*x**2 + b*y+ exp(0)
eq2= x + y + 2
sol=solve((eq1, eq2),(x,y),dict=True)
sol includes your answers and you can access the solutions with sol[0][x] and sol[0][y]. Giving values to the parameters is done with the .subs() method:
sol[0][x].subs({a:1, b:2}) #gives -1
sol[0][y].subs({a:1, b:2}) #gives -1

Solving constraint satisfaction problems in Sympy

I'm attempting to solve some simple Boolean satisfiability problems in Sympy. Here, I tried to solve a constraint that contains the Or logic operator:
from sympy import *
a,b = symbols("a b")
print(solve(Or(Eq(3, b*2), Eq(3, b*3))))
# In other words: (3 equals b*2) or (3 equals b*3)
# [1,3/2] was the answer that I expected
Surprisingly, this leads to an error instead:
TypeError: unsupported operand type(s) for -: 'Or' and 'int'
I can work around this problem using Piecewise, but this is much more verbose:
from sympy import *
a,b = symbols("a b")
print(solve(Piecewise((Eq(3, b*2),Eq(3, b*2)), (Eq(3, b*3),Eq(3, b*3)))))
#prints [1,3/2], as expected
Unfortunately, this work-around fails when I try to solve for two variables instead of one:
from sympy import *
a,b = symbols("a b")
print(solve([Eq(a,3+b),Piecewise((Eq(b,3),Eq(b,3)), (Eq(b,4),Eq(b,4)))]))
#AttributeError: 'BooleanTrue' object has no attribute 'n'
Is there a more reliable way to solve constraints like this one in Sympy?
To expand on zaq's answer, SymPy doesn't recognize logical operators in solve, but you can use the fact that
a*b = 0
is equivalent to
a = 0 OR b = 0
That is, multiply the two equations
solve((3 - 2*b)*(3 - 3*b), b)
As an additional note, if you wanted to use AND instead of OR, you can solve a system. That is,
solve([eq1, eq2])
is equivalent to solving
eq1 = 0 AND eq2 = 0
Every equation can be expressed as something being equated to 0. For example, 3-2*b = 0 instead of 3 = 2*b. (In Sympy, you don't even have to write the =0 part, it's assumed.) Then you can simply multiply equations to express the OR logic:
>>> from sympy import *
>>> a,b = symbols("a b")
>>> solve((3-b*2)*(3-b*3))
[1, 3/2]
>>> solve([a-3-b, (3-b*2)*(3-b*3)])
[{b: 1, a: 4}, {b: 3/2, a: 9/2}]
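To double-check that these values satisfy the original Or constraint, you can substitute them back; a quick sketch:
from sympy import symbols, Eq, Or, Rational

b = symbols('b')
constraint = Or(Eq(3, b*2), Eq(3, b*3))
print([constraint.subs(b, v) for v in (1, Rational(3, 2))])  # [True, True]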

Sympy Piecewise expression for even and odd numbers

The objective is to implement a Piecewise expression that gives 0 when n is even, and 1 when n is odd. One way to do it is using the floor function like below:
from sympy import *
from sympy.abc import n
f = Lambda((n,), Piecewise((0, Eq(n, floor(n / S(2)))),
                           (1, Eq(n, floor(n / S(2)) + 1))))
print(f(0))
print(f(1))
print(f(2))
print(f(3))
However, this returns the wrong output:
0
1
1
Piecewise()
The correct output should be:
0
1
0
1
Another way to achieve the same is to use:
from sympy import *
from sympy.abc import n
f = Lambda((n,), Piecewise((0, Eq((-1)**n, 1)),
                           (1, Eq((-1)**n, -1))))
print(f(0))
print(f(1))
print(f(2))
print(f(3))
and this returns the correct output. Is there a way to achieve this using the floor function in the original code?
A better way would be to use Mod, like
Piecewise((0, Eq(Mod(n, 2), 0)), (1, Eq(Mod(n, 2), 1)))
However, since your function coincides exactly with the definition of Mod, you can just use it directly
Mod(n, 2)
or equivalently
n % 2
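A quick check (minimal sketch) that both the Mod-based Piecewise and the bare Mod(n, 2) give 0 for even n and 1 for odd n:
from sympy import Lambda, Piecewise, Eq, Mod
from sympy.abc import n

f = Lambda((n,), Piecewise((0, Eq(Mod(n, 2), 0)), (1, Eq(Mod(n, 2), 1))))
print([f(k) for k in range(4)])       # [0, 1, 0, 1]
print([Mod(k, 2) for k in range(4)])  # [0, 1, 0, 1]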

how to solve 3 nonlinear equations in python

I have the following system of 3 nonlinear equations that I need to solve:
-xyt + HF = 0
-2xzt + 4yzt - xyt + 4z^2t - M1F = 0
-2xt + 2yt + 4zt - 1 = 0
where x, HF, and M1F are known parameters. Therefore, y, z, and t are the unknowns to be calculated.
Attempt to solve the problem:
def equations(p):
    y, z, t = p
    f1 = -x*y*t + HF
    f2 = -2*x*z*t + 4*y*z*t - x*y*t + 4*t*z**2 - M1F
    f3 = -2*x*t + 2*y*t + 4*z*t - 1
    return (f1, f2, f3)

y, z, t = fsolve(equations)
print equations((y, z, t))
But the thing is that if I want to use scipy.optimize.fsolve then I should input an initial guess. In my case, I do not have any initial conditions.
Is there another way to solve 3 nonlinear equations with 3 unknowns in python?
Edit:
It turned out that I have a condition! The condition is that HF > M1F, HF > 0, and M1F > 0.
@Christian, I don't think the equation system can be linearized easily, unlike in the post you suggested.
Powell's hybrid method (optimize.fsolve()) is quite sensitive to initial conditions, so it is very useful if you can come up with a good initial parameter guess. In the following example, we first minimize the sum of squares of all three equations using the Nelder-Mead method (optimize.fmin(); for a small problem like the OP's, this is probably already enough). The resulting parameter vector is then used as the initial guess for optimize.fsolve() to get the final result.
>>> from numpy import *
>>> from scipy import stats
>>> from scipy import optimize
>>> HF, M1F, x=1000.,900.,10.
>>> def f(p):
...     return abs(sum(array(equations(p))**2)-0)
>>> optimize.fmin(f, (1.,1.,1.))
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 131
Function evaluations: 239
array([ -8.95023217, 9.45274653, -11.1728963 ])
>>> optimize.fsolve(equations, (-8.95023217, 9.45274653, -11.1728963))
array([ -8.95022376, 9.45273632, -11.17290503])
>>> pr=optimize.fsolve(equations, (-8.95023217, 9.45274653, -11.1728963))
>>> equations(pr)
(-7.9580786405131221e-13, -1.2732925824820995e-10, -5.6843418860808015e-14)
The result is pretty good.
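For reference, the same workflow as one self-contained script (a sketch using the same illustrative values HF = 1000, M1F = 900, x = 10 as above):
import numpy as np
from scipy import optimize

HF, M1F, x = 1000., 900., 10.

def equations(p):
    y, z, t = p
    f1 = -x*y*t + HF
    f2 = -2*x*z*t + 4*y*z*t - x*y*t + 4*t*z**2 - M1F
    f3 = -2*x*t + 2*y*t + 4*z*t - 1
    return (f1, f2, f3)

def sum_of_squares(p):
    # objective for the rough Nelder-Mead search
    return np.sum(np.array(equations(p))**2)

p0 = optimize.fmin(sum_of_squares, (1., 1., 1.))  # rough starting point
sol = optimize.fsolve(equations, p0)              # polish with Powell's hybrid method
print(sol)
print(equations(sol))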