How to solve 3 nonlinear equations in Python - python-2.7

I have the following system of 3 nonlinear equations that I need to solve:
-xyt + HF = 0
-2xzt + 4yzt - xyt + 4z^2t - M1F = 0
-2xt + 2yt + 4zt - 1 = 0
where x, HF, and M1F are known parameters, so y, z, and t are the unknowns to be calculated.
My attempt to solve the problem:
from scipy.optimize import fsolve

def equations(p):
    y, z, t = p
    f1 = -x*y*t + HF
    f2 = -2*x*z*t + 4*y*z*t - x*y*t + 4*t*z**2 - M1F
    f3 = -2*x*t + 2*y*t + 4*z*t - 1
    return (f1, f2, f3)

y, z, t = fsolve(equations)  # fails: fsolve also needs an initial guess
print equations((y, z, t))
The trouble is that scipy.optimize.fsolve requires an initial guess, and in my case I do not have any initial conditions.
Is there another way to solve 3 nonlinear equations with 3 unknowns in Python?
Edit:
It turned out that I do have conditions: HF > M1F, HF > 0, and M1F > 0.

@Christian, I don't think the equation system can be linearized easily, unlike the one in the post you suggested.
Powell's hybrid method (optimize.fsolve()) is quite sensitive to initial conditions, so it is very useful if you can come up with a good initial parameter guess. In the following example, we first minimize the sum of squares of all three equations using the Nelder-Mead method (optimize.fmin(); for a small problem like the OP's, this is probably already enough). The resulting parameter vector is then used as the initial guess for optimize.fsolve() to obtain the final result.
>>> from numpy import *
>>> from scipy import optimize
>>> HF, M1F, x = 1000., 900., 10.
>>> def f(p):
...     return sum(array(equations(p))**2)
>>> optimize.fmin(f, (1., 1., 1.))
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 131
         Function evaluations: 239
array([ -8.95023217, 9.45274653, -11.1728963 ])
>>> pr = optimize.fsolve(equations, (-8.95023217, 9.45274653, -11.1728963))
>>> pr
array([ -8.95022376,   9.45273632, -11.17290503])
>>> equations(pr)
(-7.9580786405131221e-13, -1.2732925824820995e-10, -5.6843418860808015e-14)
The result is pretty good.
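For reference, here is the same pipeline consolidated into one self-contained script (the values HF=1000, M1F=900, x=10 are the ones from the session above):
from numpy import array
from scipy import optimize

HF, M1F, x = 1000., 900., 10.

def equations(p):
    y, z, t = p
    f1 = -x*y*t + HF
    f2 = -2*x*z*t + 4*y*z*t - x*y*t + 4*t*z**2 - M1F
    f3 = -2*x*t + 2*y*t + 4*z*t - 1
    return (f1, f2, f3)

def f(p):
    # Sum of squared residuals; zero exactly at a root of the system.
    return sum(array(equations(p))**2)

guess = optimize.fmin(f, (1., 1., 1.))    # rough root from Nelder-Mead
root = optimize.fsolve(equations, guess)  # polished by Powell's hybrid method
print root
print equations(root)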

Related

How to solve an equation that contains Sum in SymPy?

I want to solve the following equation for x with SymPy:
350 - 18500 + Sum(k=1..120) of 182.94 * (1 + x)**(-k/12) = 0
(Note that the equation can be simplified, as mentioned in the comments; I copied it verbatim from an example in a legal document.)
According to my understanding, this translates to the following SymPy expression:
from sympy import Sum, solve
from sympy.abc import k, x
solve(350 - 18500 + Sum(182.94 * (1/(1+x)**(k/12)), (k, 1, 120)), x)
However, when I run this, the result is empty:
[]
What am I doing wrong?
solve probably shouldn't return [], but you will get better results from nsolve for this expression, using a guess for x near 0:
>>> from sympy.abc import k, x
>>> from sympy import nsolve, Sum
>>> eq = 350 - 18500 + Sum(182.94 * (1/(1+x)**(k/12)), (k, 1, 120))
>>> nsolve(eq, 0)
0.0397546543274819
>>> eq.subs(x,_).round(2)
0
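As a quick cross-check (not from the original answer), the finite sum can be evaluated with plain Python and the root bracketed with scipy.optimize.brentq:
from scipy.optimize import brentq

def f(x):
    return 350 - 18500 + sum(182.94 * (1 + x)**(-k/12.) for k in range(1, 121))

# f(1e-9) > 0 and f(1.0) < 0, so the root is bracketed.
print(brentq(f, 1e-9, 1.0))  # ~0.03975, matching nsolve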

How to use SymPy or another library to get a numerical solution

I was trying to solve two equations for two unknown symbols 'Diff' and 'OHs'. The equations are shown below:
import sympy as sym

Diff, OHs = sym.symbols('Diff OHs')
x = (8.67839580228369e-26*Diff + 7.245e-10*OHs**3
     + 1.24402291559836e-10*OHs**2
     + OHs*(-2.38073807380738e-19*Diff - 2.8607855978291e-18)
     - 1.01141409254177e-29)
J = (-0.00435840284294195*Diff**0.666666666666667
     * (1 + 3.64525434266056e-7/OHs) - 1)
solution = sym.nsolve((x, J), (OHs, Diff), (0.000001, 0.000001))
print(solution)
Is this the correct way to solve for the two unknowns?
Thanks for your help :)
Note: I edited your equation per Vialfont's comments.
I would say it is a possible way but you could do better by noticing that the J equation can be solved easily for OHs and substituted into the x equation. This will then be much easier for nsolve to solve:
>>> osol = solve(J, OHs)[0] # there is only one solution
>>> eq = x.subs(OHs,osol)
>>> dsol = nsolve(eq, 1e-5)
>>> eq.subs(Diff,dsol) # verify
4.20389539297445e-45
>>> osol.subs(Diff,dsol), dsol
(-2.08893263437131e-12, 4.76768525775849e-5)
But this is still pretty ill-behaved in terms of scaling, so proceed with caution. I would suggest writing Diff**Rational(2,3) instead of Diff**0.666666666666667. Or better, let Diff be y**3 so you are working with a polynomial in y:
>>> y = var('y', positive=True)
>>> yx=x.subs(Diff,y**3)
>>> yJ=J.subs(Diff,y**3)
>>> yosol=solve(yJ,OHs)[0]
>>> yeq = yx.subs(OHs, yosol)
Now the solutions of yeq will be where its numerator is zero, so find the real roots of that:
>>> ysol = real_roots(yeq.as_numer_denom()[0])
>>> len(ysol)
1
>>> ysol[0].n()
0.0362606728976173
>>> yosol.subs(y,_)
-2.08893263437131e-12
That is consistent with our previous solution, and this time the solutions in ysol were exact (given the limitations of the coefficients). So if your OHs solution should be positive, check your numbers and equations.
Your expressions do not meet SymPy's requirements, including the exponential expressions. Maybe it is easier to start with a simpler system to solve, with two unknowns and only a square, such as:
from sympy.abc import a,b,x, y
from sympy import solve,exp
eq1 = a*x**2 + b*y + exp(0)
eq2 = x + y + 2
sol=solve((eq1, eq2),(x,y),dict=True)
sol includes your answers, and you have access to the solutions with sol[0][x] and sol[0][y]. Giving values to the parameters is done with the .subs() method:
sol[0][x].subs({a:1, b:2}) #gives -1
sol[0][y].subs({a:1, b:2}) #gives -1
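If you need to evaluate the solutions for many parameter values, SymPy's standard lambdify utility turns them into fast numeric functions (a small sketch building on sol from above):
from sympy import lambdify
fx = lambdify((a, b), sol[0][x])
fy = lambdify((a, b), sol[0][y])
print(fx(1, 2), fy(1, 2))  # -1.0 -1.0, matching the subs results above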

Solving constraint satisfaction problems in Sympy

I'm attempting to solve some simple Boolean satisfiability problems in Sympy. Here, I tried to solve a constraint that contains the Or logic operator:
from sympy import *
a,b = symbols("a b")
print(solve(Or(Eq(3, b*2), Eq(3, b*3))))
# In other words: (3 equals b*2) or (3 equals b*3)
# [1,3/2] was the answer that I expected
Surprisingly, this leads to an error instead:
TypeError: unsupported operand type(s) for -: 'Or' and 'int'
I can work around this problem using Piecewise, but this is much more verbose:
from sympy import *
a,b = symbols("a b")
print(solve(Piecewise((Eq(3, b*2),Eq(3, b*2)), (Eq(3, b*3),Eq(3, b*3)))))
#prints [1,3/2], as expected
Unfortunately, this work-around fails when I try to solve for two variables instead of one:
from sympy import *
a,b = symbols("a b")
print(solve([Eq(a,3+b),Piecewise((Eq(b,3),Eq(b,3)), (Eq(b,4),Eq(b,4)))]))
#AttributeError: 'BooleanTrue' object has no attribute 'n'
Is there a more reliable way to solve constraints like this one in Sympy?
To expand on zaq's answer, SymPy doesn't recognize logical operators in solve, but you can use the fact that
a*b = 0
is equivalent to
a = 0 OR b = 0
That is, multiply the two equations
solve((3 - 2*b)*(3 - 3*b), b)
As an additional note, if you wanted to use AND instead of OR, you can solve for a system. That is,
solve([eq1, eq2])
is equivalent to solving
eq1 = 0 AND eq2 = 0
Every equation can be expressed as something being equated to 0. For example, 3-2*b = 0 instead of 3 = 2*b. (In Sympy, you don't even have to write the =0 part, it's assumed.) Then you can simply multiply equations to express the OR logic:
>>> from sympy import *
>>> a,b = symbols("a b")
>>> solve((3-b*2)*(3-b*3))
[1, 3/2]
>>> solve([a-3-b, (3-b*2)*(3-b*3)])
[{b: 1, a: 4}, {b: 3/2, a: 9/2}]
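A quick substitution check (continuing the same session) confirms that both roots zero out the product:
>>> expr = (3 - b*2)*(3 - b*3)
>>> [expr.subs(b, s) for s in (1, Rational(3, 2))]
[0, 0]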

Python (Scipy): Finding the scale parameter (standard deviation) of a gaussian distribution

It is quite common to calculate the probability density of a value within a probability density function (PDF). Imagine we have a Gaussian distribution with mean = 40 and a standard deviation of 5, and we would now like to get the probability density of the value 32. We'd go like:
In [1]: import scipy.stats as stats
In [2]: print stats.norm.pdf(32, loc=40, scale=5)
Out [2]: 0.022
--> The probability density is 0.022.
But now, let's consider the inverse problem: I have the mean value, I have the value at a probability density of 0.05, and I would like to get the standard deviation (i.e. the scale parameter).
What I could implement is a numerical approach: evaluate stats.norm.pdf repeatedly with the scale parameter increased stepwise, and take the one whose result is closest.
In my case, I specify the value 30 as the 5% mark. So I need to solve this "equation":
stats.norm.pdf(30, loc=40, scale=X) = 0.05
There is a scipy function called "ppf" which is the inverse of the CDF, so it will return the value for a specific cumulative probability, but I haven't found a function that returns the scale parameter.
Implementing an iteration would take too much time (both to create and to calculate). My script is going to be huge, so I should save computation time. Could a lambda function help in this case? I roughly know what it does, but I haven't used one so far. Any ideas on this?
Thank you!
The normal probability density function f is given by
f(x) = exp(-x**2 / (2*sigma**2)) / (sigma * sqrt(2*pi))
where x here is the distance from the mean. Given f and x, we wish to solve for sigma. Let's ask sympy if it can solve the equation:
import sympy as sy
from sympy.abc import x, y, sigma
expr = (1/(sy.sqrt(2*sy.pi)*sigma) * sy.exp(-x**2/(2*sigma**2))) - y
ans = sy.solve(expr, sigma)[0]
print(ans)
# sqrt(2)*exp(LambertW(-2*pi*x**2*y**2)/2)/(2*sqrt(pi)*y)
So it appears there is a closed-formed solution in terms of the LambertW function, W, which satisfies
z = W(z) * exp(W(z))
for all complex-valued z.
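As a quick numeric sanity check of that identity (assuming scipy is available):
>>> import numpy as np
>>> from scipy.special import lambertw
>>> w = lambertw(0.5)          # principal branch, k=0
>>> np.allclose(w * np.exp(w), 0.5)
True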
We could use sympy to also find the numerical result for given x and y, but
perhaps it would be faster to do the numerical work with
scipy.special.lambertw:
import numpy as np
import scipy.special as special
def sigma_func(x, y):
    results = set([np.real_if_close(
        np.sqrt(2)*np.exp(special.lambertw(-2*np.pi*x**2*y**2, k=k)/2)
        /(2*np.sqrt(np.pi)*y)).item() for k in (0, -1)])
    results = [s for s in results if np.isreal(s)]
    return results
In general, the LambertW function returns complex values, but we are only
interested in real-valued solutions for sigma. Per the
docs,
special.lambertw has two partially-real branches, when k=0 and k=-1. So the
code above checks if the returned value (for those two branches) is real, and
returns a list of any real solutions if they exist. If no real solution exists,
then an empty list is returned. That happens if the pdf value y is not
attained for any real value of sigma (for the given value of x).
You can use it like this:
x = 30.0
loc = 40.0
y = 0.02
s = sigma_func(loc-x, y)
print(s)
# [16.65817044316178, 6.830458938511113]
import scipy.stats as stats
for si in s:
    assert np.allclose(stats.norm.pdf(x, loc=loc, scale=si), y)
In the example you gave, with y = 0.025, there is no solution for sigma:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = 30.0
loc = 40.0
y = 0.025
s = np.linspace(5, 20, 100)
plt.plot(s, stats.norm.pdf(x, loc=loc, scale=s))
plt.hlines(y, 4, 20, color='red') # the horizontal line y = 0.025
plt.ylabel('pdf')
plt.xlabel('sigma')
plt.show()
and so sigma_func(40-30, 0.025) returns an empty list:
In [93]: sigma_func(40-30, 0.025)
Out [93]: []
The plot above is typical in the sense that when y is too large there are zero solutions; at the maximum of the curve (call it y_max) there is one solution:
In [199]: y_max = np.nextafter(np.sqrt(1/(np.exp(1)*2*np.pi*(10)**2)), -np.inf)
In [200]: y_max
Out[200]: 0.024197072451914336
In [201]: sigma_func(40-30, y_max)
Out[201]: [9.9999999776424]
and for y smaller than y_max there are two solutions.
There will be two solutions, because the normal PDF is symmetric around the mean.
As it stands, you have a single-variable equation to solve. It won't have a closed-form solution, so you can use e.g. scipy.optimize.fsolve to solve it.
EDIT: see @unutbu's answer for the closed-form solution in terms of the Lambert W function.
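A minimal sketch of that fsolve approach, using a target density below y_max so that a real solution exists (the guess of 5.0 is arbitrary):
import scipy.stats as stats
from scipy.optimize import fsolve

def residual(scale):
    # Difference between the pdf at x=30 (mean 40) and the target density 0.02.
    return stats.norm.pdf(30, loc=40, scale=scale) - 0.02

print(fsolve(residual, 5.0))  # finds one of the two sigmas; a larger guess finds the other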

Parameter optimization using a genetic algorithm?

I am trying to optimize parameters of a known function to fit an experimental data plot. The function is fairly involved:
y = x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms. Instead, go for straightforward optimization.
Scipy has some resources.
You haven't provided any data, so I'll just go for something that should run. Below is something to get you started; I can't know whether it works well without seeing the data. Also, there is probably a way to dynamically feed objectivefunc your x and y data; that should be in the docs for scipy.optimize.minimize.
What I've done: create a function to minimise, here called objectivefunc. For that I've taken your function y = x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2) and transformed it into the form x**2 * p**2 * g - y*((c**2 - x**2)**2 + x**2 * g**2) = 0. Then square the left-hand side and try to minimise it. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from scipy.
import numpy as np
from scipy.optimize import minimize

def objectivefunc(pgc):
    """Your function transformed so that it can be minimised.
    I've renamed the input pgc, so that pgc[0] is p, pgc[1] is g, pgc[2] is c.
    """
    p, g, c = pgc
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        sum_ += (x[i]**2 * p**2 * g
                 - y[i] * ((c**2 - x[i]**2)**2 + x[i]**2 * g**2))**2
    return sum_

pgc = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values.
res = minimize(objectivefunc, pgc, method='nelder-mead',
               options={'xtol': 1e-8, 'disp': True})
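As an aside, scipy.optimize.curve_fit (which uses a Levenberg-Marquardt least-squares fit by default) can fit the original function directly, without the hand-rolled objective; a sketch with the same placeholder data:
import numpy as np
from scipy.optimize import curve_fit

def model(x, p, g, c):
    return x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)

xdata = np.array([10.0, 9.4, 17.0])  # same placeholder data as above
ydata = np.array([12.0, 42.0, 0.8])
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.3, 0.7, 0.5])
print(popt)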
Have you tried good old Levenberg-Marquardt, as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW with one of the genetic algorithms it implements.