I'm a complete beginner with SymPy, so I may be overlooking something basic. I would like to rotate my coordinate system so that I can build a hyperbola in standard position and then transform it to the arbitrary case. First I set up my equation:
> x, y, a, b, theta = symbols('x y a b theta')
> f = Eq(x ** 2/a ** 2 - y ** 2/b ** 2, 1)
> f1 = f.subs({a: 5, b: 10})
> f1
> x**2/25 - y**2/100 == 1
Next I want to rotate it, which I try to do by using a sub:
> f1.subs({x: x*cos(theta) - y*sin(theta), y: x*sin(theta) + y*cos(theta)})
> -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - (x*sin(theta) + y*cos(theta))*sin(theta))**2/25 == 1
But that doesn't work: apparently the substitution for x is made first, and the substitution for y is then applied to the y that the x substitution just introduced. There must be some way to do this substitution, right?
Or is there a better tool than sympy to do this in? Once I get my hyperbolas I will want to find points of intersection between different ones.
Thanks for any suggestions.
One simple solution is to use temporary symbols:
x_temp, y_temp = symbols('x_temp y_temp')
f1.subs({x: x_temp*cos(theta) - y_temp*sin(theta), y: x_temp*sin(theta) + y_temp*cos(theta)}).subs({x_temp: x, y_temp: y})
> -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - y*sin(theta))**2/25 == 1
I think SymPy can do what you want. There is a polysys module in sympy.solvers:
"""
Solve a system of polynomial equations.
Examples
========
>>> from sympy import solve_poly_system
>>> from sympy.abc import x, y
>>> solve_poly_system([x*y - 2*y, 2*y**2 - x**2], x, y)
[(0, 0), (2, -sqrt(2)), (2, sqrt(2))]
"""
You can set simultaneous=True in the .subs( ) method to force SymPy to replace all variables at once (practically, SymPy internally creates temporary variables, proceeds with the substitution, then restores the old variables).
In [13]: f1.subs({x: x*cos(theta) - y*sin(theta),
                  y: x*sin(theta) + y*cos(theta)}, simultaneous=True)
Out[13]: -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - y*sin(theta))**2/25 = 1
Otherwise, use .xreplace(...) instead of .subs(...):
In [11]: f1.xreplace({x: x*cos(theta) - y*sin(theta),
                      y: x*sin(theta) + y*cos(theta)})
Out[11]: -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - y*sin(theta))**2/25 = 1
Can lp_solve return a uniform solution? (Is there a flag or something that will force this kind of behavior?)
Say that I have this:
max: x + y + z + w;
x + y + z + w <= 100;
Results in:
Actual values of the variables:
x 100
y 0
z 0
w 0
However, I would like to have something like:
Actual values of the variables:
x 25
y 25
z 25
w 25
This is an oversimplified example, but the idea is that if the variables have the same coefficient in the objective function, then the result should ideally be spread uniformly, rather than giving everything to one variable and the remainder to the others.
Is this possible to do? (I've tested other libraries, and some of them seem to do this by default, like the Excel Solver or Gekko for Python.)
EDIT:
For instance, Gekko already has this behavior without me specifying anything...
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = [m.Var() for i in range(4)]
#upper bounds
x1.upper = 100
x2.upper = 100
x3.upper = 100
x4.upper = 100
# Constraint
m.Equation(x1 + x2 + x3 + x4 <= 100)
# Objective
m.Maximize(x1 + x2 + x3 + x4)
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
>> [24.999999909] [24.999999909] [24.999999909] [24.999999909]
You would need to explicitly model this (as another objective). A solver does nothing automatically: it just finds a solution that obeys the constraints and optimizes the objective function.
Also, note that many linear solvers will produce so-called basic solutions (corner points). So "all variables in the middle" does not come naturally at all.
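As an illustration of "explicitly model this" (a sketch only: it uses scipy.optimize.linprog rather than lp_solve, and the min-max trick below is just one of several ways to express the preference), you can fix the LP optimum sum(x) == 100 and then minimize an auxiliary bound t with x_i <= t, which forces the even split:
import numpy as np
from scipy.optimize import linprog
n = 4
# decision variables are x1..xn plus the auxiliary bound t; minimize t
c = np.zeros(n + 1)
c[-1] = 1.0
# x_i - t <= 0 for every i
A_ub = np.hstack([np.eye(n), -np.ones((n, 1))])
b_ub = np.zeros(n)
# sum(x_i) == 100 (the optimal value of the original objective)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = [100.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 100)] * n + [(0, None)])
print(res.x[:n])  # roughly [25, 25, 25, 25]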
The example in Gekko ended on [25,25,25,25] because of how the solver took a step towards the solution from an initial guess of [0,0,0,0] (default in Gekko). The problem is under-specified so there are an infinite number of feasible solutions. Changing the guess values gives a different solution.
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = m.Array(m.Var,4,lb=0,ub=100)
x1.value=50 # change initial guess
m.Equation(x1 + x2 + x3 + x4 <= 100)
m.Maximize(x1 + x2 + x3 + x4)
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
Solution with guess values [50,0,0,0]
[3.1593723566] [32.280209196] [32.280209196] [32.280209196]
Here is one method with equality constraints m.Equations([x1==x2,x1==x3,x1==x4]) to modify the problem to guarantee a unique solution that can be used by any linear programming solver.
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = m.Array(m.Var,4,lb=0,ub=100)
x1.value=50 # change initial guess
m.Equation(x1 + x2 + x3 + x4 <= 100)
m.Maximize(x1 + x2 + x3 + x4)
m.Equations([x1==x2,x1==x3,x1==x4])
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
This gives a solution:
[25.000000002] [25.000000002] [25.000000002] [25.000000002]
QP Solution
Switching to a QP solver allows a slight penalty for deviations but doesn't consume a degree of freedom.
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = m.Array(m.Var,4,lb=0,ub=100)
x1.value=50 # change initial guess
m.Equation(x1 + x2 + x3 + x4 <= 100)
m.Maximize(x1 + x2 + x3 + x4)
penalty = 1e-5
m.Minimize(penalty*(x1-x2)**2)
m.Minimize(penalty*(x1-x3)**2)
m.Minimize(penalty*(x1-x4)**2)
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
Solution with QP penalty
[24.999998377] [25.000000544] [25.000000544] [25.000000544]
How can I prevent the brackets from being opened (expanded)? For example,
8 * (x + 1) should stay in that form, not become 8 * x + 8.
Using evaluate=False doesn't help.
The global evaluate flag will allow you to do this in the most natural manner:
>>> with evaluate(False):
... 8*(x+1)
...
8*(x + 1)
Otherwise, Mul(8, x + 1, evaluate=False) is a lower level way to do this. And conversion from a string (already in that form) is possible as
>>> S('8*(x+1)',evaluate=False)
8*(x + 1)
In general, SymPy will convert the expression to its internal format, which includes some minimal simplifications. For example, sqrt(x) is represented internally as Pow(x, 1/2). Also, some reordering of terms may happen.
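You can inspect that internal form with srepr, for example:
from sympy import sqrt, srepr
from sympy.abc import x
print(srepr(sqrt(x)))  # Pow(Symbol('x'), Rational(1, 2))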
In your specific case, you could try:
from sympy import factor
from sympy.abc import x, y
y = x + 1
g = 8 * y
g = factor(g)
print(g)  # 8*(x + 1)
But, if for example you have g = y * y, SymPy will either represent it as a second power ((x + 1)**2), or expand it to x**2 + 2*x + 1.
PS: See also this answer by SymPy's maintainer for some possible workarounds. (It might complicate things later when you would like to evaluate or simplify this expression in other calculations.)
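One workaround along those lines, available in reasonably recent SymPy versions, is UnevaluatedExpr, which wraps a subexpression so the coefficient is not automatically distributed over it. A minimal sketch:
from sympy import Symbol, UnevaluatedExpr
x = Symbol('x')
expr = 8 * UnevaluatedExpr(x + 1)
print(expr)         # 8*(x + 1)
print(expr.doit())  # 8*x + 8, once you do want it evaluated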
How about sympy.collect_const(sympy.S("8 * (x + 1)"), 8)?
In general you might be interested in some of these expression manipulations: https://docs.sympy.org/0.7.1/modules/simplify/simplify.html
I have different locations, and I need to calculate the distance between them:
Location Lat Long Distance
A -20 -50 (A-B)+(A-C)+(A-D)
B -20.3 -51 (B-A)+(B-C)+(B-D)
C -21 -50 (C-A)+(C-B)+(C-D)
D -20.8 -50.2 (D-A)+(D-B)+(D-C)
Would anyone know how to do this?
I'm using this equation to calculate the distance between two points, but I don't know how to calculate between several points.
R = 6373.0
dist_lat = lat2 - lat1
dist_lon = lon2 - lon1
a = np.sin(dist_lat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dist_lon/2)**2
b = 2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a))
Dist = R * b
This is my first time with Python, so beware; I also made this rather quickly, so it is probably buggy.
Tested on Python 2.7.
import numpy as np
import itertools as it
# these are the points
points = {
'A' :{'lat':-20 ,'lon':-50}
,'B':{'lat':-20.3 ,'lon':-51}
,'C':{'lat':-21 ,'lon':-50}
,'D':{'lat':-20.8 ,'lon':-50.2}
}
# this calculates the distance between two points
# basically your formula wrapped in a function
# - used below
def distance(a, b):
R = 6373.0
dist_lat = a['lat']-b['lat']
dist_lon = a['lon']-b['lon']
x = np.sin(dist_lat/2)**2 + np.cos(b['lat']) * np.cos(a['lat']) * np.sin(dist_lon/2)**2
y = 2 * np.arctan2(np.sqrt(x), np.sqrt(1 - x))
return R * y
distances = {}
# produce the distance between all combinations
# see: https://docs.python.org/2/library/itertools.html
# - uses above function
for x in it.combinations(points.keys(), 2):
distances[ ''.join(x) ] = distance(points[x[0]],points[x[1]])
# for every starting point
# - filter the distance it is involved in
# - sum
for CH in ('A','B','C','D'):
print CH,' ', sum(dict((key,value) for key, value in distances.iteritems() if CH in key).values())
output:
$ python2.7 test.py
A 6983.4634697
B 9375.21128818
C 5859.5650322
D 9472.23979008
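One likely source of the "probably buggy" caveat: NumPy's trigonometric functions expect radians, while the latitudes and longitudes above are in degrees. Here is a sketch of the same distance() helper with the conversion added (the rest of the script can stay as it is):
import numpy as np
def distance(a, b):
    # haversine distance in km; convert degrees to radians first
    R = 6373.0
    lat1, lon1 = np.radians(a['lat']), np.radians(a['lon'])
    lat2, lon2 = np.radians(b['lat']), np.radians(b['lon'])
    dist_lat = lat2 - lat1
    dist_lon = lon2 - lon1
    h = np.sin(dist_lat/2)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dist_lon/2)**2
    return R * 2 * np.arctan2(np.sqrt(h), np.sqrt(1 - h))
With this change the printed sums come out much smaller than the figures above, since the points are only about a degree apart.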
Let's use SymPy to find the response of a linear system to an external force:
from sympy import *
t, w, beta = symbols('t omega beta', positive=1)
x0, v0 = symbols('x0 v0')
x = symbols('x', cls=Function)
homogeneous = diff(x(t), t, 2)/w**2+x(t)
force = sin(beta*w*t)
disp = dsolve(homogeneous-force, x(t)).rhs
display(disp)
constants = solve((disp.subs(t,0)-x0, disp.diff(t).subs(t,0)-v0))
display(constants)
obtaining
C1*sin(omega*t) + C2*cos(omega*t) - sin(beta*omega*t)/(beta**2 - 1)
[{C1: (beta**2*v0 + beta*omega - v0)/(omega*(beta**2 - 1)), C2: x0}]
Now, to complete my solution, I tried to substitute these constants of integration into the general + particular solution
display(disp.subs(constants))
that gives me
C2*sin(omega*t) + C2*cos(omega*t) - sin(beta*omega*t)/(beta**2 - 1)
Where have I made a mistake?
The argument of subs should be a dictionary.
What you got from solve as constants is not a dictionary but a one-element list containing a dictionary. Use it as
disp.subs(constants[0])
then the result is
x0*cos(omega*t) - sin(beta*omega*t)/(beta**2 - 1) + (beta**2*v0 + beta*omega - v0)*sin(omega*t)/(omega*(beta**2 - 1))
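As a quick sanity check (a sketch reusing the symbols from the session above), the substituted solution can be verified against the initial conditions:
sol = disp.subs(constants[0])
print(simplify(sol.subs(t, 0) - x0))          # 0
print(simplify(sol.diff(t).subs(t, 0) - v0))  # 0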
I'm using Newton's method, and I want to find the positions of all six roots of a sixth-order polynomial, i.e. the points where the function is zero.
I found the rough values on my graph with the code below, but I want to output the positions of all six roots. I'm thinking of passing an array of starting values for x, but I'm not sure. For now I'm using 1.0 as the starting guess to locate a rough value. Any suggestions from here?
import numpy as np
import matplotlib.pyplot as plt

def P(x):
return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1
def dPdx(x):
return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42
accuracy = 1e-10
x = 1.0
xlast = float("inf")
while np.abs(x - xlast) > accuracy:
xlast = x
x = xlast - P(xlast)/dPdx(xlast)
print(x)
p_points = []
x_points = np.linspace(0, 1, 100)
y_points = np.zeros(len(x_points))
for i in range(len(x_points)):
y_points[i] = P(x_points[i])
p_points.append(P(x_points))
plt.plot(x_points,y_points)
plt.savefig("roots.png")
plt.show()
The traditional way is to use deflation to factor out the already found roots. If you want to avoid manipulations of the coefficient array, then you have to divide the roots out.
Having found z[1],...,z[k] as root approximations, form
g(x)=(x-z[1])*(x-z[2])*...*(x-z[k])
and apply Newton's method to h(x) = f(x)/g(x) with h'(x) = f'/g - f*g'/g^2. In the Newton iteration this gives
xnext = x - f(x)/( f'(x) - f(x)*g'(x)/g(x) )
Fortunately the quotient g'/g has a simple form
g'(x)/g(x) = 1/(x-z[1])+1/(x-z[2])+...+1/(x-z[k])
So with a slight modification to the Newton step you can avoid finding the same root over again.
This all still keeps the iteration real. To get at the complex root, use a complex number to start the iteration.
Proof of concept: adding eps = 1e-8j to g'(x)/g(x) allows the iteration to go complex without preventing real values. This solves the equivalent problem 0 = exp(-eps*x)*f(x)/g(x).
import numpy as np
import matplotlib.pyplot as plt
def P(x):
return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1
def dPdx(x):
return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42
accuracy = 1e-10
roots = []
for k in range(6):
x = 1.0
xlast = float("inf")
x_points = np.linspace(0.0, 1.0, 200)
y_points = P(x_points)
for rt in roots:
y_points /= (x_points - rt)
y_points = np.array([ max(-1.0,min(1.0,np.real(y))) for y in y_points ])
plt.plot(x_points,y_points,x_points,0*y_points)
plt.show()
while np.abs(x - xlast) > accuracy:
xlast = x
corr = 1e-8j
for rt in roots:
corr += 1/(xlast-rt)
Px = P(xlast)
dPx = dPdx(xlast)
x = xlast - Px/(dPx - Px*corr)
print(x)
roots.append(x)