Simplify algebraic equation in CAS - sympy

I want to be able to simplify the ellipse equation:
sqrt((x + c)^2 + y^2) + sqrt((x - c)^2 + y^2) = 2a
into its canonical form:
x^2/a^2 + y^2/(a^2 - c^2) = 1
using CAS. I actually want to know how to do that in sympy, but any other CAS will do.
If it is not possible to do that in one call, then maybe by transforming the original equation with operations like "square both sides; move the non-radical terms (e.g. by enumerating them manually) to the right side; square both sides again; simplify".

unrad will do most of the heavy lifting for you in SymPy:
>>> l # your original expression with the 2a subtracted from the lhs
-2*a + sqrt(y**2 + (-c + x)**2) + sqrt(y**2 + (c + x)**2)
>>> unrad(_)
(-a**4 + a**2*c**2 + a**2*x**2 + a**2*y**2 - c**2*x**2, [], [])
>>> neg_i, dep = _[0].as_independent(x,y)
>>> xpart, ypart = [dep.coeff(i**2) for i in (x,y)]
>>> Eq(-x**2*cancel(xpart/neg_i)-y**2*cancel(ypart/neg_i), neg_i/neg_i)
y**2/(a**2 - c**2) + x**2/a**2 == 1
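For reference, here is the same session as a self-contained script (a sketch assuming a SymPy version where unrad is importable from sympy.solvers.solvers and returns the radical-free expression as the first element of its result; the exact return format has varied between versions):

from sympy import symbols, sqrt, Eq, cancel
from sympy.solvers.solvers import unrad

x, y, a, c = symbols('x y a c', positive=True)
# the original equation with 2a subtracted from the lhs
l = sqrt((x + c)**2 + y**2) + sqrt((x - c)**2 + y**2) - 2*a

u = unrad(l)[0]                        # radical-free polynomial equivalent
neg_i, dep = u.as_independent(x, y)    # part free of x and y vs. the rest
xpart, ypart = [dep.coeff(i**2) for i in (x, y)]
print(Eq(-x**2*cancel(xpart/neg_i) - y**2*cancel(ypart/neg_i), neg_i/neg_i))
# -> x**2/a**2 + y**2/(a**2 - c**2) == 1 (term order may vary)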

Subtract the doubled second sqrt from both sides.
Multiply respective sides of the new equation and the original one.
Reduce LHS applying (m+n)(m-n) = m^2 - n^2.
You'll get (if I did it right): -4xc = 4a(a - sqrt(something))
Then: -xc/a = a - sqrt(something)
and: sqrt(something) = a + xc/a
Square both sides and see what happens.
I did it wrong. Should be: 4xc = 4a(a - sqrt(something))
so sqrt(something) = a - xc/a.
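A rough SymPy sketch of that manual route (my own illustration of the corrected steps, not part of the original hint):

from sympy import symbols, sqrt, expand, factor

x, y, a, c = symbols('x y a c', positive=True)
m = sqrt((x + c)**2 + y**2)
n = sqrt((x - c)**2 + y**2)

# (m + n)(m - n) = m**2 - n**2, and m + n = 2a
print(expand(m**2 - n**2))       # 4*c*x, hence m - n = 2*c*x/a and n = a - c*x/a
n_val = a - c*x/a                # the isolated radical

# squaring the isolated radical removes it: n**2 - n_val**2 vanishes on the ellipse
print(factor(expand(n**2 - n_val**2)))
# rearranging the printed radical-free relation gives x**2/a**2 + y**2/(a**2 - c**2) = 1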

Sympy solve cannot find real solution when solving first and substituting later

I have a system of two equations that calculate the coordinates of two points T1(u1,v1) and T2(u2,v2), where the lines from point A to those points are tangent to a given circle. Point A is always outside the circle, so there should always be two solutions.
The system of equations is given as follows:
eq1 = Eq((xpos-u)**2 + (ypos-v)**2, tglensq)
eq2 = Eq((oxpos-u)**2 + (oypos-v)**2, radsq)
where xpos,ypos are coordinates of A, and oxpos, oypos are coordinates of the center of the circle. tglensq represents the length of the tangent squared, and radsq the radius of the circle squared.
The solutions need to be calculated continuously in a loop.
I want to solve the equations first for u and v, and then plug in the other values (as described here). Solving inside each iteration of the loop turns out to be quite expensive, although it always produces the two solutions.
After getting the solution, in the loop I substitute the other values, like so:
u1 = first_solution[0].subs([(xpos, robot_pos[0]), (ypos, robot_pos[1]), (oxpos, obstacle_pos[0]), (oypos, obstacle_pos[1]), (tglensq, tangent_len_squared), (minkradsq, minkowski_radius_sq)])
However, sympy fails to find the real solutions and ends up giving complex ones, like:
628644.458063364 - 3693.82126269719*I
I saw here a similar issue about solving first then substituting, but I am not sure how to proceed from here. Any help would be greatly appreciated :).
Does defining the location of the circle's center relative to point A help? Let dx,dy = Point(oxpos,oypos)-Point(xpos,ypos) so the equations are
>>> dx,dy = symbols('dx,dy') # and your other symbols
>>> eq1,eq2 = Eq((u)**2 + (v)**2, tglensq),Eq((dx-u)**2 + (dy-v)**2, radsq)
>>> sol=solve((eq1,eq2), (u,v))
>>> sol[0]
(-(-dx**2 - dy**2 + 2*dy*(-dx*sqrt(-dx**4 - 2*dx**2*dy**2 + 2*dx**2*radsq + 2*dx**2*tglensq - dy**4 + 2*dy**2*radsq + 2*dy**2*tglensq - radsq**2 + 2*radsq*tglensq - tglensq**2)/(2*(dx**2 + dy**2)) + dy*(dx**2 + dy**2 - radsq + tglensq)/(2*(dx**2 + dy**2))) + radsq - tglensq)/(2*dx), -dx*sqrt(-dx**4 - 2*dx**2*dy**2 + 2*dx**2*radsq + 2*dx**2*tglensq - dy**4 + 2*dy**2*radsq + 2*dy**2*tglensq - radsq**2 + 2*radsq*tglensq - tglensq**2)/(2*(dx**2 + dy**2)) + dy*(dx**2 + dy**2 - radsq + tglensq)/(2*(dx**2 + dy**2)))
>>> sol[1]
(-(-dx**2 - dy**2 + 2*dy*(dx*sqrt(-dx**4 - 2*dx**2*dy**2 + 2*dx**2*radsq + 2*dx**2*tglensq - dy**4 + 2*dy**2*radsq + 2*dy**2*tglensq - radsq**2 + 2*radsq*tglensq - tglensq**2)/(2*(dx**2 + dy**2)) + dy*(dx**2 + dy**2 - radsq + tglensq)/(2*(dx**2 + dy**2))) + radsq - tglensq)/(2*dx), dx*sqrt(-dx**4 - 2*dx**2*dy**2 + 2*dx**2*radsq + 2*dx**2*tglensq - dy**4 + 2*dy**2*radsq + 2*dy**2*tglensq - radsq**2 + 2*radsq*tglensq - tglensq**2)/(2*(dx**2 + dy**2)) + dy*(dx**2 + dy**2 - radsq + tglensq)/(2*(dx**2 + dy**2)))
To calculate u,v try
>>> dif = (Point(oxpos,oypos)-Point(xpos,ypos)).subs(dict(zip((xpos,ypos,oxpos,oypos),(1,2,3,4))))
>>> Tuple(*sol).subs(dict(zip((tglensq,radsq),(5,6)))).subs(dict(zip((dx,dy),dif)))
Replace 1,2,3,4,5,6 with the actual values. Does that give you two real solutions?
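Putting the pieces together, a minimal self-contained sketch of the suggested workflow (solve once outside the loop, substitute numbers inside it; the values 1..6 are placeholders, as above):

from sympy import symbols, Eq, solve, Tuple, Point

u, v, dx, dy, tglensq, radsq = symbols('u v dx dy tglensq radsq', real=True)
xpos, ypos, oxpos, oypos = symbols('xpos ypos oxpos oypos', real=True)

eq1 = Eq(u**2 + v**2, tglensq)
eq2 = Eq((dx - u)**2 + (dy - v)**2, radsq)
sol = solve((eq1, eq2), (u, v))        # do this once, outside the loop

# inside the loop: substitute the current numeric values
dif = (Point(oxpos, oypos) - Point(xpos, ypos)).subs(
    dict(zip((xpos, ypos, oxpos, oypos), (1, 2, 3, 4))))
pts = Tuple(*sol).subs(dict(zip((tglensq, radsq), (5, 6)))).subs(
    dict(zip((dx, dy), dif)))
print(pts)                             # two (u, v) tuples; real if A lies outside the circle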

Simplification of derivative of square using sympy

I'm trying to use sympy to generate equations for non-linear least squares fitting. My goal is to make this quite complex but for the moment, here's a simple case (but not too simple!). It's basically fitting a two dimensional sinusoid to data. Here's the sympy code:
from sympy import *
S, l, m = symbols('S l m', real=True)
u, v = symbols('u v', real=True)
Vobs = symbols('Vobs', complex=True)
Vres = Vobs - S * exp(- 1j * 2 * pi * (u*l+v*m))
J=Vres*conjugate(Vres)
axes = [S, l, m]
grad = derive_by_array(J, axes)
hess = derive_by_array(grad, axes)
One element of the grad term looks like:
- 2.0*I*pi*S*u*(-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs)*exp(2.0*I*pi*(l*u + m*v)) + 2.0*I*pi*S*u*(-S*exp(2.0*I*pi*(l*u + m*v)) + conjugate(Vobs))*exp(-2.0*I*pi*(l*u + m*v))
What I'd like is to replace the expanded term (-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs) by Vres and contract the two conjugate terms into the more compact equivalent:
4.0*pi*S*u*im(Vres*exp(2.0*I*pi*(l*u + m*v)))
I cannot see how to do this with sympy. This problem is bad for the first derivative (grad) but gets really out of hand with the second derivative (hess).
First of all, let's not use 1j in SymPy: it's a floating-point constant, and floats are bad for symbolic math. SymPy's imaginary unit is I. So,
Vres = Vobs - S * exp(- I * 2 * pi * (u*l+v*m))
To replace the expression Vres by a symbol, we first need to create such a symbol. I'm going to call it Vres0, but its name will be Vres, so it prints as "Vres" in formulas.
Vres0 = symbols('Vres')
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
The conjugate-substitute-conjugate back is needed because subs doesn't quite recognize the possibility of replacing the conjugate of an expression with the conjugate of the symbol.
Now g1 is
-2*I*pi*S*Vres*u*exp(2*I*pi*(l*u + m*v)) + 2*I*pi*S*u*exp(-2*I*pi*(l*u + m*v))*conjugate(Vres)
and we want to fold the sum of conjugate terms. I use a custom transformation rule for this: the rule fold_conjugates applies to every sum (Add) of two terms (len(f.args) == 2) where the second is a conjugate of the first (f.args[1] == f.args[0].conjugate()). The transformation it performs: replace the sum by twice the real part of the first argument (2*re(f.args[0])). Like so:
from sympy.core.rules import Transform
fold_conjugates = Transform(lambda f: 2*re(f.args[0]),
lambda f: isinstance(f, Add) and len(f.args) == 2 and f.args[1] == f.args[0].conjugate())
g = g1.xreplace(fold_conjugates)
Final result: 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v))).
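For completeness, a self-contained sketch stringing the steps above together (same symbols as in the question; grad[1] is the derivative with respect to l):

from sympy import symbols, exp, pi, I, re, Add, conjugate, derive_by_array
from sympy.core.rules import Transform

S, l, m, u, v = symbols('S l m u v', real=True)
Vobs = symbols('Vobs', complex=True)
Vres = Vobs - S*exp(-I*2*pi*(u*l + v*m))

J = Vres*conjugate(Vres)
grad = derive_by_array(J, [S, l, m])

Vres0 = symbols('Vres')                       # plain symbol that prints as "Vres"
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()

fold_conjugates = Transform(
    lambda f: 2*re(f.args[0]),
    lambda f: isinstance(f, Add) and len(f.args) == 2
              and f.args[1] == f.args[0].conjugate())
print(g1.xreplace(fold_conjugates))           # should fold to 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v)))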

general expression substitution in sympy

I have two univariate functions, f(x) and g(x), and I'd like to substitute g(x) = y to rewrite f(x) as some f2(y).
Here is a simple example that works:
In [240]: x = Symbol('x')
In [241]: y = Symbol('y')
In [242]: f = abs(x)**2 + 6*abs(x) + 5
In [243]: g = abs(x)
In [244]: f.subs({g: y})
Out[244]: y**2 + 6*y + 5
But now, if I try a slightly more complex example, it fails:
In [245]: h = abs(x) + 1
In [246]: f.subs({h: y})
Out[246]: Abs(x)**2 + 6*Abs(x) + 5
Is there a general approach that works for this problem?
The expression abs(x)**2 + 6*abs(x) + 5 does not actually contain abs(x) + 1 anywhere, so there is nothing to substitute for.
One can imagine changing it to abs(x)**2 + 5*(abs(x) + 1) + abs(x), with the substitution result being abs(x)**2 + 5*y + abs(x). Or maybe changing it to abs(x)**2 + 6*(abs(x) + 1) - 1, with the result being abs(x)**2 + 6*y - 1. There are other choices too. What should the result be?
There is no general approach to this task because it's not a well-defined task to begin with.
In contrast, the substitution f.subs(abs(x), y-1) is a clear instruction to replace all occurrences of abs(x) in the expression tree with y-1. It returns 6*y + (y - 1)**2 - 1.
The substitution above of abs(x) + 1 in abs(x)**2 + 6*abs(x) + 5 is a clear instruction too: find exact occurrences of the expression abs(x) + 1 in the syntax tree of abs(x)**2 + 6*abs(x) + 5, and replace those subtrees with y. There are no such occurrences, so nothing gets replaced. (There is a caveat about heuristics, though.)
Aside: in addition to subs SymPy has a method .replace which supports wildcards, but I don't expect it to help here. In my experience, it is overeager to replace:
>>> a = Wild('a')
>>> b = Wild('b')
>>> f.replace(a*(abs(x) + 1) + b, a*y + b)
5*y/(Abs(x) + 1) + 6*y*Abs(x*y)/(Abs(x) + 1)**2 + (Abs(x*y)/(Abs(x) + 1))**(2*y/(Abs(x) + 1))
Eliminate a variable
There is no "eliminate" in SymPy. One can attempt to emulate it with solve by introducing another variable, e.g.,
fn = Symbol('fn')
solve([Eq(fn, f), Eq(abs(x) + 1, y)], [fn, x])
which attempts to solve for "fn" and "x"; if that succeeds, the solution for "fn" is an expression without x.
In fact, it does not work with abs(); solving for something that sits inside an absolute value is not implemented in SymPy. Here is a workaround.
fn, ax = symbols('fn ax')
solve([Eq(fn, f.subs(abs(x), ax)), Eq(ax + 1, y)], [fn, ax])
This outputs [(y*(y + 4), y - 1)], where the first term is what you want: a solution for fn.
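A compact, self-contained version of the workaround (the same calls as above, with the imports spelled out; the exact output formatting may differ):

from sympy import symbols, Abs, Eq, solve

x, y = symbols('x y')
f = Abs(x)**2 + 6*Abs(x) + 5

fn, ax = symbols('fn ax')
sol = solve([Eq(fn, f.subs(Abs(x), ax)), Eq(ax + 1, y)], [fn, ax])
print(sol)    # [(y*(y + 4), y - 1)] -- the first entry is f rewritten in terms of y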

Solve for the positions of all six roots PYTHON

I'm using Newton's method, and I want to find the positions of all six roots of this sixth-order polynomial, i.e. the points where the function is zero.
I found rough values on my graph with the code below, but I want to output the positions of all six roots. I'm thinking of using x as an array to feed in starting values and find those positions, but I'm not sure how. I'm using 1.0 for now to locate one rough value. Any suggestions from here?
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10
x = 1.0
xlast = float("inf")
while np.abs(x - xlast) > accuracy:
    xlast = x
    x = xlast - P(xlast)/dPdx(xlast)
print(x)

p_points = []
x_points = np.linspace(0, 1, 100)
y_points = np.zeros(len(x_points))
for i in range(len(x_points)):
    y_points[i] = P(x_points[i])
    p_points.append(P(x_points))
plt.plot(x_points, y_points)
plt.savefig("roots.png")
plt.show()
The traditional way is to use deflation to factor out the already found roots. If you want to avoid manipulations of the coefficient array, then you have to divide the roots out.
Having found z[1],...,z[k] as root approximations, form
g(x)=(x-z[1])*(x-z[2])*...*(x-z[k])
and apply Newton's method to h(x)=f(x)/g(x) with h'(x) = f'/g - f*g'/g^2. In the Newton iteration this gives
xnext = x - f(x)/( f'(x) - f(x)*g'(x)/g(x) )
Fortunately the quotient g'/g has a simple form
g'(x)/g(x) = 1/(x-z[1])+1/(x-z[2])+...+1/(x-z[k])
So with a slight modification to the Newton step you can avoid finding the same root over again.
This all still keeps the iteration real. To get at the complex root, use a complex number to start the iteration.
Proof of concept: adding eps=1e-8j to g'(x)/g(x) allows the iteration to go complex without preventing real values. It effectively solves the equivalent problem 0 = exp(-eps*x)*f(x)/g(x).
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10
roots = []
for k in range(6):
    x = 1.0
    xlast = float("inf")
    # plot the deflated function P(x)/g(x), clipped to [-1, 1], before each search
    x_points = np.linspace(0.0, 1.0, 200)
    y_points = P(x_points)
    for rt in roots:
        y_points = y_points / (x_points - rt)   # rt may be complex, so avoid in-place float division
    y_points = np.array([max(-1.0, min(1.0, np.real(y))) for y in y_points])
    plt.plot(x_points, y_points, x_points, 0*y_points)
    plt.show()
    # Newton step on h(x) = P(x)/g(x):  x <- x - P/(P' - P*g'/g), with a small imaginary shift
    while np.abs(x - xlast) > accuracy:
        xlast = x
        corr = 1e-8j
        for rt in roots:
            corr += 1/(xlast - rt)
        Px = P(xlast)
        dPx = dPdx(xlast)
        x = xlast - Px/(dPx - Px*corr)
    print(x)
    roots.append(x)
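As a quick cross-check (not part of the original answer), NumPy can also compute all six roots directly from the coefficient array, which is handy for verifying the Newton/deflation results:

import numpy as np

# coefficients of 924x^6 - 2772x^5 + 3150x^4 - 1680x^3 + 420x^2 - 42x + 1,
# highest degree first
coeffs = [924, -2772, 3150, -1680, 420, -42, 1]
print(np.sort(np.roots(coeffs)))    # should give six real roots, all in (0, 1)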

Understanding DEL2 function in Matlab in order to code it in C++

In order to code the DEL2 MATLAB function in C++ I need to understand the algorithm. I've managed to code the function for elements of the matrix that are not on the edges or corners.
I've seen several topics about it and read the MATLAB code by typing "edit del2" or "type del2", but I don't understand the calculations that are made for the edges and corners.
Any help would be appreciated, thanks.
You want to approximate u'' knowing only the value of u on the right (or the left) of a point.
In order to have a second order approximation, you need 3 equations (basic Taylor expansion):
u(i+1) = u(i) + h u' + (1/2) h^2 u'' + (1/6) h^3 u''' + O(h^4)
u(i+2) = u(i) + 2 h u' + (4/2) h^2 u'' + (8/6) h^3 u''' + O(h^4)
u(i+3) = u(i) + 3 h u' + (9/2) h^2 u'' + (27/6) h^3 u''' + O(h^4)
Solving for u'' gives (1):
h^2 u'' = -5 u(i+1) + 4 u(i+2) - u(i+3) + 2 u(i) +O(h^4)
To get the laplacian you need to replace the traditional formula with this one on the borders.
For example where "i = 0" you'll have:
del2(u) (i=0,j) = [-5 u(i+1,j) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j) + u(i,j+1) + u(i,j-1) - 2u(i,j) ]/h^2
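To convince yourself of formula (1), here is a small numeric check (my own sketch using u(x) = sin(x), not from the original answer); the error of the one-sided approximation shrinks roughly like h^2:

import numpy as np

def one_sided_second_derivative(u, x, h):
    # h^2 u'' ~ 2 u(i) - 5 u(i+1) + 4 u(i+2) - u(i+3)
    return (2*u(x) - 5*u(x + h) + 4*u(x + 2*h) - u(x + 3*h)) / h**2

x0 = 0.3
for h in (1e-1, 1e-2, 1e-3):
    approx = one_sided_second_derivative(np.sin, x0, h)
    exact = -np.sin(x0)              # (sin x)'' = -sin x
    print(h, approx, abs(approx - exact))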
EDIT clarifications:
The laplacian is the sum of the 2nd derivatives in the x and in the y directions. You can calculate the second derivative with the formula (2)
u'' = (u(i+1) + u(i-1) - 2u(i))/h^2
if you have both u(i+1) and u(i-1). If i=0 or i=imax you can use the first formula I wrote to compute the derivatives (notice that due to the symmetry of the 2nd derivative, if i = imax you can just replace "i+k" with "i-k"). The same applies for the y (j) direction:
On the edges you can mix up the formulas (1) and (2):
del2(u) (i=imax,j) = [-5 u(i-1,j) + 4 u(i-2,j) - u(i-3,j) + 2 u(i,j) + u(i,j+1) + u(i,j-1) - 2u(i,j) ]/h^2
del2(u) (i,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + u(i+1,j) + u(i-1,j) - 2u(i,j) ]/h^2
del2(u) (i,j=jmax) = [-5 u(i,j-1) + 4 u(i,j-2) - u(i,j-3) + 2 u(i,j) + u(i+1,j) + u(i-1,j) - 2u(i,j) ]/h^2
And on the corners you'll just use (1) two times for both directions.
del2(u) (i=0,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + -5 u(i,j+1) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j)]/h^2
Del2 is the 2nd order discrete laplacian, i.e. it approximates the laplacian of a real continuous function given its values on a square cartesian NxN grid where the distance between two adjacent nodes is h.
h^2 is just a constant dimensional factor; you can get the matlab implementation from these formulas by setting h^2 = 4.
For example, if you want to compute the real laplacian of u(x,y) on the (0,L) x (0,L) square, what you do is write down the values of this function on an NxN cartesian grid, i.e. you calculate u(0,0), u(L/(N-1),0), u(2L/(N-1),0) ... u((N-1)L/(N-1) = L, 0) ... u(0,L/(N-1)), u(L/(N-1),L/(N-1)) etc. and put these N^2 values in a matrix A.
Then you'll have
ans = 4*del2(A)/h^2, where h = L/(N-1).
del2 will return the exact value of the continuous laplacian if your starting function is linear or quadratic (x^2+y^2 is fine, x^3 + y^3 is not). If the function is neither linear nor quadratic, the result will be more accurate the more points you use (i.e. in the limit h -> 0).
I hope this is clearer; notice that I used 0-based indices for accessing the matrix (C/C++ array style), while matlab uses 1-based indexing.
DEL2 in MatLab represents the discrete Laplace operator; you can find some information about it here.
The main thing about the edges is that elements in the interior of the matrix have four neighbors, while elements on the edges and corners have three or two neighbors respectively. So you calculate the corners and edges the same way, but using fewer elements.
Here is a module I wrote in Fortran 90 that replicates the "del2()" operator in MATLAB implementing the above ideas. It only works for arrays that are at least 4x4 or larger. It works successfully when I run it, so I thought I would post it so that other people don't have to waste time making their own.
module del2_mod
implicit none
real, private :: pi
integer, private :: nr, nc, i, j, k
contains

    ! nr is number of rows in array, while nc is the number of columns in the array.
    !!----------------------------------------------------------
    subroutine del2(in, out)
        real, dimension(:,:) :: in, out
        real, dimension(nr,nc) :: interior, left, right, top, bottom, ul_corner, br_corner, disp
        integer :: i, j
        real :: h, ul, ur, bl, br

        ! Zero out internal arrays
        out = 0.0; interior = 0.0; left = 0.0; right = 0.0; top = 0.0; bottom = 0.0; ul_corner = 0.0; br_corner = 0.0;
        h = 2.0

        ! Interior Points
        do j = 1, nc
            do i = 1, nr
                ! Interior Point Calculations
                if ( j > 1 .and. j < nc .and. i > 1 .and. i < nr ) then
                    interior(i,j) = ((in(i-1,j) + in(i+1,j) + in(i,j-1) + in(i,j+1)) - 4*in(i,j)) / (h**2)
                end if
                ! Boundary Conditions for Left and Right edges
                left(i,1)   = (-5.0*in(i,2) + 4.0*in(i,3) - in(i,4) + 2.0*in(i,1) + in(i+1,1) + in(i-1,1) - 2.0*in(i,1)) / (h**2)
                right(i,nc) = (-5.0*in(i,nc-1) + 4.0*in(i,nc-2) - in(i,nc-3) + 2.0*in(i,nc) + in(i+1,nc) + in(i-1,nc) - 2.0*in(i,nc)) / (h**2)
            end do
            ! Boundary Conditions for Top and Bottom edges
            top(1,j)     = (-5.0*in(2,j) + 4.0*in(3,j) - in(4,j) + 2.0*in(1,j) + in(1,j+1) + in(1,j-1) - 2.0*in(1,j)) / (h**2)
            bottom(nr,j) = (-5.0*in(nr-1,j) + 4.0*in(nr-2,j) - in(nr-3,j) + 2.0*in(nr,j) + in(nr,j+1) + in(nr,j-1) - 2.0*in(nr,j)) / (h**2)
        end do
        out = interior + left + right + top + bottom

        ! Calculate BC for the corners
        ul = (-5.0*in(1,2) + 4.0*in(1,3) - in(1,4) + 2.0*in(1,1) - 5.0*in(2,1) + 4.0*in(3,1) - in(4,1) + 2.0*in(1,1)) / (h**2)
        br = (-5.0*in(nr,nc-1) + 4.0*in(nr,nc-2) - in(nr,nc-3) + 2.0*in(nr,nc) - 5.0*in(nr-1,nc) + 4.0*in(nr-2,nc) - in(nr-3,nc) + 2.0*in(nr,nc)) / (h**2)
        bl = (-5.0*in(nr,2) + 4.0*in(nr,3) - in(nr,4) + 2.0*in(nr,1) - 5.0*in(nr-1,1) + 4.0*in(nr-2,1) - in(nr-3,1) + 2.0*in(nr,1)) / (h**2)
        ur = (-5.0*in(1,nc-1) + 4.0*in(1,nc-2) - in(1,nc-3) + 2.0*in(1,nc) - 5.0*in(2,nc) + 4.0*in(3,nc) - in(4,nc) + 2.0*in(1,nc)) / (h**2)

        ! Apply BC for the corners
        out(1,1)   = ul
        out(1,nc)  = ur
        out(nr,1)  = bl
        out(nr,nc) = br
    end subroutine
end module
It's so hard! I wasted a few hours understanding and implementing it in Java.
Here is: https://gist.github.com/emersonmoretto/dec8f7125c032775da0d
Tested and compared to the original function DEL2 (Matlab)
I've found a typo in sbabbi's response:
del2(u) (i=0,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + -5 u(i,j+1) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j)]/h^2
should be
del2(u) (i=0,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + -5 u(i+1,j) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j)]/h^2