Cannot substitute the constants of integration into the integral of response - sympy

Let's use sympy to find the response of a linear system to an external force:
from sympy import *
t, w, beta = symbols('t omega beta', positive=1)
x0, v0 = symbols('x0 v0')
x = symbols('x', cls=Function)
homogeneous = diff(x(t), t, 2)/w**2+x(t)
force = sin(beta*w*t)
disp = dsolve(homogeneous-force, x(t)).rhs
display(disp)
constants = solve((disp.subs(t,0)-x0, disp.diff(t).subs(t,0)-v0))
display(constants)
obtaining
C1*sin(omega*t) + C2*cos(omega*t) - sin(beta*omega*t)/(beta**2 - 1)
[{C1: (beta**2*v0 + beta*omega - v0)/(omega*(beta**2 - 1)), C2: x0}]
Now, to complete my solution, I tried to substitute these constants of integration into the general-plus-particular solution:
display(disp.subs(constants))
that gives me
C2*sin(omega*t) + C2*cos(omega*t) - sin(beta*omega*t)/(beta**2 - 1)
Where have I made a mistake?

The argument of subs should be a dictionary.
What you got from solve as constants is not a dictionary but a one-element list containing a dictionary. Use it as
disp.subs(constants[0])
then the result is
x0*cos(omega*t) - sin(beta*omega*t)/(beta**2 - 1) + (beta**2*v0 + beta*omega - v0)*sin(omega*t)/(omega*(beta**2 - 1))
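Putting everything together, a minimal end-to-end sketch of the fix, reusing the symbols from the question:
from sympy import symbols, Function, diff, dsolve, solve, sin

t, w, beta = symbols('t omega beta', positive=True)
x0, v0 = symbols('x0 v0')
x = symbols('x', cls=Function)

disp = dsolve(diff(x(t), t, 2)/w**2 + x(t) - sin(beta*w*t), x(t)).rhs
constants = solve((disp.subs(t, 0) - x0, disp.diff(t).subs(t, 0) - v0))
print(disp.subs(constants[0]))   # index into the one-element list returned by solve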

Simplification of derivative of square using sympy

I'm trying to use sympy to generate equations for non-linear least squares fitting. My goal is to make this quite complex but for the moment, here's a simple case (but not too simple!). It's basically fitting a two dimensional sinusoid to data. Here's the sympy code:
from sympy import *
S, l, m = symbols('S l m', real=True)
u, v = symbols('u v', real=True)
Vobs = symbols('Vobs', complex=True)
Vres = Vobs - S * exp(- 1j * 2 * pi * (u*l+v*m))
J=Vres*conjugate(Vres)
axes = [S, l, m]
grad = derive_by_array(J, axes)
hess = derive_by_array(grad, axes)
One element of the grad term looks like:
- 2.0*I*pi*S*u*(-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs)*exp(2.0*I*pi*(l*u + m*v)) + 2.0*I*pi*S*u*(-S*exp(2.0*I*pi*(l*u + m*v)) + conjugate(Vobs))*exp(-2.0*I*pi*(l*u + m*v))
What I'd like is to replace the expanded term (-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs) by Vres and contract the two conjugate terms into the more compact equivalent:
4.0*pi*S*u*im(Vres*exp(2.0*I*pi*(l*u + m*v)))
I cannot see how to do this with sympy. This problem is bad for the first derivative (grad) but gets really out of hand with the second derivative (hess).
First of all, let's not use 1j in SymPy, it's a float and floats are bad for symbolic math. SymPy's imaginary unit is I. So,
Vres = Vobs - S * exp(- I * 2 * pi * (u*l+v*m))
To replace the expression Vres by a symbol, we first need to create such a symbol. I'm going to call it Vres0, but its name will be Vres, so it prints as "Vres" in formulas.
Vres0 = symbols('Vres')
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
The conjugate-substitute-conjugate back is needed because subs doesn't quite recognize the possibility of replacing the conjugate of an expression with the conjugate of the symbol.
Now g1 is
-2*I*pi*S*Vres*u*exp(2*I*pi*(l*u + m*v)) + 2*I*pi*S*u*exp(-2*I*pi*(l*u + m*v))*conjugate(Vres)
and we want to fold the sum of conjugate terms. I use a custom transformation rule for this: the rule fold_conjugates applies to every sum (Add) of two terms (len(f.args) == 2) where the second is a conjugate of the first (f.args[1] == f.args[0].conjugate()). The transformation it performs: replace the sum by twice the real part of the first argument (2*re(f.args[0])). Like so:
from sympy.core.rules import Transform
fold_conjugates = Transform(lambda f: 2*re(f.args[0]),
                            lambda f: (isinstance(f, Add) and len(f.args) == 2
                                       and f.args[1] == f.args[0].conjugate()))
g = g1.xreplace(fold_conjugates)
Final result: 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v))).
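If the same two steps have to be applied to every component of grad and hess, a small helper keeps things manageable (compact is my own name for it, not part of the answer above):
def compact(expr):
    # substitute Vres, then repeat under conjugation so the expanded
    # conjugate copy is caught as well, then fold conjugate pairs
    e = expr.subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
    return e.xreplace(fold_conjugates)

grad_compact = [compact(grad[i]) for i in range(3)]
hess_compact = [[compact(hess[i, j]) for j in range(3)] for i in range(3)]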

rotating coordinate systems sympy

I'm a complete beginner with sympy, so it may be I'm overlooking something basic. I would like to rotate my coordinate system so I can build a hyperbola in standard position and then transform it to the arbitrary case. First I set up my equation:
> x, y, a, b, theta = symbols('x y a b theta')
> f = Eq(x ** 2/a ** 2 - y ** 2/b ** 2, 1)
> f1 = f.subs({a: 5, b: 10})
> f1
> x**2/25 - y**2/100 == 1
Next I want to rotate it, which I try to do by using a sub:
> f1.subs({x: x*cos(theta) - y*sin(theta), y: x*sin(theta) + y*cos(theta)})
> -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - (x*sin(theta) + y*cos(theta))*sin(theta))**2/25 == 1
But that doesn't work: apparently the substitution for x is made before the one for y, so the y inside the freshly substituted expression gets replaced as well. There must be some way to do this substitution, right?
Or is there a better tool than sympy to do this in? Once I get my hyperbolas I will want to find points of intersection between different ones.
Thanks for any suggestions.
One simple solution is to use temporary symbols:
x_temp, y_temp = symbols('x_temp y_temp')
f1.subs({x: x_temp*cos(theta) - y_temp*sin(theta), y: x_temp*sin(theta) + y_temp*cos(theta)}).subs({x_temp: x, y_temp: y})
> -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - y*sin(theta))**2/25 == 1
I think sympy can do what you want. There is a polysys module in sympy.solvers:
"""
Solve a system of polynomial equations.
Examples
========
>>> from sympy import solve_poly_system
>>> from sympy.abc import x, y
>>> solve_poly_system([x*y - 2*y, 2*y**2 - x**2], x, y)
[(0, 0), (2, -sqrt(2)), (2, sqrt(2))]
"""
You can set simultaneous=True in the .subs( ) method to force SymPy to replace all variables at once (practically, SymPy internally creates temporary variables, proceeds with the substitution, then restores the old variables).
In [13]: f1.subs({x: x*cos(theta) - y*sin(theta),
                  y: x*sin(theta) + y*cos(theta)}, simultaneous=True)
Out[13]: -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - y*sin(theta))**2/25 == 1
Otherwise, use .xreplace(...) instead of .subs(...):
In [11]: f1.xreplace({x: x*cos(theta) - y*sin(theta),
                      y: x*sin(theta) + y*cos(theta)})
Out[11]: -(x*sin(theta) + y*cos(theta))**2/100 + (x*cos(theta) - y*sin(theta))**2/25 == 1

Solve for the positions of all six roots PYTHON

I'm using Newton's method, so I want to find the positions of all six roots of the sixth-order polynomial, basically the points where the function is zero.
I found the rough values on my graph with the code below, but I want to output the positions of all six roots. I'm thinking of using x as an array to input the values and find those positions, but I'm not sure. I'm using 1.0 for now to locate a rough value. Any suggestions from here?
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10   # note: 1**-10 evaluates to 1, the tolerance has to be written as 1e-10

x = 1.0
xlast = float("inf")
while np.abs(x - xlast) > accuracy:
    xlast = x
    x = xlast - P(xlast)/dPdx(xlast)
print(x)

p_points = []
x_points = np.linspace(0, 1, 100)
y_points = np.zeros(len(x_points))
for i in range(len(x_points)):
    y_points[i] = P(x_points[i])
p_points.append(P(x_points))

plt.plot(x_points, y_points)
plt.savefig("roots.png")
plt.show()
The traditional way is to use deflation to factor out the already found roots. If you want to avoid manipulations of the coefficient array, then you have to divide the roots out.
Having found z[1],...,z[k] as root approximations, form
g(x)=(x-z[1])*(x-z[2])*...*(x-z[k])
and apply Newton's method to h(x) = f(x)/g(x) with h'(x) = f'/g - f*g'/g^2. In the Newton iteration this gives
xnext = x - f(x)/( f'(x) - f(x)*g'(x)/g(x) )
Fortunately the quotient g'/g has a simple form
g'(x)/g(x) = 1/(x-z[1])+1/(x-z[2])+...+1/(x-z[k])
So with a slight modification to the Newton step you can avoid finding the same root over again.
This all still keeps the iteration real. To get at complex roots, use a complex number to start the iteration.
Proof of concept: adding eps = 1e-8j to g'(x)/g(x) lets the iteration wander into the complex plane without preventing real limits; it amounts to solving the equivalent problem 0 = exp(-eps*x)*f(x)/g(x).
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10
roots = []
for k in range(6):
    x = 1.0
    xlast = float("inf")
    # plot the deflated function f/g to see roughly where the next root lies
    x_points = np.linspace(0.0, 1.0, 200)
    y_points = P(x_points)
    for rt in roots:
        # rt may be complex, so avoid in-place division of the real array
        y_points = y_points / (x_points - rt)
    y_points = np.array([max(-1.0, min(1.0, np.real(y))) for y in y_points])
    plt.plot(x_points, y_points, x_points, 0*y_points)
    plt.show()
    # modified Newton step on f/g: x -> x - f/(f' - f*g'/g)
    while np.abs(x - xlast) > accuracy:
        xlast = x
        corr = 1e-8j
        for rt in roots:
            corr += 1/(xlast - rt)
        Px = P(xlast)
        dPx = dPdx(xlast)
        x = xlast - Px/(dPx - Px*corr)
    print(x)
    roots.append(x)
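As an independent cross-check (my own addition, not part of the answer), the same roots can be obtained from the coefficient array with numpy's companion-matrix solver, reusing the numpy import from above:
print(np.sort(np.roots([924, -2772, 3150, -1680, 420, -42, 1])))
This polynomial is the shifted Legendre polynomial of degree 6, so all six roots are real and lie in (0, 1); they should match the values printed by the loop above.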

Understanding DEL2 function in Matlab in order to code it in C++

In order to code the DEL2 MATLAB function in C++ I need to understand the algorithm. I've managed to code the function for elements of the matrix that are not on the borders or the edges.
I've seen several topics about it and read the MATLAB code by typing "edit del2" or "type del2" but I don't understand the calculations that are made to obtain the borders and the edges.
Any help would be appreciated, thanks.
You want to approximate u'' knowing only the value of u on the right (or the left) of a point.
In order to have a second-order approximation, you need 3 equations (basic Taylor expansion):
u(i+1) = u(i) + h u' + (1/2) h^2 u'' + (1/6) h^3 u''' + O(h^4)
u(i+2) = u(i) + 2 h u' + (4/2) h^2 u'' + (8/6) h^3 u''' + O(h^4)
u(i+3) = u(i) + 3 h u' + (9/2) h^2 u'' + (27/6) h^3 u''' + O(h^4)
Solving for u'' gives (1):
h^2 u'' = -5 u(i+1) + 4 u(i+2) - u(i+3) + 2 u(i) +O(h^4)
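A quick way to sanity-check (1), not part of the original answer: take a concrete smooth test function such as u(x) = exp(x), whose second derivative is again exp(x), and series-expand the combination with sympy:
from sympy import symbols, exp

x, h = symbols('x h')
u = exp(x)                          # test function with u'' = exp(x)
combo = -5*u.subs(x, x + h) + 4*u.subs(x, x + 2*h) - u.subs(x, x + 3*h) + 2*u
print(combo.series(h, 0, 4))        # h**2*exp(x) + O(h**4)
The constant, h, and h^3 terms cancel and only h^2 u'' survives, as claimed.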
To get the laplacian you need to replace the traditional formula with this one on the borders.
For example where "i = 0" you'll have:
del2(u) (i=0,j) = [-5 u(i+1,j) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j) + u(i,j+1) + u(i,j-1) - 2u(i,j) ]/h^2
EDIT clarifications:
The laplacian is the sum of the 2nd derivatives in the x and in the y directions. You can calculate the second derivative with the formula (2)
u'' = (u(i+1) + u(i-1) - 2u(i))/h^2
if you have both u(i+1) and u(i-1). If i=0 or i=imax you can use the first formula I wrote to compute the derivatives (notice that due to the symmetry of the 2nd derivative, if i = imax you can just replace "i+k" with "i-k"). The same applies for the y (j) direction:
On the edges you can mix up the formulas (1) and (2):
del2(u) (i=imax,j) = [-5 u(i-1,j) + 4 u(i-2,j) - u(i-3,j) + 2 u(i,j) + u(i,j+1) + u(i,j-1) - 2u(i,j) ]/h^2
del2(u) (i,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + u(i+1,j) + u(i-1,j) - 2u(i,j) ]/h^2
del2(u) (i,j=jmax) = [-5 u(i,j-1) + 4 u(i,j-2) - u(i,j-3) + 2 u(i,j) + u(i+1,j) + u(i-1,j) - 2u(i,j) ]/h^2
And on the corners you'll just use (1) two times for both directions.
del2(u) (i=0,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + -5 u(i,j+1) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j)]/h^2
Del2 is the 2nd order discrete laplacian, i.e. it approximates the laplacian of a real continuous function given its values on a square Cartesian grid NxN where the distance between two adjacent nodes is h.
h^2 is just a constant dimensional factor; you can get the MATLAB implementation from these formulas by setting h^2 = 4.
For example, if you want to compute the real laplacian of u(x,y) on the (0,L) x (0,L) square, what you do is write down the values of this function on an NxN Cartesian grid, i.e. you calculate u(0,0), u(L/(N-1),0), u(2L/(N-1),0) ... u( (N-1)L/(N-1) = L, 0) ... u(0,L/(N-1)), u(L/(N-1),L/(N-1)) etc. and you put these N^2 values in a matrix A.
Then you'll have
ans = 4*del2(A)/h^2, where h = L/(N-1).
del2 will return the exact value of the continuous laplacian if your starting function is linear or quadratic (x^2+y^2 fine, x^3 + y^3 not fine). If the function is neither linear nor quadratic, the result will be more accurate the more points you use (i.e. in the limit h -> 0).
I hope this is clearer; notice that I used 0-based indices for accessing the matrix (C/C++ array style), while MATLAB uses 1-based.
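As a small numeric cross-check of formulas (1) and (2) (my own addition, not from the answer above): for u(x) = x^2, whose second derivative is exactly 2, both stencils reproduce that value up to rounding:
import numpy as np

h = 0.1
x = np.arange(0.0, 1.0 + h/2, h)
u = x**2

centered  = (u[6] + u[4] - 2*u[5]) / h**2               # formula (2) at an interior point
one_sided = (-5*u[1] + 4*u[2] - u[3] + 2*u[0]) / h**2   # formula (1) at i = 0
print(centered, one_sided)                              # both print 2.0 (up to rounding)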
DEL2 in MATLAB represents the discrete Laplace operator; you can find more information about it in the MATLAB documentation.
The main thing about the edges is that elements in the interior of the matrix have four neighbors, while elements on the edges and corners have three or two neighbors respectively. So you calculate the corners and edges the same way, but using fewer elements.
Here is a module I wrote in Fortran 90 that replicates the "del2()" operator in MATLAB, implementing the above ideas. It only works for arrays that are at least 4x4 or larger. It works successfully when I run it, so I thought I would post it so that other people don't have to waste time making their own.
module del2_mod
implicit none
real, private :: pi
integer, private :: nr, nc, i, j, k
contains

! nr is the number of rows in the array, nc is the number of columns;
! both must be set before del2 is called, since they size the work arrays.
!!----------------------------------------------------------
subroutine del2(in, out)
    real, dimension(:,:) :: in, out
    real, dimension(nr,nc) :: interior, left, right, top, bottom, ul_corner, br_corner, disp
    integer :: i, j
    real :: h, ul, ur, bl, br

    ! Zero out internal arrays
    out = 0.0; interior = 0.0; left = 0.0; right = 0.0; top = 0.0; bottom = 0.0
    ul_corner = 0.0; br_corner = 0.0
    h = 2.0

    do j = 1, nc
        do i = 1, nr
            ! Interior point calculations
            if( j > 1 .and. j < nc .and. i > 1 .and. i < nr )then
                interior(i,j) = ((in(i-1,j) + in(i+1,j) + in(i,j-1) + in(i,j+1)) - 4*in(i,j))/(h**2)
            end if
            ! Boundary conditions for left and right edges
            left(i,1)   = (-5.0*in(i,2)    + 4.0*in(i,3)    - in(i,4)    + 2.0*in(i,1)  + in(i+1,1)  + in(i-1,1)  - 2.0*in(i,1) )/(h**2)
            right(i,nc) = (-5.0*in(i,nc-1) + 4.0*in(i,nc-2) - in(i,nc-3) + 2.0*in(i,nc) + in(i+1,nc) + in(i-1,nc) - 2.0*in(i,nc))/(h**2)
        end do
        ! Boundary conditions for top and bottom edges
        top(1,j)     = (-5.0*in(2,j)    + 4.0*in(3,j)    - in(4,j)    + 2.0*in(1,j)  + in(1,j+1)  + in(1,j-1)  - 2.0*in(1,j) )/(h**2)
        bottom(nr,j) = (-5.0*in(nr-1,j) + 4.0*in(nr-2,j) - in(nr-3,j) + 2.0*in(nr,j) + in(nr,j+1) + in(nr,j-1) - 2.0*in(nr,j))/(h**2)
    end do
    out = interior + left + right + top + bottom

    ! Calculate BC for the corners
    ul = (-5.0*in(1,2)     + 4.0*in(1,3)     - in(1,4)     + 2.0*in(1,1)   - 5.0*in(2,1)     + 4.0*in(3,1)     - in(4,1)     + 2.0*in(1,1)  )/(h**2)
    br = (-5.0*in(nr,nc-1) + 4.0*in(nr,nc-2) - in(nr,nc-3) + 2.0*in(nr,nc) - 5.0*in(nr-1,nc) + 4.0*in(nr-2,nc) - in(nr-3,nc) + 2.0*in(nr,nc))/(h**2)
    bl = (-5.0*in(nr,2)    + 4.0*in(nr,3)    - in(nr,4)    + 2.0*in(nr,1)  - 5.0*in(nr-1,1)  + 4.0*in(nr-2,1)  - in(nr-3,1)  + 2.0*in(nr,1) )/(h**2)
    ur = (-5.0*in(1,nc-1)  + 4.0*in(1,nc-2)  - in(1,nc-3)  + 2.0*in(1,nc)  - 5.0*in(2,nc)    + 4.0*in(3,nc)    - in(4,nc)    + 2.0*in(1,nc) )/(h**2)

    ! Apply BC for the corners
    out(1,1)   = ul
    out(1,nc)  = ur
    out(nr,1)  = bl
    out(nr,nc) = br
end subroutine
end module
It's so hard! I wasted a few hours understanding and implementing it in Java.
Here it is: https://gist.github.com/emersonmoretto/dec8f7125c032775da0d
Tested and compared to the original DEL2 function (MATLAB).
I've found a typo in sbabbi's response:
del2(u) (i=0,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + -5 u(i,j+1) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j)]/h^2
is
del2(u) (i=0,j=0) = [-5 u(i,j+1) + 4 u(i,j+2) - u(i,j+3) + 2 u(i,j) + -5 u(i+1,j) + 4 u(i+2,j) - u(i+3,j) + 2 u(i,j)]/h^2

Normalizing from [0.5 - 1] to [0 - 1]

I'm kind of stuck here, I guess it's a bit of a brain teaser. If I have numbers in the range 0.5 to 1, how can I normalize them to be between 0 and 1?
Thanks for any help, maybe I'm just a bit slow since I've been working for the past 24 hours straight O_O
Others have provided you the formula, but not the work. Here's how you approach a problem like this. You might find this far more valuable than just knowing the answer.
To map [0.5, 1] to [0, 1] we will seek a linear map of the form x -> ax + b. We will require that endpoints are mapped to endpoints and that order is preserved.
Method one: The requirement that endpoints are mapped to endpoints and that order is preserved implies that 0.5 is mapped to 0 and 1 is mapped to 1:
a * (0.5) + b = 0 (1)
a * 1 + b = 1 (2)
This is a simultaneous system of linear equations and can be solved by multiplying equation (1) by -2 and adding the result to equation (2). Doing so gives b = -1, and substituting this back into equation (2) gives a = 2. Thus the map x -> 2x - 1 will do the trick.
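The same little system can be cross-checked with sympy (my own aside, not part of the original answer):
from sympy import symbols, solve

a, b = symbols('a b')
print(solve([a/2 + b, a + b - 1], [a, b]))   # {a: 2, b: -1}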
Method two: The slope of a line passing through two points (x1, y1) and (x2, y2) is
(y2 - y1) / (x2 - x1).
Here we will use the points (0.5, 0) and (1, 1) to meet the requirement that endpoints are mapped to endpoints and that the map is order-preserving. Therefore the slope is
m = (1 - 0) / (1 - 0.5) = 1 / 0.5 = 2.
We have that (1, 1) is a point on the line and therefore by the point-slope form of an equation of a line we have that
y - 1 = 2 * (x - 1) = 2x - 2
so that
y = 2x - 1.
Once again we see that x -> 2x - 1 is a map that will do the trick.
Subtract 0.5 (giving you a new range of 0 - 0.5) then multiply by 2.
double normalize( double x )
{
    // I'll leave range validation up to you
    return (x - 0.5) * 2;
}
To add another generic answer.
If you want to map the linear range [A..B] to [C..D], you can apply the following steps:
Shift the range so the lower bound is 0 (subtract A from both bounds):
[A..B] -> [0..B-A]
Scale the range so it is [0..1] (divide by the upper bound):
[0..B-A] -> [0..1]
Scale the range so it has the length of the new range, which is D-C (multiply by D-C):
[0..1] -> [0..D-C]
Shift the range so the lower bound is C (add C to both bounds):
[0..D-C] -> [C..D]
Combining this into a single formula, we get:
X' = (D-C)*(X-A)/(B-A) + C
In your case, with A=0.5, B=1, C=0, D=1, you get:
X' = (X-0.5)/0.5 = 2X - 1
Note: if you have to convert a lot of X values to X', you can rewrite the formula as:
X' = (D-C)/(B-A) * X + (C*B - A*D)/(B-A)
It is also interesting to take a look at non-linear ranges. You can take the same steps, but you need an extra step to transform the linear range into the non-linear one.
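In code, the combined formula is a one-liner; the helper below and its name remap are my own, and it simply restates the formula above:
def remap(x, a, b, c, d):
    # map x linearly from the range [a, b] to the range [c, d]
    return (d - c) * (x - a) / (b - a) + c

print(remap(0.75, 0.5, 1.0, 0.0, 1.0))   # 0.5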
Lazyweb answer: To convert a value x from [minimum..maximum] to [floor..ceil]:
General case:
normalized_x = ((ceil - floor) * (x - minimum))/(maximum - minimum) + floor
To normalize to [0..255]:
normalized_x = (255 * (x - minimum))/(maximum - minimum)
To normalize to [0..1]:
normalized_x = (x - minimum)/(maximum - minimum)
× 2 − 1 should do the trick.
You could always use clamp or saturate within your math to make sure your final value is between 0-1. Some saturate at the end, but I've seen it done during a computation, too.