Convolution vs signal resolution - Fortran

I realized that the resolution of the input signal dramatically affects the result of the convolution, and I'm wondering if there is a way to compensate for this. Let me give you an example:
Let's take the Sersic equation in surface-brightness form,
\mu(R) = \mu_e + (2.5*b_n/ln(10)) * ((R/R_e)^(1/n) - 1),
with some example parameter values.
Now we evaluate this equation for an R step of both 0.1 and 0.01. For example, at the first point (R = 0) we get \mu(0) = 9.82.
The next step is to convolve the data after converting it into counts (to convert to counts we can use this simple equation: Data(R) = 10^((\mu(R)-25)/(-2.5))). I'm using the subroutine below, which I wrote, but I have tried others and get the same result (the PSF is a Moffat with FWHM = 0.5 arcsec, constructed so that its total area equals 1):
DO i = 1, n
   sum1 = 0            ! sum1 is a real work array of length n, reset for each output point
   g = i
   DO f = 1, i
      sum1(f) = Data(f)*PSF(g)
      g = i - f        ! so PSF is sampled at indices i, i-1, ..., 1
   ENDDO
   convData(i) = SUM(sum1)
ENDDO
convData = convData(n:1:-1)
So, for this example, after convolution (and after converting the counts back to \mu) I get \mu(0)* = 13.52 for the data with 0.1 resolution, and \mu(0)* = 15.52 for the data with 0.01 resolution. That is a difference of 2 magnitudes! What am I doing wrong, or how can I compensate for this effect?
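To illustrate the effect outside my Fortran code, here is a minimal NumPy sketch of the same idea (with a Gaussian stand-in for the Moffat PSF). If each term of the discrete sum is weighted by the step ΔR, so that the sum approximates the continuous convolution integral, the resolution dependence all but disappears, which makes me suspect a normalisation issue:

import numpy as np

def convolve_profile(data, psf, dr):
    # Discrete approximation of the continuous integral
    #   (data * psf)(R) = integral of data(R') psf(R - R') dR'
    # The factor dr is what makes the sum resolution-independent.
    return np.convolve(data, psf, mode='same') * dr

for dr in (0.1, 0.01):
    r = np.arange(-5.0, 5.0, dr)
    psf = np.exp(-r**2 / (2 * 0.2**2))   # stand-in kernel, not a Moffat
    psf /= psf.sum() * dr                # normalise so the PSF's *integral* is 1
    data = np.exp(-np.abs(r))            # some smooth profile in "counts"
    conv = convolve_profile(data, psf, dr)
    print(dr, conv[len(r) // 2])         # central value, nearly equal for both steps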
Thank you so much for the help!

Using gradient descent to solve a nonlinear system

I have the following code, which uses gradient descent to find the global minimum of y = (x+5)^2:
cur_x = 3                    # the algorithm starts at x = 3
rate = 0.01                  # learning rate
precision = 0.000001         # this tells us when to stop the algorithm
previous_step_size = 1
max_iters = 10000            # maximum number of iterations
iters = 0                    # iteration counter
df = lambda x: 2*(x+5)       # gradient of our function

while previous_step_size > precision and iters < max_iters:
    prev_x = cur_x                            # store current x value in prev_x
    cur_x = cur_x - rate * df(prev_x)         # gradient descent step
    previous_step_size = abs(cur_x - prev_x)  # change in x
    iters = iters + 1                         # iteration count
    print("Iteration", iters, "\nX value is", cur_x)

print("The local minimum occurs at", cur_x)
The procedure is fairly simple, and among the most intuitive and concise approaches to such a problem that I'm aware of.
I'd now like to apply this to solving a system of nonlinear equations. Namely, I want to use it to solve the Time Difference of Arrival problem in three dimensions: given the coordinates of 4 observers (in general, n+1 observers for an n-dimensional solution), the speed s of the signal, and its time of arrival at each observer, I want to reconstruct the source, i.e. determine its coordinates [x, y, z].
I've already accomplished this using an approximation search (see this excellent post on the matter: ), and I'd now like to try doing so with gradient descent (really, just as an interesting exercise). I know that the problem in two dimensions can be described by the following nonlinear system:
sqrt((x-x_1)^2 + (y-y_1)^2) + s(t_2-t_1) = sqrt((x-x_2)^2 + (y-y_2)^2)
sqrt((x-x_2)^2 + (y-y_2)^2) + s(t_3-t_2) = sqrt((x-x_3)^2 + (y-y_3)^2)
sqrt((x-x_3)^2 + (y-y_3)^2) + s(t_1-t_3) = sqrt((x-x_1)^2 + (y-y_1)^2)
I know that it can be done; however, I cannot determine how.
How might I go about applying this in three dimensions, or to a nonlinear system in general?
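To make the target concrete, here is the kind of formulation I have in mind, sketched in NumPy with made-up observer positions and synthetic arrival times (whether this is the right way to set it up is exactly my question): square and sum the residuals of the system above, then descend the numerical gradient of that scalar loss.

import numpy as np

# Made-up 2-D setup: three observers, signal speed s, synthetic arrival times.
obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
s = 1.0
src = np.array([3.0, 4.0])                 # used only to synthesise the t_i
t = np.linalg.norm(obs - src, axis=1) / s  # arrival times (common offset cancels)

def residuals(p):
    # The three equations above, each rewritten as r = 0
    d = np.linalg.norm(obs - p, axis=1)
    return np.array([d[0] + s*(t[1] - t[0]) - d[1],
                     d[1] + s*(t[2] - t[1]) - d[2],
                     d[2] + s*(t[0] - t[2]) - d[0]])

def loss(p):
    return 0.5 * np.sum(residuals(p)**2)   # scalar least-squares objective

def grad(p, eps=1e-6):
    # Central-difference gradient, so no derivatives need deriving by hand
    g = np.zeros_like(p)
    for k in range(p.size):
        e = np.zeros_like(p)
        e[k] = eps
        g[k] = (loss(p + e) - loss(p - e)) / (2*eps)
    return g

p = np.array([1.0, 1.0])                   # initial guess
rate = 0.02                                # learning rate, as in the 1-D example
for i in range(50000):
    step = rate * grad(p)
    p = p - step
    if np.linalg.norm(step) < 1e-10:
        break
print(p)                                   # approaches [3, 4] from this start

In three dimensions, p and obs simply gain a z component and a fourth observer supplies the extra independent equation; convergence still depends on the initial guess, since the loss is not convex.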

Foucault Pendulum simulation

Program Foucault
   IMPLICIT NONE
   REAL, DIMENSION(:), ALLOCATABLE :: t, x, y
   REAL, PARAMETER :: pi = 3.14159265358979323846, g = 9.81
   REAL :: L, lat, h, omega
   INTEGER :: i, zeta

   zeta = 1000
   Allocate(x(zeta+1), y(zeta+1), t(zeta+1))   ! one extra slot: the loop fills index i+1
   L = 67.
   lat = 49.0/180.0*pi    ! real division; 49/180 would truncate to zero
   omega = sqrt(g/L)
   h = 0.01
   Do i = 1, zeta
      IF (i == 1) THEN    ! set both starting points, then step from i = 2 onward
         t(1) = 0.0
         t(2) = 0.0
         x(1) = 0.1
         x(2) = 1
         y(1) = 0.0
         y(2) = 0.0
      ELSE
         t(i+1) = real(i)*h
         x(i+1) = (-omega**2*x(i) + 2.0*((y(i)-y(i-1))/h)*latang(lat))*h**2 + 2.0*x(i) - x(i-1)
         y(i+1) = (-omega**2*y(i) - 2.0*((x(i)-x(i-1))/h)*latang(lat))*h**2 + 2.0*y(i) - y(i-1)
      END IF
      WRITE(40,*) t(i), x(i)
      WRITE(60,*) t(i), y(i)
      WRITE(50,*) x(i), y(i)
   END DO
Contains
   REAL Function latang(alpha)
      REAL, INTENT(IN) :: alpha
      latang = 2*pi*sin(alpha)/86400
   END FUNCTION
End Program Foucault
I'm trying to simulate the original Foucault pendulum in Paris. My code seems to be working, but so far I can only reproduce the right-hand graphic below, the "flower" pattern. I have been changing my parameters constantly to get the left-hand graphic, without success.
I took the parameters of the Foucault pendulum installed in Paris: L = 67, Earth's angular velocity 2*pi/86400, and latitude 49/180*pi.
My initial conditions are as written in the code. I tried a wide range of parameters, varying all of my initial conditions, the latitude and the angular velocity, but I could never get the desired left-hand result.
I used the Foucault differential equations
x'' = -omega^2*x + 2*Omega*sin(lat)*y'
y'' = -omega^2*y - 2*Omega*sin(lat)*x'
and coded them with the finite-difference method (simpler than Runge-Kutta), replacing the second-order derivatives by their central finite differences and the first-order ones by their backward finite differences. I then built my loop by isolating x(i+1) and y(i+1) in the two equations.
My code is very sensitive to parameters such as h (the step size), Earth's angular velocity and latitude (which is normal). I tried a wide range of values, from a big h step to a small one, from minimal to high latitude, different initial conditions, etc., but I could never get the left-hand graphic, which is the one I actually need.
What could be done to get the left one?
I was able to get the two charts by speeding up the Earth's rotation 120-fold and allowing the simulation to run for 32 swings of the pendulum. Also, I noticed that Euler integration added energy to the system, making for bad results, so I reverted to a standard RK4 implementation.
Here is the code I used to solve this ODE:
program FoucaultOde
   implicit none
   integer, parameter :: sp = kind(1.0), dp = kind(1d0)
   ! Constants
   real, parameter :: g = 9.80665, pi = 3.1415926536
   ! Variables
   real, allocatable :: y(:,:), yp(:), k0(:), k1(:), k2(:), k3(:)
   real :: lat, omega, h, L, earth, period
   real :: t0, x0, y0, vx0, vy0
   integer :: i, zeta, f1, swings
   ! Code starts here
   swings = 32
   zeta = 400*swings
   L = 67
   lat = 49*pi/180
   period = 24*60*60                     ! period = 86400
   earth = (2*pi*sin(lat)/period)*120    ! 120x multiplier for rotation
   omega = sqrt(g/L)
   allocate(y(5,zeta))
   allocate(yp(5), k0(5), k1(5), k2(5), k3(5))
   ! make pendulum complete 'swings' cycles in 'zeta' steps
   h = swings*2*pi/(omega*zeta)
   t0 = 0
   x0 = 5.0      ! Initial displacement
   y0 = 0
   vx0 = 0
   vy0 = 0
   ! Initial conditions in the state vector Y
   Y(:,1) = [t0, x0, y0, vx0, vy0]
   do i = 2, zeta
      ! Euler method (single step)
      ! Yp = ode(Y(:,i-1))
      ! Runge-Kutta method (four steps)
      k0 = ode(Y(:,i-1))
      k1 = ode(Y(:,i-1) + h/2*k0)
      k2 = ode(Y(:,i-1) + h/2*k1)
      k3 = ode(Y(:,i-1) + h*k2)
      Yp = (k0 + 2*k1 + 2*k2 + k3)/6
      ! Take a step
      Y(:,i) = Y(:,i-1) + h*Yp
   end do
   open(newunit=f1, file='results.csv', status='replace', pad='no')
   ! write header
   write (f1, '(a15,a,a15,a,a15,a,a15,a,a15)') 't', ',', 'x', ',', 'y', ',', 'vx', ',', 'vy'
   ! write rows of data, comma-separated
   do i = 1, zeta
      write (f1, '(g15.7,a,g15.7,a,g15.7,a,g15.7,a,g15.7)') y(1,i), ',', y(2,i), ',', y(3,i), ',', y(4,i), ',', y(5,i)
   end do
   close(f1)
contains
   function ode(Y) result(Yp)
      real, intent(in) :: Y(5)
      real :: Yp(5), t, px, py, vx, vy, ax, ay
      ! Read state vector Y into component values
      t = Y(1)
      px = Y(2)
      py = Y(3)
      vx = Y(4)
      vy = Y(5)
      ! Reference paper:
      ! http://www.legi.grenoble-inp.fr/people/Achim.Wirth/final_version.pdf
      ax = -(omega**2)*px + 2*vy*earth   ! (equation 53)
      ay = -(omega**2)*py - 2*vx*earth   ! (equation 54)
      ! State vector rate. Note, rate of time is always 1.0
      Yp = [1.0, vx, vy, ax, ay]
   end function
end program FoucaultOde
The resulting file results.csv looks like this for me (for checking)
t, x, y, vx, vy
.000000 , 5.000000 , .000000 , .000000 , .000000
.4105792E-01, 4.999383 , .1112020E-06, -.3004657E-01, .8124921E-05
.8211584E-01, 4.997533 , .8895339E-06, -.6008571E-01, .3249567E-04
.1231738 , 4.994450 , .3001796E-05, -.9011002E-01, .7310022E-04
.1642317 , 4.990134 , .7114130E-05, -.1201121 , .1299185E-03
.2052896 , 4.984587 , .1389169E-04, -.1500844 , .2029225E-03
.2463475 , 4.977810 , .2399832E-04, -.1800197 , .2920761E-03
.2874054 , 4.969805 , .3809619E-04, -.2099106 , .3973353E-03
...
from which I plotted the 2nd and 3rd columns in one chart, and the 4th and 5th for the second chart.
There is one thing that may be wrong, depending on how you manage different step sizes, plus an observation on the physics of the real-world example. With the initialization of the arrays, you imply an initial velocity of about 0.9/0.01 = 90 m/s in the x direction, away from the center. To get compatible results for different step sizes, you would need to adapt the calculation of x(2). However, in the graphs the plot starts from a point with zero velocity. You can implement this to first order by setting x(2) = x(1) = 1; as the integration method used is also first order, this is sufficient.
For the second point, note that one can write the system using complex coordinates z=x+iy as
z'' = -w^2*z - 2*i*E*z', E = Omega*sin(theta)
This is a linear ODE with constant coefficients, the solution of it is
z(t) = exp(-i*E*t) * (A*cos(w1*t)+B*sin(w1*t)), w1 = sqrt(w^2+E^2)
This describes a pendulum motion of frequency w1 whose plane rotates with frequency E clockwise. The grand rotation has period T=2*pi/E, during which w1*T/(2*pi)=w1/E pendulum swings occur.
Now insert your numbers, w=sqrt(g/L)=0.383 and E=2*pi*sin(49°)/86400=5.49e-05, so that essentially w1=w. The number of pendulum cycles per full rotation is w/E=6972, so that you can expect a densely filled circle in the plot. Or a very narrow double wedge if only a few cycles are plotted. As each cycle takes 2*pi/w=16.4 [s], and the integration goes 1000 steps of step size 0.01, in the plot as it is you can expect a swing forth and part of the swing back.
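These numbers are quick to double-check, for example:

import numpy as np

g, L = 9.81, 67.0
w = np.sqrt(g / L)                              # pendulum frequency
E = 2 * np.pi * np.sin(np.radians(49)) / 86400  # rotation rate of the swing plane
print(w, E)           # ~0.3826 and ~5.49e-05
print(w / E)          # ~6972 pendulum cycles per full rotation
print(2 * np.pi / w)  # ~16.4 s per cycle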
To be more realistic, set the initial velocity to zero, that is, the pendulum is taken to its start position and then let go. Also increase the time to 30 [s] to have more than one pendulum cycle in the plot.
From this we can see that the solutions converge and, with some imagination, that they converge linearly.
To get a plot like the ones in the cited images, one needs a much smaller ratio w/E; counting the swings, it has to be around 15. Note that you cannot get this ratio anywhere on Earth with a realistically scaled pendulum. So set w = pi, E = pi/16 and integrate over 15 time units using the first-order method.
This deteriorates really fast, even for the smallest step size with 40 points per pendulum cycle.
For a better result, increase the local truncation order to the next higher by using the central difference in the first derivative approximation.
z(i+1) - 2*z(i) + z(i-1) = -w^2*z(i)*dt^2 - i*E*(z(i+1)-z(i-1))*dt
z(i+1) = ( 2*z(i) - z(i-1) - w^2*z(i)*dt^2 + i*E*z(i-1)*dt ) / (1+i*E*dt)
The division by the complex number can also be easily carried out in the real components of the trajectory,
! x(i+1)-2*x(i)+x(i-1) = h^2*(-omega**2*x(i)) + h*earth*(y(i+1)-y(i-1))
! y(i+1)-2*y(i)+y(i-1) = h^2*(-omega**2*y(i)) - h*earth*(x(i+1)-x(i-1))
t(i) = t(i-1) + h
cx = (2-(h*omega)**2)*x(i) - x(i-1) - h*earth*y(i-1)
cy = (2-(h*omega)**2)*y(i) - y(i-1) + h*earth*x(i-1)
den = 1+(h*earth)**2
x(i+1) = (cx + h*earth*cy)/den
y(i+1) = (cy - h*earth*cx)/den
Now, to respect the increased order, the initial points also need one more order of accuracy. Using again zero initial speed, the second-order Taylor expansion gives
z(2) = z(1) - 0.5*w^2*z(1)*dt^2
All the step sizes that gave deviating and structurally deteriorating results with the first-order method now give visually identical, structurally stable results with this second-order method.
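For reference, the whole second-order scheme fits in a few lines of NumPy if one uses the complex form directly (here with the exaggerated w = pi, E = pi/16 from above); plotting z.real against z.imag shows the rotating pendulum trace:

import numpy as np
import matplotlib.pyplot as plt

w, E = np.pi, np.pi/16             # exaggerated ratio w/E ~ 16, as above
dt = 2*np.pi/(w*40)                # 40 points per pendulum cycle
n = int(round(15/dt))              # integrate over 15 time units
z = np.empty(n, dtype=complex)
z[0] = 1.0
z[1] = z[0] - 0.5*w**2*z[0]*dt**2  # second-order start with zero initial speed
for i in range(1, n-1):
    z[i+1] = (2*z[i] - z[i-1] - w**2*z[i]*dt**2 + 1j*E*z[i-1]*dt) / (1 + 1j*E*dt)
plt.plot(z.real, z.imag)
plt.axis('equal')
plt.show()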

Schrödinger equation not evolving properly with time?

I'm writing code in Python to evolve the time-dependent Schrödinger equation using the Crank-Nicolson scheme. I didn't know how to deal with the potential, so I looked around and found an approach in this question, which I have verified against a couple of other sources. According to them, for a harmonic-oscillator potential the C-N scheme gives
AΨ^(n+1) = A*Ψ^n
where the elements on the main diagonal of A are d_j = 1 + iΔt/(2m(Δx)^2) + iΔt(x_j)^2/4 and the elements on the upper and lower diagonals are a = -iΔt/(4m(Δx)^2).
The way I understand it, I supply an initial condition (I've chosen a coherent state) as the vector Ψ^n and need to compute Ψ^(n+1), the wave function after time Δt. To obtain Ψ^(n+1) at a given step, I invert the matrix A, multiply it by A*, and multiply the result by Ψ^n. The resulting vector then becomes Ψ^n for the next step.
But when I do this, I get an incorrect animation. The wave packet is supposed to oscillate between the boundaries, but in my animation it barely moves from its initial mean value. I just don't understand what I'm doing wrong. Is my understanding of the problem wrong, or is it a flaw in my code? Please help! I've posted my code below and the video of my animation here. I'm sorry for the length of the code and the question, but it's driving me crazy not knowing what my mistake is.
import numpy as np
import matplotlib.pyplot as plt

L = 30.0
x0 = -5.0
sig = 0.5
dx = 0.5
dt = 0.02
k = 1.0
w = 2
K = w**2
a = np.power(K, 0.25)
xs = np.arange(-L, L, dx)
nn = len(xs)
mu = k*dt/(dx)**2
dd = 1.0 + mu
ee = 1.0 - mu
ti = 0.0
tf = 100.0
t = ti
V = np.zeros(len(xs))
u = np.zeros(nn, dtype="complex")
V = K*(xs)**2/2                                       # harmonic oscillator potential
u = (np.sqrt(a)/1.33)*np.exp(-(a*(xs - x0))**2) + 0j  # initial condition for wave function
u[0] = 0.0                                            # boundary condition
u[-1] = 0.0                                           # boundary condition
A = np.zeros((nn-2, nn-2), dtype="complex")           # define A
for i in range(nn-3):
    A[i, i] = 1 + 1j*(mu/2 + w*dt*xs[i]**2/4)
    A[i, i+1] = -1j*mu/4.
    A[i+1, i] = -1j*mu/4.
A[nn-3, nn-3] = 1 + 1j*mu/2 + 1j*dt*xs[nn-3]**2/4
B = np.zeros((nn-2, nn-2), dtype="complex")           # define A*
for i in range(nn-3):
    B[i, i] = 1 - 1j*mu/2 - 1j*w*dt*xs[i]**2/4
    B[i, i+1] = 1j*mu/4.
    B[i+1, i] = 1j*mu/4.
B[nn-3, nn-3] = 1 - 1j*(mu/2) - 1j*dt*xs[nn-3]**2/4
X = np.linalg.inv(A)                                  # take inverse of A
plt.ion()
l, = plt.plot(xs, np.abs(u), lw=2, color='blue')      # plot initial wave function
T = np.matmul(X, B)                                   # multiply A inverse with A*
while t < tf:
    u[1:-1] = np.matmul(T, u[1:-1])  # update u, leaving the boundary values unchanged
    l.set_ydata(abs(u))              # update plot with new u
    t += dt
    plt.pause(0.00001)
After a lot of tinkering, it came down to reducing my step size; that did the job for me. If anyone is facing the same problem, I recommend playing around with the step sizes. Provided the rest of the code is fine, this is the most likely source of error.
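As an aside, experimenting with smaller steps becomes much cheaper if the dense inverse is avoided altogether: A is tridiagonal, so each Crank-Nicolson step can be done as a banded solve instead. A minimal sketch of the calling pattern (with stand-in diagonal values, not the actual d_j and a from above):

import numpy as np
from scipy.linalg import solve_banded

n = 6                          # toy size, just to show the mechanics
d = np.full(n, 1.0 + 0.1j)     # stand-in for the main diagonal d_j
a = np.full(n - 1, -0.025j)    # stand-in for the off-diagonal a

# A in banded storage: row 0 = upper diagonal, row 1 = main, row 2 = lower
ab = np.zeros((3, n), dtype=complex)
ab[0, 1:] = a
ab[1, :] = d
ab[2, :-1] = a

# B = A* appears only in a matrix-vector product, so dense is fine here
B = np.diag(d.conj()) + np.diag(a.conj(), 1) + np.diag(a.conj(), -1)

psi = np.ones(n, dtype=complex)
psi = solve_banded((1, 1), ab, B @ psi)   # one C-N step, no explicit inverse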

Difference of finite difference solution between two timesteps

I have a solution to a discretized differential equation given by
f(i)
where i is a spatial index. How can I find the difference between the solution at adjacent time steps? To be more clear:
The solution is defined by an array
real,dimension(0:10) :: f
I discretize the differential equation and solve it by stepping forward in time. If the time index is k, a portion of my code is
do k = 1, 25
   do i = 1, 9   ! i = 1, 9 stays within the bounds of f(0:10)
      f(i) = f(i+1) + f(i-1) + f(i)
   end do
end do
I can print the solution f(i) at each time step k with the following code:
print*, "print f(i) for k=", k
print "(//(5(5x,e22.14)))", f
How can I find the difference between the solution at adjacent time steps k and k+1? I will store this value in a new array g, which has a dimension given by
real,dimension(0:10) :: g
So I am trying to find
!g(i) = abs(f(i; k+1) - f(i; k))   ! ...not correct code
How can I implement this? I am not sure whether it needs if/then statements or some other construct. Thanks
Typically, in explicit time-integration or iterative methods, you have to save the last time step's solution, the current time step's solution, and possibly some more.
So you have
real,dimension(0:10) :: f0, f
where f0 is the previous value
You iterate your Jacobi or Gauss-Seidel discretization:
f = f0
do k = 1, 25
   do i = 1, 9
      f(i) = f(i+1) + f(i-1) + f(i)
   end do
   max_diff = maxval(abs(f - f0))
   if (max_diff < tolerance) exit   ! stop once the change is small enough
   f0 = f
end do
If you have a time-evolving problem like a heat equation:
f = f0
do k = 1, 25
   do i = 1, 9
      f(i) = f0(i) + dt * viscosity * (f0(i+1) - 2*f0(i) + f0(i-1))
   end do
   max_diff = maxval(abs(f - f0))
   f0 = f
end do
You have a spatial mesh at each point in time. Transient problems require that you calculate the value at the end of a time step from the values at its start:
f(i, j+1) = f(i, j) + fdot(i, j)*dt   ! Euler integration, where fdot = df/dt
Here i is the spatial index and j the temporal one.
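The same bookkeeping in a few lines of Python may make it clearer (a sketch with made-up boundary values, using the heat-equation stencil from above): keep the previous step in its own array and difference the two after every update.

import numpy as np

n_steps, dt, viscosity = 25, 0.01, 1.0
f = np.zeros(11)          # plays the role of real, dimension(0:10) :: f
f[0] = 1.0                # made-up boundary value for the demonstration
f0 = f.copy()             # previous time step

for k in range(n_steps):
    f[1:-1] = f0[1:-1] + dt*viscosity*(f0[2:] - 2*f0[1:-1] + f0[:-2])
    g = np.abs(f - f0)    # g(i) = |f(i; k+1) - f(i; k)|
    print(k, g.max())
    f0 = f.copy()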

Error using scipy.optimize nonlinear solvers

I am trying to solve a set of M simultaneous equations in M variables. I pass an M x 2 matrix as the initial guess to my function, and it returns an M x 2 matrix in which each entry would be zero if my guess were correct. Thus my function can be represented as f_k(u1, u2, ..., uN) = 0 for k = 1, 2, ..., N. Below is the code for my function (for simplicity's sake I have left out the modules that go with it, i.e. the p. and phi. prefixes; I was mainly wondering if anyone else has had this error before):
from numpy import exp, zeros

M = len(p.x_lat)

def main(u_A):
    ## unpack u_A (the argument the solver passes in)
    u_P = u_A[:, 0]
    u_W = u_A[:, 1]
    ## calculate phi_A for all monomeric species
    G_W = exp(-u_W)
    phi_W = zeros(M)
    phi_W[1:] = p.phi_Wb * G_W[1:]
    ## calculate phi_A for all polymeric species
    G_P = exp(-u_P)
    G_P[0] = 0.
    G_fwd = phi.fwd_propagator(G_P, p.Np, 0)    # takes G_P and propagates outward
    G_bkwd = phi.bkwd_propagator(G_P, p.Np, 0)  # takes G_P and propagates inward
    phi_P = phi.phi_P(G_fwd, G_bkwd, p.norm_graft_density, p.Np)  # combines the two propagators into a segment density at each point
    ## calculate u_A components
    u_intW = en.u_int_AB(p.chi_PW, phi_P, p.phi_Pb) + en.u_int_AB(p.chi_SW, p.phi_S, p.phi_Sb)  # new potential from the new segment densities
    u_intW[0] = 0.
    u_Wprime = u_W - u_intW
    u_intP = en.u_int_AB(p.chi_PW, phi_W, p.phi_Wb) + en.u_int_AB(p.chi_PS, p.phi_S, p.phi_Sb)  # new potential from the new segment densities
    u_intP[0] = 0.
    u_Pprime = u_P - u_intP
    ## calculate f_A
    phi_total = p.phi_S + phi_W + phi_P
    u_prime = 0.5 * (u_Wprime + u_Pprime)
    f_total = zeros((M, 2))
    f_total[:, 0] = 1. - 1./phi_total + u_prime - u_Wprime
    f_total[:, 1] = 1. - 1./phi_total + u_prime - u_Pprime
    return f_total
I researched ways of solving nonlinear systems like this in Python and came across the scipy.optimize library, with its several solver options: http://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.nonlin.html. I first tried the newton_krylov solver and received the following error message:
ValueError: Jacobian inversion yielded zero vector. This indicates a bug in the Jacobian approximation.
I also tried the broyden1 solver, but it never converged and simply stagnated. The code for both calls is below:
sol = newton_krylov(main, guess, verbose=1, f_tol=10e-7)
sol = broyden1(main, guess, verbose=1, f_tol=10e-7)
My initial guess is given below:
## first guess of u_A(x)
u_P = zeros(M)
u_P[1] = -0.0001
u_P[M-1] = 0.0001
u_W = zeros(M)
u_W[1] = 0.0001
u_W[M-1] = -0.0001
u_total = zeros( (M,2) )
u_total[:,0] = u_P
u_total[:,1] = u_W
guess = u_total
Any help would be greatly appreciated!
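For what it's worth, the same calling pattern does converge on a toy residual with the identical M x 2 shape (a made-up function, not my actual equations), which suggests the shape handling itself is not the problem:

import numpy as np
from scipy.optimize import newton_krylov

M = 10

def toy_residual(u):
    # u has shape (M, 2); the root is at u[:, 0] = 1, u[:, 1] = -1
    f = np.empty_like(u)
    f[:, 0] = u[:, 0]**2 - 1.0 - 0.1*(u[:, 1] + 1.0)
    f[:, 1] = u[:, 1] + 1.0 + 0.05*(u[:, 0] - 1.0)
    return f

guess = np.zeros((M, 2))
guess[:, 0] = 0.5                  # start away from the root, inside its basin
sol = newton_krylov(toy_residual, guess, verbose=1, f_tol=1e-7)
print(sol[0])                      # first row, close to [1, -1]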