I have a system of four differential equations in a function (subsystem4) and I solved it with the odeint function. I managed to plot the results of the system. My problem is that I also want to plot some other quantities (e.g. x, y, vcxdot...) that are computed inside the same function (subsystem4), but I get NameError: name 'vcxdot' is not defined. I also want to use some of these quantities (not only the results of the system itself) as inputs to a subsequent system of differential equations and plot all of them over the same time interval (t). I have done this in Matlab-Simulink, where it was much easier because of the Simulink blocks. How can I access and plot all the quantities computed inside a function (subsystem4) and use them as inputs to a following system? I am new to Python and I use Python 2.7.12. Thank you in advance!
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def subsystem4(u,t):
    added_mass_x = 0.03 # kg
    added_mass_y = 0.04
    mb = 0.3 # kg
    m1 = mb - added_mass_x
    m2 = mb - added_mass_y
    l1 = 0.07 # m
    l2 = 0.05 # m
    J = 0.00050797 # kgm^2
    Sa = 0.0110 # m^2
    Cd = 2.44
    Cl = 3.41
    Kd = 0.000655 # kgm^2
    r = 1000 # kg/m^3
    f = 2 # Hz
    c1 = 0.5*r*Sa*Cd
    c2 = 0.5*r*Sa*Cl
    c3 = 0.5*mb*(l1**2)
    c4 = Kd/J
    c5 = (1/(2*J))*(l1**2)*mb*l2
    c6 = (1/(3*J))*(l1**3)*mb
    vcx = u[0]
    vcy = u[1]
    psi = u[2]
    wz = u[3]
    x = 3 + 0.3*np.cos(t)
    y = 0.5 + 0.3*np.sin(t)
    xdot = -0.3*np.sin(t)
    ydot = 0.3*np.cos(t)
    xdotdot = -0.3*np.cos(t)
    ydotdot = -0.3*np.sin(t)
    vcx = xdot*np.cos(psi) - ydot*np.sin(psi)
    vcy = ydot*np.cos(psi) + xdot*np.sin(psi)
    psidot = wz
    vcxdot = xdotdot*np.cos(psi) - xdot*np.sin(psi)*psidot - ydotdot*np.sin(psi) - ydot*np.cos(psi)*psidot
    vcydot = ydotdot*np.cos(psi) - ydot*np.sin(psi)*psidot + xdotdot*np.sin(psi) + xdot*np.cos(psi)*psidot
    g1 = -(m1/c3)*vcxdot + (m2/c3)*vcy*wz - (c1/c3)*vcx*np.sqrt((vcx**2)+(vcy**2)) + (c2/c3)*vcy*np.sqrt((vcx**2)+(vcy**2))*np.arctan2(vcy,vcx)
    g2 = (m2/c3)*vcydot + (m1/c3)*vcx*wz + (c1/c3)*vcy*np.sqrt((vcx**2)+(vcy**2)) + (c2/c3)*vcx*np.sqrt((vcx**2)+(vcy**2))*np.arctan2(vcy,vcx)
    A = 12*np.sin(2*np.pi*f*t + np.pi)
    if A >= 0.1:
        wzdot = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2 - c6*np.sqrt((g1**2)+(g2**2))
    elif A < -0.1:
        wzdot = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2 + c6*np.sqrt((g1**2)+(g2**2))
    else:
        wzdot = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2
    return [vcxdot, vcydot, psidot, wzdot]
u0 = [0,0,0,0]
t = np.linspace(0,15,1000)
u = odeint(subsystem4,u0,t)
vcx = u[:,0]
vcy = u[:,1]
psi = u[:,2]
wz = u[:,3]
plt.figure(1)
plt.subplot(211)
plt.plot(t,vcx,'r-',linewidth=2,label='vcx')
plt.plot(t,vcy,'b--',linewidth=2,label='vcy')
plt.plot(t,psi,'g:',linewidth=2,label='psi')
plt.plot(t,wz,'c',linewidth=2,label='wz')
plt.xlabel('time')
plt.legend()
plt.show()
To the immediate question of plotting the derivatives, you can get the velocities by directly calling the ODE function again on the solution,
u = odeint(subsystem4,u0,t)
udot = subsystem4(u.T,t)
and get the separate velocity arrays via
vcxdot,vcydot,psidot,wzdot = udot
In this case the function involves branching, which is not very friendly to vectorized calls. There are ways to vectorize branching, but the easiest workaround is to loop manually through the solution points, which is slower than a working vectorized implementation. This will again produce a list of tuples like odeint, so the result has to be transposed into a tuple of lists for "easy" assignment to the single array variables.
udot = [ subsystem4(uk, tk) for uk, tk in zip(u,t) ]
vcxdot,vcydot,psidot,wzdot = np.asarray(udot).T
This may appear to double somewhat the computation, but not really, as the solution points are usually interpolated from the internal step points of the solver. The evaluation of the ODE function during integration will usually happen at points that are different from the solution points.
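For reference, a minimal sketch of such a vectorized branch: the if/elif/else inside subsystem4 could be replaced with nested numpy.where calls (this fragment assumes the local variables of that function, as in the question):

# Sketch of a vectorized replacement for the if/elif/else in subsystem4.
# np.where evaluates all branches element-wise and picks per element,
# so the function then also accepts array-valued (u, t).
A = 12*np.sin(2*np.pi*f*t + np.pi)
wz_common = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2
g_norm = c6*np.sqrt(g1**2 + g2**2)
wzdot = np.where(A >= 0.1, wz_common - g_norm,
                 np.where(A < -0.1, wz_common + g_norm, wz_common))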
For the other variables, extract the computation of positions and velocities into functions so that the constants and the composition live in one place only:
def xy_pos(t): return 3 + 0.3*np.cos(t), 0.5 + 0.3*np.sin(t)
def xy_vel(t): return -0.3*np.sin(t), 0.3*np.cos(t)
def xy_acc(t): return -0.3*np.cos(t), -0.3*np.sin(t)
or similar that you can then use both inside the ODE function and in preparing the plots.
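A sketch of how the plotting side might then look, reusing the same t array as the odeint call above:

# Sketch: the same helpers serve both the ODE function and the plots.
x, y = xy_pos(t)
xdot, ydot = xy_vel(t)
plt.figure(2)
plt.plot(t, x, label='x')
plt.plot(t, y, label='y')
plt.plot(t, xdot, label='xdot')
plt.plot(t, ydot, label='ydot')
plt.xlabel('time')
plt.legend()
plt.show()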
What Simulink most likely does is collect the equations of all the blocks into one big ODE system, which is then solved for the whole state at once. You will need to implement something similar: one big state vector, where each subsystem knows its slice of the state and derivatives vectors, reads its specific state variables from the former, and writes its derivatives to the latter. The computation of the derivatives can then use values communicated among the subsystems.
What you are trying to do, solving the subsystems separately, will likely degrade the integration to a first-order method. All higher-order methods need to be able to simultaneously shift the whole state in some direction computed from previous stages of the method and evaluate the whole system there.
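A minimal sketch of the one-big-state-vector idea, with two made-up 2-state subsystems coupled through each other's first state (the names and dynamics are purely illustrative, not the systems from the question):

import numpy as np
from scipy.integrate import odeint

def subsystem_a(ua, ub, t):
    # hypothetical 2-state subsystem; reads ub[0] from the other subsystem
    return [ua[1], -ua[0] + ub[0]]

def subsystem_b(ub, ua, t):
    # hypothetical 2-state subsystem driven by ua[0]
    return [ub[1], -2.0*ub[0] + ua[0]]

def combined(u, t):
    ua, ub = u[0:2], u[2:4]   # each subsystem's slice of the state
    # concatenate the derivative slices in the same order as the state
    return subsystem_a(ua, ub, t) + subsystem_b(ub, ua, t)

t = np.linspace(0, 15, 1000)
sol = odeint(combined, [1.0, 0.0, 0.0, 0.0], t)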
I am trying to use PyMC3 to fit a model to some observed data. This model is based on external code (interfaced via theano.ops.as_op), and depends on multiple parameters that should be fit by the MCMC process. Since the gradient of the external code cannot be determined, I use the Metropolis-Hastings sampler.
I have established Uniform priors for my inputs, and generate a model using my custom code. However, I want to compare the simulated data to my observations (a 3D np.ndarray) using the chi-squared statistic (the sum of (data - model)^2/sigma^2) to obtain a log-likelihood. When the MCMC samples are drawn, this should lead the trace to converge on the best values of each parameter.
My model is explained in the following semi-pseudocode (if that's even a word):
import pymc3 as pm

#Some stuff setting up the data, preparing some functions etc.

#theano.compile.ops.as_op(itypes=[input types],otypes = [output types])
def make_model(inputs):
    #Wrapper to external code to generate simulated data
    return simulated_data

model = pm.Model()
with model:
    #priors for 13 input parameters
    simData = make_model(inputs)
I now want to obtain the chi-squared logLikelihood for this model versus the data, which I think can be done using pm.ChiSquared, however I do not see how to combine the data, model and this distribution together to cause the sampler to perform correctly. I would guess it might look something like:
chiSq = pm.ChiSquared(nu=data.size, observed = (data-simData)**2/err**2)
trace = pm.sample(1000)
Is this correct? In running previous tests, I have found the samples appear to be simply drawn from the priors.
Thanks in advance.
Taking aloctavodia's advice, I was able to get parameter estimates for some toy exponential data using a pm.Normal likelihood. Using a pm.ChiSquared likelihood as the OP suggested, the model converged to correct values, but the posteriors on the parameters were roughly three times as broad. Here's the code for the model; I first generated data and then fit with PyMC3.
# Draw `nPoints` observed data points `y_obs` from the function
#     3. + 18. * numpy.exp(-.2 * x)
# with the points evaluated at `x_obs`
#     x_obs = numpy.linspace(0, 100, nPoints)
# Add Normal(mu=0, sd=`cov`) noise to each point in `y_obs`
# Then instantiate PyMC3 model for fit:

def YModel(x, c, a, l):
    # exponential model expected to describe the data
    mu = c + a * pm.math.exp(-l * x)
    return mu

def logp(y_mod, y_obs):
    # Normal distribution likelihood
    return pm.Normal.dist(mu = y_mod, sd = cov).logp(y_obs)
    # Chi squared likelihood (to use, comment preceding line & uncomment next 2 lines)
    #chi2 = pm.math.sum( ((y_mod - y_obs)/cov)**2 )
    #return pm.ChiSquared.dist(nu = nPoints).logp(chi2)

with pm.Model() as model:
    c = pm.Uniform('constant', lower = 0., upper = 10., testval = 5.)
    a = pm.Uniform('amplitude', lower = 0., upper = 50., testval = 25.)
    l = pm.Uniform('lambda', lower = 0., upper = 10., testval = 5.)
    y_mod = YModel(x_obs, c, a, l)
    L = pm.DensityDist('L', logp, observed = {'y_mod': y_mod, 'y_obs': y_obs}, testval = {'y_mod': y_mod, 'y_obs': y_obs})
    step = pm.Metropolis([c, a, l])
    trace = pm.sample(draws = 10000, step = step)
The above model converged, but I found that success was sensitive to the bounds on the priors and the initial guesses on those parameters.
       mean       sd        mc_error  hpd_2.5    hpd_97.5   n_eff   Rhat
c      3.184397   0.111933  0.002563  2.958383   3.397741   1834.0  1.000260
a      18.276887  0.747706  0.019857  16.882025  19.762849  1343.0  1.000411
l      0.200201   0.013486  0.000361  0.174800   0.226480   1282.0  0.999991
(Edited: I had forgotten to sum the squares of the normalized residuals for chi2)
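For reference, a summary table like the one above can be produced from the trace with pm.summary (a minimal usage sketch, assuming the trace sampled in the model above):

# Sketch: print posterior summary statistics (mean, sd, mc_error,
# HPD interval, n_eff, Rhat) for the sampled trace.
pm.summary(trace)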
I'm writing code in Python to evolve the time-dependent Schrödinger equation using the Crank-Nicolson scheme. I didn't know how to deal with the potential, so I looked around and found an approach in this question, which I have verified against a couple of other sources. According to them, for a harmonic oscillator potential the C-N scheme gives
A Ψ_{n+1} = A* Ψ_n
where the elements on the main diagonal of A are d_j = 1 + iΔt/(2m(Δx)^2) + iΔt(x_j)^2/4, and the elements on the upper and lower diagonals are a = −iΔt/(4m(Δx)^2).
The way I understand it, I'm supposed to supply an initial condition (I've chosen a coherent state) as the vector Ψ_n, and compute the vector Ψ_{n+1}, which is the wave function after time Δt. To obtain Ψ_{n+1} for a given step, I invert the matrix A, multiply it with the matrix A*, and then multiply the result with Ψ_n. The resulting vector then becomes Ψ_n for the next step.
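For concreteness, one such step might look like this minimal sketch (a tiny made-up tridiagonal A without the potential term; note that a linear solve is numerically preferable to forming the inverse explicitly, although both implement the same update):

import numpy as np

# One C-N step, Psi_{n+1} = A^{-1} (A* Psi_n), done with a linear solve.
# Tiny made-up tridiagonal A just to show the mechanics:
n, mu = 5, 0.1
main = (1 + 1j*mu/2) * np.ones(n)        # stand-in for the d_j above
off = (-1j*mu/4) * np.ones(n - 1)        # the off-diagonal elements a
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
B = np.conj(A)                           # A* has the conjugate entries
psi = np.ones(n, dtype=complex)          # placeholder for Psi_n
psi = np.linalg.solve(A, B.dot(psi))     # Psi_{n+1}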
But when I do this, I get an incorrect animation. The wave packet is supposed to oscillate between the boundaries, but in my animation it barely moves from its initial mean value. I just don't understand what I'm doing wrong. Is my understanding of the problem wrong, or is it a flaw in my code? Please help! I've posted my code below and the video of my animation here. I'm sorry for the length of the code and the question, but it's driving me crazy not knowing what my mistake is.
import numpy as np
import matplotlib.pyplot as plt
L = 30.0
x0 = -5.0
sig = 0.5
dx = 0.5
dt = 0.02
k = 1.0
w=2
K=w**2
a=np.power(K,0.25)
xs = np.arange(-L,L,dx)
nn = len(xs)
mu = k*dt/(dx)**2
dd = 1.0+mu
ee = 1.0-mu
ti = 0.0
tf = 100.0
t = ti
V=np.zeros(len(xs))
u=np.zeros(nn,dtype="complex")
V=K*(xs)**2/2 #harmonic oscillator potential
u=(np.sqrt(a)/1.33)*np.exp(-(a*(xs - x0))**2)+0j #initial condition for wave function
u[0]=0.0 #boundary condition
u[-1] = 0.0 #boundary condition
A = np.zeros((nn-2,nn-2),dtype="complex") #define A
for i in range(nn-3):
    A[i,i] = 1+1j*(mu/2+w*dt*xs[i]**2/4)
    A[i,i+1] = -1j*mu/4.
    A[i+1,i] = -1j*mu/4.
A[nn-3,nn-3] = 1+1j*mu/2+1j*dt*xs[nn-3]**2/4
B = np.zeros((nn-2,nn-2),dtype="complex") #define A*
for i in range(nn-3):
    B[i,i] = 1-1j*mu/2-1j*w*dt*xs[i]**2/4
    B[i,i+1] = 1j*mu/4.
    B[i+1,i] = 1j*mu/4.
B[nn-3,nn-3] = 1-1j*(mu/2)-1j*dt*xs[nn-3]**2/4
X = np.linalg.inv(A) #take inverse of A
plt.ion()
l, = plt.plot(xs,np.abs(u),lw=2,color='blue') #plot initial wave function
T = np.matmul(X,B) #multiply A inverse with A*
while t < tf:
    u[1:-1] = np.matmul(T,u[1:-1]) #update u but leave the boundary conditions unchanged
    l.set_ydata(abs(u)) #update plot with new u
    t += dt
    plt.pause(0.00001)
After a lot of tinkering, it came down to reducing my step size: once I did that, the program worked. If anyone is facing the same problem, I recommend playing around with the step sizes. Provided that the rest of the code is fine, this is the most likely source of error.
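One cheap diagnostic while tuning the step sizes (a sketch, using the u and dx from the code above): Crank-Nicolson should conserve the norm of the wave function, so a drifting norm points at a problem in the discretization.

# Inside the time loop: the norm should stay (nearly) constant under
# Crank-Nicolson; visible drift suggests revisiting dx and dt.
norm = np.sum(np.abs(u)**2) * dx
print(norm)   # compare across steps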
I am trying to calculate an essential matrix and a projection matrix from two images, which I will then use to project a 3D object onto the image. (The two input images, two views of the same scene, were attached to the original post.)
I picked a few pixel correspondences and fed them to an SVD-based least-squares mechanism, which the books say gives me the essential matrix. I used the code below for this task (based mostly on Jan Erik Solem's book Programming Computer Vision with Python):
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as lin
import pandas as pd

def skew(a):
    return np.array([[0,-a[2],a[1]],[a[2],0,-a[0]],[-a[1],a[0],0]])

def essential(x1,x2):
    n = x1.shape[1]
    A = np.zeros((n,9))
    for i in range(n):
        A[i] = [ x1[0,i]*x2[0,i], \
                 x1[0,i]*x2[1,i], \
                 x1[0,i]*x2[2,i], \
                 x1[1,i]*x2[0,i], \
                 x1[1,i]*x2[1,i], \
                 x1[1,i]*x2[2,i], \
                 x1[2,i]*x2[0,i], \
                 x1[2,i]*x2[1,i], \
                 x1[2,i]*x2[2,i]]
    U,S,V = lin.svd(A)
    F = V[-1].reshape(3,3)
    return F

def compute_P_from_essential(E):
    U,S,V = lin.svd(E)
    if lin.det(np.dot(U,V))<0: V = -V
    E = np.dot(U,np.dot(np.diag([1,1,0]),V))
    Z = skew([0,0,-1])
    W = np.array([[0,-1,0],[1,0,0],[0,0,1]])
    P2 = [np.vstack((np.dot(U,np.dot(W,V)).T,U[:,2])).T,
          np.vstack((np.dot(U,np.dot(W,V)).T,-U[:,2])).T,
          np.vstack((np.dot(U,np.dot(W.T,V)).T,U[:,2])).T,
          np.vstack((np.dot(U,np.dot(W.T,V)).T,-U[:,2])).T]
    return P2

points = [ \
    [266,163,296,160],[265,237,297,266],\
    [76,288,51,340],[135,31,142,4],\
    [344,167,371,156],[48,165,71,164],\
    [151,68,166,56],[237,26,259,19],\
    [226,147,254,140]]

df = pd.DataFrame(points)
df['uno'] = 1.
x1 = np.array(df[[0,1,'uno']].T)
x2 = np.array(df[[2,3,'uno']].T)
print x1
print x2

E = essential(x1,x2)
P = compute_P_from_essential(E)

x0 = 3.; y0 = 1.; z0 = 1.
print df.shape
e = 1
cube = [[x0,y0,z0],[x0+e,y0,z0],[x0+e,y0+e,z0],[x0,y0+e,z0],
        [x0,y0,z0+e],[x0+e,y0,z0+e],[x0+e,y0+e,z0+e],[x0,y0+e,z0+e]]
cube = pd.DataFrame(cube)
cube['1'] = 1.
xx = np.dot(P[1], cube.T) * 100.
xx[1,:] = 360-xx[1,:]
#xx = xx / xx[2]
print xx[0].shape
plt.plot(xx[0], xx[1],'.')
plt.xlim(0,640)
plt.ylim(0,360)
I calculated the essential matrix, then the projection matrix, and then used that to project a 3D cube. (The resulting plot was attached to the original post.) It looks skewed, and I am not sure why this happened. Any ideas on how to fix this?
Thanks,
First of all, it looks like you are computing the essential matrix using exactly 9 points. You can do this with only 8: since scale is a free parameter (multiplying the essential matrix by a scalar leaves it unchanged), you can fix one of the parameters and just use 8 points, but I digress. In practice, however, this is a very bad idea, because your 8 points might have a poor spatial configuration. What you want to do instead is select N matches (600, for example) and use an algorithm like RANSAC to determine the best essential matrix. Aside from that, what I'd recommend for debugging such applications is this: compute the fundamental matrix F from the essential matrix you just computed. Then you can select a point in image 1 and display the corresponding epipolar line in the second image. That will help you visually evaluate, and thus debug, the estimation of the essential matrix.
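A sketch of that check (the intrinsic matrices K1 and K2 are assumed known here; if you worked directly in pixel coordinates without calibration, the matrix you estimated already behaves like a fundamental matrix and can be used as F directly):

import numpy as np

# Sketch of the epipolar-line debugging check.
def fundamental_from_essential(E, K1, K2):
    # F = K2^{-T} E K1^{-1}, from the standard relation E = K2^T F K1
    return np.linalg.inv(K2).T.dot(E).dot(np.linalg.inv(K1))

def epipolar_line(F, x1):
    # x1: homogeneous point in image 1 -> coefficients (a, b, c) of the
    # line a*u + b*v + c = 0 in image 2
    return F.dot(x1)

# usage: a, b, c = epipolar_line(F, np.array([266., 163., 1.]))
# then plot v = -(a*u + c)/b over the image width and check whether the
# matching point in image 2 lies on that line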
I am trying to optimize the parameters of a known function to fit an experimental data plot. The function is fairly involved:
y = x^2 * p^2 * g / ((c^2 - x^2)^2 + x^2 * g^2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms. Instead, go for straightforward optimization.
Scipy has some resources.
You haven't provided any data, so below is something that should at least run; I can't know whether it works well without seeing the data. There is probably also a way to feed objectivefunc your x and y data dynamically; that should be in the docs for scipy.optimize.minimize.
What I've done: create a function to minimize, here called objectivefunc. For that I've taken your function y = x^2 * p^2 * g / (...) and transformed it into the form x^2 * p^2 * g / (...) - y = 0. Then square the left-hand side and try to minimise it. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from scipy.
import numpy as np
from scipy.optimize import minimize

def objectivefunc(pgc):
    """Your function transformed so that it can be minimised.

    I've renamed the input pgc, so that pgc[0] is p, pgc[1] is g,
    and pgc[2] is c.
    """
    p = pgc[0]
    g = pgc[1]
    c = pgc[2]
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        sum_ += (x[i]**2 * p**2 * g - y[i] * ( (c**2 - x[i]**2)**2 + x[i]**2 * g**2) )**2
    return sum_

pgc = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values
res = minimize(objectivefunc, pgc, method='nelder-mead',
               options={'xtol': 1e-8, 'disp': True})
Have you tried good old Levenberg-Marquardt, as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW with one of the genetic algorithms it implements.
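If you would rather stay in Python than LabVIEW, scipy.optimize.curve_fit uses Levenberg-Marquardt by default for unbounded problems. A sketch, with the model reconstructed from the question (the x and y arrays are placeholders standing in for your measured sweep and response):

import numpy as np
from scipy.optimize import curve_fit

def model(x, p, g, c):
    # the resonance-like function from the question
    return x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)

x = np.linspace(1.0, 20.0, 50)      # placeholder sweep values
y = model(x, 1.3, 0.7, 8.0)         # placeholder "measured" data
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 5.0])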
I am trying to use the pynfft package in Python 2.7 to do the non-uniform fast Fourier transform (NFFT). I have been learning Python for only two months, so I have some difficulties.
This is my code:
import numpy as np
from pynfft.nfft import NFFT
#loading data, 104 lines
t_diff, x_diff = np.loadtxt('data/analysis/amplitudes.dat', unpack = True)
N = [13,8]
M = 52
#fourier coefficients
f_hat = np.fft.fft(x_diff)/(2*M)
#instantiation
plan = NFFT(N,M)
#precomputation
x = t_diff
plan.x = x
plan.precompute()
# vector of non uniform samples
f = x_diff[0:M]
#execution
plan.f = f
plan.f_hat = f_hat
f = plan.trafo()
I am basically following the instructions I found in the pynfft tutorial (http://pythonhosted.org/pyNFFT/tutorial.html).
I need the NFFT because the time intervals at which my data are taken are not constant (I mean, the first measurement is taken at t, the second after dt, the third after dt+dt' with dt' different from dt, and so on).
The pynfft package wants the vector of the Fourier coefficients ("f_hat") before execution, so I calculated it using numpy.fft, but I am not sure this procedure is correct. Is there another way to do it (maybe with the NFFT itself)?
I would also like to calculate the frequencies; I know that numpy.fft has a command for that: is there anything like it for pynfft? I did not find anything in the tutorial.
Thank you for any advice you can give me.
Here is a working example, taken from here:
First we define the function we want to reconstruct, which is the sum of four harmonics:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12345)
%pylab inline --no-import-all

# function we want to reconstruct
k=[1,5,10,30] # modulating coefficients
def myf(x,k):
    return sum(np.sin(x*k0*(2*np.pi)) for k0 in k)

x=np.linspace(-0.5,0.5,1000) # 'continuous' time/spatial domain; -0.5<x<+0.5
y=myf(x,k) # 'true' underlying trigonometric function

fig=plt.figure(1,(20,5))
ax =fig.add_subplot(111)
ax.plot(x,y,'red')
ax.plot(x,y,'r.')

# we should sample at a rate of >2*~max(k)
M=256 # number of nodes
N=128 # number of Fourier coefficients

nodes =np.random.rand(M)-0.5 # non-uniform oversampling
values=myf(nodes,k)          # nodes&values will be used below to reconstruct
                             # original function using the Solver

ax.plot(nodes,values,'bo')
ax.set_xlim(-0.5,+0.5)
Then we initialize and run the Solver:
from pynfft import NFFT, Solver

f = np.empty(M, dtype=np.complex128)
f_hat = np.empty([N,N], dtype=np.complex128)

this_nfft = NFFT(N=[N,N], M=M)
this_nfft.x = np.array([[node_i,0.] for node_i in nodes])
this_nfft.precompute()
this_nfft.f = f
ret2=this_nfft.adjoint()

print this_nfft.M # number of nodes, complex typed
print this_nfft.N # number of Fourier coefficients, complex typed
#print this_nfft.x # nodes in [-0.5, 0.5), float typed

this_solver = Solver(this_nfft)
this_solver.y = values # '''right hand side, samples.'''

#this_solver.f_hat_iter = f_hat # assign arbitrary initial solution guess, default is 0

this_solver.before_loop() # initialize solver internals

while not np.all(this_solver.r_iter < 1e-2):
    this_solver.loop_one_step()
Finally, we display the frequencies:
import matplotlib.pyplot as plt
fig=plt.figure(1,(20,5))
ax =fig.add_subplot(111)
foo=[ np.abs( this_solver.f_hat_iter[i][0])**2 for i in range(len(this_solver.f_hat_iter) ) ]
ax.plot(np.abs(np.arange(-N/2,+N/2,1)),foo)
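Regarding the frequency axis asked about in the question: the NFFT coefficients are indexed on the integer grid from -N/2 to N/2-1 (in cycles per unit of the node coordinates, which here live in [-0.5, 0.5)). So, under that assumption, the analogue of np.fft.fftfreq is just the index range (a sketch):

import numpy as np
N = 128
freqs = np.arange(-N//2, N//2)   # frequency index for each coefficient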
cheers