Second order ODE integration using scipy - python-2.7

I am trying to integrate a second order differential equation using 'scipy.integrate.odeint'. My equation is as follows:
m*x[i]'' + x[i]' = (K/N) * sum_{j=0}^{N} sin(x[j] - x[i])
which I have converted into two first order ODEs as follows. In the code below, yinit is an array of the initial values x(0) and x'(0). My question is: what should the values of x(0) and x'(0) be?
x'[i] = y[i]
y'[i] = (-y[i] + (K/N) * sum_{j=0}^{N} sin(x[j] - x[i])) / m
from numpy import *
from scipy.integrate import odeint

N = 50

def f(theta, t):
    global N
    x, y = theta
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i+1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K*s)/N)/m
    return array([y, fx])

t = linspace(0, 10, 100, endpoint=False)

# Uniformly generate random numbers
theta = random.uniform(-180, 180, N)

# Integrate function f using odeint
yinit = array([x(0), x'(0)])  # placeholder: what should x(0) and x'(0) be?
y = odeint(f, yinit, t)[:, 0]
print(y)

You can choose whatever initial condition you want.
In your case, you decided to use a random initial condition for x for all the oscillators. You can use a random initial condition for y as well, I guess, as I did below.
There were a few errors in the above code, mostly in how to unpack x, y from theta and how to repack them at the end (see concatenate below in the corrected code). See also the concatenate for yinit.
The rest are stylistic/minor changes.
from numpy import concatenate, linspace, random, mod, zeros, sin
from scipy.integrate import odeint

Nosc = 20
assert mod(Nosc, 2) == 0

def f(theta, _):
    N = theta.size // 2
    x, y = theta[:N], theta[N:]
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i + 1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K * s) / N) / m
    return concatenate((y, fx))

t = linspace(0, 10, 50, endpoint=False)
theta = random.uniform(-180, 180, Nosc)
theta2 = random.uniform(-180, 180, Nosc)  # added initial condition for the velocities of the oscillators
yinit = concatenate((theta, theta2))

res = odeint(f, yinit, t)
X = res[:, :Nosc].T
Y = res[:, Nosc:].T
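As an aside, the loops above only sum over j > i, while the equation in the question sums over all j (the j = i term vanishes since sin(0) = 0). If the full sum is what's intended, a vectorized sketch (f_full is just an illustrative name) could look like:

from numpy import newaxis, sin, concatenate

def f_full(theta, _):
    # full coupling sum over all j, via the outer difference x[j] - x[i]
    N = theta.size // 2
    x, y = theta[:N], theta[N:]
    m, K = 0.95, 1.0
    s = sin(x[newaxis, :] - x[:, newaxis]).sum(axis=1)  # s[i] = sum_j sin(x[j] - x[i])
    return concatenate((y, (-y + K*s/N) / m))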
To plot the time evolution of the system, you can use something like:

import matplotlib.pylab as plt

fig, ax = plt.subplots()
for displacement in X:
    ax.plot(t, displacement)
ax.set_xlabel('t')
ax.set_ylabel('x')
fig.show()
What are you modelling? At first the equation looked a bit like Kuramoto oscillators, but then I noticed you also have an x[i]'' term.
Notice how in your model, since there is no spring term in the equation (a term in x(t) on the LHS), the value of x converges to an arbitrary value.


What conventions does IPOPT use to construct its Lagrangian?

I am using IPOPT via Pyomo (the AMPL interface) to solve a simple problem and am trying to validate that the primal Lagrangian gradient is zero at the solution. I'm running the following script, in which I construct what I would expect to be the gradient of the Lagrangian with respect to primal variables.
import pyomo.environ as pyo
from pyomo.common.collections import ComponentMap

m = pyo.ConcreteModel()
m.ipopt_zL_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
m.ipopt_zU_out = pyo.Suffix(direction=pyo.Suffix.IMPORT)
m.ipopt_zL_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
m.ipopt_zU_in = pyo.Suffix(direction=pyo.Suffix.EXPORT)
m.dual = pyo.Suffix(direction=pyo.Suffix.IMPORT_EXPORT)

m.v1 = pyo.Var(initialize=-2.0)
m.v2 = pyo.Var(initialize=2.0)
m.v3 = pyo.Var(initialize=2.0)
m.v1.setlb(-10.0)
m.v2.setlb(1.5)
m.v1.setub(-1.0)
m.v2.setub(10.0)

m.eq_con = pyo.Constraint(expr=m.v1*m.v2*m.v3 - 2.0 == 0)

obj_factor = 1
m.obj = pyo.Objective(
    expr=obj_factor*(m.v1**2 + m.v2**2 + m.v3**2),
    sense=pyo.minimize,
)

solver = pyo.SolverFactory("ipopt")
solver.solve(m, tee=True)

grad_lag_map = ComponentMap()
grad_lag_map[m.v1] = (
    (obj_factor*2*m.v1) + m.dual[m.eq_con]*m.v2*m.v3 +
    m.ipopt_zL_out[m.v1] + m.ipopt_zU_out[m.v1]
)
grad_lag_map[m.v2] = (
    (obj_factor*2*m.v2) + m.dual[m.eq_con]*m.v1*m.v3 +
    m.ipopt_zL_out[m.v2] + m.ipopt_zU_out[m.v2]
)
grad_lag_map[m.v3] = (
    (obj_factor*2*m.v3) + m.dual[m.eq_con]*m.v1*m.v2
)

for var, expr in grad_lag_map.items():
    print(var.name, pyo.value(expr))
According to this, however, the gradient of the Lagrangian is not zero when constructed in this way. I can get the gradient of the Lagrangian to be zero by using the following lines to construct grad_lag_map
grad_lag_map[m.v1] = (
    -(obj_factor*2*m.v1) + m.dual[m.eq_con]*m.v2*m.v3 +
    m.ipopt_zL_out[m.v1] + m.ipopt_zU_out[m.v1]
)
grad_lag_map[m.v2] = (
    -(obj_factor*2*m.v2) + m.dual[m.eq_con]*m.v1*m.v3 +
    m.ipopt_zL_out[m.v2] + m.ipopt_zU_out[m.v2]
)
grad_lag_map[m.v3] = (
    -(obj_factor*2*m.v3) + m.dual[m.eq_con]*m.v1*m.v2
)
With a minus sign in front of the objective gradient, the gradient of the Lagrangian is zero. This is surprising to me; I would not expect to see this factor of -1 for minimization problems. Can anybody confirm whether IPOPT constructs its Lagrangian with this -1 factor for minimization problems, or whether this is an artifact of some other convention I am unaware of?
This is the gradient of the Lagrangian w.r.t. x as computed in Ipopt (https://github.com/coin-or/Ipopt/blob/2b1a2f9a60fb3f8426b47edbe3b3520c7335d201/src/Algorithm/IpIpoptCalculatedQuantities.cpp#L2018-L2023):
tmp->Copy(*curr_grad_f());
tmp->AddTwoVectors(1., *curr_jac_cT_times_curr_y_c(), 1., *curr_jac_dT_times_curr_y_d(), 1.);
ip_nlp_->Px_L()->MultVector(-1., *z_L, 1., *tmp);
ip_nlp_->Px_U()->MultVector(1., *z_U, 1., *tmp);
This corresponds to an Ipopt-internal representation of an NLP, which has the form

min  f(x)                    dual vars:
s.t. c(x) = 0                y_c
     d(x) - s = 0            y_d
     d_L <= s <= d_U         v_L, v_U
     x_L <= x <= x_U         z_L, z_U
The Lagrangian for Ipopt is then
f(x) + y_c c(x) + y_d (d(x) - s) + v_L (d_L-s) + v_U (s-d_U) + z_L (x_L-x) + z_U (x-x_U)
and the gradient w.r.t. x is thus
f'(x) + y_c c'(x) + y_d d'(x) - z_L + z_U
The NLP that is used by most Ipopt interfaces is

min  f(x)                    duals:
s.t. g_L <= g(x) <= g_U      lambda
     x_L <= x <= x_U         z_L, z_U
The gradient of the Lagrangian would be
f'(x) + lambda g'(x) - z_L + z_U
In your code, you have the wrong sign for z_L.
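In other words, keeping the sign of the objective gradient and flipping only the z_L term should make the check come out zero. A minimal sketch for v1, using the variable names from the question (the v2 entry changes the same way; v3 has no bounds):

# gradient of the Lagrangian per the interface convention:
# f'(x) + lambda*g'(x) - z_L + z_U
grad_lag_map[m.v1] = (
    (obj_factor*2*m.v1) + m.dual[m.eq_con]*m.v2*m.v3
    - m.ipopt_zL_out[m.v1] + m.ipopt_zU_out[m.v1]
)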

Tensor contraction with Kronecker deltas in sympy

I'm trying to use sympy to do some index gymnastics for me. I'm trying to calculate the derivatives of a cost function that looks like
cost = sum_i (M_ii)^2
where M is given by a rotation
M_ij = U*_ki M0_kl U_lj   (summation over the repeated indices k, l)
I've written up a parametrization for the rotation matrix, from which I get the derivatives as products of Kronecker deltas. What I've got so far is
from sympy import *

def Uder(p, q, r, s):
    return KroneckerDelta(p, r)*KroneckerDelta(q, s) - KroneckerDelta(p, s)*KroneckerDelta(q, r)

# Matrix size
n = symbols('n')
p = symbols('p')

# Dummy summation indices
i = Dummy('i')
k = Dummy('k')
l = Dummy('l')

# Matrix elements
M0 = IndexedBase('M')
U = IndexedBase('U')

# Rotation indices
r, s = map(tensor.Idx, ['r', 's'])

# Derivative
cost_x = Sum(Sum(Sum(M0[i,i]*(Uder(k,i,r,s)*M0[k,l]*U[l,i] + U[k,i]*M0[k,l]*Uder(l,i,r,s)), (k,1,n)), (l,1,n)), (i,1,n))
print cost_x
but sympy is not evaluating the contractions for me, which should reduce to simple sums in terms of r and s (the rotation indices). Instead, what I get is
Sum(((-KroneckerDelta(_i, r)*KroneckerDelta(_k, s) + KroneckerDelta(_i, s)*KroneckerDelta(_k, r))*M[_k, _l]*U[_l, _i] + (-KroneckerDelta(_i, r)*KroneckerDelta(_l, s) + KroneckerDelta(_i, s)*KroneckerDelta(_l, r))*M[_k, _l]*U[_k, _i])*M[_i, _i], (_k, 1, n), (_l, 1, n), (_i, 1, n))
I'm using the latest git snapshot 4633fd5713c434c3286e3412a2399bd40fbd9569 of sympy.
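For reference, a minimal sketch of the contraction I would expect on a single sum, assuming sympy's delta summation is triggered by calling .doit() (an illustration with a standalone indexed symbol x, not the full cost_x):

from sympy import KroneckerDelta, Sum, IndexedBase, symbols

j, r, n = symbols('j r n', integer=True)
x = IndexedBase('x')

# a single delta against x[j] should collapse to x[r],
# wrapped in a Piecewise guard on 1 <= r <= n
expr = Sum(KroneckerDelta(j, r)*x[j], (j, 1, n))
print expr.doit()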

User defined SVM kernel with scikit-learn

I ran into a problem when defining a kernel by myself in scikit-learn.
I defined the Gaussian kernel myself and was able to fit the SVM, but not to use it to make a prediction.
More precisely, I have the following code:
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.utils import shuffle
import scipy.sparse as sparse
import numpy as np

digits = load_digits(2)
X, y = shuffle(digits.data, digits.target)
gamma = 1.0

X_train, X_test = X[:100, :], X[100:, :]
y_train, y_test = y[:100], y[100:]

m1 = SVC(kernel='rbf', gamma=1)
m1.fit(X_train, y_train)
m1.predict(X_test)

def my_kernel(x, y):
    d = x - y
    c = np.dot(d, d.T)
    return np.exp(-gamma*c)

m2 = SVC(kernel=my_kernel)
m2.fit(X_train, y_train)
m2.predict(X_test)
m1 and m2 should be the same, but m2.predict(X_test) returns the error:
operands could not be broadcast together with shapes (260,64) (100,64)
I don't understand the problem.
Furthermore, if x is a single data point, m1.predict(x) gives a +1/-1 result, as expected, but m2.predict(x) gives an array of +1/-1 values.
No idea why.
The error is at the x - y line. You cannot subtract the two like that, because the first dimensions of the two arrays may not be equal. Here is how the rbf kernel is implemented in scikit-learn, taken from here (keeping only the essentials):
def row_norms(X, squared=False):
    if issparse(X):
        norms = csr_row_norms(X)
    else:
        norms = np.einsum('ij,ij->i', X, X)
    if not squared:
        np.sqrt(norms, norms)
    return norms

def euclidean_distances(X, Y=None, Y_norm_squared=None, squared=False):
    """
    Considering the rows of X (and Y=X) as vectors, compute the
    distance matrix between each pair of vectors.
    [...]
    Returns
    -------
    distances : {array, sparse matrix}, shape (n_samples_1, n_samples_2)
    """
    X, Y = check_pairwise_arrays(X, Y)
    if Y_norm_squared is not None:
        YY = check_array(Y_norm_squared)
        if YY.shape != (1, Y.shape[0]):
            raise ValueError(
                "Incompatible dimensions for Y and Y_norm_squared")
    else:
        YY = row_norms(Y, squared=True)[np.newaxis, :]
    if X is Y:  # shortcut in the common case euclidean_distances(X, X)
        XX = YY.T
    else:
        XX = row_norms(X, squared=True)[:, np.newaxis]
    distances = safe_sparse_dot(X, Y.T, dense_output=True)
    distances *= -2
    distances += XX
    distances += YY
    np.maximum(distances, 0, out=distances)
    if X is Y:
        # Ensure that distances between vectors and themselves are set to 0.0.
        # This may not be the case due to floating point rounding errors.
        distances.flat[::distances.shape[0] + 1] = 0.0
    return distances if squared else np.sqrt(distances, out=distances)

def rbf_kernel(X, Y=None, gamma=None):
    X, Y = check_pairwise_arrays(X, Y)
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    K = euclidean_distances(X, Y, squared=True)
    K *= -gamma
    np.exp(K, K)  # exponentiate K in-place
    return K
You might want to dig deeper into the code, but look at the comments for the euclidean_distances function. A naive implementation of what you're trying to achieve would be this:

def my_kernel(x, y):
    d = np.zeros((x.shape[0], y.shape[0]))
    for i, row_x in enumerate(x):
        for j, row_y in enumerate(y):
            # note the squared norm: the rbf kernel is exp(-gamma * ||u - v||^2)
            d[i, j] = np.exp(-gamma * np.linalg.norm(row_x - row_y)**2)
    return d
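For anything beyond toy data the double loop will be slow. A vectorized sketch using broadcasting (same squared-distance convention as rbf_kernel above; my_kernel_vec is just an illustrative name):

def my_kernel_vec(x, y):
    # pairwise squared euclidean distances, shape (x.shape[0], y.shape[0])
    d2 = ((x[:, np.newaxis, :] - y[np.newaxis, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)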

Solve for the positions of all six roots PYTHON

I'm using Newton's method, so I want to find the positions of all six roots of the sixth-order polynomial, basically the points where the function is zero.
I found the rough values on my graph with the code below, but I want to output the positions of all six roots. I'm thinking of using x as an array to input the values to find those positions, but I'm not sure. I'm using 1.0 for now to locate the rough values. Any suggestions from here?
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10  # note: 1**-10 evaluates to 1, not a small tolerance

x = 1.0
xlast = float("inf")
while np.abs(x - xlast) > accuracy:
    xlast = x
    x = xlast - P(xlast)/dPdx(xlast)
print(x)

p_points = []
x_points = np.linspace(0, 1, 100)
y_points = np.zeros(len(x_points))
for i in range(len(x_points)):
    y_points[i] = P(x_points[i])
    p_points.append(P(x_points))

plt.plot(x_points, y_points)
plt.savefig("roots.png")
plt.show()
The traditional way is to use deflation to factor out the already found roots. If you want to avoid manipulations of the coefficient array, then you have to divide the roots out.
Having found z[1], ..., z[k] as root approximations, form
g(x) = (x-z[1])*(x-z[2])*...*(x-z[k])
and apply Newton's method to h(x) = f(x)/g(x), with h'(x) = f'/g - f*g'/g^2. In the Newton iteration this gives
xnext = x - f(x)/( f'(x) - f(x)*g'(x)/g(x) )
Fortunately the quotient g'/g has a simple form:
g'(x)/g(x) = 1/(x-z[1]) + 1/(x-z[2]) + ... + 1/(x-z[k])
So with a slight modification of the Newton step you can avoid finding the same root over again.
This all still keeps the iteration real. To get at the complex roots, use a complex number to start the iteration.
Proof of concept: adding eps = 1e-8j to g'(x)/g(x) allows the iteration to go complex without preventing real values, since it solves the equivalent problem 0 = exp(-eps*x)*f(x)/g(x).
import numpy as np
import matplotlib.pyplot as plt

def P(x):
    return 924*x**6 - 2772*x**5 + 3150*x**4 - 1680*x**3 + 420*x**2 - 42*x + 1

def dPdx(x):
    return 5544*x**5 - 13860*x**4 + 12600*x**3 - 5040*x**2 + 840*x - 42

accuracy = 1e-10
roots = []
for k in range(6):
    x = 1.0
    xlast = float("inf")

    # plot the current deflated function f/g to see the remaining roots
    x_points = np.linspace(0.0, 1.0, 200)
    y_points = P(x_points).astype(complex)  # complex dtype: found roots may carry a tiny imaginary part
    for rt in roots:
        y_points /= (x_points - rt)
    y_points = np.array([max(-1.0, min(1.0, np.real(y))) for y in y_points])
    plt.plot(x_points, y_points, x_points, 0*y_points)
    plt.show()

    while np.abs(x - xlast) > accuracy:
        xlast = x
        corr = 1e-8j  # the eps that lets the iteration go complex
        for rt in roots:
            corr += 1/(xlast - rt)
        Px = P(xlast)
        dPx = dPdx(xlast)
        x = xlast - Px/(dPx - Px*corr)
    print(x)
    roots.append(x)
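As an independent sanity check (not part of the deflation scheme), numpy can compute all six roots directly from the coefficient array via its companion-matrix method:

# coefficients of P, highest degree first
print(np.roots([924, -2772, 3150, -1680, 420, -42, 1]))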

Integration using scipy odeint

Why do the two versions of the same problem below give different results for y_phi, while all parameters are the same and the random values are seeded with the same value in both versions?
I know Version_1 gives the correct result while Version_2 gives a false result; why is that so?
Where am I making a mistake in Version_2?
Version_1:
from numpy import *
import random
from scipy.integrate import odeint

def f(phi, t, alpha):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        cmean = cos(phi[n])
        smean = sin(phi[n])
        v_phi[n] = smean*cos(phi[n]+alpha) - cmean*sin(phi[n]+alpha)
    return v_phi

x = 5
y = 5
N = x*y
PI = pi
alpha = 0.2*PI

random.seed(1010)
phi = zeros(x*y, float)
for i in range(x*y):
    phi[i] = random.uniform(0.0, 2*PI)

t = arange(0.0, 100.0, 0.1)
y = odeint(f, phi, t, args=(alpha,))
y_phi = [y[len(t)-1, i] for i in range(N)]
print y_phi
Version_2:
def f(phi, t, alpha, cmean, smean):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        v_phi[n] = smean[n]*cos(phi[n]+alpha) - cmean[n]*sin(phi[n]+alpha)
    return v_phi

x = 5
y = 5
N = x*y
PI = pi
alpha = 0.2*PI

random.seed(1010)
phi = zeros(x*y, float)
for i in range(x*y):
    phi[i] = random.uniform(0.0, 2*PI)

t = arange(0.0, 100.0, 0.1)

cmean = []
smean = []
for l in range(x*y):
    cmean.append(cos(phi[l]))
    smean.append(sin(phi[l]))

y = odeint(f, phi, t, args=(alpha, cmean, smean,))
y_phi = [y[len(t)-1, i] for i in range(N)]
print y_phi
In the first case smean = sin(phi(t)[n]): it depends on the current value of phi. But in the second, smean = sin(phi(0)[n]) and depends only on the initial value. (And similarly for cmean.)
Btw, you could have found this issue also using http://en.wikipedia.org/wiki/Rubber_duck_debugging
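To make Version_2 behave like Version_1, cmean and smean would have to be recomputed from the current phi inside f instead of being frozen at t = 0; a minimal (vectorized) sketch:

def f(phi, t, alpha):
    # recompute the means from the current phi(t), not from phi(0)
    cmean = cos(phi)
    smean = sin(phi)
    return smean*cos(phi + alpha) - cmean*sin(phi + alpha)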