Line integral doesn't compute - sympy

I'm asked to compute a line integral, set up as follows:
import sympy as sp
import sympy.vector as sv
C = sv.CoordSys3D("")
x, y, z = C.base_scalars()
i, j, k = C.base_vectors()
Ffield = sp.atan(x*y)*i + sp.acos(y*z)*j + sp.asin(x*z)*k
Gfield = x*y*z*i + sp.sqrt(x*y*z)*j
Afield = sv.gradient(Ffield.dot(Gfield))
t = sp.Symbol("t")
Tpath = sv.ParametricRegion((t, sp.exp(t), sp.ln(t)), (t, 1, 5))
Now sv.vector_integrate(Afield, Tpath) never finishes, and I'm not sure why. The integrand looks complicated, but I think this is the method I should use.
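While that's being sorted out, one hedged workaround sketch: Afield is by construction the gradient of the scalar potential phi = Ffield.dot(Gfield), so by the fundamental theorem of line integrals the result is just the potential difference between the path's endpoints, and the parametric integral can be skipped entirely:
phi = Ffield.dot(Gfield)
start = {x: 1, y: sp.exp(1), z: sp.ln(1)}  # path point at t = 1
end = {x: 5, y: sp.exp(5), z: sp.ln(5)}    # path point at t = 5
# note: acos(y*z) is complex over much of this path, so expect a complex value
result = phi.subs(end) - phi.subs(start)
print(sp.simplify(result))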

Why is this ellipse drawing program so very slow?

I have a program to draw a grid of ellipses with a uniform phase distribution. However, it is very slow.
I'd like my code to be faster, so that I can use, for example, N = 150, M = 150.
How can I speed up this code?
N = 10;
M = 10;
y = 1;
x = 1;
a = 1;
b = 2;
for k = 1:N
  for m = 1:N
    w = rand(1,1);
    for l = 1:N
      for s = 1:N
        if (((l-x)*cos(w*pi)+(s-y)*sin(w*pi)).^2/a^2 + (-(l-x)*sin(w*pi) + (s-y)*cos(w*pi)).^2/b.^2 <= 1)
          f(l,s) = 1*(cos(0.001)+i*sin(0.001));
        end
      end
    end
    y = y+4;
  end
  y = 1;
  x = x+5;
end
image(arg(f),'CDataMapping','scaled');
This is what the code produces: [figure: grid of randomly oriented ellipses]
Updated:
N = 10;
M = 10;
y = 1;
x = 1;
a = 1;
b = 2;
for x = 1:5:N
  for y = 1:4:N
    w = rand(1);
    for l = 1:N
      for s = 1:N
        if (((l-x).*cos(w.*pi)+(s-y).*sin(w.*pi)).^2/a.^2 + (-(l-x).*sin(w.*pi) + (s-y).*cos(w.*pi)).^2/b.^2 <= 1)
          f(l,s) = cos(0.001)+i.*sin(0.001);
        end
      end
    end
  end
  y = 1;
end
image(arg(f),'CDataMapping','scaled');
There are many things you can do to speed up the computation. One important one is to remove loops and replace them with vectorized code. Octave works much faster when doing many computations at once, as opposed to one at a time.
For example, instead of
for l = 1:N
  for s = 1:N
    if (((l-x).*cos(w.*pi)+(s-y).*sin(w.*pi)).^2/a.^2 + (-(l-x).*sin(w.*pi) + (s-y).*cos(w.*pi)).^2/b.^2 <= 1)
      f(l,s) = cos(0.001)+i.*sin(0.001);
    end
  end
end
one can write
l = 1:N;
s = (1:N).';
index = ((l-x).*cos(w.*pi)+(s-y).*sin(w.*pi)).^2/a.^2 + (-(l-x).*sin(w.*pi) + (s-y).*cos(w.*pi)).^2/b.^2 <= 1;
f(index) = cos(0.001)+i.*sin(0.001);
However, here we still do too much work, because we compute index at locations that we know are outside the extent of the ellipse. Ideally we'd find a smaller region around each point (x,y) that we know the ellipse fits into.
Another important thing to do is preallocate the array. f grows within the loop iterations. Instead, one should create f with its final size before the loop starts.
There are also many redundant computations. For example, w.*pi is computed multiple times, and so are its cos and sin. You also assign cos(0.001)+i.*sin(0.001) to output pixels over and over again; this could be a constant computed once.
The following code runs in MATLAB in a tiny fraction of a second (though Octave will be a lot slower). I've also separated N and M properly (so the output is not always square) and turned the step sizes into variables for readability. I'm assigning 1 to the ellipse elements; you can switch to your constant afterwards with f = f * (cos(0.001)+i*sin(0.001)).
N = 150;
M = 200;
a = 5;
b = 10;
x_step = 25;
y_step = 25;
f = zeros(N,M);
for x = x_step/2:x_step:M
  for y = y_step/2:y_step:N
    phi = rand(1)*pi;
    cosphi = cos(phi);
    sinphi = sin(phi);
    l = (1:M)-x;
    s = (1:N).'-y;
    index = (l*cosphi+s*sinphi).^2/a.^2 + (-l*sinphi + s*cosphi).^2/b.^2 <= 1;
    f(index) = 1;
  end
end
I'm not 100% sure what you're trying to do; see the code below and let me know. It took me about 30 s to run the 150 x 150 case. I'm not sure if that's fast enough.
[h,k] = meshgrid(0:150, 0:150);
a = 0.25;
b = 0.5;
phi = reshape( linspace( 0 , 2*pi , numel(h) ), size(h));
theta = linspace(0,2*pi,50)';
x = a*cos(theta);
y = b*sin(theta);
h = h(:);
k = k(:);
phi = phi(:);
ellipseX = arrayfun(@(H,K,F) x*cos(F)-y*sin(F) + H, h, k, phi, 'uni', 0);
ellipseY = arrayfun(@(H,K,F) x*sin(F)+y*cos(F) + K, h, k, phi, 'uni', 0);
ellipse = [ellipseX, ellipseY, repmat({'r'}, size(ellipseX))]';
fill(ellipse{:})

Solving Differential Equation Sympy

I haven't been able to find particular solutions to this differential equation.
from sympy import *
m = float(raw_input('Mass:\n> '))
g = 9.8
k = float(raw_input('Drag Coefficient:\n> '))
v = Function('v')
f1 = g * m
t = Symbol('t')
equation = dsolve(f1 - k * v(t) - m * Derivative(v(t), t), v(t))
print equation
For m = 1000 and k = .2 it returns
Eq(v(t), C1*exp(-0.0002*t) + 49000.0)
which is correct, but I want the equation solved with the initial condition v(0) = 0, which should give
Eq(v(t), 49000.0*(1 - exp(-0.0002*t)))
I believe SymPy is not yet able to take initial conditions into account. Although dsolve has an ics option for entering initial conditions (see the documentation), it appears to be of limited use.
Therefore, you need to apply the initial conditions manually. For example:
C1 = Symbol('C1')
C1_ic = solve(equation.rhs.subs({t:0}),C1)[0]
print equation.subs({C1:C1_ic})
Eq(v(t), 49000.0 - 49000.0*exp(-0.0002*t))
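For what it's worth, more recent SymPy releases do handle this through the ics keyword directly; a minimal sketch of the same problem (assuming a reasonably new SymPy):
from sympy import Function, Symbol, Derivative, dsolve
v = Function('v')
t = Symbol('t')
m, g, k = 1000.0, 9.8, 0.2
sol = dsolve(g*m - k*v(t) - m*Derivative(v(t), t), v(t), ics={v(0): 0})
print(sol)  # Eq(v(t), 49000.0 - 49000.0*exp(-0.0002*t))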

User defined SVM kernel with scikit-learn

I encountered a problem when defining a kernel myself in scikit-learn.
I defined the Gaussian kernel myself and was able to fit the SVM, but not to use it to make a prediction.
More precisely, I have the following code:
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.utils import shuffle
import scipy.sparse as sparse
import numpy as np
digits = load_digits(2)
X, y = shuffle(digits.data, digits.target)
gamma = 1.0
X_train, X_test = X[:100, :], X[100:, :]
y_train, y_test = y[:100], y[100:]
m1 = SVC(kernel='rbf',gamma=1)
m1.fit(X_train, y_train)
m1.predict(X_test)
def my_kernel(x, y):
    d = x - y
    c = np.dot(d, d.T)
    return np.exp(-gamma*c)
m2 = SVC(kernel=my_kernel)
m2.fit(X_train, y_train)
m2.predict(X_test)
m1 and m2 should be the same, but m2.predict(X_test) returns the error:
operands could not be broadcast together with shapes (260,64) (100,64)
I don't understand the problem.
Furthermore, if x is a single data point, m1.predict(x) gives a +1/-1 result, as expected, but m2.predict(x) gives an array of +1/-1...
No idea why.
The error is at the x - y line. You cannot subtract the two like that, because the first dimensions of the two arrays may differ: at predict time, scikit-learn calls your kernel with the test samples and the stored training samples, here with shapes (260, 64) and (100, 64), and the kernel has to return the full (n_x, n_y) Gram matrix. Here is how the rbf kernel is implemented in scikit-learn, taken from here (only keeping the essentials):
def row_norms(X, squared=False):
    if issparse(X):
        norms = csr_row_norms(X)
    else:
        norms = np.einsum('ij,ij->i', X, X)
    if not squared:
        np.sqrt(norms, norms)
    return norms
def euclidean_distances(X, Y=None, Y_norm_squared=None, squared=False):
    """
    Considering the rows of X (and Y=X) as vectors, compute the
    distance matrix between each pair of vectors.
    [...]
    Returns
    -------
    distances : {array, sparse matrix}, shape (n_samples_1, n_samples_2)
    """
    X, Y = check_pairwise_arrays(X, Y)
    if Y_norm_squared is not None:
        YY = check_array(Y_norm_squared)
        if YY.shape != (1, Y.shape[0]):
            raise ValueError(
                "Incompatible dimensions for Y and Y_norm_squared")
    else:
        YY = row_norms(Y, squared=True)[np.newaxis, :]
    if X is Y:  # shortcut in the common case euclidean_distances(X, X)
        XX = YY.T
    else:
        XX = row_norms(X, squared=True)[:, np.newaxis]
    distances = safe_sparse_dot(X, Y.T, dense_output=True)
    distances *= -2
    distances += XX
    distances += YY
    np.maximum(distances, 0, out=distances)
    if X is Y:
        # Ensure that distances between vectors and themselves are set to 0.0.
        # This may not be the case due to floating point rounding errors.
        distances.flat[::distances.shape[0] + 1] = 0.0
    return distances if squared else np.sqrt(distances, out=distances)
def rbf_kernel(X, Y=None, gamma=None):
    X, Y = check_pairwise_arrays(X, Y)
    if gamma is None:
        gamma = 1.0 / X.shape[1]
    K = euclidean_distances(X, Y, squared=True)
    K *= -gamma
    np.exp(K, K)  # exponentiate K in-place
    return K
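Incidentally, a hedged shortcut (not part of the original question): because a custom kernel callable only needs to return the (n_x, n_y) Gram matrix, you can pass scikit-learn's own pairwise rbf_kernel straight to SVC:
from sklearn.metrics.pairwise import rbf_kernel
m3 = SVC(kernel=lambda X, Y: rbf_kernel(X, Y, gamma=gamma))
m3.fit(X_train, y_train)
m3.predict(X_test)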
You might want to dig deeper into the code, but look at the comments for the euclidean_distances function. A naive implementation of what you're trying to achieve would be this:
def my_kernel(x, y):
    d = np.zeros((x.shape[0], y.shape[0]))
    for i, row_x in enumerate(x):
        for j, row_y in enumerate(y):
            # note the squared norm: rbf is exp(-gamma * ||x - y||^2)
            d[i, j] = np.exp(-gamma * np.linalg.norm(row_x - row_y) ** 2)
    return d
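If the double loop is too slow, here is a vectorized sketch of the same Gram matrix (my own naming; it reuses the ||x||^2 + ||y||^2 - 2*x.y^T expansion from the scikit-learn source above):
def my_kernel_fast(x, y):
    # squared pairwise distances via the expansion, clipped at zero
    sq = (x**2).sum(axis=1)[:, None] + (y**2).sum(axis=1)[None, :] - 2 * x.dot(y.T)
    return np.exp(-gamma * np.maximum(sq, 0))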

Second order ODE integration using scipy

I am trying to integrate a second order differential equation using scipy.integrate.odeint. My equation is
m*x_i'' + x_i' = (K/N) * sum_{j=0..N} sin(x_j - x_i)
which I have converted into two first order ODEs, as follows. In the code below, yinit is the array of the initial values x(0) and x'(0). My question is: what should the values of x(0) and x'(0) be?
x_i' = y_i
y_i' = (-y_i + (K/N) * sum_{j=0..N} sin(x_j - x_i)) / m
from numpy import *
from scipy.integrate import odeint
N = 50
def f(theta, t):
    global N
    x, y = theta
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i+1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K*s)/N)/m
    return array([y, fx])
t = linspace(0, 10, 100, endpoint=False)
# Uniformly generating random numbers
theta = random.uniform(-180, 180, N)
# Integrating function f using odeint
yinit = array([x(0), x'(0)])
y = odeint(f, yinit, t)[:,0]
print (y)
You can choose whatever initial conditions you want.
In your case, you decided to use a random initial condition for x for all the oscillators. You can use a random initial condition for y as well, I guess, as I did below.
There were a few errors in the above code, mostly in how to unpack x, y from theta and how to repack them at the end (see the concatenate calls in the corrected code below, including the one for yinit).
The rest are stylistic/minor changes.
from numpy import concatenate, linspace, random, mod, zeros, sin
from scipy.integrate import odeint
Nosc = 20
assert mod(Nosc, 2) == 0
def f(theta, _):
    N = theta.size // 2
    x, y = theta[:N], theta[N:]
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i + 1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K * s) / N) / m
    return concatenate((y, fx))
t = linspace(0, 10, 50, endpoint=False)
theta = random.uniform(-180, 180, Nosc)
theta2 = random.uniform(-180, 180, Nosc) #added initial condition for the velocities of the oscillators
yinit = concatenate((theta, theta2))
res = odeint(f, yinit, t)
X = res[:, :Nosc].T
Y = res[:, Nosc:].T
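As a side note, a vectorized sketch of the right-hand side (hedged: unlike the loops above, which only sum over j > i, this sums over all j as in the equation as originally written, so it is not a drop-in replacement):
from numpy import subtract
def f_vec(theta, _):
    N = theta.size // 2
    x, y = theta[:N], theta[N:]
    m, K = 0.95, 1.0
    # sin(x[j] - x[i]) for every pair, summed over j (axis 0)
    s = sin(subtract.outer(x, x)).sum(axis=0)
    fx = (-y + K * s / N) / m
    return concatenate((y, fx))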
To plot the time evolution of the system, you can use something like
import matplotlib.pylab as plt
fig, ax = plt.subplots()
for displacement in X:
    ax.plot(t, displacement)
ax.set_xlabel('t')
ax.set_ylabel('x')
fig.show()
What are you modelling? At first the equation looked a bit like Kuramoto oscillators, but then I noticed you also have an x_i'' term.
Notice how, because your model has no spring-like restoring term (no x(t) term on the left-hand side), the value of x converges to an arbitrary value.

Integration using scipy odeint

Why do the two versions of the same problem below give different results for y_phi, even though all parameters are the same and the random values are seeded with the same value in both versions?
I know Version_1 gives the correct result while Version_2 gives a wrong one; why is that?
Where am I making a mistake in Version_2?
Version_1:
from numpy import *
import random
from scipy.integrate import odeint
def f(phi, t, alpha):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        cmean = cos(phi[n])
        smean = sin(phi[n])
        v_phi[n] = smean*cos(phi[n]+alpha) - cmean*sin(phi[n]+alpha)
    return v_phi
x = 5
y = 5
N = x*y
PI = pi
alpha = 0.2*PI
random.seed(1010)
phi = zeros(x*y, float)
for i in range(x*y):
    phi[i] = random.uniform(0.0, 2*PI)
t = arange(0.0, 100.0, 0.1)
y = odeint(f, phi, t, args=(alpha,))
y_phi = [y[len(t)-1, i] for i in range(N)]
print y_phi
Version_2:
def f(phi, t, alpha, cmean, smean):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        v_phi[n] = smean[n]*cos(phi[n]+alpha) - cmean[n]*sin(phi[n]+alpha)
    return v_phi
x = 5
y = 5
N = x*y
PI = pi
alpha = 0.2*PI
random.seed(1010)
phi = zeros(x*y, float)
for i in range(x*y):
    phi[i] = random.uniform(0.0, 2*PI)
t = arange(0.0, 100.0, 0.1)
cmean = []
smean = []
for l in range(x*y):
    cmean.append(cos(phi[l]))
    smean.append(sin(phi[l]))
y = odeint(f, phi, t, args=(alpha, cmean, smean,))
y_phi = [y[len(t)-1, i] for i in range(N)]
print y_phi
In the first case, smean = sin(phi(t)[n]) depends on the current value of phi, but in the second, smean = sin(phi(0)[n]) depends only on the initial value (and similarly for cmean).
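A sketch of the minimal fix to Version_2 (keeping the rest of the script unchanged): drop cmean and smean from the argument list and recompute them from the current phi inside f, exactly as Version_1 does; NumPy's elementwise arithmetic also replaces the explicit loop here:
def f(phi, t, alpha):
    # these must track phi(t), not the initial phi(0)
    cmean = cos(phi)
    smean = sin(phi)
    return smean*cos(phi + alpha) - cmean*sin(phi + alpha)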
By the way, you could also have found this issue using rubber duck debugging: http://en.wikipedia.org/wiki/Rubber_duck_debugging