How to insert for loop inside pm.Deterministic() to create an array in pymc3 model? - pymc3

with pm.Model() as model:
    w = pm.Uniform('w', 0, 1)
    c = pm.Uniform('c', 0, 5)
    b = 0.5
    s = pm.Deterministic('s', pm.math.exp(-c * (w * d1 + (1-w) * d2)))
    s_A = np.zeros((8))
    s_A = pm.Deterministic('s_A', for i in range(8) : s_A[i] = pm.math.sum(s[i][j] for j in (0,1,2,4)))
    s_B = np.zeros((8))
    s_B = pm.Deterministic('s_B', for i in range(8) : s_B[i] = pm.math.sum(s[i][j] for j in (3,5,6,7)))
    r = pm.Deterministic('r', b*s_A/(b*s_A + (1-b)*s_B))
    y = pm.Binomial('y', n=320, p=r, observed=y_observed)
    trace = pm.sample(5000)
az.plot_posterior(trace, hdi_prob=0.95)
I want to create a pymc3 model with the above parameters. d1 and d2 are both 8*8 matrices. The code above raises a syntax error when I try to assign values to the s_A and s_B arrays. Can someone tell me how to create variables like s_A and s_B inside the pymc3 model?
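One way to avoid the Python loop (and the syntax error) is to slice the tensor and sum along an axis, since s is a Theano tensor that supports integer-list indexing. Below is a minimal sketch; the d1, d2 and y_observed values are placeholders I made up so the snippet is self-contained, so substitute your own data.
import numpy as np
import pymc3 as pm
# Placeholder data so the sketch runs; replace with your own d1, d2, y_observed.
d1 = np.random.rand(8, 8)
d2 = np.random.rand(8, 8)
y_observed = np.random.binomial(320, 0.5, size=8)
with pm.Model() as model:
    w = pm.Uniform('w', 0, 1)
    c = pm.Uniform('c', 0, 5)
    b = 0.5
    s = pm.Deterministic('s', pm.math.exp(-c * (w * d1 + (1 - w) * d2)))
    # Slice the columns of interest and sum along axis 1 instead of looping.
    s_A = pm.Deterministic('s_A', s[:, [0, 1, 2, 4]].sum(axis=1))
    s_B = pm.Deterministic('s_B', s[:, [3, 5, 6, 7]].sum(axis=1))
    r = pm.Deterministic('r', b * s_A / (b * s_A + (1 - b) * s_B))
    y = pm.Binomial('y', n=320, p=r, observed=y_observed)
    trace = pm.sample(1000)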

Related

Why is this ellipse drawing program so very slow?

I have a program to draw a grid of ellipses with a uniform phase distribution. However, it is very slow.
I'd like my code to be faster, so that I can use, for example, N = 150, M = 150.
How can I speed up this code?
N = 10;
M = 10;
y = 1;
x = 1;
a = 1;
b = 2;
for k = 1:N
    for m = 1:N
        w = rand(1,1);
        for l = 1:N
            for s = 1:N
                if(((l-x)*cos(w*pi)+(s-y)*sin(w*pi)).^2/a^2 + (-(l-x)*sin(w*pi) + (s-y)*cos(w*pi)).^2/b.^2 <= 1)
                    f(l,s) = 1*(cos(0.001)+i*sin(0.001));
                end
            end
        end
        y = y+4;
    end
    y = 1;
    x = x+5;
end
image(arg(f),'CDataMapping','scaled');
This is what the code produces (output image omitted).
Updated:
N = 10;
M = 10;
y = 1;
x = 1;
a = 1;
b = 2;
for x = 1:5:N
    for y = 1:4:N
        w = rand(1);
        for l = 1:N
            for s = 1:N
                if(((l-x).*cos(w.*pi)+(s-y).*sin(w.*pi)).^2/a.^2 + (-(l-x).*sin(w.*pi) + (s-y).*cos(w.*pi)).^2/b.^2 <= 1)
                    f(l,s) = cos(0.001)+i.*sin(0.001);
                end
            end
        end
    end
    y = 1;
end
image(arg(f),'CDataMapping','scaled');
There are many things you can do to speed up the computation. An important one is to replace loops with vectorized code. Octave works much faster when doing many computations at once, as opposed to one at a time.
For example, instead of
for l = 1:N
    for s = 1:N
        if(((l-x).*cos(w.*pi)+(s-y).*sin(w.*pi)).^2/a.^2 + (-(l-x).*sin(w.*pi) + (s-y).*cos(w.*pi)).^2/b.^2 <= 1)
            f(l,s) = cos(0.001)+i.*sin(0.001);
        end
    end
end
one can write
l = 1:N;
s = (1:N).';
index = ((l-x).*cos(w.*pi)+(s-y).*sin(w.*pi)).^2/a.^2 + (-(l-x).*sin(w.*pi) + (s-y).*cos(w.*pi)).^2/b.^2 <= 1;
f(index) = cos(0.001)+i.*sin(0.001);
However, here we still do too much work because we compute index at locations that we know will be outside the extent of the ellipse. Ideally we'd find a smaller region around each point (x,y) that we know the ellipse fits into.
Another important thing to do is preallocate the array. f grows within the loop iterations. Instead, one should create f with its final size before the loop starts.
There are also many redundant computations being made. For example, w.*pi is computed multiple times, as are its cos and sin. You also assign cos(0.001)+i.*sin(0.001) to output pixels over and over again; this could be a constant computed once.
The following code runs in MATLAB in a tiny fraction of a second (though Octave will be a lot slower). I've also separated N and M properly (so the output is not always square) and made the step sizes variables for improved readability. I'm assigning 1 to the ellipse elements; you can replace that with your constant by multiplying: f = f * (cos(0.001)+i*sin(0.001)).
N = 150;
M = 200;
a = 5;
b = 10;
x_step = 25;
y_step = 25;
f = zeros(N,M);
for x = x_step/2:x_step:M
    for y = y_step/2:y_step:N
        phi = rand(1)*pi;
        cosphi = cos(phi);
        sinphi = sin(phi);
        l = (1:M)-x;
        s = (1:N).'-y;
        index = (l*cosphi+s*sinphi).^2/a.^2 + (-l*sinphi + s*cosphi).^2/b.^2 <= 1;
        f(index) = 1;
    end
end
I'm not 100% sure what you're trying to do. See the code below and let me know. It took me about 30 s to run the 150 x 150 case; not sure if that's fast enough.
[h,k] = meshgrid(0:150, 0:150);
a = 0.25;
b = 0.5;
phi = reshape( linspace( 0 , 2*pi , numel(h) ), size(h));
theta = linspace(0,2*pi,50)';
x = a*cos(theta);
y = b*sin(theta);
h = h(:);
k = k(:);
phi = phi(:);
ellipseX = arrayfun(@(H,K,F) x*cos(F)-y*sin(F) + H , h,k,phi, 'uni', 0);
ellipseY = arrayfun(@(H,K,F) x*sin(F)+y*cos(F) + K , h,k,phi, 'uni', 0);
ellipse = [ellipseX, ellipseY, repmat({'r'}, size(ellipseX))]';
fill(ellipse{:})

Computing tensor of Inertia in 2D

I'm researching how to find the inertia for a 2D shape. The contour of this shape is meshed with several points, the x and y coordinate of each point is already known.
I know the expression of Ixx, Iyy and Ixy but the body has no mass. How do I proceed?
For whatever shape you have, you need to split it into triangles and handle each triangle separately. Then, in the end, combine the results using the following rules:
Overall
% Combined total area of all triangles
total_area = SUM( area(i), i=1:n )
total_mass = SUM( mass(i), i=1:n )
% Combined centroid (center of mass) coordinates
combined_centroid_x = SUM( mass(i)*centroid_x(i), i=1:n)/total_mass
combined_centroid_y = SUM( mass(i)*centroid_y(i), i=1:n)/total_mass
% Each distance to triangle (squared)
centroid_distance_sq(i) = centroid_x(i)*centroid_x(i)+centroid_y(i)*centroid_y(i)
% Combined mass moment of inertia
combined_mmoi = SUM(mmoi(i)+mass(i)*centroid_distance_sq(i), i=1:n)
Now for each triangle.
Consider the three corner vertices with vector coordinates, points A, B and C
a=[ax,ay]
b=[bx,by]
c=[cx,cy]
and the following dot and cross product (scalar) combinations
a·a = ax*ax+ay*ay
b·b = bx*bx+by*by
c·c = cx*cx+cy*cy
a·b = ax*bx+ay*by
b·c = bx*cx+by*cy
c·a = cx*ax+cy*ay
a×b = ax*by-ay*bx
b×c = bx*cy-by*cx
c×a = cx*ay-cy*ax
The properties of the triangle are (with t(i) the thickness and rho the mass density)
area(i) = 1/2*ABS( a×b + b×c + c×a )
mass(i) = rho*t(i)*area(i)
centroid_x(i) = 1/3*(ax + bx + cx)
centroid_y(i) = 1/3*(ay + by + cy)
mmoi(i) = 1/6*mass(i)*( a·a + b·b + c·c + a·b + b·c + c·a )
By component the above are
area(i) = 1/2*ABS( ax*(by-cy)+ay*(cx-bx)+bx*cy-by*cx)
mmoi(i) = mass(i)/6*(ax^2+ax*(bx+cx)+bx^2+bx*cx+cx^2+ay^2+ay*(by+cy)+by^2+by*cy+cy^2)
Appendix
A little theory here. The area of each triangle is found using
Area = 1/2 * || (b-a) × (c-b) ||
where × is a vector cross product, and || .. || is vector norm (length function).
The triangle is parametrized by two variables t and s such that the double integral A = INT(INT(1,dx),dy) gives the total area
% position r(s,t) = [x,y]
[x,y] = [ax,ay] + t*[bx-ax, by-ay] + t*s*[cx-bx,cy-by]
% gradient directions along s and t
(dr/dt) = [bx-ax,by-ay] + s*[cx-bx,cy-by]
(dr/ds) = t*[cx-bx,cy-by]
% Integration area element
dA = || (dr/ds)×(dr/dt) || = (2*A*t)*ds*dt
%
% where A = 1/2*||(b-a)×(c-b)||
% Check that the integral returns the area
Area = INT( INT( 2*A*t,s=0..1), t=0..1) = 2*A*(1/2) = A
% Mass moment of inertia components
          / / /    | y^2+z^2   -x*y      -x*z    |
I = 2*m* | | |  t* | -x*y      x^2+z^2   -y*z    | dz ds dt
          / / /    | -x*z      -y*z      x^2+y^2 |
% where [x,y] are defined from the parametrization
I want to slightly correct John Alexiou's excellent answer:
The triangle mmoi algorithm expects the corners to be defined relative to the triangle's center of mass (centroid), so subtract the centroid from the corners before calculating the mmoi.
The shape mmoi algorithm expects the centroid of each individual triangle to be defined relative to the shape's center of mass (the combined centroid), so subtract the combined centroid from each triangle centroid before calculating the mmoi.
So the resulting code would look like this:
public static float CalculateMMOI(Triangle[] triangles, float thickness, float density)
{
    float[] mass = new float[triangles.Length];
    float[] mmoi = new float[triangles.Length];
    Vector2[] centroid = new Vector2[triangles.Length];

    float combinedMass = 0;
    float combinedMMOI = 0;
    Vector2 combinedCentroid = new Vector2(0, 0);

    for (var i = 0; i < triangles.Length; ++i)
    {
        mass[i] = triangles[i].CalculateMass(thickness, density);
        mmoi[i] = triangles[i].CalculateMMOI(mass[i]);
        centroid[i] = triangles[i].CalculateCentroid();
        combinedMass += mass[i];
        combinedCentroid += mass[i] * centroid[i];
    }
    combinedCentroid /= combinedMass;

    for (var i = 0; i < triangles.Length; ++i)
    {
        combinedMMOI += mmoi[i] + mass[i] * (centroid[i] - combinedCentroid).LengthSquared();
    }
    return combinedMMOI;
}

public struct Triangle
{
    public Vector2 A, B, C;

    public float CalculateMass(float thickness, float density)
    {
        var area = CalculateArea();
        return area * thickness * density;
    }
    public float CalculateArea()
    {
        return 0.5f * Math.Abs(Vector2.Cross(A - B, A - C));
    }
    public float CalculateMMOI(float mass)
    {
        var centroid = CalculateCentroid();
        var a = A - centroid;
        var b = B - centroid;
        var c = C - centroid;
        var aa = Vector2.Dot(a, a);
        var bb = Vector2.Dot(b, b);
        var cc = Vector2.Dot(c, c);
        var ab = Vector2.Dot(a, b);
        var bc = Vector2.Dot(b, c);
        var ca = Vector2.Dot(c, a);
        return (aa + bb + cc + ab + bc + ca) * mass / 6f;
    }
    public Vector2 CalculateCentroid()
    {
        return (A + B + C) / 3f;
    }
}
public struct Vector2
{
    public float X, Y;

    // Constructor and arithmetic operators added so the code above compiles;
    // they match the usage in CalculateMMOI.
    public Vector2(float x, float y) { X = x; Y = y; }

    public float LengthSquared()
    {
        return X * X + Y * Y;
    }
    public static float Dot(Vector2 a, Vector2 b)
    {
        return a.X * b.X + a.Y * b.Y;
    }
    public static float Cross(Vector2 a, Vector2 b)
    {
        return a.X * b.Y - a.Y * b.X;
    }
    public static Vector2 operator +(Vector2 a, Vector2 b) { return new Vector2(a.X + b.X, a.Y + b.Y); }
    public static Vector2 operator -(Vector2 a, Vector2 b) { return new Vector2(a.X - b.X, a.Y - b.Y); }
    public static Vector2 operator *(float s, Vector2 v) { return new Vector2(s * v.X, s * v.Y); }
    public static Vector2 operator /(Vector2 v, float s) { return new Vector2(v.X / s, v.Y / s); }
}
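As a quick numerical cross-check of the corrected recipe, here is a small Python script I have added (an illustration under the assumption of unit thickness and density, not code from either answer): a unit square split into two triangles should reproduce the textbook value I = m*(w^2 + h^2)/12 = 1/6 about its centroid.
def dot(p, q):
    return p[0]*q[0] + p[1]*q[1]

def cross(p, q):
    return p[0]*q[1] - p[1]*q[0]

def sub(p, q):
    return (p[0]-q[0], p[1]-q[1])

def triangle_props(A, B, C, thickness=1.0, rho=1.0):
    # Per-triangle area, mass, centroid and mmoi about the triangle's own centroid.
    area = 0.5 * abs(cross(sub(B, A), sub(C, B)))
    mass = rho * thickness * area
    centroid = ((A[0]+B[0]+C[0])/3.0, (A[1]+B[1]+C[1])/3.0)
    a, b, c = sub(A, centroid), sub(B, centroid), sub(C, centroid)
    mmoi = mass/6.0 * (dot(a, a) + dot(b, b) + dot(c, c)
                       + dot(a, b) + dot(b, c) + dot(c, a))
    return mass, centroid, mmoi

# A 1x1 square split along the diagonal into two triangles.
triangles = [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
props = [triangle_props(*t) for t in triangles]
total_mass = sum(m for m, _, _ in props)
cx = sum(m*c[0] for m, c, _ in props) / total_mass
cy = sum(m*c[1] for m, c, _ in props) / total_mass
# Parallel-axis combination about the combined centroid.
total_mmoi = sum(i + m*((c[0]-cx)**2 + (c[1]-cy)**2) for m, c, i in props)
print(total_mmoi)  # ~0.1667, i.e. 1/6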

Python implementation of Gradient Descent Algorithm isn't working

I am trying to implement a gradient descent algorithm for simple linear regression. For some reason it doesn't seem to be working.
from __future__ import division
import random

def error(x_i, z_i, theta0, theta1):
    return z_i - theta0 - theta1 * x_i

def squared_error(x_i, z_i, theta0, theta1):
    return error(x_i, z_i, theta0, theta1)**2

def mse_fn(x, z, theta0, theta1):
    m = 2 * len(x)
    return sum(squared_error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / m

def mse_gradient(x, z, theta0, theta1):
    m = 2 * len(x)
    grad_0 = sum(error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / m
    grad_1 = sum(error(x_i, z_i, theta0, theta1) * x_i for x_i, z_i in zip(x, z)) / m
    return grad_0, grad_1

def minimize_batch(x, z, mse_fn, mse_gradient_fn, theta0, theta1, tolerance=0.000001):
    step_sizes = 0.01
    theta0 = theta0
    theta1 = theta1
    value = mse_fn(x, z, theta0, theta1)
    while True:
        grad_0, grad_1 = mse_gradient(x, z, theta0, theta1)
        next_theta0 = theta0 - step_sizes * grad_0
        next_theta1 = theta1 - step_sizes * grad_1
        next_value = mse_fn(x, z, next_theta0, theta1)
        if abs(value - next_value) < tolerance:
            return theta0, theta1
        else:
            theta0, theta1, value = next_theta0, next_theta1, next_value

#The data
x = [i + 1 for i in range(40)]
y = [random.randrange(1,30) for i in range(40)]
z = [2*x_i + y_i + (y_i/7) for x_i, y_i in zip(x, y)]

theta0, theta1 = [random.randint(-10,10) for i in range(2)]
q = minimize_batch(x, z, mse_fn, mse_gradient, theta0, theta1, tolerance=0.000001)
When I run it I get the following error:
    return error(x_i,z_i,theta0,theta1)**2
OverflowError: (34, 'Result too large')
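A likely cause of the overflow is that the iterates diverge: as written, mse_gradient drops the minus sign that comes from differentiating (z_i - theta0 - theta1*x_i)**2, next_value is evaluated with the old theta1 instead of next_theta1, and a 0.01 step is fairly large for x values up to 40. Below is a minimal corrected sketch of those two functions, reusing error and mse_fn from above (an educated guess on my part, not a verified fix).
def mse_gradient(x, z, theta0, theta1):
    n = len(x)
    # minus sign from differentiating (z_i - theta0 - theta1*x_i)**2
    grad_0 = -sum(error(x_i, z_i, theta0, theta1) for x_i, z_i in zip(x, z)) / n
    grad_1 = -sum(error(x_i, z_i, theta0, theta1) * x_i for x_i, z_i in zip(x, z)) / n
    return grad_0, grad_1

def minimize_batch(x, z, mse_fn, mse_gradient_fn, theta0, theta1, tolerance=0.000001):
    step_size = 0.001  # smaller step so the iterates stay stable for this data scale
    value = mse_fn(x, z, theta0, theta1)
    while True:
        grad_0, grad_1 = mse_gradient_fn(x, z, theta0, theta1)
        next_theta0 = theta0 - step_size * grad_0
        next_theta1 = theta1 - step_size * grad_1
        next_value = mse_fn(x, z, next_theta0, next_theta1)  # use both updated thetas
        if abs(value - next_value) < tolerance:
            return next_theta0, next_theta1
        theta0, theta1, value = next_theta0, next_theta1, next_value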

Second order ODE integration using scipy

I am trying to integrate a second order differential equation using scipy.integrate.odeint. My equation is as follows:
m*x[i]'' + x[i]' = K/N * sum(j=0 to N) of sin(x[j]-x[i])
which I have converted into two first order ODEs as follows. In the code below, yinit is the array of the initial values x(0) and x'(0). My question is: what should the values of x(0) and x'(0) be?
x'[i] = y[i]
y'[i] = (-y[i] + K/N * sum(j=0 to N) of sin(x[j]-x[i])) / m
from numpy import *
from scipy.integrate import odeint

N = 50

def f(theta, t):
    global N
    x, y = theta
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i+1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K*s)/N)/m
    return array([y, fx])

t = linspace(0, 10, 100, endpoint=False)

# Uniformly generating random numbers
theta = random.uniform(-180, 180, N)

# Integrating function f using odeint
yinit = array([x(0), x'(0)])
y = odeint(f, yinit, t)[:,0]
print (y)
You can choose as initial condition whatever you want.
In your case, you decided to use a random initial condition for x for all the oscillators. You can use a random initial condition for 'y' as well I guess, as I did below.
There were a few errors in the above code, mostly on how to unpack x,y from theta and how to repack them at the end (see concatenate below in the corrected code). See also the concatenate for yinit.
The rest are stylistic/minor changes.
from numpy import concatenate, linspace, random, mod, zeros, sin
from scipy.integrate import odeint

Nosc = 20
assert mod(Nosc, 2) == 0

def f(theta, _):
    N = theta.size // 2  # integer division so N can be used for slicing
    x, y = theta[:N], theta[N:]
    m = 0.95
    K = 1.0
    fx = zeros(N, float)
    for i in range(N):
        s = 0.0
        for j in range(i + 1, N):
            s = s + sin(x[j] - x[i])
        fx[i] = (-y[i] + (K * s) / N) / m
    return concatenate(([y, fx]))

t = linspace(0, 10, 50, endpoint=False)

theta = random.uniform(-180, 180, Nosc)
theta2 = random.uniform(-180, 180, Nosc)  # added initial condition for the velocities of the oscillators
yinit = concatenate((theta, theta2))

res = odeint(f, yinit, t)
X = res[:, :Nosc].T
Y = res[:, Nosc:].T
To plot the time evolution of the system, you can use something like
import matplotlib.pylab as plt

fig, ax = plt.subplots()
for displacement in X:
    ax.plot(t, displacement)
ax.set_xlabel('t')
ax.set_ylabel('x')
fig.show()
What are you modelling? At first the equation looked a bit like Kuramoto oscillators, but then I noticed you also have an x[i]'' term.
Notice how, since your model has no spring term in the equation (a term proportional to x(t) on the LHS), the value of x converges to an arbitrary value, as the plot of the time evolution shows (image omitted).

Integration using scipy odeint

Why do the two versions of the same problem below give different results for y_phi, when all parameters are the same and the random values are seeded with the same value in both versions?
I know Version_1 gives the correct result while Version_2 gives a wrong result; why is that so?
Where am I making a mistake in Version_2?
Version_1:
from numpy import *
import random
from scipy.integrate import odeint

def f(phi, t, alpha):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        cmean = cos(phi[n])
        smean = sin(phi[n])
        v_phi[n] = smean*cos(phi[n]+alpha)-cmean*sin(phi[n]+alpha)
    return v_phi

x = 5
y = 5
N = x*y
PI = pi
alpha = 0.2*PI

random.seed(1010)
phi = zeros(x*y, float)
for i in range(x*y):
    phi[i] = random.uniform(0.0, 2*PI)

t = arange(0.0, 100.0, 0.1)
y = odeint(f, phi, t, args=(alpha,))
y_phi = [y[len(t)-1, i] for i in range(N)]
print y_phi
Version_2:
def f(phi, t, alpha, cmean, smean):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        v_phi[n] = smean[n]*cos(phi[n]+alpha)-cmean[n]*sin(phi[n]+alpha)
    return v_phi

x = 5
y = 5
N = x*y
PI = pi
alpha = 0.2*PI

random.seed(1010)
phi = zeros(x*y, float)
for i in range(x*y):
    phi[i] = random.uniform(0.0, 2*PI)

t = arange(0.0, 100.0, 0.1)

cmean = []
smean = []
for l in range(x*y):
    cmean.append(cos(phi[l]))
    smean.append(sin(phi[l]))

y = odeint(f, phi, t, args=(alpha, cmean, smean,))
y_phi = [y[len(t)-1, i] for i in range(N)]
print y_phi
In the first case smean = cos(phi(t)[n]): it depends on the current value of phi. In the second, smean = cos(phi(0)[n]), which depends only on the initial value. (And similarly for cmean.)
By the way, you could also have found this issue using rubber duck debugging: http://en.wikipedia.org/wiki/Rubber_duck_debugging
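To make the fix concrete, here is a minimal sketch (reusing x, y, alpha, phi and t from the set-up above) of what f in Version_2 has to do to match Version_1: recompute cmean and smean from the current phi on every call instead of passing in the values computed from the initial phi.
def f(phi, t, alpha):
    v_phi = zeros(x*y, float)
    for n in range(x*y):
        # recompute from the *current* state on every call,
        # rather than freezing the values computed from the initial phi
        cmean = cos(phi[n])
        smean = sin(phi[n])
        v_phi[n] = smean*cos(phi[n]+alpha) - cmean*sin(phi[n]+alpha)
    return v_phi

y = odeint(f, phi, t, args=(alpha,))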