Generate N random numbers from a skew normal distribution using numpy (Python 2.7)

I need a function in Python that returns N random numbers drawn from a skew normal distribution, with the skew taken as a parameter.
e.g. my current use is
x = numpy.random.randn(1000)
and the ideal function would be e.g.
x = randn_skew(1000, skew=0.7)
The solution needs to conform with: Python version 2.7, numpy v1.9.
A similar answer is here: skew normal distribution in scipy. However, that generates the PDF, not the random numbers.

I start by generating the PDF curves for reference:
# imports used by the snippets below
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats

NUM_SAMPLES = 100000
SKEW_PARAMS = [-3, 0]

def skew_norm_pdf(x, e=0, w=1, a=0):
    # adapted from:
    # http://stackoverflow.com/questions/5884768/skew-normal-distribution-in-scipy
    # n.b. the skew normal pdf with location e and scale w is (2/w)*phi(t)*Phi(a*t)
    t = (x - e) / w
    return 2.0 / w * stats.norm.pdf(t) * stats.norm.cdf(a * t)

# generate the skew normal PDF for reference:
location = 0.0
scale = 1.0
x = np.linspace(-5, 5, 100)
plt.subplots(figsize=(12, 4))
for alpha_skew in SKEW_PARAMS:
    p = skew_norm_pdf(x, location, scale, alpha_skew)
    # n.b. alpha is a parameter that controls skew, but the 'skewness'
    # as measured will be different; see the wikipedia page:
    # https://en.wikipedia.org/wiki/Skew_normal_distribution
    plt.plot(x, p)
Next I found a VB implementation of sampling random numbers from the skew normal distribution and converted it to python:
# literal adaptation from:
# http://stackoverflow.com/questions/4643285/how-to-generate-random-numbers-that-follow-skew-normal-distribution-in-matlab
# original at:
# http://www.ozgrid.com/forum/showthread.php?t=108175
def rand_skew_norm(fAlpha, fLocation, fScale):
    sigma = fAlpha / np.sqrt(1.0 + fAlpha**2)
    afRN = np.random.randn(2)
    u0 = afRN[0]
    v = afRN[1]
    u1 = sigma*u0 + np.sqrt(1.0 - sigma**2) * v
    if u0 >= 0:
        return u1*fScale + fLocation
    return (-u1)*fScale + fLocation

def randn_skew(N, skew=0.0):
    return [rand_skew_norm(skew, 0, 1) for x in range(N)]
# let's check they at least visually match the PDF:
plt.subplots(figsize=(12, 4))
for alpha_skew in SKEW_PARAMS:
    p = randn_skew(NUM_SAMPLES, alpha_skew)
    sns.distplot(p)
And then wrote a quick version which (without extensive testing) appears to be correct:
def randn_skew_fast(N, alpha=0.0, loc=0.0, scale=1.0):
    sigma = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.random.randn(N)
    v = np.random.randn(N)
    u1 = (sigma*u0 + np.sqrt(1.0 - sigma**2)*v) * scale
    u1[u0 < 0] *= -1
    u1 = u1 + loc
    return u1
# let's check again
plt.subplots(figsize=(12, 4))
for alpha_skew in SKEW_PARAMS:
    p = randn_skew_fast(NUM_SAMPLES, alpha_skew)
    sns.distplot(p)
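As an extra sanity check (a sketch that was not in the original post), the sample skewness can be compared against the theoretical skewness of a skew normal with shape parameter alpha, using the formula from the Wikipedia page linked above:
# stats is scipy.stats, imported above
alpha = -3.0
delta = alpha / np.sqrt(1.0 + alpha**2)
gamma1 = (4 - np.pi) / 2 * (delta * np.sqrt(2/np.pi))**3 / (1 - 2*delta**2/np.pi)**1.5
print(gamma1, stats.skew(randn_skew_fast(NUM_SAMPLES, alpha)))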

from scipy.stats import skewnorm
a = 10
data = skewnorm.rvs(a, size=1000)
Here, a is the skewness (shape) parameter; see the documentation:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.skewnorm.html
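For example (a minimal sketch, not part of the original answer), skewnorm.rvs also accepts loc and scale arguments, so location and scale can be set directly when sampling:
from scipy.stats import skewnorm
samples = skewnorm.rvs(a=10, loc=2.0, scale=0.5, size=1000)  # shape, location, scale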

Adapted from the rsnorm function in the fGarch R package:
import numpy

def random_snorm(n, mean=0, sd=1, xi=1.5):
    def random_snorm_aux(n, xi):
        weight = xi / (xi + 1/xi)
        z = numpy.random.uniform(-weight, 1-weight, n)
        xi_ = xi**numpy.sign(z)
        random = -numpy.absolute(numpy.random.normal(0, 1, n))/xi_ * numpy.sign(z)
        m1 = 2/numpy.sqrt(2 * numpy.pi)
        mu = m1 * (xi - 1/xi)
        sigma = numpy.sqrt((1 - m1**2) * (xi**2 + 1/xi**2) + 2 * m1**2 - 1)
        return (random - mu)/sigma
    return random_snorm_aux(n, xi) * sd + mean
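A quick usage sketch (the parameter values are illustrative, not from the original answer); the helper standardizes its output, so the sample mean and standard deviation should come back close to the requested mean and sd:
samples = random_snorm(100000, mean=0.0, sd=1.0, xi=2.0)
print(numpy.mean(samples), numpy.std(samples))  # roughly 0 and 1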


Why is using dot product worsening the performance for PyMC3?

I am trying to run a simple linear regression using PyMC3. The below code is a snippet:
import numpy as np
from pymc3 import Model, sample, Normal, HalfCauchy
import pymc3 as pm

X = np.arange(500).reshape(500, 1)
y = np.random.normal(0, 5, [500, 1]) + X

with Model() as multiple_regression_model:
    beta = Normal('beta', mu=0, sd=1000, shape=2)
    sigma = HalfCauchy('sigma', 1000)
    y_hat = beta[0] + X * beta[1]
    exp = Normal('y', y_hat, sigma=sigma, observed=y)

with multiple_regression_model:
    trace = sample(1000, tune=1000)

trace['beta'].mean(axis=0)
The above code runs in about 6 seconds and gives reasonable estimates for the betas ([-0.19646408, 1.00053091]).
But when I try to use the dot product, things get really bad:
X = np.arange(500).reshape(500, 1)
y = np.random.normal(0, 5, [500, 1]) + X
X_aug_np = np.squeeze(np.dstack((np.ones((500, 1)), X)))

with Model() as multiple_regression_model:
    beta = Normal('beta', mu=0, sd=1000, shape=2)
    sigma = HalfCauchy('sigma', 1000)
    y_hat = pm.math.dot(X_aug_np, beta)
    exp = Normal('y', y_hat, sigma=sigma, observed=y)

with multiple_regression_model:
    trace = sample(1000, tune=1000)

trace['beta'].mean(axis=0)
Now the code finishes in 56 seconds and the estimates are totally off ([249.52363555, -0.0000481]).
I thought using the dot product would make things faster. Why is it behaving this way? Am I doing something wrong here?
This is a subtle shape and broadcasting bug: if you change the shape of beta to (2, 1), then it works.
To see why, I renamed the two models and tidied the code a bit:
import numpy as np
import pymc3 as pm

X = np.arange(500).reshape(500, 1)
y = np.random.normal(0, 5, [500, 1]) + X
X_aug_np = np.squeeze(np.dstack((np.ones((500, 1)), X)))

with pm.Model() as basic_model:
    beta = pm.Normal('beta', mu=0, sd=1000, shape=2)
    sigma = pm.HalfCauchy('sigma', 1000)
    y_hat = beta[0] + X * beta[1]
    exp = pm.Normal('y', y_hat, sigma=sigma, observed=y)

with pm.Model() as matmul_model:
    beta = pm.Normal('beta', mu=0, sd=1000, shape=(2, 1))
    sigma = pm.HalfCauchy('sigma', 1000)
    y_hat = pm.math.dot(X_aug_np, beta)
    exp = pm.Normal('y', y_hat, sigma=sigma, observed=y)
How would you have found that out? Since it looked like the models were the same, but they were not sampling similarly, I ran
print(matmul_model.check_test_point())
print(basic_model.check_test_point())
which computes the log probability of the variables at a sensible default. This did not match up, so I checked exp.tag.test_value.shape, and found out it was (500, 500), when I expected it to be (500, 1). Shape handling is super hard in probabilistic programming, and this happened because exp broadcasts y_hat, sigma, and y together.
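To see the broadcasting problem in plain NumPy terms (a minimal sketch, not from the original answer): with beta of shape (2,), the dot product has shape (500,), and subtracting it from y of shape (500, 1) broadcasts to (500, 500), whereas a (2, 1) beta keeps everything (500, 1):
import numpy as np
X_aug = np.squeeze(np.dstack((np.ones((500, 1)), np.arange(500).reshape(500, 1))))
y = np.zeros((500, 1))
print((X_aug.dot(np.zeros(2)) - y).shape)       # (500, 500): the silent blow-up
print((X_aug.dot(np.zeros((2, 1))) - y).shape)  # (500, 1): what we actually want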
As an added problem, I could not get matmul_model to sample on my machine without setting cores=1, chains=4.

Error in calculating gradient using Python

I use the following formula to calculate the gradient:
gradient = [f(x+h) - f(x-h)] / 2h
and I test it with a linear function, but something is wrong.
The code is here:
import numpy as np

def evla_numerical_gradient(f, x):
    gradient = np.zeros(x.shape, dtype=np.float64)
    delta_x = 0.00001
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        index = it.multi_index
        x_old = x[index]
        x[index] = x_old + delta_x
        fx_addh = f(x)
        print(fx_addh)
        x[index] = x_old - delta_x
        fx_minush = f(x)
        print(fx_minush)
        x[index] = x_old
        print((fx_addh - fx_minush) / (2 * delta_x))
        gradient[index] = (fx_addh - fx_minush) / (2. * delta_x)
        it.iternext()
    return gradient

def lin(x):
    return x

if __name__ == '__main__':
    x = np.array([0.001])
    grad = evla_numerical_gradient(lin, x)
    print(grad)
The result is here:
[ 0.00101]
[ 0.00099]
[ 0.]
[ 0.]
Why is the gradient at x 0?
The problem with your code is in the following combination of lines (I show the example of fx_addh; the case of fx_minush is similar):
fx_addh = f(x)
x[index] = x_old
You are placing the result of f(x) into fx_addh. But the problem is that, the way you have defined f(x) (it is just a handle to your lin(x)), you are returning the argument directly.
In Python, assignment operations do not copy objects; they create a binding between a target (on the left of the assignment =) and the object (on the right of the assignment =). More on this here.
To convince yourself that this is happening, you can place another print(fx_addh) after the line in which you set x[index] = x_old; you will see that it now holds x_old again, so fx_addh - fx_minush evaluates to zero.
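A minimal sketch of that binding behaviour (not from the original answer):
import numpy as np
a = np.array([1.0])
b = a            # no copy: b is just another name for the same array object
a[0] = 2.0
print(b)         # [ 2.] -- b reflects the change made through a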
To fix this you can modify your lin(x) function to return a copy of the object passed in as an argument:
import numpy as np
import copy

def evla_numerical_gradient(f, x):
    gradient = np.zeros(x.shape, dtype=np.float64)
    delta_x = 0.00001
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        index = it.multi_index
        x_old = x[index]
        x[index] = x_old + delta_x
        fx_addh = f(x)
        print(fx_addh)
        x[index] = x_old - delta_x
        fx_minush = f(x)
        print(fx_minush)
        x[index] = x_old
        print((fx_addh - fx_minush) / (2 * delta_x))
        gradient[index] = (fx_addh - fx_minush) / (2. * delta_x)
        it.iternext()
    return gradient

def lin(x):
    return copy.copy(x)

if __name__ == '__main__':
    x = np.array([0.001])
    grad = evla_numerical_gradient(lin, x)
    print(grad)
Which returns:
[ 0.00101]
[ 0.00099]
[ 1.]
[ 1.]
Indicating a gradient of 1 as you would expect.
Because fx_addh and fx_minush are pointing to the same object in memory. Change the lin function to this:
def lin(x):
    return x.copy()
result:
[ 0.00101]
[ 0.00099]
[ 1.]
[ 1.]

Numerical integration in Python

I need to reduce the running time of quad() in Python (I am integrating several thousand integrals). I found a similar question here where they suggested doing several integrations and adding the partial values, but that does not improve performance. Any thoughts? Here is a simple example:
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm
import time

funcB = lambda x: norm.pdf(x, 0, 1)

start = time.time()
good_missclasified, _ = quad(funcB, 0, 3.3333)
stop = time.time()
time_elapsed = stop - start
print('quad : ' + str(time_elapsed))

start = time.time()
num = np.linspace(0, 3.3333, 10)
Lv = []
last, lastG = 0, 0
for g in num:
    Lval, x = quad(funcB, lastG, g)
    last, lastG = last + Lval, g
    Lv.append(last)
Lv = np.array(Lv)
stop = time.time()
time_elapsed = stop - start
print('10 int : ' + str(time_elapsed))
print(good_missclasified, Lv[9])
quadpy (a project of mine) is vectorized and can integrate a function over many domains (e.g., intervals) at once. You do have to choose your own integration method though.
import numpy
import quadpy
a = 0.0
b = 1.0
n = 100
start_points = numpy.linspace(a, b, n, endpoint=False)
h = (b-a) / n
end_points = start_points + h
intervals = numpy.array([start_points, end_points])
scheme = quadpy.line_segment.gauss_kronrod(3)
vals = scheme.integrate(numpy.exp, intervals)
print(vals)
[0.10050167 0.10151173 0.10253194 0.1035624 0.10460322 0.1056545
0.10671635 0.10778886 0.10887216 0.10996634 0.11107152 0.11218781
0.11331532 0.11445416 0.11560444 0.11676628 0.1179398 0.11912512
0.12032235 0.12153161 0.12275302 0.12398671 0.12523279 0.1264914
0.12776266 0.1290467 0.13034364 0.13165362 0.13297676 0.1343132
0.13566307 0.1370265 0.13840364 0.13979462 0.14119958 0.14261866
0.144052 0.14549975 0.14696204 0.14843904 0.14993087 0.15143771
0.15295968 0.15449695 0.15604967 0.157618 0.15920208 0.16080209
0.16241818 0.16405051 0.16569924 0.16736455 0.16904659 0.17074554
0.17246156 0.17419482 0.17594551 0.17771379 0.17949985 0.18130385
0.18312598 0.18496643 0.18682537 0.188703 0.1905995 0.19251505
0.19444986 0.19640412 0.19837801 0.20037174 0.20238551 0.20441952
0.20647397 0.20854907 0.21064502 0.21276204 0.21490033 0.21706012
0.21924161 0.22144502 0.22367058 0.22591851 0.22818903 0.23048237
0.23279875 0.23513842 0.2375016 0.23988853 0.24229945 0.2447346
0.24719422 0.24967857 0.25218788 0.25472241 0.25728241 0.25986814
0.26247986 0.26511783 0.2677823 0.27047356]
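If the running partial sums from the question are needed, they can be recovered from the per-interval values (a minimal sketch, assuming vals from the quadpy example above):
partial_sums = numpy.cumsum(vals)  # running integral from a up to each interval's end point
print(partial_sums[-1])            # total integral over [a, b]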

assign on a tf.Variable tensor slice

I am trying to do the following
state[0,:] = state[0,:].assign( 0.9*prev_state + 0.1*( tf.matmul(inputs, weights) + biases ) )
for i in xrange(1,BATCH_SIZE):
    state[i,:] = state[i,:].assign( 0.9*state[i-1,:] + 0.1*( tf.matmul(inputs, weights) + biases ) )
prev_state = prev_state.assign( state[BATCH_SIZE-1,:] )
with
state = tf.Variable(tf.zeros([BATCH_SIZE, HIDDEN_1]), name='inner_state')
prev_state = tf.Variable(tf.zeros([HIDDEN_1]), name='previous_inner_state')
As a follow-up to this question: I get an error that Tensor does not have an assign method.
What is the correct way to call the assign method on a slice of a Variable tensor?
Full current code:
import tensorflow as tf
import math
import numpy as np

INPUTS = 10
HIDDEN_1 = 20
BATCH_SIZE = 3

def create_graph(inputs, state, prev_state):
    with tf.name_scope('h1'):
        weights = tf.Variable(
            tf.truncated_normal([INPUTS, HIDDEN_1],
                                stddev=1.0 / math.sqrt(float(INPUTS))),
            name='weights')
        biases = tf.Variable(tf.zeros([HIDDEN_1]), name='biases')
        updated_state = tf.scatter_update(state, [0], 0.9 * prev_state + 0.1 * (tf.matmul(inputs[0,:], weights) + biases))
        for i in xrange(1, BATCH_SIZE):
            updated_state = tf.scatter_update(
                updated_state, [i], 0.9 * updated_state[i-1, :] + 0.1 * (tf.matmul(inputs[i,:], weights) + biases))
        prev_state = prev_state.assign(updated_state[BATCH_SIZE-1, :])
        output = tf.nn.relu(updated_state)
    return output

def data_iter():
    while True:
        idxs = np.random.rand(BATCH_SIZE, INPUTS)
        yield idxs

with tf.Graph().as_default():
    inputs = tf.placeholder(tf.float32, shape=(BATCH_SIZE, INPUTS))
    state = tf.Variable(tf.zeros([BATCH_SIZE, HIDDEN_1]), name='inner_state')
    prev_state = tf.Variable(tf.zeros([HIDDEN_1]), name='previous_inner_state')
    output = create_graph(inputs, state, prev_state)

    sess = tf.Session()
    # Run the Op to initialize the variables.
    init = tf.initialize_all_variables()
    sess.run(init)

    iter_ = data_iter()
    for i in xrange(0, 2):
        print("iteration: ", i)
        input_data = iter_.next()
        out = sess.run(output, feed_dict={inputs: input_data})
TensorFlow Variable objects have limited support for updating slices, using the tf.scatter_update(), tf.scatter_add(), and tf.scatter_sub() ops. Each of these ops allows you to specify a variable, a vector of slice indices (representing indices in the 0th dimension of the variable, which indicate the contiguous slices to be mutated), and a tensor of values (representing the new values to be applied to the variable at the corresponding slice indices).
To update a single row of the variable, you can use tf.scatter_update(). For example, to update the 0th row of state, you would do:
updated_state = tf.scatter_update(
    state, [0], 0.9 * prev_state + 0.1 * (tf.matmul(inputs, weights) + biases))
To chain multiple updates, you can use the mutable updated_state tensor that is returned from tf.scatter_update():
for i in xrange(1, BATCH_SIZE):
    updated_state = tf.scatter_update(
        updated_state, [i], 0.9 * updated_state[i-1, :] + ...)

prev_state = prev_state.assign(updated_state[BATCH_SIZE-1, :])
Finally, you can evaluate the resulting updated_state.op to apply all of the updates to state:
sess.run(updated_state.op) # or `sess.run(updated_state)` to fetch the result
PS. You might find it more efficient to use tf.scan() to compute the intermediate states, and just materialize prev_state in a variable.
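A minimal sketch of that tf.scan() suggestion (not from the original answer; it assumes the weights, biases, inputs and prev_state defined in the question, and the TF 1.x API):
def step(prev, x_row):
    # x_row has shape (INPUTS,); expand to (1, INPUTS) so matmul works, then drop the extra dim
    return 0.9 * prev + 0.1 * (tf.matmul(tf.expand_dims(x_row, 0), weights)[0] + biases)

states = tf.scan(step, inputs, initializer=prev_state)      # shape (BATCH_SIZE, HIDDEN_1)
prev_update = prev_state.assign(states[BATCH_SIZE - 1, :])  # materialize only the last state
output = tf.nn.relu(states)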

I've done the same code on both MATLAB and Python, but ifft2 returns different values?

I've been trying to implement a homomorphic filter in the frequency domain in both MATLAB and Python, using OpenCV2 and NumPy. The MATLAB code gives the expected answer, but the Python code does not; the resulting image is very weird. I've tested all the variables and concluded that the only point where there is a difference is the IFFT. In MATLAB, the results can be applied normally to the exp function and return the expected filtered original image, but the values from the Python ifft are very different. I happened to see other posts with similar problems, but no satisfactory answer (perhaps I'm just bad at searching too...).
The MATLAB code
function [ img_r ] = homomorphic( img, D0, n )
    [N, M] = size(img);
    img_bk = double(img);
    img_bk = log(img_bk+1);
    img_freq = fftshift(fft2(img_bk));
    magA = uint8(10*log(1+abs(img_freq)));
    cu = M/2;
    cv = N/2;
    Hf = zeros(N,M);
    for v = 1:N
        dv = v - cv;
        for u = 1:M
            du = u - cu;
            D = sqrt(du*du + dv*dv);
            num = 1;
            if D > 0
                den = 1+((D0/D)^(2*n));
            else
                den = 0; %to replace +inf
            end
            if den ~= 0
                H = num/den;
            else
                H = 0;
            end
            img_freq(v,u) = H*img_freq(v,u);
        end
    end
    magB = uint8(10*log(1+abs(img_freq)));
    img_r = (ifft2(ifftshift(img_freq)));
    img_r = exp(img_r);
    img_r = uint8(img_r);
and the Python code (might have some bugs but overall works)
import numpy as np
import cv2

def homomorphic(img, D0, n=2):
    [N, M] = img.shape
    img_bk = np.log(1 + np.float64(img))
    img_freq = np.fft.fftshift(np.fft.fft2(img_bk))
    cu = M/2.0
    cv = N/2.0
    for v in range(N):
        dv = v - cv
        for u in range(M):
            du = u - cu
            D = np.sqrt(du*du + dv*dv)
            if D != 0:
                a = 1.0 + (D0/D)**(2*n)
                H = 1/a
            else:
                print D
                H = 0
            img_freq[v][u] = H*img_freq[v][u]
    img_r = np.abs(np.fft.ifft2(np.fft.ifftshift(img_freq)))
    eimg = np.exp(img_r)
    eimg = np.uint8(eimg)
    return eimg
I really don't get it. Why are the results so different? Does anyone have any idea?