How to distinguish between priors and likelihood in PyMC3

In PyMC3 examples, priors and the likelihood are defined inside a with statement, but nothing explicitly marks them as priors or likelihood. How do I define them?
In the following example code, alpha and beta are priors and y_obs is the likelihood (as the PyMC3 examples state).
My question is: how does PyMC3's internal code find out whether a distribution is a prior or a likelihood? There should be some explicit parameter that tells the PyMC3 internals the type of a distribution (prior/likelihood).
I know y_obs is the likelihood, but I could define more variables, say y_obs1 and y_obs2. How is PyMC3 going to identify which ones are likelihoods and which are priors?
from pymc3 import Model, Normal, HalfNormal

regression_model = Model()
with regression_model:
    alpha = Normal('alpha', mu=0, sd=10)
    beta = Normal('beta', mu=0, sd=10, shape=2)
    sigma = HalfNormal('sigma', sd=1)
    mu = alpha + beta[0] * X[:, 0] + beta[1] * X[:, 1]
    y_obs = Normal('y_obs', mu=mu, sd=sigma, observed=y)

Passing an observed argument is what makes a variable a likelihood term (in your example, P[y | mu, sigma]). The other random variables (alpha, beta, and sigma), which lack an observed argument, are sampled as priors.
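You can inspect this programmatically: a PyMC3 Model collects variables built with observed=... in observed_RVs and keeps the rest in free_RVs. A minimal sketch against the model above (assuming X and y are defined; the exact entries can vary by PyMC3 version, e.g. sigma may appear under its transformed name):

print(regression_model.observed_RVs)  # likelihood terms, e.g. [y_obs]
print(regression_model.free_RVs)      # priors, e.g. [alpha, beta, sigma_log__]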


Fixed point stability error in one-dimensional dynamical systems

I am studying two different systems in Python, looking for fixed points and their stability. I managed to solve the first one completely, but applying the same method to the second one raises an error I don't know how to deal with:
TypeError: loop of ufunc does not support argument 0 of type Zero which has no callable exp method
I don't really know how to handle it: when I catch this exception I simply skip the answers, yet I am certain solutions exist, and analytically I see no reason for them not to.
from sympy import *
from numpy import *
from matplotlib import pyplot as plt

r = symbols('r', real=True)
x = symbols('x', real=True)

# first system
fx = r*x + ((x**3)/(1+x**2))  # define both left and right side of the equation
fps = solve(fx, x)
print(f"The fixed points are: {fps}")
dfx = lambdify(x, fx.diff(x))
for fp in fps:
    stable_interval = solve_univariate_inequality(dfx(fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx(fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable_interval))
    print(f"{fp} is unstable when {unstable_interval}")

# second system
fx2 = r*x + (x * E**x)
fps2 = solve(fx2, x)
print(f"The fixed points are: {fps2}")
dfx2 = lambdify(x, fx2.diff(x))
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2(fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2(fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable_interval))
    print(f"{fp} is unstable when {unstable_interval}")
I expected the method I created to be applicable to the second system fx2, but I don't understand why it doesn't carry over.
Oscar mentioned in the comments not to mix star imports: that's correct! Let's understand what you are doing:
with from sympy import * you are importing everything from sympy, like cos, sin, ...
with from numpy import * you are importing everything from numpy, like cos, sin, ... However, many names are shared with sympy, so you are effectively overriding the previous import. The result is a complete mess that will surely raise errors down the road, as your namespace now contains some names pointing to numpy and others pointing to sympy, and NumPy and SymPy do not work well together!
The best way to resolve the situation is to keep things separated, like this:
import sympy as sp
import numpy as np
Or import everything only from one module:
from sympy import *
import numpy as np
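To see how the mix bites, here is a minimal sketch of what happens when a NumPy function receives a SymPy object; this is exactly the failure mode behind the Zero error in your traceback:

import numpy as np
import sympy as sp

x = sp.symbols('x')
print(sp.exp(x))  # exp(x) -- SymPy handles symbolic arguments fine
np.exp(x)         # TypeError: loop of ufunc does not support argument 0
                  # of type Symbol which has no callable exp method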
Now, to your actual problem. With this command:
dfx2 = lambdify(x,fx2.diff(x))
# where fx2.diff(x) results in:
# r + x*exp(x) + exp(x)
lambdify created a numerical function meant to be evaluated by NumPy: note that the expression contains an exponential, which becomes NumPy's exponential. You then evaluated this function with dfx2(fp), where fp is a symbolic object (that is, a SymPy object). As mentioned before, NumPy and SymPy do not work well together.
The easiest solution is to ask lambdify to create a function that will be evaluated by SymPy:
dfx2 = lambdify(x, fx2.diff(x), "sympy")
Now, everything works as expected.
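For example, with fx2 = r*x + x*E**x as above, fx2.diff(x) is r + x*exp(x) + exp(x), and the "sympy"-backed function accepts symbolic input (a quick check, assuming the definitions above):

dfx2 = lambdify(x, fx2.diff(x), "sympy")
print(dfx2(0))  # r + 1, a symbolic expression in r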
Alternatively, don't use lambdify at all. Instead, substitute your values into the symbolic expression. For example:
dfx2 = fx2.diff(x)
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2.subs(x, fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2.subs(x, fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable_interval))
    print(f"{fp} is unstable when {unstable_interval}")

Specifying the Mean and Variance of a SciPy Distribution (Python 2.7)

I need to randomly sample from some distribution eventually, so I need one that allows me to readily change the mean and variance. I'm looking at using distributions from the scipy.stats library; however, I'm having difficulty seeing how the parameters "loc" and "scale" relate to the quantities I'm interested in. I'd like to be able to do something like:
x = numpy.linspace(0,5,1000)
y = scipy.stats.maxwell(x, mean, variance)
But loc and scale seem to be the only other arguments that function takes.
Can anyone specify the relationship those quantities must have to mean and variance, or suggest a better library to use?
Well, I don't have Python 2.7, so this answer is for Python 3.6, but it should work all the same; it is SciPy, after all.
Basically, you have to extract the scale and loc parameters from the given μ and σ. Here are two simple functions to do that, plus some sampling to prove we're getting the right values. The first printed line is what you want, the second line shows the scale and loc computed from μ and σ, and the third line is the result of sampling, which should roughly match the first. Play with the numbers and see how it goes.
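For reference, combining the moments on the MathWorld page cited in the code with SciPy's loc/scale convention, a Maxwell distribution with location loc and scale a has

mean:     μ  = loc + 2·a·sqrt(2/π)
variance: σ² = a²·(3π − 8)/π

Inverting these gives a = sqrt(π·σ²/(3π − 8)) and loc = μ − 2·a·sqrt(2/π), which is what the two helper functions below compute.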
import numpy as np
from scipy.stats import maxwell

def get_scale_from_sigma(sigma):
    """Compute scale from sigma, based upon http://mathworld.wolfram.com/MaxwellDistribution.html"""
    a2 = np.pi * sigma**2 / (3.0*np.pi - 8.0)  # note sigma**2: the variance enters here, not the standard deviation
    return np.sqrt(a2)

def get_loc_from_mu_sigma(mu, sigma):
    """Compute loc from mu and sigma, based upon http://mathworld.wolfram.com/MaxwellDistribution.html"""
    scale = get_scale_from_sigma(sigma)
    loc = mu - 2.0 * scale * np.sqrt(2.0 / np.pi)
    return loc

sigma = 1.0
mu = 2.0 * get_scale_from_sigma(sigma) * np.sqrt(2.0 / np.pi)  # + 3.0 as a shift, for example
print(mu, sigma)

scale = get_scale_from_sigma(sigma)
loc = get_loc_from_mu_sigma(mu, sigma)
print(scale, loc)

q = maxwell.rvs(size=10000, scale=scale, loc=loc)
print(np.mean(q), np.std(q))

How to sample independently with pymc3

I am working with a simple bivariate normal model with a somewhat unconventional prior. The main issue I have is that my posteriors are inconsistent from one run to the next, which I'm guessing is related to an issue of high dependence between consecutive samples. Here are my specific questions.
What is the best way to get N independent samples? At the moment, I've been calling sample() to get a big chain (e.g. length 10,000) and then taking every 100th sample starting at 1,000. But looking now at an autocorrelation profile of one of the parameters, it looks like I need to take at least every 500th sample! (I could also use mutual information to get a better idea of dependence between lags.)
I've been following the fitting procedure described in the stochastic volatility example in the pymc3 tutorial. In particular I first find the MAP, then use it to generate a NUTS() object, then take a short sample, then use that to generate another NUTS() object, using gamma=0.25 (???), then finally get my big sample. I have no idea whether this is appropriate or whether I need the gamma=0.25.
Also, in that same example, there are testvals for the Exponential distribution. I don't know if I need these. (What is wrong with the default use of the mean?)
Here is the actual model I'm using.
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic

data = np.random.randn(3000, 2) / 300  # I have actual data!

with pymc.Model():
    tau = Gamma('tau', alpha=2, beta=1 / 20000)
    sigma = Deterministic('sigma', 1 / th.sqrt(tau))
    corr = Uniform('corr', lower=0, upper=1)
    alpha_sig = Deterministic('alpha_sig', sigma / 50)
    alpha_post = Normal('alpha_post', mu=0, sd=alpha_sig)
    alpha_pre = Bounded(
        'alpha_pre', Normal, alpha_post, np.Inf, mu=0, sd=alpha_sig)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    MvNormal(
        'data', mu=th.stack([alpha_post, alpha_pre]),
        tau=tau * corr_inv, observed=data)
    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1], gamma=0.25)
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
I'm not sure what you're doing with the complex prior structure you have set up, but I think there is something wrong there.
I simplified the model to:
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic

data = np.random.randn(3000, 2)  # I have actual data!

with pymc.Model():
    corr = Uniform('corr', lower=0, upper=1)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    mu = Normal('mu', mu=0, sd=1, shape=2)
    MvNormal('data',
             mu=mu,
             tau=corr_inv,
             observed=data)
    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1])
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
Which has great convergence. I think you can also just drop the gamma parameter.
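As for getting roughly independent draws: rather than eyeballing the autocorrelation profile, you can let pymc3 estimate the effective sample size and thin accordingly. A minimal sketch (effective_n ships with pymc3's diagnostics in many versions; newer releases expose the equivalent statistic through arviz):

ess = pymc.effective_n(trace2)  # effective number of independent draws per variable
print(ess)
thinned = trace2[1000::500]     # drop 1000 burn-in draws, then keep every 500th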

Using the pymc3 likelihood/posterior outside of pymc3: how?

For comparison purposes, I want to utilize the posterior density function outside of PyMC3.
For my research project, I want to find out how well PyMC3 is performing compared to my own custom made code. As such, I need to compare it to our own in-house samplers and likelihood functions.
I think I figured out how to call the internal PyMC3 posterior, but it feels very awkward, and I want to know if there is a better way. Right now I am hand-transforming variables, whereas I should just be able to pass PyMC3 a parameter dictionary and get the posterior density. Is this possible in a straightforward manner?
Thanks a lot!
Demo code:
import numpy as np
import pymc3 as pm
import scipy.stats as st

# Simple data, with sigma = 4. We want to estimate sigma
sigma_inject = 4.0
data = np.random.randn(10) * sigma_inject

# Prior interval for sigma
a, b = 0.0, 20.0

# Build PyMC3 model
with pm.Model() as model:
    sigma = pm.Uniform('sigma', a, b)  # Prior uniform between 0.0 and 20.0
    likelihood = pm.Normal('data', 0.0, sd=sigma, observed=data)

# Write my own likelihood
def logpost_self(sig, data):
    loglik = np.sum(st.norm(loc=0.0, scale=sig).logpdf(data))  # Gaussian
    logpr = np.log(1.0 / (b - a))  # Uniform prior
    return loglik + logpr

# Utilize the PyMC3 likelihood (have to hand-transform parameters)
def logpost_pymc(sig, model):
    sigma_interval = np.log((sig - a) / (b - sig))  # Parameter transformation
    ldrdx = np.log(1.0/(sig - a) + 1.0/(b - sig))  # Jacobian
    return model.logp({'sigma_interval': sigma_interval}) + ldrdx

print("Own posterior: {0}".format(logpost_self(1.0, data)))
print("PyMC3 posterior: {0}".format(logpost_pymc(1.0, model)))
It's been over 5 years, but I figured this deserves an answer.
Firstly, regarding the transformations: you need to decide in the pymc3 model definition whether you want these parameters transformed. Here, sigma was being transformed with an interval transform to avoid hard boundaries. If you are interested in accessing the posterior as a function of sigma itself, set transform=None. If you do transform, then 'sigma' will instead be accessible as one of the deterministic parameters of the model.
Regarding accessing the posterior, there is a great description here. With the example given above, the code becomes:
import numpy as np
import pymc3 as pm
import theano as th
import scipy.stats as st

# Simple data, with sigma = 4. We want to estimate sigma
sigma_inject = 4.0
data = np.random.randn(10) * sigma_inject

# Prior interval for sigma
a, b = 0.1, 20.0

# Build PyMC3 model
with pm.Model() as model:
    sigma = pm.Uniform('sigma', a, b, transform=None)  # Prior uniform between 0.1 and 20.0
    likelihood = pm.Normal('data', mu=0.0, sigma=sigma, observed=data)

# Write my own likelihood
def logpost_self(sig, data):
    loglik = np.sum(st.norm(loc=0.0, scale=sig).logpdf(data))  # Gaussian
    logpr = np.log(1.0 / (b - a))  # Uniform prior
    return loglik + logpr

with model:
    # Compile the model posterior into a theano function
    f = th.function(model.vars, [model.logpt] + model.deterministics)

def logpost_pymc3(params):
    dct = model.bijection.rmap(params)
    args = (dct[k.name] for k in model.vars)
    results = f(*args)
    return tuple(results)

print("Own posterior: {0}".format(logpost_self(1.0, data)))
print("PyMC3 posterior: {0}".format(logpost_pymc3([1.0])))
Note that if you remove the 'transform=None' part from the sigma prior, then the actual value of sigma becomes part of the tuple that is returned by the logpost_pymc3 function. It's now a deterministic of the model.

Fitting a Gaussian, getting a straight line. Python 2.7

As my title suggests, I'm trying to fit a Gaussian to some data and I'm just getting a straight line. I've been looking at these other discussions, Gaussian fit for Python and Fitting a gaussian to a curve in Python, which seem to suggest basically the same thing. I can make the code in those discussions work fine for the data they provide, but it won't work for my data.
My code looks like this:
import numpy as np
import pylab as plb
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy import asarray as ar, exp

# y is the data array given below
y = y - y[0]  # to make it go to zero on both sides
x = ar(range(len(y)))  # as an array, so that arithmetic like x*y works
max_y = max(y)
n = len(y)
mean = sum(x*y)/n
sigma = np.sqrt(sum(y*(x-mean)**2)/n)
# Someone on a previous post seemed to think this needed the sqrt.
# Tried it without as well; it made no difference.

def gaus(x, a, x0, sigma):
    return a*exp(-(x-x0)**2/(2*sigma**2))

popt, pcov = curve_fit(gaus, x, y, p0=[max_y, mean, sigma])
# It was suggested in one of the other posts I looked at to make the
# first element of p0 be the maximum value of y.
# I also tried it as 1, but that did not work either.

plt.plot(x, y, 'b:', label='data')
plt.plot(x, gaus(x, *popt), 'r:', label='fit')
plt.legend()
plt.title('Fig. 3 - Fit for Time Constant')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (V)')
plt.show()
The data I am trying to fit is as follows:
y = np.array([ 6.95301373e+12, 9.62971320e+12, 1.32501876e+13,
1.81150568e+13, 2.46111132e+13, 3.32321345e+13,
4.45978682e+13, 5.94819771e+13, 7.88394616e+13,
1.03837779e+14, 1.35888594e+14, 1.76677210e+14,
2.28196006e+14, 2.92781632e+14, 3.73133045e+14,
4.72340762e+14, 5.93892782e+14, 7.41632194e+14,
9.19750269e+14, 1.13278296e+15, 1.38551838e+15,
1.68291212e+15, 2.02996957e+15, 2.43161742e+15,
2.89259207e+15, 3.41725793e+15, 4.00937676e+15,
4.67187762e+15, 5.40667931e+15, 6.21440313e+15,
7.09421973e+15, 8.04366842e+15, 9.05855930e+15,
1.01328502e+16, 1.12585509e+16, 1.24257598e+16,
1.36226443e+16, 1.48356404e+16, 1.60496345e+16,
1.72482199e+16, 1.84140400e+16, 1.95291969e+16,
2.05757166e+16, 2.15360187e+16, 2.23933053e+16,
2.31320228e+16, 2.37385276e+16, 2.42009864e+16,
2.45114362e+16, 2.46427484e+16, 2.45114362e+16,
2.42009864e+16, 2.37385276e+16, 2.31320228e+16,
2.23933053e+16, 2.15360187e+16, 2.05757166e+16,
1.95291969e+16, 1.84140400e+16, 1.72482199e+16,
1.60496345e+16, 1.48356404e+16, 1.36226443e+16,
1.24257598e+16, 1.12585509e+16, 1.01328502e+16,
9.05855930e+15, 8.04366842e+15, 7.09421973e+15,
6.21440313e+15, 5.40667931e+15, 4.67187762e+15,
4.00937676e+15, 3.41725793e+15, 2.89259207e+15,
2.43161742e+15, 2.02996957e+15, 1.68291212e+15,
1.38551838e+15, 1.13278296e+15, 9.19750269e+14,
7.41632194e+14, 5.93892782e+14, 4.72340762e+14,
3.73133045e+14, 2.92781632e+14, 2.28196006e+14,
1.76677210e+14, 1.35888594e+14, 1.03837779e+14,
7.88394616e+13, 5.94819771e+13, 4.45978682e+13,
3.32321345e+13, 2.46111132e+13, 1.81150568e+13,
1.32501876e+13, 9.62971320e+12, 6.95301373e+12,
4.98705540e+12])
I would show you what it looks like, but apparently I don't have enough reputation points...
Anyone got any idea why it's not fitting properly?
Thanks for your help :)
The importance of the initial guess, p0 (an optional argument of curve_fit), cannot be stressed enough.
Notice that the docstring mentions that
[p0] If None, then the initial values will all be 1
So if you do not supply it, it will use an initial guess of 1 for all parameters you're trying to optimize for.
The choice of p0 affects the speed at which the underlying algorithm changes the guess vector p0 (ref. the documentation of least_squares).
When you look at the data that you have, you'll notice that the maximum and the mean, mu_0, of the Gaussian-like dataset y are about 2.4e16 and 49, respectively. With the peak value so large, the algorithm would need to make drastic changes to its initial guess to reach that large value.
When you supply a good initial guess to the curve fitting algorithm, convergence is more likely to occur.
Using your data, you can supply a good initial guess for the peak value, the mean, and sigma by writing them like this:
y = np.array([...])  # starting from the original dataset
x = np.arange(len(y))
peak_value = y.max()
mean = x[y.argmax()]  # inspection of the data shows the peak is close to the center of the x-interval
sigma = mean - np.where(y > peak_value * np.exp(-.5))[0][0]  # at x0 ± sigma, the Gaussian model evaluates to a*exp(-.5)
popt, pcov = curve_fit(gaus, x, y, p0=[peak_value, mean, sigma])
print(popt)  # prints: [ 2.44402560e+16   4.90000000e+01   1.20588976e+01]
Note that in your code you compute the mean as sum(x*y)/n, which is strange: it modulates the Gaussian with a polynomial of degree 1 (it multiplies the Gaussian by a monotonically increasing line of constant slope) before taking the mean, which offsets the estimated mean of y (in this case to the right), and it divides by n rather than by the total weight sum(y). A similar remark can be made for your calculation of sigma.
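For comparison, a weighted estimate that treats y as weights would look like this (a minimal sketch, not part of the original post):

mean = np.sum(x * y) / np.sum(y)  # weighted mean, rather than sum(x*y)/n
sigma = np.sqrt(np.sum(y * (x - mean)**2) / np.sum(y))  # weighted standard deviation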
Final remark: the histogram of y will not resemble a Gaussian, as y is already a Gaussian. The histogram will merely bin (count) values into different categories (answering the question "how many datapoints in y reach a value between [a, b]?").