How to sample independently with pymc3

I am working with a simple bivariate normal model with a somewhat unconventional prior. The main issue I have is that my posteriors are inconsistent from one run to the next, which I'm guessing is related to an issue of high dependence between consecutive samples. Here are my specific questions.
What is the best way to get N independent samples? At the moment, I've been calling sample() to get a big chain (e.g. length 10,000) and then taking every 100th sample starting at 1,000. But looking now at an autocorrelation profile of one of the parameters, it looks like I need to take at least every 500th sample! (I could also use mutual information to get a better idea of dependence between lags.)
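In code, the thinning I describe looks roughly like this (a minimal sketch, assuming the model is bound to a name like model and that autocorrplot is available in this pymc3 version):
with model:
    trace = pymc.sample(10000)   # one long chain, as described above
thinned = trace[1000::100]       # drop the first 1,000 draws, then keep every 100th
pymc.autocorrplot(thinned)       # see how much autocorrelation is left after thinning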
I've been following the fitting procedure described in the stochastic volatility example in the pymc3 tutorial. In particular, I first find the MAP, then use it to generate a NUTS() object, then take a short sample, then use the end of that to generate another NUTS() object with gamma=0.25 (???), and then finally draw my big sample. I have no idea whether this is appropriate or whether I need the gamma=0.25.
Also, in that same example, there are testvals for the Exponential distribution. I don't know if I need these. (What is wrong with the default use of the mean?)
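For concreteness, the kind of line I mean looks roughly like this (paraphrased from memory, so the exact numbers may differ):
nu = Exponential('nu', 1. / 10, testval=5.)   # testval only sets the starting value used for initialization, as far as I can tell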
Here is the actual model I'm using.
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic

data = np.random.randn(3000, 2) / 300  # I have actual data!

with pymc.Model():
    tau = Gamma('tau', alpha=2, beta=1 / 20000)
    sigma = Deterministic('sigma', 1 / th.sqrt(tau))
    corr = Uniform('corr', lower=0, upper=1)
    alpha_sig = Deterministic('alpha_sig', sigma / 50)
    alpha_post = Normal('alpha_post', mu=0, sd=alpha_sig)
    alpha_pre = Bounded(
        'alpha_pre', Normal, alpha_post, np.Inf, mu=0, sd=alpha_sig)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    MvNormal(
        'data', mu=th.stack([alpha_post, alpha_pre]),
        tau=tau * corr_inv, observed=data)

    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1], gamma=0.25)
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])

I'm not sure what you're doing with the complex prior structure you've set up, but I think something is wrong there.
I simplified the model to:
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic

data = np.random.randn(3000, 2)  # I have actual data!

with pymc.Model():
    corr = Uniform('corr', lower=0, upper=1)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    mu = Normal('mu', mu=0, sd=1, shape=2)
    MvNormal('data',
             mu=mu,
             tau=corr_inv,
             observed=data)

    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1])
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
This version converges nicely. I think you can also just drop the gamma parameter.
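If you want more than an eyeball check on convergence, something like this should work (a sketch; the exact helper names can vary between pymc3 versions):
pymc.traceplot(trace2)   # visual check that the chain is mixing well
pymc.summary(trace2)     # posterior means, standard deviations and intervals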

Related

Concatenating variable with parameters in pyomo

I want to concatenate the variable x and the parameter p defined as
from pyomo.environ import *
import numpy as np
model = ConcreteModel()
model.t = ContinuousSet(bounds=(0, 10))
# States
model.x = Var(model.t)
model.p = Param(initialize=2)
I tried the following (without much hope):
np.concatenate((model.x, model.p), axis=0)
but of course I get a numpy array out of it. I have been looking on the internet for at least 30 minutes and could not find anything, which is surprising.
I need this concatenation as it makes further matrix-vector operations much easier....

Point stability error in one dimension dynamical systems

I am studying two different systems in Python, looking for fixed points and their stability. I managed to solve the first one completely, but applying the same method to the second one raises an error I don't know how to deal with:
TypeError: loop of ufunc does not support argument 0 of type Zero which has no callable exp method
I don't really know how to handle it: if I catch the exception I simply skip the answers, and I am certain solutions exist; analytically I see no reason for them not to.
from sympy import *
from numpy import *
from matplotlib import pyplot as plt

r = symbols('r', real=True)
x = symbols('x', real=True)

# first system: define f(x); the fixed points solve f(x) = 0
fx = r*x + (x**3) / (1 + x**2)
fps = solve(fx, x)
print(f"The fixed points are: {fps}")
dfx = lambdify(x, fx.diff(x))
for fp in fps:
    stable_interval = solve_univariate_inequality(dfx(fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx(fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")

# second system
fx2 = r*x + x * E**x
fps2 = solve(fx2, x)
print(f"The fixed points are: {fps2}")
dfx2 = lambdify(x, fx2.diff(x))
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2(fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2(fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")
I expected the method I created for the first system to apply to the second system fx2 as well, but I don't understand why it doesn't.
Oscar mentioned in a comment not to mix star imports: that's correct! Let's look at what you are doing:
with from sympy import * you are importing everything from sympy, like cos, sin, ...
with from numpy import * you are importing everything from numpy, like cos, sin, ... However, many of these share the same names as the sympy ones, so you are effectively overwriting the previous import. The result is a complete mess that will surely raise errors down the road, as your namespace now contains some names pointing to numpy and others pointing to sympy. Numpy and Sympy don't work well together!
The best ways to resolve the situation: keep things separated, like this:
import sympy as sp
import numpy as np
Or import everything only from one module:
from sympy import *
import numpy as np
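For example, with the fully namespaced imports from the first option there is no ambiguity about which exp you are calling (a small illustration):
import sympy as sp
import numpy as np

x = sp.symbols('x', real=True)
symbolic = sp.exp(x)              # stays a SymPy expression
numeric = np.exp(np.arange(3.0))  # operates on a NumPy array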
Now, to your actual problem. With this command:
dfx2 = lambdify(x,fx2.diff(x))
# where fx2.diff(x) results in:
# r + x*exp(x) + exp(x)
lambdify created a numerical function that will be evaluated by Numpy: note that this function contains an exponential, which is a Numpy exponential. Then, you evaluated this function with dfx2(fp), where fp is a symbolic object (meaning, it is a Sympy object). As mentioned before, Numpy and Sympy do not work well together.
Easiest solution: ask lambdify to create a function that will be evaluated by Sympy:
dfx2 = lambdify(x, fx2.diff(x), "sympy")
Now, everything works as expected.
Alternatively, don't use lambdify at all. Instead, substitute the values into the symbolic expression. For example:
dfx2 = fx2.diff(x)
for fp in fps2:
    stable_interval = solve_univariate_inequality(dfx2.subs(x, fp) < 0, r, domain=Reals, relational=False)
    unstable_interval = solve_univariate_inequality(dfx2.subs(x, fp) > 0, r, domain=Reals, relational=False)
    #print(type(stable_interval))
    print(f"{fp} is stable when {stable_interval}")
    #print(type(unstable))
    print(f"{fp} is unstable when {unstable_interval}")

Specifying the Mean and Variance in a Scipy Distribution Python 2.7

I need to randomly sample from some distribution eventually, so I need one that lets me readily change the mean and variance. I'm looking at using distributions from the scipy.stats library; however, I'm having difficulty seeing how the parameters "loc" and "scale" relate to the quantities I'm interested in. I'd like to be able to do something like:
x = numpy.linspace(0,5,1000)
y = scipy.stats.maxwell(x, mean, variance)
But loc and scale seem to be the only other arguments that function takes.
Can anyone specify the relationship those quantities must have to mean and variance, or suggest a better library to use?
Well, I don't have Python 2.7, so this answer is for Python 3.6, but it should work; it's SciPy after all.
Basically, you have to extract the scale and loc parameters from the given μ and σ. Here are two simple functions to do that, plus some sampling to show we recover the right values. The first printed line is what you want, and the third line is the result of sampling; they should be roughly the same. The second line is the scale and loc computed from μ and σ. Play with the numbers and see how it goes.
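For reference, the relations from that MathWorld page that the two helpers below invert are: mean = loc + 2*scale*sqrt(2/pi) and variance = scale**2 * (3*pi - 8) / pi.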
import numpy as np
from scipy.stats import maxwell

def get_scale_from_sigma(sigma):
    """Compute scale from sigma, based on http://mathworld.wolfram.com/MaxwellDistribution.html"""
    a2 = np.pi * sigma**2 / (3.0*np.pi - 8.0)   # variance = scale**2 * (3*pi - 8) / pi
    return np.sqrt(a2)

def get_loc_from_mu_sigma(mu, sigma):
    """Compute loc from mu and sigma, based on http://mathworld.wolfram.com/MaxwellDistribution.html"""
    scale = get_scale_from_sigma(sigma)
    loc = mu - 2.0 * scale * np.sqrt(2.0 / np.pi)   # mean = loc + 2*scale*sqrt(2/pi)
    return loc

sigma = 1.0
mu = 2.0 * get_scale_from_sigma(sigma) * np.sqrt(2.0 / np.pi)  # + 3.0 as shift, for example
print(mu, sigma)

scale = get_scale_from_sigma(sigma)
loc = get_loc_from_mu_sigma(mu, sigma)
print(scale, loc)

q = maxwell.rvs(size=10000, scale=scale, loc=loc)
print(np.mean(q), np.std(q))

Using the pymc3 likelihood/posterior outside of pymc3: how?

For comparison purposes, I want to utilize the posterior density function outside of PyMC3.
For my research project, I want to find out how well PyMC3 is performing compared to my own custom made code. As such, I need to compare it to our own in-house samplers and likelihood functions.
I think I figured out how to call the internal PyMC3 posterior, but it feels very awkward, and I want to know if there is a better way. Right now I am hand-transforming variables, whereas I should just be able to pass pymc a parameter dictionary and get the posterior density. Is this possible in a straightforward manner?
Thanks a lot!
Demo code:
import numpy as np
import pymc3 as pm
import scipy.stats as st

# Simple data, with sigma = 4. We want to estimate sigma
sigma_inject = 4.0
data = np.random.randn(10) * sigma_inject

# Prior interval for sigma
a, b = 0.0, 20.0

# Build PyMC model
with pm.Model() as model:
    sigma = pm.Uniform('sigma', a, b)   # Prior uniform between 0.0 and 20.0
    likelihood = pm.Normal('data', 0.0, sd=sigma, observed=data)

# Write my own likelihood
def logpost_self(sig, data):
    loglik = np.sum(st.norm(loc=0.0, scale=sig).logpdf(data))  # Gaussian
    logpr = np.log(1.0 / (b - a))                               # Uniform prior
    return loglik + logpr

# Utilize PyMC likelihood (have to hand-transform parameters)
def logpost_pymc(sig, model):
    sigma_interval = np.log((sig - a) / (b - sig))       # Parameter transformation
    ldrdx = np.log(1.0/(sig - a) + 1.0/(b - sig))        # Jacobian
    return model.logp({'sigma_interval': sigma_interval}) + ldrdx

print("Own posterior: {0}".format(logpost_self(1.0, data)))
print("PyMC3 posterior: {0}".format(logpost_pymc(1.0, model)))
It's been over 5 years, but I figured this deserves an answer.
Firstly, regarding the transformations, you need to decide within the pymc3 definitions whether you want these parameters transformed. Here, sigma was being transformed with an interval transform to avoid hard boundaries. If you are interested in accessing the posterior as a function of sigma, then set transform=None. If you do transform, then the 'sigma' variable will be accessible as one of the deterministic parameters of the model.
Regarding accessing the posterior, there is a great description here. With the example given above, the code becomes:
import numpy as np
import pymc3 as pm
import theano as th
import scipy.stats as st

# Simple data, with sigma = 4. We want to estimate sigma
sigma_inject = 4.0
data = np.random.randn(10) * sigma_inject

# Prior interval for sigma
a, b = 0.1, 20.0

# Build PyMC model
with pm.Model() as model:
    sigma = pm.Uniform('sigma', a, b, transform=None)   # Prior uniform between 0.1 and 20.0
    likelihood = pm.Normal('data', mu=0.0, sigma=sigma, observed=data)

# Write my own likelihood
def logpost_self(sig, data):
    loglik = np.sum(st.norm(loc=0.0, scale=sig).logpdf(data))  # Gaussian
    logpr = np.log(1.0 / (b - a))                               # Uniform prior
    return loglik + logpr

with model:
    # Compile the model posterior into a theano function
    f = th.function(model.vars, [model.logpt] + model.deterministics)

def logpost_pymc3(params):
    dct = model.bijection.rmap(params)
    args = (dct[k.name] for k in model.vars)
    results = f(*args)
    return tuple(results)

print("Own posterior: {0}".format(logpost_self(1.0, data)))
print("PyMC3 posterior: {0}".format(logpost_pymc3([1.0])))
Note that if you remove the 'transform=None' part from the sigma prior, then the actual value of sigma becomes part of the tuple that is returned by the logpost_pymc3 function. It's now a deterministic of the model.

scikit-learn PCA doesn't have 'score' method

I am trying to identify the type of noise based on that article:
Model selection with Probabilistic (PCA) and Factor Analysis (FA)
I am using scikit-learn-0.14.1.win32-py2.7 on win8 64bit
I know it refers to version 0.15; however, the version 0.14 documentation mentions that the score method is available for PCA, so I guess it should normally work:
sklearn.decomposition.ProbabilisticPCA
The problem is that no matter which PCA I use with cross_val_score, I always get a TypeError saying that the estimator PCA does not have a score method:
TypeError: If no scoring is specified, the estimator passed should have a 'score' method. The estimator PCA(copy=True, n_components=None, whiten=False) does not.
Any idea why that is happening?
Many thanks in advance
Christos
X has 1000 samples of 40 features
here is a portion of the code:
import numpy as np
import csv
from scipy import linalg
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.cross_validation import cross_val_score
from sklearn.grid_search import GridSearchCV
from sklearn.covariance import ShrunkCovariance, LedoitWolf

# read in the training data
train_path = '<train data path>/train.csv'
reader = csv.reader(open(train_path, "rb"), delimiter=',')
train = list(reader)
X = np.array(train).astype('float')

n_samples = 1000
n_features = 40
n_components = np.arange(0, n_features, 4)

def compute_scores(X):
    pca = PCA()
    pca_scores = []
    for n in n_components:
        pca.n_components = n
        pca_scores.append(np.mean(cross_val_score(pca, X, n_jobs=1)))
    return pca_scores

pca_scores = compute_scores(X)
n_components_pca = n_components[np.argmax(pca_scores)]
OK, I think I found the problem: it does not work with PCA, but it does work with PPCA.
However, by not providing a cv number, cross_val_score automatically uses 3-fold cross-validation,
which created 3 folds with sizes 334, 333 and 333 (my initial training set contains 1000 samples).
Since numpy.mean cannot make a comparison between sets of different sizes (334 vs 333), Python raises an exception.
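For completeness, here is roughly the variant that worked for me (a sketch; it reuses X and n_components from the snippet above, and assumes scikit-learn 0.14's ProbabilisticPCA, which provides the score method that cross_val_score needs):
import numpy as np
from sklearn.decomposition import ProbabilisticPCA
from sklearn.cross_validation import cross_val_score

def compute_scores_ppca(X):
    ppca = ProbabilisticPCA()
    ppca_scores = []
    for n in n_components:
        ppca.n_components = n
        # explicit cv=5 splits the 1000 samples into equal folds of 200
        ppca_scores.append(np.mean(cross_val_score(ppca, X, cv=5, n_jobs=1)))
    return ppca_scores

ppca_scores = compute_scores_ppca(X)
n_components_ppca = n_components[np.argmax(ppca_scores)]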
thx