I have a model that I need to solve many times, with different objective function coefficients.
Naturally, I want to spend as little time as possible on updating the model.
My current setup is as follows (simplified):
Abstract model:
def obj_rule(m):
    return sum(Dist[i, j] * m.flow[i, j] for (i, j) in m.From * m.To)
m.obj = Objective(rule=obj_rule, sense=minimize)
After the solve, I update the concrete model instance mi this way, with new values in Dist:
mi.obj = sum(Dist[i, j] * mi.flow[i, j] for (i, j) in mi.From * mi.To)
However, I see that the update line takes a lot of time - ca. 1/4 of the overall solution time, several seconds for bigger cases.
Is there some faster way of updating the objective function?
(After all, in the usual way of saving an LP model, the objective function coefficients are in a separate vector, so changing them should not affect anything else.)
Do you have a reason to define an Abstract model before creating your Concrete model? If you define your concrete model with the rule you show above, you should be able to just update your data and re-solve the model without a lot of overhead, since you don't redefine the objective object. Here's a simple example where I change the values of the cost parameter and re-solve.
import pyomo.environ as pyo
a = list(range(2)) # the set the variables are defined over
#%% Begin basic model
model = pyo.ConcreteModel()
model.c = pyo.Param(a,initialize={0:5,1:3},mutable=True)
model.x = pyo.Var(a,domain = pyo.Binary)
model.con = pyo.Constraint(expr=model.x[0] + model.x[1] <= 1)
def obj_rule(model):
    return sum(model.x[ind] * model.c[ind] for ind in a)
model.obj = pyo.Objective(rule=obj_rule,sense=pyo.maximize)
#%% Solve the first time
solver = pyo.SolverFactory('glpk')
res=solver.solve(model)
print('x[0]: {} \nx[1]: {}'.format(pyo.value(model.x[0]),
      pyo.value(model.x[1])))
# x[0]: 1
# x[1]: 0
#%% Update data and re-solve
model.c.reconstruct({0:0,1:5})
res=solver.solve(model)
print('x[0]: {} \nx[1]: {}'.format(pyo.value(model.x[0]),
      pyo.value(model.x[1])))
# x[0]: 0
# x[1]: 1
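If you'd rather not rebuild the whole Param, assigning new values to the mutable Param index by index should also work; a minimal sketch of that variant, using the same model and solver as above:
# update the mutable Param in place, then re-solve
model.c[0] = 0
model.c[1] = 5
res = solver.solve(model)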
I have a model trained to classify rgb values into 1000 categories.
#Model architecture
model = Sequential()
model.add(Dense(512,input_shape=(3,),activation="relu"))
model.add(BatchNormalization())
model.add(Dense(512,activation="relu"))
model.add(BatchNormalization())
model.add(Dense(1000,activation="relu"))
model.add(Dense(1000,activation="softmax"))
I want to be able to extract the output before the softmax layer so I can conduct analyses on different samples of categories within the model. I want to execute softmax for each sample, and conduct analyses using a function named getinfo().
Model
Initially, I enter X_train data into model.predict, to get a vector of 1000 probabilities for each input. I execute getinfo() on this array to get the desired result.
Pop1
I then use model.pop() to remove the softmax layer. I get new predictions for the popped model, and execute scipy.special.softmax. However, getinfo() produces an entirely different result on this array.
Pop2
I write my own softmax function to validate the 2nd result, and I receive an almost identical answer to Pop1.
Pop3
However, when I simply calculate getinfo() on the output of model.pop() with no softmax function, I get the same result as the initial Model.
data = np.loadtxt("allData.csv", delimiter=",")
model = load_model("model.h5")

def getinfo(data):
    objects = scipy.stats.entropy(np.mean(data, axis=0), base=2)
    print(('objects_mean', objects))
    colours_entropy = []
    for i in data:
        e = scipy.stats.entropy(i, base=2)
        colours_entropy.append(e)
    colours = np.mean(np.array(colours_entropy))
    print(('colours_mean', colours))
    info = objects - colours
    print(('objects-colours', info))
    return info
def softmax_max(data):
    # calculate softmax whilst subtracting each row's max value (axis=1)
    sm = []
    for row in data:
        row_max = np.max(row)
        e = np.exp(row - row_max)
        s = np.sum(e)
        sm.append(e / s)
    sm = np.asarray(sm)
    return sm
#model
preds = model.predict(X_train)
getinfo(preds)
#pop1
model.pop()
preds1 = model.predict(X_train)
sm1 = scipy.special.softmax(preds1,axis=1)
getinfo(sm1)
#pop2
sm2 = softmax_max(preds1)
getinfo(sm2)
#pop3
getinfo(preds1)
I expect to get the same output from Model, Pop1 and Pop2, but a different answer for Pop3, since I did not compute softmax there. Is the issue with computing softmax after model.predict? And am I getting the same result for Model and Pop3 because softmax constrains the values between 0 and 1, so that for the purposes of the getinfo() function the results are mathematically equivalent?
If this is the case, then how do I execute softmax before model.predict?
I've gone around in circles with this, so any help or insight would be much appreciated. Please let me know if anything is unclear. Thank you!
model.pop() does not immediately have an effect. You need to run model.compile() again to recompile the new model that doesn't include the last layer.
Without the recompile, you're essentially running model.predict() twice in a row on the exact same model, which explains why Model and Pop3 give the same result. Pop1 and Pop2 give weird results because they are calculating the softmax of a softmax.
In addition, your model does not have the softmax as a separate layer, so pop takes off the entire last Dense layer. To fix this, add the softmax as a separate layer like so:
model.add(Dense(1000)) # softmax removed from this layer...
model.add(Activation('softmax')) # ...and added to its own layer
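Putting that together, a minimal sketch (it assumes X_train, getinfo() and the scipy import from the question; the compile arguments are placeholders, reuse whatever you trained with):
model.pop()                                      # now removes only the Activation('softmax') layer
model.compile(optimizer="adam",                  # placeholder settings
              loss="categorical_crossentropy")
logits = model.predict(X_train)                  # pre-softmax outputs
probs = scipy.special.softmax(logits, axis=1)    # softmax applied afterwards
getinfo(probs)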
I am trying to build a model for the likelihood function of a particular outcome of a Langevin equation (a Brownian particle in a harmonic potential).
Here is my model in pymc2 that seems to work:
https://github.com/hstrey/BayesianAnalysis/blob/master/Langevin%20simulation.ipynb
# define the model/function to be fitted
def model(x):
    t = pm.Uniform('t', 0.1, 20, value=2.0)
    A = pm.Uniform('A', 0.1, 10, value=1.0)

    @pm.deterministic(plot=False)
    def S(t=t):
        return 1-np.exp(-4*delta_t/t)

    @pm.deterministic(plot=False)
    def s(t=t):
        return np.exp(-2*delta_t/t)

    path = np.empty(N, dtype=object)
    path[0] = pm.Normal('path_0', mu=0, tau=1/A, value=x[0], observed=True)
    for i in range(1, N):
        path[i] = pm.Normal('path_%i' % i,
                            mu=path[i-1]*s,
                            tau=1/A/S,
                            value=x[i],
                            observed=True)
    return locals()
mcmc = pm.MCMC( model(x) )
mcmc.sample( 20000, 2000, 10 )
The basic idea is that each point depends on the previous point in the chain (a Markov chain). By the way, x is an array of data, N is its length, and delta_t is the time step (0.01). Any idea how to implement this in pymc3? I tried:
# define the model/function for diffusion in a harmonic potential
DHP_model = pm.Model()
with DHP_model:
    t = pm.Uniform('t', 0.1, 20)
    A = pm.Uniform('A', 0.1, 10)
    S = 1 - pm.exp(-4*delta_t/t)
    s = pm.exp(-2*delta_t/t)
    path = np.empty(N, dtype=object)
    path[0] = pm.Normal('path_0', mu=0, tau=1/A, observed=x[0])
    for i in range(1, N):
        path[i] = pm.Normal('path_%i' % i,
                            mu=path[i-1]*s,
                            tau=1/A/S,
                            observed=x[i])
Unfortunately, the model crashes as soon as I try to run it. I tried some pymc3 examples (from the tutorial) on my machine and those work.
Thanks in advance. I am really hoping that the new samplers in pymc3 will help me with this model. I am trying to apply Bayesian methods to single-molecule experiments.
Rather than creating many individual normally-distributed 1-D variables in a loop, you can make a custom distribution (by extending Continuous) that knows the formula for computing the log likelihood of your entire path. You can bootstrap this likelihood formula off of the Normal likelihood formula that pymc3 already knows. See the built-in AR1 class for an example.
Since your particle follows the Markov property, your likelihood looks like
import theano.tensor as T

def logp(path):
    now = path[1:]
    prev = path[:-1]
    loglik_first = pm.Normal.dist(mu=0., tau=1./A).logp(path[0])
    loglik_rest = T.sum(pm.Normal.dist(mu=prev*ss, tau=1./A/S).logp(now))
    loglik_final = loglik_first + loglik_rest
    return loglik_final
I'm guessing that you want to draw a value for ss at every time step, in which case you should make sure to specify ss = pm.exp(..., shape=len(x)-1), so that prev*ss in the block above gets interpreted as element-wise multiplication.
Then you can just specify your observations with
path = MyLangevin('path', ..., observed=x)
This should run much faster.
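For reference, a minimal sketch of what such a custom distribution could look like (the class name Langevin and its constructor signature are illustrative, not a tested implementation; x and delta_t are assumed to be defined as in the question):
import pymc3 as pm
import theano.tensor as T

class Langevin(pm.Continuous):
    # joint log-likelihood of the whole path, built from Normal.dist as above
    def __init__(self, A, S, ss, *args, **kwargs):
        super(Langevin, self).__init__(*args, **kwargs)
        self.A = A
        self.S = S
        self.ss = ss

    def logp(self, path):
        A, S, ss = self.A, self.S, self.ss
        loglik_first = pm.Normal.dist(mu=0., tau=1. / A).logp(path[0])
        loglik_rest = T.sum(pm.Normal.dist(mu=path[:-1] * ss,
                                           tau=1. / A / S).logp(path[1:]))
        return loglik_first + loglik_rest

with pm.Model():
    t = pm.Uniform('t', 0.1, 20)
    A = pm.Uniform('A', 0.1, 10)
    S = 1 - T.exp(-4 * delta_t / t)
    ss = T.exp(-2 * delta_t / t)
    path = Langevin('path', A=A, S=S, ss=ss, shape=len(x), observed=x)
    trace = pm.sample(2000)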
Since I did not see an answer to my question, let me answer it myself. I came up with the following solution:
# now let's model this data using pymc
# define the model/function for diffusion in a harmonic potential
DHP_model = pm.Model()
with DHP_model:
    D = pm.Gamma('D', mu=mu_D, sd=sd_D)
    A = pm.Gamma('A', mu=mu_A, sd=sd_A)
    S = 1.0 - pm.exp(-2.0*delta_t*D/A)
    ss = pm.exp(-delta_t*D/A)
    path = pm.Normal('path_0', mu=0.0, tau=1/A, observed=x[0])
    for i in range(1, N):
        path = pm.Normal('path_%i' % i,
                         mu=path*ss,
                         tau=1.0/A/S,
                         observed=x[i])
    start = pm.find_MAP()
    print(start)
    trace = pm.sample(100000, start=start)
Unfortunately, this code takes, at N=50, anywhere between 6 hours and 2 days to compile. I am running on a pretty fast PC (24 GB RAM) running Ubuntu. I tried using the GPU, but that runs slightly slower. I suspect memory problems, since it uses 99.8% of the memory when running. I tried the same calculation with Stan and it only takes 2 minutes to run.
I have a small data set with fewer than 2000 rows. I am trying to fit a LinearRegressionModel using Spark ML; the data set has only one feature (which I already normalized). After the model was fitted, I evaluated it with a RegressionEvaluator, measuring the R2 and RMSE metrics. I noticed the error was high, and hence decided to create more artificial features in order to describe the phenomenon better. To achieve this I created the following UDF (note that I checked that it works).
numberFeatures = 12
def addFeatures(value):
    v = value.toArray()[0]
    return Vectors.dense([v ** (1.0 / x) for x in xrange(2, 10)] +
                         [v ** x for x in xrange(1, numberFeatures)])
addFeaturesUDF = udf(addFeatures, VectorUDT())
# Here I test it
print(addFeatures(Vectors.dense(2)))
# [1.0,0.666666666667,0.5,0.4,0.333333333333,0.285714285714,0.25,0.222222222222,2.0,4.0,8.0,16.0,32.0,64.0,128.0,256.0,512.0,1024.0,2048.0]
After this I modify my DataFrame to add more features using addFeaturesUDF, as shown below.
dtBoosted = dt.withColumn("features", addFeaturesUDF(col("features")))
dtBoosted.show(5)
#+--------+-----+----------+--------------------+
#| date|price| feature| features|
#+--------+-----+----------+--------------------+
#|733946.0| 9.92|[733946.0]|[0.0,0.0,0.0,0.0,...|
#|733948.0| 8.05|[733948.0]|[4.88997555012224...|
#|733949.0| 8.05|[733949.0]|[7.33496332518337...|
#|733950.0| 7.91|[733950.0]|[9.77995110024449...|
#|733951.0| 7.91|[733951.0]|[0.00122249388753...|
#+--------+-----+----------+--------------------+
# only showing top 5 rows
It works, but when I attempt to fit the model it raises an AssertionError.
dtTrain, dtValidation = dtBoosted.randomSplit([0.75, 0.25], seed=107)
lr = LinearRegression(maxIter=100, labelCol="price", featuresCol="features")
lrm = lr.fit(dtTrain)
What is the problem? What am I doing wrong? It worked with one feature and some other features!
I am working with a simple bivariate normal model with a somewhat unconventional prior. The main issue I have is that my posteriors are inconsistent from one run to the next, which I'm guessing is related to an issue of high dependence between consecutive samples. Here are my specific questions.
What is the best way to get N independent samples? At the moment, I've been calling sample() to get a big chain (e.g. length 10,000) and then taking every 100th sample starting at 1,000. But looking now at an autocorrelation profile of one of the parameters, it looks like I need to take at least every 500th sample! (I could also use mutual information to get a better idea of dependence between lags.)
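For concreteness, the thinning I have in mind is just slicing the trace, along the lines of (assuming a pymc3 MultiTrace named trace):
burned_thinned = trace[1000::500]  # drop 1,000 burn-in draws, then keep every 500th sample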
I've been following the fitting procedure described in the stochastic volatility example in the pymc3 tutorial. In particular I first find the MAP, then use it to generate a NUTS() object, then take a short sample, then use that to generate another NUTS() object, using gamma=0.25 (???), then finally get my big sample. I have no idea whether this is appropriate or whether I need the gamma=0.25.
Also, in that same example, there are testvals for the Exponential distribution. I don't know if I need these. (What is wrong with the default use of the mean?)
Here is the actual model I'm using.
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic
data = np.random.randn(3000, 2) / 300 # I have actual data!
with pymc.Model():
    tau = Gamma('tau', alpha=2, beta=1 / 20000)
    sigma = Deterministic('sigma', 1 / th.sqrt(tau))
    corr = Uniform('corr', lower=0, upper=1)
    alpha_sig = Deterministic('alpha_sig', sigma / 50)
    alpha_post = Normal('alpha_post', mu=0, sd=alpha_sig)
    alpha_pre = Bounded(
        'alpha_pre', Normal, alpha_post, np.Inf, mu=0, sd=alpha_sig)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    MvNormal(
        'data', mu=th.stack([alpha_post, alpha_pre]),
        tau=tau * corr_inv, observed=data)

    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1], gamma=0.25)
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
I'm not sure what you're doing with the complex prior structure you have set up but I think there is something wrong there.
I simplified the model to:
import pymc3 as pymc
import numpy as np
import theano.tensor as th
from pymc3.distributions.continuous import Gamma, Uniform, Normal, Bounded
from pymc3.distributions.multivariate import MvNormal
from pymc3.model import Deterministic
data = np.random.randn(3000, 2) # I have actual data!
with pymc.Model():
    corr = Uniform('corr', lower=0, upper=1)
    corr_inv = th.stack([th.stack([1, -corr]),
                         th.stack([-corr, 1])]) / (1 - th.sqr(corr))
    mu = Normal('mu', mu=0, sd=1, shape=2)
    MvNormal('data',
             mu=mu,
             tau=corr_inv,
             observed=data)

    map_ = pymc.find_MAP()
    step1 = pymc.NUTS(scaling=map_)
    trace1 = pymc.sample(1000, step=step1)
    step2 = pymc.NUTS(scaling=trace1[-1])
    trace2 = pymc.sample(10000, step=step2, start=trace1[-1])
This simplified version converges nicely. I think you can also just drop the gamma parameter.
I am trying to calculate the perplexity for the data I have. The code I am using is:
import sys
sys.path.append("/usr/local/anaconda/lib/python2.7/site-packages/nltk")
from nltk.corpus import brown
from nltk.model import NgramModel
from nltk.probability import LidstoneProbDist, WittenBellProbDist
estimator = lambda fdist, bins: LidstoneProbDist(fdist, 0.2)
lm = NgramModel(3, brown.words(categories='news'), True, False, estimator)
print lm
But I am receiving the error,
File "/usr/local/anaconda/lib/python2.7/site-packages/nltk/model/ngram.py", line 107, in __init__
cfd[context][token] += 1
TypeError: 'int' object has no attribute '__getitem__'
I have already performed Latent Dirichlet Allocation on the data I have, and I have generated the unigrams and their respective probabilities (they are normalized, so the total probability over the data sums to 1).
My unigrams and their probability looks like:
Negroponte 1.22948976891e-05
Andreas 7.11290670484e-07
Rheinberg 7.08255885794e-07
Joji 4.48481435106e-07
Helguson 1.89936727391e-07
CAPTION_spot 2.37395965468e-06
Mortimer 1.48540253778e-07
yellow 1.26582575863e-05
Sugar 1.49563800878e-06
four 0.000207196011781
This is just a fragment of the unigrams file I have; the same format continues for thousands more lines. The probabilities in the second column sum to 1.
I am a budding programmer. This ngram.py belongs to the nltk package and I am confused as to how to rectify this. The sample code I have here is from the nltk documentation, and I don't know what to do now. Please help with what I can do. Thanks in advance!
Perplexity is the inverse probability of the test set, normalized by the number of words. In the case of unigrams:
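For a test set W = w_1 w_2 ... w_N scored by a unigram model, the standard formulation (in LaTeX notation) is:
PP(W) = \left( \prod_{i=1}^{N} \frac{1}{P(w_i)} \right)^{1/N}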
Now you say you have already constructed the unigram model, meaning, for each word you have the relevant probability. Then you only need to apply the formula. I assume you have a big dictionary unigram[word] that would provide the probability of each word in the corpus. You also need to have a test set. If your unigram model is not in the form of a dictionary, tell me what data structure you have used, so I could adapt it to my solution accordingly.
perplexity = 1
N = 0
for word in testset:
    if word in unigram:
        N += 1
        perplexity = perplexity * (1/unigram[word])
perplexity = pow(perplexity, 1/float(N))
UPDATE:
As you asked for a complete working example, here's a very simple one.
Suppose this is our corpus:
corpus ="""
Monty Python (sometimes known as The Pythons) were a British surreal comedy group who created the sketch comedy show Monty Python's Flying Circus,
that first aired on the BBC on October 5, 1969. Forty-five episodes were made over four series. The Python phenomenon developed from the television series
into something larger in scope and impact, spawning touring stage shows, films, numerous albums, several books, and a stage musical.
The group's influence on comedy has been compared to The Beatles' influence on music."""
Here's how we construct the unigram model first:
import collections, nltk

# we first tokenize the text corpus
tokens = nltk.word_tokenize(corpus)

# here you construct the unigram language model
def unigram(tokens):
    model = collections.defaultdict(lambda: 0.01)
    for f in tokens:
        try:
            model[f] += 1
        except KeyError:
            model[f] = 1
            continue
    N = float(sum(model.values()))
    for word in model:
        model[word] = model[word]/N
    return model
Our model here is smoothed. For words outside the scope of its knowledge, it assigns a low probability of 0.01. I already told you how to compute perplexity:
# computes perplexity of the unigram model on a testset
def perplexity(testset, model):
    testset = testset.split()
    perplexity = 1
    N = 0
    for word in testset:
        N += 1
        perplexity = perplexity * (1/model[word])
    perplexity = pow(perplexity, 1/float(N))
    return perplexity
Now we can test this on two different test sets:
testset1 = "Monty"
testset2 = "abracadabra gobbledygook rubbish"
model = unigram(tokens)
print perplexity(testset1, model)
print perplexity(testset2, model)
for which you get the following result:
>>>
49.09452736318415
99.99999999999997
Note that when dealing with perplexity, we try to reduce it: a language model with lower perplexity on a given test set is preferable to one with higher perplexity. In the first test set, the word Monty was included in the unigram model, so the respective perplexity value was also smaller.
Thanks for the code snippet! Shouldn't:
for word in model:
    model[word] = model[word]/float(sum(model.values()))
be rather:
v = float(sum(model.values()))
for word in model:
    model[word] = model[word]/v
Oh ... I see it was already answered ...