I am trying to infer the generator of a continuous-time Markov process observed at discrete intervals. If the generator of the Markov process is $T$, then the stochastic matrix for the discrete time intervals is given by $P = \exp(T \Delta t)$. To implement this in PyMC3, I wrote the custom distribution class
import pymc3
import theano.tensor as tt
from theano.tensor.slinalg import expm
from pymc3.distributions import Discrete
from pymc3.distributions.dist_math import bound

class ContinuousMarkovChain(Discrete):
    def __init__(self, t10=None, t01=None, dt=None, *args, **kwargs):
        super(ContinuousMarkovChain, self).__init__(*args, **kwargs)
        self.gt0 = (t01 > 0) & (t10 > 0)
        # build the 2x2 generator and exponentiate it to get the transition matrix
        T = tt.stacklists([[-t01, t01], [t10, -t10]])
        self.p = expm(T * dt)

    def logp(self, x):
        # sum the log transition probabilities over consecutive observed states
        return bound(tt.log(self.p[x[:-1], x[1:]]).sum(), self.gt0)
I can use find_MAP and the Slice sampler with this class, but it fails with NUTS. The error message is:
AttributeError: 'ExpmGrad' object has no attribute 'grad'
I thought that NUTS only needed information about the gradient, so why is it trying to take the Hessian of expm?
PyMC3 uses the Hessian in parameter space to set the step size and scaling for NUTS, so it needs to differentiate through the gradient. Theano's ExpmGrad op does not define a grad method of its own, which is why the second derivative fails. Maybe you can define the grad of ExpmGrad yourself.
A related discussion is here: https://github.com/pymc-devs/pymc3/issues/1226
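Alternatively, since the chain here has only two states, you can sidestep expm entirely. Below is a sketch (untested against your exact model; expm_2x2_generator is a hypothetical helper name) of the closed-form exp(T*dt) for a 2x2 generator, built from elementary Theano ops that all have second derivatives, so NUTS can differentiate through it:

import theano.tensor as tt

def expm_2x2_generator(t01, t10, dt):
    # For T = [[-t01, t01], [t10, -t10]] the eigenvalues are 0 and -s
    # with s = t01 + t10, so exp(T*dt) = Pi + exp(-s*dt) * (I - Pi),
    # where each row of Pi is the stationary distribution (t10/s, t01/s).
    s = t01 + t10
    e = tt.exp(-s * dt)
    return tt.stacklists([
        [(t10 + t01 * e) / s, (t01 - t01 * e) / s],
        [(t10 - t10 * e) / s, (t01 + t10 * e) / s],
    ])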
Related
I have trained a PyTorch model and I am trying to import it in C++. I have followed the steps mentioned on the PyTorch website for this, but I am unable to do so. Can anyone please tell me what I should do? I am using this piece of neural network.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, layers):
        super(MLP, self).__init__()
        # activation function
        self.activation = nn.Tanh()
        # loss function
        self.loss_function = nn.MSELoss(reduction='mean')
        # initialise the network as a list of linear layers using nn.ModuleList
        self.linears = nn.ModuleList([nn.Linear(layers[i], layers[i+1])
                                      for i in range(len(layers)-1)])
        # Xavier normal initialization
        for i in range(len(layers)-1):
            nn.init.xavier_normal_(self.linears[i].weight.data, gain=1.0)
            # set biases to zero
            nn.init.zeros_(self.linears[i].bias.data)

    def forward(self, x):
        # l_b and u_b are globals holding the lower/upper bounds used to normalize the input
        x = (x - l_b) / (u_b - l_b)
        x = x.float()
        # apply every layer but the last with the activation, then the last one linearly
        for i in range(len(self.linears) - 1):
            z = self.linears[i](x)
            x = self.activation(z)
        x = self.linears[-1](x)
        return x

    def loss_bc_init(self, x, y):
        loss_u = self.loss_function(self.forward(x), y)
        return loss_u
Please help.
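This is the export step I'm attempting (a minimal sketch; the layer sizes, bounds, and file name here are placeholders):

import torch

# l_b and u_b must be defined before tracing, because forward() reads them
l_b, u_b = torch.tensor([0.0]), torch.tensor([1.0])
model = MLP([1, 20, 20, 1])
model.eval()

example = torch.rand(1, 1)
traced = torch.jit.trace(model, example)  # record the graph with a dummy input
traced.save("mlp_traced.pt")              # load from C++ with torch::jit::load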
I am using a DQN for resource allocation, where the agent should assign the arriving requests to the best virtual machine.
I am modifying a CartPole code as follows:
import random
import gym
import numpy as np
from collections import deque
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
import os

class DQNAgent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000)
        self.gamma = 0.95
        self.epsilon = 1.0
        self.epsilon_decay = 0.995
        self.epsilon_min = 0.01
        self.learning_rate = 0.001
        self.model = self._build_model()

    def _build_model(self):
        model = Sequential()
        model.add(Dense(24, input_dim=self.state_size, activation='relu'))
        model.add(Dense(24, activation='relu'))
        model.add(Dense(self.action_size, activation='linear'))
        model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate))
        return model

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)
        act_values = self.model.predict(state)
        return np.argmax(act_values[0])

    def replay(self, batch_size):
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            target = reward
            if not done:
                target = (reward + self.gamma * np.amax(self.model.predict(next_state)[0]))
            target_f = self.model.predict(state)
            target_f[0][action] = target
            self.model.fit(state, target_f, epochs=1, verbose=0)
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

    def load(self, name):
        self.model.load_weights(name)

    def save(self, name):
        self.model.save_weights(name)
The CartPole states used as inputs of the Q-network are given by the environment:

Num  Observation           Min       Max
0    Cart Position         -4.8      4.8
1    Cart Velocity         -Inf      Inf
2    Pole Angle            ~ -41.8°  ~ 41.8°
3    Pole Velocity At Tip  -Inf      Inf
The question is: in my code, what are the inputs of the Q-network?
The agent should take the best possible action based on the size of the arriving request, but this is not given by the environment. Should I feed the Q-network with this input value, the size?
The inputs of the deep Q-network are fed from the replay memory, in the following part of the code:
def remember(self, state, action, reward, next_state, done):
    self.memory.append((state, action, reward, next_state, done))
The dynamic of this system, as shown in the original DeepMind paper, is that you interact with the environment, store each transition in the replay memory, and then use it for the training step. In the lines above you are storing these experiences.
Basically, the network takes the state as input and outputs the Q-values. In your code there is no interaction with an environment, and that interaction is where you get the transitions (experiences) that feed the replay memory. So if you can't extract some information from the environment to be represented as states, you're not able to make assumptions about that.
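For illustration, a minimal interaction loop might look like this (a sketch assuming a gym-style environment; for your resource-allocation task the environment would be a custom one whose state encodes, for example, the arriving request size and the current VM loads):

env = gym.make('CartPole-v1')
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
agent = DQNAgent(state_size, action_size)
batch_size = 32

for episode in range(500):
    state = env.reset().reshape(1, state_size)
    done = False
    while not done:
        action = agent.act(state)                      # epsilon-greedy action
        next_state, reward, done, _ = env.step(action) # interact with the environment
        next_state = next_state.reshape(1, state_size)
        agent.remember(state, action, reward, next_state, done)  # store the transition
        state = next_state
    if len(agent.memory) > batch_size:
        agent.replay(batch_size)                       # train on sampled transitions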
What I'm trying to do: whenever the cursor is on a label, the label must show the time elapsed since it was created. It does this well by subtracting the stored value (in def on_enter(i)), but I want it to keep ticking while the cursor is still on the label.
I tried using the after function, but as a newbie I don't understand it well enough to use it on dynamic labels.
Any help will be appreciated, thanks.
Code:
from Tkinter import *
import datetime

date = datetime.datetime
now = date.now()
master = Tk()
list_label = []
k = []
time_var = []
result = []
names = []

def delete(i):
    k[i] = max(k) + 1
    time_var[i] = '<deleted>'
    list_label[i].pack_forget()

def create():  # new func
    i = k.index(max(k))
    for j in range(i+1, len(k)):
        if k[j] == 0:
            list_label[j].pack_forget()
    list_label[i].pack(anchor='w')
    time_var[i] = time_now()
    for j in range(i+1, len(k)):
        if k[j] == 0:
            list_label[j].pack(anchor='w')
    k[i] = 0

###########################
def on_enter(i):
    list_label[i].configure(text=time_now()-time_var[i])

def on_leave(i):
    list_label[i].configure(text=names[i])

def time_now():
    now = date.now()
    return date(now.year, now.month, now.day, now.hour, now.minute, now.second)
############################

for i in range(11):
    lb = Label(text=str(i), anchor=W)
    list_label.append(lb)
    lb.pack(anchor='w')
    lb.bind("<Button-3>", lambda event, i=i: delete(i))
    k.append(0)
    names.append(str(i))
    lb.bind("<Enter>", lambda event, i=i: on_enter(i))
    lb.bind("<Leave>", lambda event, i=i: on_leave(i))
    time_var.append(time_now())

master.bind("<Control-Key-z>", lambda event: create())
mainloop()
You would use after like this:
###########################
def on_enter(i):
    list_label[i].configure(text=time_now()-time_var[i])
    list_label[i].timer = list_label[i].after(1000, on_enter, i)

def on_leave(i):
    list_label[i].configure(text=names[i])
    list_label[i].after_cancel(list_label[i].timer)
However, your approach here is all wrong. You currently have some functions and a list of data. What you should do is make a single object that contains the functions and data together, and make a list of those. That way you can write your code for a single Label and just duplicate it. It makes your code a lot simpler, partly because you no longer need to keep track of "i". Like this:
import Tkinter as tk
from datetime import datetime

def time_now():
    now = datetime.now()
    return datetime(now.year, now.month, now.day, now.hour, now.minute, now.second)

class Kiran(tk.Label):
    """A new type of Label that shows the time since creation when the mouse hovers"""
    hidden = []

    def __init__(self, master=None, **kwargs):
        tk.Label.__init__(self, master, **kwargs)
        self.name = self['text']
        self.time_var = time_now()
        self.bind("<Enter>", self.on_enter)
        self.bind("<Leave>", self.on_leave)
        self.bind("<Button-3>", self.hide)

    def on_enter(self, event=None):
        self.configure(text=time_now()-self.time_var)
        self.timer = self.after(1000, self.on_enter)

    def on_leave(self, event=None):
        self.after_cancel(self.timer)  # cancel the timer
        self.configure(text=self.name)

    def hide(self, event=None):
        self.pack_forget()
        self.hidden.append(self)  # add this instance to the list of hidden instances

    def show(self):
        self.time_var = time_now()  # reset time
        self.pack(anchor='w')

def undo(event=None):
    '''if there's any hidden Labels, show one'''
    if Kiran.hidden:
        Kiran.hidden.pop().show()

def main():
    root = tk.Tk()
    root.geometry('200x200')
    for i in range(11):
        lb = Kiran(text=i)
        lb.pack(anchor='w')
    root.bind("<Control-Key-z>", undo)
    root.mainloop()

if __name__ == '__main__':
    main()
More notes:
Don't use lambda unless you are forced to; it's known to cause bugs.
Don't use wildcard imports (from module import *), they cause bugs and are against PEP8.
Put everything in functions.
Use long, descriptive names. Single letter names just waste time. Think of names as tiny comments.
Add a lot more comments to your code so that other people don't have to guess what the code is supposed to do.
Try a more beginner oriented forum for questions like this, like learnpython.reddit.com
Below is my pipeline, and it seems that I can't pass parameters to my models using the ModelTransformer class, which I took from this link (http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html).
The error message makes sense to me, but I don't know how to fix it. Any idea how? Thanks.
from pandas import DataFrame
from sklearn import preprocessing
from sklearn.base import TransformerMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.svm import SVC

# define a pipeline
pipeline = Pipeline([
    ('vect', DictVectorizer(sparse=False)),
    ('scale', preprocessing.MinMaxScaler()),
    ('ess', FeatureUnion(n_jobs=-1,
        transformer_list=[
            ('rfc', ModelTransformer(RandomForestClassifier(n_jobs=-1, random_state=1, n_estimators=100))),
            ('svc', ModelTransformer(SVC(random_state=1))),
        ],
        transformer_weights=None)),
    ('es', EnsembleClassifier1()),
])

# define the parameters for the pipeline
parameters = {
    'ess__rfc__n_estimators': (100, 200),
}

# ModelTransformer class, taken from the link
# (http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html)
class ModelTransformer(TransformerMixin):
    def __init__(self, model):
        self.model = model

    def fit(self, *args, **kwargs):
        self.model.fit(*args, **kwargs)
        return self

    def transform(self, X, **transform_params):
        return DataFrame(self.model.predict(X))

grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, refit=True)
Error Message:
ValueError: Invalid parameter n_estimators for estimator ModelTransformer.
GridSearchCV has a special naming convention for nested objects. In your case ess__rfc__n_estimators stands for ess.rfc.n_estimators, and, according to the definition of the pipeline, it points to the property n_estimators of
ModelTransformer(RandomForestClassifier(n_jobs=-1, random_state=1, n_estimators=100))
Obviously, ModelTransformer instances don't have such a property.
The fix is easy: in order to access the underlying object of ModelTransformer, one needs to use its model field. So the grid parameters become
parameters = {
    'ess__rfc__model__n_estimators': (100, 200),
}
P.S. This is not the only problem with your code. In order to use multiple jobs in GridSearchCV, you need to make all the objects you use copyable. This is achieved by implementing the methods get_params and set_params; you can borrow them from the BaseEstimator mixin.
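For example, a sketch of that change (the same class as above, just inheriting BaseEstimator as well):

from sklearn.base import BaseEstimator, TransformerMixin
from pandas import DataFrame

class ModelTransformer(BaseEstimator, TransformerMixin):
    # inheriting from BaseEstimator provides get_params/set_params,
    # which GridSearchCV needs to clone the transformer across jobs
    def __init__(self, model):
        self.model = model

    def fit(self, *args, **kwargs):
        self.model.fit(*args, **kwargs)
        return self

    def transform(self, X, **transform_params):
        return DataFrame(self.model.predict(X))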
Fairly intermediate programmer but Python beginner here. I've been working on a game for a while and I restructured all of my classes yesterday. Where I was initially using only composition, I'm now using a mix of composition and inheritance. My issues come when I want to spawn the player. Here's the relevant code.
class Object(object):
    def __init__(self, **kwargs):
        DefaultValues = {'x': 0, 'y': 0, 'name': None, 'texture': None,
                         'blocks': False, 'ObjectID': None, 'Fighter': None,
                         'Corpse': None, 'Skill': None, 'ai': None}
        for key, value in DefaultValues.items():
            try:
                vars(self)[key] = kwargs[key]
            except (ValueError, KeyError):
                vars(self)[key] = value
        self.x = kwargs['x']
        self.y = kwargs['y']
        self.name = kwargs['name']
        self.blocks = kwargs['blocks']
        self.ObjectID = self.AttachID()
        self.texture = kwargs['texture']
        # This section binds an actor's components to itself
        self.Corpse = kwargs['Corpse']
        if self.Corpse:
            self.Corpse.owner = self
        self.Skill = kwargs['Skill']
        if self.Skill:
            self.Skill.owner = self
        self.Fighter = kwargs['Fighter']
        if self.Fighter:
            self.Fighter.owner = self
        self.ai = kwargs['ai']
        if self.ai:
            self.ai.owner = self

class HighActor(Object):
    def __init__(self, **kwargs):
        super(HighActor, self).__init__(**kwargs)

class Player(HighActor):
    def __init__(self, Level=1, Xp=0, PreviousLevel=0, PreviousLevelThreshold=100,
                 LevelThreshold=500, **kwargs):
        super(Player, self).__init__(**kwargs)
        self.LevelThreshold = LevelThreshold
        self.PreviousLevelThreshold = PreviousLevelThreshold
        self.PreviousLevel = PreviousLevel
        self.Level = Level
        self.Xp = Xp

def SpawnPlayer():
    global player
    FighterComponent = Fighter(MaxHp=100, Hp=100, IsHasted=[False, False], death_function=None)
    CorpseComponent = Corpse()
    SkillComponent = HighActorSkill()
    player = Player(name="player", x=None, y=None, texture="player.png", blocks=True,
                    ObjectID=None, Fighter=FighterComponent, Corpse=CorpseComponent,
                    Skill=SkillComponent, ai=None)
The above code works just fine, but it's not really inheriting anything. To get the player object to not error, I had to pass all of the attributes of the base Object class to the Player initialization. If I remove any of the values that are set to None in the player=Player() statement, I get ValueErrors or KeyErrors. I tried to correct this with a dict of default values: the init loops through all the kwargs it was given and, if a key has no value, sets it to the default found. This worked until I got to any of the components; for example, if I did not specify ai=None, I got KeyErrors. I would really like my code to work so that if I do not specify a value for a base Object attribute, its default is used, but if I do specify a value, it gets passed up to the base class. My ideal end result would be to have my player instancing look like this:
def SpawnPlayer():
    global player
    FighterComponent = Fighter(MaxHp=100, Hp=100, IsHasted=[False, False], death_function=None)
    CorpseComponent = Corpse()
    SkillComponent = HighActorSkill()
    player = Player(name="player", texture="player.png", blocks=True,
                    Fighter=FighterComponent, Corpse=CorpseComponent, Skill=SkillComponent)
I have a suspicion that my inheritance isn't working 100%, because I get errors if I leave out ObjectID, even though it should be assigned in the init of the base class, where it's set to self.AttachID(). So I'm either having issues with my inheritance (I'm really struggling with super) or with the signatures of my objects, and I'm not quite sure what my problem is, and more importantly why. I'm not opposed to changing the code's signature dramatically, as I'm still writing the engine, so nothing relies on this yet. What should be done?
I think your class structure should be different. Each class should only have the attributes it needs, add new ones as you build up the inheritance, e.g.:
class Object(object):
    def __init__(self, x=0, y=0, name=None, **kwargs):
        self.x = x
        self.y = y
        self.name = name

class HighActor(Object):
    def __init__(self, corpse=None, skill=None, **kwargs):
        super(HighActor, self).__init__(**kwargs)
        self.corpse = corpse
        self.skill = skill

class Player(HighActor):
    def __init__(self, level=1, xp=0, **kwargs):
        super(Player, self).__init__(**kwargs)
        self.level = level
        self.xp = xp
At each level you specify the attributes - all Objects should have x, y and name, all HighActors should also have corpse and skill, etc. Now you can specify arguments to supply to any of the three levels of the hierarchy, or leave them out to get defaults:
player = Player(name="player one", skill=100, xp=12)
You may have things that don't fit into this inheritance scheme - it is OK to have more than one separate set of inheritance relationships in your model, don't force it!
This works because the **kwargs at the end of each __init__ "mops up" any keyword arguments that method isn't expecting into a dictionary kwargs, which can then be passed on to the next level. When you call super(...).__init__(**kwargs), the dictionary is unpacked back into keyword arguments, and any that aren't present take the specified default value.
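For instance (the values here are made up), tracing one call shows each level consuming its own keywords:

p = Player(name="hero", x=3, corpse="corpse-sprite", level=2)
# Player.__init__ consumes level and passes the rest up;
# HighActor.__init__ consumes corpse and passes name and x up;
# Object.__init__ consumes name and x, while y falls back to its default 0
print(p.name, p.x, p.y, p.corpse, p.level, p.xp)  # hero 3 0 corpse-sprite 2 0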