name 'plot' is not defined - python-2.7

I successfully installed scitools_no_easyviz from conda (I work in Spyder), but I cannot import plot. To be more specific, here's my code:
from scitools.std import *

def f(t):
    return t**2*exp(-t**2)

t = linspace(0, 3, 51)
y = f(t)
plot(t, y)
savefig('tmp1.pdf')  # produce PDF
savefig('tmp1.png')  # produce PNG

figure()

def f(t):
    return t**2*exp(-t**2)

t = linspace(0, 3, 51)
y = f(t)
plot(t, y)
xlabel('t')
ylabel('y')
legend('t^2*exp(-t^2)')
axis([0, 3, -0.05, 0.6])  # [tmin, tmax, ymin, ymax]
title('My First Easyviz Demo')

figure()
plot(t, y)
xlabel('sss')
When I run the code, I get the following error:
NameError: name 'plot' is not defined
What could be the problem?

Using import * is not considered a best practice, although it is very practical. Try importing only the functions you need, such as:
from scitools.std import plot
Additionally, this way you will find out already at import time whether plot is valid, alongside any other function you import.
Ensure you have the dependencies installed in order to use the package, as noted at https://code.google.com/archive/p/scitools/wikis/Installation.wiki
I installed the latest package following those instructions, and your code runs perfectly well with it.
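As a quick sanity check, here is a minimal sketch (assuming scitools.std re-exports these names) that imports only what the script uses, so a missing name or plotting backend fails loudly at import time rather than with a NameError later:
try:
    # import only what the script needs; this fails immediately if the
    # easyviz backend did not provide these names
    from scitools.std import plot, linspace, exp, savefig, figure
except ImportError as err:
    raise SystemExit("scitools (or its plotting backend) is missing: %s" % err)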

Related

Strange behavior of Inception_v3

I am trying to create a generative network based on the pre-trained Inception_v3.
1) I fix all the weights in the model;
2) I create a Variable whose size is (2, 3, 299, 299);
3) I create targets of size (2, 1000) that I want my final-layer activations to get as close as possible to, by optimizing the Variable.
(I do not use a batch size of 1 because, unlike VGG16, Inception_v3 doesn't accept batchsize=1, but that's not the point.)
The following code should work, but gives me the error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
# minimalist code with Inception_v3 that throws the error:
import torch
from torch.autograd import Variable
import torch.optim as optim
import torch.nn as nn
import torchvision

torch.set_default_tensor_type('torch.FloatTensor')
Iv3 = torchvision.models.inception_v3(pretrained=True)
for i in Iv3.parameters():
    i.requires_grad = False

criterion = nn.CrossEntropyLoss()
x = Variable(torch.randn(2, 3, 299, 299), requires_grad=True)
target = torch.empty(2, dtype=torch.long).random_(1000)
output = Iv3(x)
loss = criterion(output[0], target)
loss.backward()
print(x.grad)
This is very strange, because if I do the same thing with VGG16, everything works fine:
# minimalist working code with VGG16:
import torch
from torch.autograd import Variable
import torch.optim as optim
import torch.nn as nn
import torchvision

# torch.cuda.empty_cache()
# vgg16 = torchvision.models.vgg16(pretrained=True).cuda()
# torch.set_default_tensor_type('torch.cuda.FloatTensor')
torch.set_default_tensor_type('torch.FloatTensor')
vgg16 = torchvision.models.vgg16(pretrained=True)
for i in vgg16.parameters():
    i.requires_grad = False

criterion = nn.CrossEntropyLoss()
x = Variable(torch.randn(2, 3, 229, 229), requires_grad=True)
target = torch.empty(2, dtype=torch.long).random_(1000)
output = vgg16(x)
loss = criterion(output, target)
loss.backward()
print(x.grad)
Please help.
Thanks to @iacolippo the issue is solved. It turns out the problem was caused by PyTorch 1.0.0; there is no problem with PyTorch 0.4.1, though.
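A side note on the output[0] indexing above: in training mode, torchvision's inception_v3 returns a pair (logits, aux_logits), while in eval mode it returns the logits alone. A minimal sketch of the difference (this is only an illustration; it does not by itself fix the in-place error on the affected PyTorch version):
Iv3.eval()
with torch.no_grad():  # gradients are irrelevant here, only the output shape
    logits = Iv3(torch.randn(2, 3, 299, 299))
print(logits.shape)  # torch.Size([2, 1000]) -- a plain tensor, not a tuple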

Saving data from traceplot in PyMC3

Below is the code for a simple Bayesian linear regression. After I obtain the trace and the plots for the parameters, is there any way to save the data that created the plots to a file, so that if I need to plot them again I can simply plot from the file rather than re-running the whole simulation?
import pymc3 as pm
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 9, 5)
y = 2*x + 5
yerr = np.random.rand(len(x))

def soln(x, p1, p2):
    return p1 + p2*x

with pm.Model() as model:
    # Define priors
    intercept = pm.Normal('Intercept', 15, sd=5)
    slope = pm.Normal('Slope', 20, sd=5)

    # Model solution
    sol = soln(x, intercept, slope)

    # Define likelihood
    likelihood = pm.Normal('Y', mu=sol, sd=yerr, observed=y)

    # Sampling
    trace = pm.sample(1000, nchains=1)

pm.traceplot(trace)
print pm.summary(trace, ['Slope'])
print pm.summary(trace, ['Intercept'])
plt.show()
There are two easy ways of doing this:
Use a version after 3.4.1 (currently this means installing from master, with pip install git+https://github.com/pymc-devs/pymc3). There is a new feature that allows saving and loading traces efficiently. Note that you need access to the model that created the trace:
...
pm.save_trace(trace, 'linreg.trace')
# later
with model:
    trace = pm.load_trace('linreg.trace')
Use cPickle (or pickle in python 3). Note that pickle is at least a little insecure; don't unpickle data from untrusted sources:
import cPickle as pickle  # just `import pickle` on python 3
...
with open('trace.pkl', 'wb') as buff:
    pickle.dump(trace, buff)

# later
with open('trace.pkl', 'rb') as buff:
    trace = pickle.load(buff)
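Either way, once the trace is loaded, re-plotting is the same call as in the question, with no re-sampling needed:
pm.traceplot(trace)
plt.show()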
Update for anyone else coming to this question:
The load_trace and save_trace functions were removed; since version 4.0 even the deprecation warning for them is gone.
The way to do it now is to use ArviZ:
with model:
    trace = pymc.sample(return_inferencedata=True)

trace.to_netcdf("filename.nc")
And it can be loaded with:
trace = arviz.from_netcdf("filename.nc")
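Once loaded, the plot can be reproduced from the saved data without sampling again; a minimal sketch using ArviZ's plotting API:
import arviz
import matplotlib.pyplot as plt

trace = arviz.from_netcdf("filename.nc")
arviz.plot_trace(trace)  # re-creates the traceplot from the saved InferenceData
plt.show()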
This works for me:
# saving trace
pm.save_trace(trace=trace_nb, directory=r"c:\Users\xxx\Documents\xxx\traces\trace_nb")

# loading saved traces
with model_nb:
    t_nb = pm.load_trace(directory=r"c:\Users\xxx\Documents\xxx\traces\trace_nb")

How can I specify a non-theano based likelihood?

I saw a post from a few days ago by someone else: pymc3 likelihood math with non-theano function. Even though I think the problem at its core is the same, I thought I would ask with a simpler example:
Inside logp_wrap, I put some made-up definition of a likelihood function. It depends on the rv and an observation. In this case I could do it with theano operations, but let's say that I want this function to be more complex, so I cannot use theano.
The problem comes when I try to define the likelihood in terms of both an RV and observations. From what I have seen, this format would work if I were specifying everything in logp_wrap as theano operations.
I have searched around for a solution to this, but haven't found anything that fully addresses the problem.
The problem in my attempt is that the logp_ function is correctly decorated, but the logp_wrap function is only correctly decorated for its input, not for its output, so I get the error
TypeError: 'TensorVariable' object is not callable.
It would be great if someone had a solution; I don't think I am the only one with this problem.
The theano version of this that works (and uses the same function-within-a-function definition) without the @as_op code is here: https://pymc-devs.github.io/pymc3/notebooks/lda-advi-aevb.html?highlight=densitydist (specifically the sections "Log-likelihood of documents for LDA" and "LDA model section").
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pymc3 as pm
from theano import as_op
import theano.tensor as T
from scipy.stats import norm

# Some data that we observed
g_observed = [0.0, 1.0, 2.0, 3.0]

# Define a function to calculate the logp without using theano.
# This as_op is where the problem is - the input is an rv but the output is a
# function.
@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def logp_wrap(rv):
    # We are not using theano so we wrap the function.
    @as_op(itypes=[T.dvector], otypes=[T.dscalar])
    def logp_(ob):
        # Some made up likelihood -
        # the key is that lp depends on the rv input and the observations
        lp = np.log(norm.pdf(rv + ob))
        return lp
    return logp_

hb1_model = pm.Model()
with hb1_model:
    I_mean = pm.Normal('I_mean', mu=0.1, sd=0.05)
    xs = pm.DensityDist('x', logp_wrap(I_mean), observed=g_observed)

with hb1_model:
    step = pm.Metropolis()
    trace = pm.sample(1000, step)
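To make the failure mode concrete, here is a toy sketch of what decorating the outer function does (the function f is made up, and the as_op import path theano.compile.ops is an assumption): calling an as_op-wrapped function builds a symbolic TensorVariable, and a TensorVariable is not callable, hence the TypeError once DensityDist later tries to call the result as a logp function:
from theano.compile.ops import as_op
import theano.tensor as T

@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def f(x):
    # numpy-level identity; never actually executed below
    return x

out = f(T.dscalar('a'))
print(type(out))  # a TensorVariable, not a function
# out(1.0) would raise TypeError: 'TensorVariable' object is not callable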

Making predictions with TensorFlow trained model and C API

I have built the C API by building the libtensorflow.so target. I want to load a pre-trained model and run inference on it to make predictions. I was told I can do this by including the c_api.h header file (along with copying that file plus libtensorflow.so to the appropriate place); however, I had no luck finding any examples of that on the web. All I could find are examples which use the Bazel build system, whereas I want to use another build system and use TensorFlow as a library. Can somebody help me with an example of how to import either a) a meta graph file, or b) a protobuf graph file plus a checkpoint file, to make predictions? A C++ equivalent of the Python file below, built with g++?
#!/usr/bin/env python
import tensorflow as tf
import numpy as np

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('./metagraph.meta')
    saver.restore(sess, './checkpoint.ckpt')
    x = tf.get_collection("x")[0]
    yhat = tf.get_collection("yhat")[0]
    print sess.run(yhat, feed_dict={x: np.array([[2, 3], [4, 5]])})
Thanks in Advance!
p.s.: For the sake of completeness, I did the following to build the files:
#!/usr/bin/env python
import tensorflow as tf
import numpy as np

x = tf.placeholder(tf.float32, shape=[None, 2], name='x')
tf.add_to_collection("x", x)
y = tf.placeholder(tf.float32, shape=[None, 1], name='y')
w = tf.Variable(np.array([[10.0], [100.0]]), dtype=tf.float32, name='w')
b = tf.Variable(0.0, dtype=tf.float32, name='b')
yhat = tf.add(tf.matmul(x, w), b)
tf.add_to_collection("yhat", yhat)
mse_loss = tf.sqrt(tf.reduce_mean(tf.square(tf.sub(y, yhat))))
step_size = tf.constant(0.01)
optimizer = tf.train.GradientDescentOptimizer(step_size)
init_op = tf.initialize_all_variables()
train_op = optimizer.minimize(mse_loss)
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init_op)
    for i in xrange(10000):
        train_x = np.random.random([100, 2]) * 10
        train_y = np.dot(train_x, np.array([[100.0], [10.0]])) + 1.0
        sess.run(train_op, feed_dict={x: train_x, y: train_y})
    print sess.run(w)
    print sess.run(b)
    saver.save(sess, './checkpoint.ckpt')
    saver.export_meta_graph('./metagraph.meta')
    tf.train.write_graph(sess.graph_def, './', 'graph')
I used Eclipse, added c_api.h to my project, and copied libtensorflow.so to /usr/local/bin. I then added the reference to the libtensorflow shared object to the libraries in my GCC C++ linker settings, and finally created a simple program:
#include <iostream>
#include "c_api.h"

using namespace std;

int main() {
    cout << TF_Version();
    return 0;
}
This then allowed me to compile and use TensorFlow functions, including the ones you want.
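A preparatory step that can simplify the C side (a sketch under TF 1.x-era APIs, not something the answer above did): freeze the checkpoint variables into the graph in Python first, so the C API only has to load a single protobuf with TF_GraphImportGraphDef instead of also restoring a checkpoint. Note the output op name here is an assumption: in the training script above, yhat is not explicitly named, so you would first create it with tf.add(..., name='yhat') or look up the actual op name:
import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('./metagraph.meta')
    saver.restore(sess, './checkpoint.ckpt')
    # 'yhat' is hypothetical -- it assumes the output op carries that name
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['yhat'])
    tf.train.write_graph(frozen, './', 'frozen.pb', as_text=False)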

Simple matplotlib Annotating example not working in Python 2.7

Code
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)

t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
line, = ax.plot(t, s, lw=2)

ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
            arrowprops=dict(facecolor='black', shrink=0.05),
            )
ax.set_ylim(-2, 2)
plt.show()
from http://matplotlib.org/1.2.0/users/annotations_intro.html
returns
TypeError: 'dict' object is not callable
I managed to fix it with
xxx = {'facecolor': 'black', 'shrink': 0.05}
ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
            arrowprops=xxx,
            )
Is this the best way? Also, what caused this problem? (I know that this started with Python 2.7.)
If somebody knows more, please share.
Since the code looks fine and runs OK on my machine, it seems that you may have a variable named dict shadowing the builtin (see this answer for reference); a quick demonstration follows the list below. A couple of ideas on how to check:
use Pylint.
if you suspect one specific builtin, try checking its type (type(dict)) or look at the properties/functions it has (dir(dict)).
open a fresh notebook and try again, if you only observe the problem in an interactive session.
try alternate syntax to initialise the dictionary:
ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
            arrowprops={'facecolor': 'black', 'shrink': 0.05})
try explicitly instantiating a variable of this type, using the alternate syntax (as you did already).
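As promised above, a minimal demonstration of the suspected shadowing (the assignment is made up; any variable named dict has the same effect):
dict = {'facecolor': 'black'}  # shadows the builtin dict
# dict(shrink=0.05)  ->  TypeError: 'dict' object is not callable
del dict  # removes the shadow and restores the builtin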