Code
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111)
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
line, = ax.plot(t, s, lw=2)
ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict(facecolor='black', shrink=0.05),
)
ax.set_ylim(-2,2)
plt.show()
(from http://matplotlib.org/1.2.0/users/annotations_intro.html)
returns
TypeError: 'dict' object is not callable
I managed to fix it with
xxx={'facecolor':'black', 'shrink':0.05}
ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=xxx,
)
Is this the best way?
Also, what caused this problem? (I know that this started with Python 2.7.)
So if somebody knows more, please share.
Since the code looks fine and runs ok on my machine, it seems that you may have a variable named "dict" (see this answer for reference). A couple of ideas on how to check:
use Pylint.
if you suspect one specific builtin, try checking its type (type(dict)), or look at the properties/functions it has (dir(dict)).
open a fresh notebook and try again, if you only observe the problem in interactive session.
try alternate syntax to initialise the dictionary:
ax.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops={'facecolor':'black', 'shrink':0.05})
try explicitly instantiating a variable of this type, using the alternate syntax (as you did already).
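For illustration, a minimal reproduction of the shadowing problem: assigning to the name dict hides the builtin, so a later call such as dict(facecolor='black') fails.
# minimal repro: rebinding the builtin name `dict`
dict = {'facecolor': 'black'}      # accidentally shadows the builtin type
try:
    d = dict(facecolor='black')    # now calls the dict *instance*, not the type
except TypeError as e:
    print(e)                       # 'dict' object is not callable
del dict                           # remove the shadowing name; the builtin is back
d = dict(facecolor='black')        # works again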
Below is the code for a simple Bayesian linear regression. After I obtain the trace and the plots for the parameters, is there any way to save the data that created the plots to a file, so that if I need to plot it again I can simply plot it from the file rather than re-running the whole simulation?
import pymc3 as pm
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 9, 5)
y = 2*x + 5
yerr = np.random.rand(len(x))

def soln(x, p1, p2):
    return p1 + p2*x

with pm.Model() as model:
    # Define priors
    intercept = pm.Normal('Intercept', 15, sd=5)
    slope = pm.Normal('Slope', 20, sd=5)
    # Model solution
    sol = soln(x, intercept, slope)
    # Define likelihood
    likelihood = pm.Normal('Y', mu=sol, sd=yerr, observed=y)
    # Sampling
    trace = pm.sample(1000, chains=1)

pm.traceplot(trace)
print(pm.summary(trace, ['Slope']))
print(pm.summary(trace, ['Intercept']))
plt.show()
There are two easy ways of doing this:
Use a version after 3.4.1 (currently this means installing from master, with pip install git+https://github.com/pymc-devs/pymc3). There is a new feature that allows saving and loading traces efficiently. Note that you need access to the model that created the trace:
...
pm.save_trace(trace, 'linreg.trace')

# later
with model:
    trace = pm.load_trace('linreg.trace')
Use cPickle (or pickle in Python 3). Note that pickle is at least a little insecure; don't unpickle data from untrusted sources:
import cPickle as pickle  # just `import pickle` on Python 3

...
with open('trace.pkl', 'wb') as buff:
    pickle.dump(trace, buff)

# later
with open('trace.pkl', 'rb') as buff:
    trace = pickle.load(buff)
Update for someone like me who is still coming across this question:
The load_trace and save_trace functions were removed. Since version 4.0, even the deprecation warning for these functions was removed.
The way to do it now is to use ArviZ:
with model:
    trace = pymc.sample(return_inferencedata=True)
trace.to_netcdf("filename.nc")
And it can be loaded with:
trace = arviz.from_netcdf("filename.nc")
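Putting those two snippets together, a minimal round trip might look like this (a sketch assuming PyMC >= 4 with ArviZ installed; the toy model here is just a stand-in):
import arviz as az
import pymc as pm

with pm.Model() as model:
    mu = pm.Normal("mu", 0, 1)
    idata = pm.sample(return_inferencedata=True)

# save the InferenceData to disk
idata.to_netcdf("filename.nc")

# load it back later; no model context is needed for plotting
idata = az.from_netcdf("filename.nc")
az.plot_trace(idata)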
This way works for me:
# saving trace
pm.save_trace(trace=trace_nb, directory=r"c:\Users\xxx\Documents\xxx\traces\trace_nb")
# loading saved traces
with model_nb:
    t_nb = pm.load_trace(directory=r"c:\Users\xxx\Documents\xxx\traces\trace_nb")
I've read some answers on this question here and here; however, I'm still a bit puzzled by tf.Variable being and/or not being a tf.Tensor.
The linked answers deal with the mutability of tf.Variable, mentioning that a tf.Variable maintains its state (when instantiated with the default parameter trainable=True).
What still confuses me is a test case I came across when writing simple unit tests with tf.test.TestCase.
Consider the following code snippet. We have a simple class called Foo which has only one property, a tf.Variable initialized to w:
import tensorflow as tf
import numpy as np
class Foo:
    def __init__(self, w):
        self.w = tf.Variable(w)
Now, let's say you want to test that an instance of Foo has w initialized with a tensor of the same dimensions as the array passed in via w. The simplest test case could be written as follows:
import tensorflow as tf
import numpy as np
from foo import Foo
class TestFoo(tf.test.TestCase):
    def test_init(self):
        w = np.random.rand(3, 2)
        foo = Foo(w)
        init = tf.global_variables_initializer()
        with self.test_session() as sess:
            sess.run(init)
            self.assertShapeEqual(w, foo.w)

if __name__ == '__main__':
    tf.test.main()
Now when you run the test you'll get the following error:
======================================================================
ERROR: test_init (__main__.TestFoo)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_foo.py", line 12, in test_init
self.assertShapeEqual(w, foo.w)
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/test_util.py", line 1100, in assertShapeEqual
raise TypeError("tf_tensor must be a Tensor")
TypeError: tf_tensor must be a Tensor
----------------------------------------------------------------------
Ran 2 tests in 0.027s
FAILED (errors=1)
You can "get around" this unit test error by doing something like this (i.e. note assertShapeEqual was replaced with assertEqual):
self.assertEqual(list(w.shape), foo.w.get_shape().as_list())
What I'm interested in, though, is the tf.Variable vs tf.Tensor relationship.
What the test error seems to be suggesting is that foo.w is NOT a tf.Tensor, meaning you probably can't use tf.Tensor API on it. Consider, however, the following interactive python session:
$ python3
Python 3.6.3 (default, Oct 4 2017, 06:09:15)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> import numpy as np
>>> w = np.random.rand(3,2)
>>> var = tf.Variable(w)
>>> var.get_shape().as_list()
[3, 2]
>>> list(w.shape)
[3, 2]
>>>
In the session above, we create a variable and run the get_shape() method on it to retrieve its shape dimensions. Now, get_shape() is a tf.Tensor API method, as you can see here.
So, to get back to my question: what parts of the tf.Tensor API does tf.Variable implement? If the answer is ALL of them, why does the above test case fail?
self.assertShapeEqual(w, foo.w)
with
raise TypeError("tf_tensor must be a Tensor")
I'm pretty sure I'm missing something fundamental here, or maybe it's a bug in assertShapeEqual? I would appreciate it if someone could shed some light on this.
I'm using the following version of tensorflow on macOS with python3:
tensorflow (1.4.1)
That testing utility function is checking whether its argument is an instance of tf.Tensor:
>>> import tensorflow as tf
>>> v = tf.Variable('v')
>>> v
<tf.Variable 'Variable:0' shape=() dtype=string_ref>
>>> isinstance(v, tf.Tensor)
False
The answer appears to be 'no'.
Update:
According to the documentation that is correct:
https://www.tensorflow.org/programmers_guide/variables
Unlike tf.Tensor objects, a tf.Variable exists outside the context of
a single session.run call.
Although:
A tf.Variable represents a tensor whose value can be changed by
running ops on it.
(Not quite sure what 'represents a tensor' means; it sounds like a design 'feature'.)
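As a practical workaround (a sketch using the TF 1.x API), you can convert the variable to a proper tf.Tensor before the assertion with tf.convert_to_tensor, which does pass the isinstance check:
import numpy as np
import tensorflow as tf

w = np.random.rand(3, 2)
var = tf.Variable(w)

# tf.convert_to_tensor wraps the variable's value in a real tf.Tensor
t = tf.convert_to_tensor(var)
print(isinstance(var, tf.Tensor))  # False
print(isinstance(t, tf.Tensor))    # True

# so in the unit test one could write:
# self.assertShapeEqual(w, tf.convert_to_tensor(foo.w))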
I saw a post from a few days ago by someone else: pymc3 likelihood math with non-theano function. Even though I think the problem at its core is the same, I thought I would ask with a simpler example:
Inside logp_wrap, I put some made up definition of a likelihood function. It depends on the rv and an observation. In this case I could do this with theano operations, but let's say that I want this function to be more complex and so I cannot use theano.
The problem comes when I try to define the likelihood both in terms of an RV and observations. From what I have seen, this format would work if I was specifying everything in 'logp_wrap' as theano operations.
I have searched around for a solution to this, but haven't found anything where this problem is fully addressed.
The problem in my attempt to do this is actually that the logp_ function is correctly decorated, but the logp_wrap function is only correctly decorated for its input, and not for its output, so I get the error
TypeError: 'TensorVariable' object is not callable.
Would be great if someone had a solution - don't think I am the only one with this problem.
The theano version of this that works (and uses the same function-within-a-function definition) without the @as_op code is here: https://pymc-devs.github.io/pymc3/notebooks/lda-advi-aevb.html?highlight=densitydist (specifically the sections "Log-likelihood of documents for LDA" and "LDA model section")
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pymc3 as pm
from theano import as_op
import theano.tensor as T
from scipy.stats import norm

# Some data that we observed
g_observed = [0.0, 1.0, 2.0, 3.0]

# Define a function to calculate the logp without using theano.
# This as_op is where the problem is - the input is an rv but the output is a
# function.
@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def logp_wrap(rv):
    # We are not using theano, so we wrap the function.
    @as_op(itypes=[T.dvector], otypes=[T.dscalar])
    def logp_(ob):
        # Some made-up likelihood.
        # The key here is that lp depends on the rv input and the observations.
        lp = np.log(norm.pdf(rv + ob))
        return lp
    return logp_

hb1_model = pm.Model()
with hb1_model:
    I_mean = pm.Normal('I_mean', mu=0.1, sd=0.05)
    xs = pm.DensityDist('x', logp_wrap(I_mean), observed=g_observed)

with hb1_model:
    step = pm.Metropolis()
    trace = pm.sample(1000, step)
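For what it's worth, the usual pattern I've seen for this kind of problem (a sketch, untested; logp_func is a hypothetical name, and it reuses the imports and g_observed from the snippet above) is to decorate a single function that takes both the rv and the observations and returns a scalar, then bind the rv with a lambda, rather than returning an inner function:
from theano.compile.ops import as_op

@as_op(itypes=[T.dscalar, T.dvector], otypes=[T.dscalar])
def logp_func(rv, ob):
    # one decorated function: tensors in, scalar log-probability out
    return np.asarray(np.sum(np.log(norm.pdf(rv + ob))))

with pm.Model() as hb1_model:
    I_mean = pm.Normal('I_mean', mu=0.1, sd=0.05)
    # bind the rv via a closure; DensityDist only passes the observations
    xs = pm.DensityDist('x', lambda ob: logp_func(I_mean, ob),
                        observed=np.array(g_observed))
    # as_op provides no gradient, so use a gradient-free sampler
    trace = pm.sample(1000, step=pm.Metropolis())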
I successfully installed scitools_no_easyviz from conda (I work in Spyder), but I cannot import plot. To be more specific, here's my code:
from scitools.std import *

def f(t):
    return t**2*exp(-t**2)

t = linspace(0, 3, 51)
y = f(t)
plot(t, y)

savefig('tmp1.pdf')  # produce PDF
savefig('tmp1.png')  # produce PNG

figure()

def f(t):
    return t**2*exp(-t**2)

t = linspace(0, 3, 51)
y = f(t)
plot(t, y)
xlabel('t')
ylabel('y')
legend('t^2*exp(-t^2)')
axis([0, 3, -0.05, 0.6])  # [tmin, tmax, ymin, ymax]
title('My First Easyviz Demo')

figure()
plot(t, y)
xlabel('sss')
When I run the code, I get the following error
NameError: name 'plot' is not defined
What could be the problem?
Using import * is not considered best practice, although it is very practical. Try importing the functions you need, such as:
from scitools.std import plot
Additionally, this way you will know whether plot is valid when you import it alongside any other function.
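For example, a minimal guard (just a sketch) that makes a missing name fail loudly at import time instead of surfacing later as a NameError:
try:
    from scitools.std import plot
except ImportError as e:
    # fails immediately if scitools (or its plotting backend) is missing
    raise SystemExit("scitools plot is not importable: %s" % e)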
Ensure you have the dependencies installed in order to use the package, as noted at https://code.google.com/archive/p/scitools/wikis/Installation.wiki
Additionally, I installed the latest package following those instructions, and your code runs perfectly well with it.
Following this example:
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
for i, label in enumerate(('A', 'B', 'C', 'D')):
    ax = fig.add_subplot(2, 2, i+1)
    ax.text(0.05, 0.95, label, transform=ax.transAxes,
            fontsize=16, fontweight='bold', va='top')
plt.show()
The output I get shows the labels in normal weight. Why are my labels normal weight, while the documentation shows this should create bold letters A, B, C, D?
I also get this warning:
Warning (from warnings module):
File "C:\Python27\lib\site-packages\matplotlib\font_manager.py", line 1228
UserWarning)
UserWarning: findfont: Could not match :family=Bitstream Vera Sans:style=italic:variant=normal:weight=bold:stretch=normal:size=x-small. Returning C:\Python27\lib\site-packages\matplotlib\mpl-data\fonts\ttf\Vera.ttf
OP Resolution
From a deleted answer posted by the OP on Sep 15, 2013
OK, it was a problem with the installation of matplotlib.
Try using weight instead of fontweight.
Maybe try using this -
plt.rcParams['axes.labelsize'] = 16
plt.rcParams['axes.labelweight'] = 'bold'
Do this at a global level in your program.
The example from your question works on my machine, hence you definitely have a library problem. Have you considered using LaTeX to make bold text? Here is an example:
Code
import numpy as np
import matplotlib.pyplot as plt
fig, axs = plt.subplots(3, 1)
ax0, ax1, ax2 = axs
ax0.text(0.05, 0.95, 'example from question',
         transform=ax0.transAxes, fontsize=16, fontweight='bold', va='top')
ax1.text(0.05, 0.8, 'you can try \\textbf{this} using \\LaTeX', usetex=True,
         transform=ax1.transAxes, fontsize=16, va='top')
ax2.text(0.05, 0.95,
         'or $\\bf{this}$ (latex math mode with things like '
         '$x_\\mathrm{test}^2$)',
         transform=ax2.transAxes, fontsize=10, va='top')
plt.show()
Not sure if you're still having the issue. I tried your code in Anaconda/Spyder with Python 2.7, and the plots appear with bold labels (A, B, C, D). I agree the issue is probably with the library. Try replacing/updating font_manager.py, or confirm the font files are present in:
Lib\site-packages\matplotlib\mpl-data\fonts\ttf\
I had the same problem and spent quite a few hours today on it. Here's the solution that helped me:
import matplotlib
matplotlib.font_manager._rebuild()
With this, the font cache used by font_manager is rebuilt easily.
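Note that font_manager._rebuild() is a private API and was removed in later matplotlib releases; an alternative sketch (assuming a reasonably recent matplotlib) is to delete the font cache directory and let it regenerate on the next import:
import shutil
import matplotlib

# deleting the cache directory forces matplotlib to rebuild its font list
# the next time it is imported
shutil.rmtree(matplotlib.get_cachedir(), ignore_errors=True)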