What is the @ doing in the following code? [duplicate] - python-2.7

What does the @ symbol do in Python?

An @ symbol at the beginning of a line is used for class and function decorators:
PEP 318: Decorators
Python Decorators
The most common Python decorators are:
@property
@classmethod
@staticmethod
An @ in the middle of a line is probably matrix multiplication:
@ as a binary operator.

Example
class Pizza(object):
    def __init__(self):
        self.toppings = []

    def __call__(self, topping):
        # When using '@instance_of_pizza' before a function definition
        # the function gets passed on to 'topping'.
        self.toppings.append(topping())

    def __repr__(self):
        return str(self.toppings)

pizza = Pizza()

@pizza
def cheese():
    return 'cheese'

@pizza
def sauce():
    return 'sauce'

print pizza
# ['cheese', 'sauce']
This shows that the function/method/class you're defining after a decorator is basically just passed as an argument to the function/method immediately after the @ sign.
First sighting
The microframework Flask introduces decorators from the very beginning in the following format:
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

This in turn translates to:

rule = "/"
view_func = hello
# They go as arguments here in 'flask/app.py'
def add_url_rule(self, rule, endpoint=None, view_func=None, **options):
    pass
Realizing this finally allowed me to feel at peace with Flask.
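That registration pattern can be imitated with a toy sketch (a hypothetical `routes` dict and `route` factory, not Flask's real implementation):

```python
# A toy imitation of Flask's app.route mechanism (not the real implementation).
routes = {}

def route(rule):
    def register(view_func):
        routes[rule] = view_func   # the decorated function is stored, unchanged
        return view_func
    return register

@route("/")
def hello():
    return "Hello World!"

# The decorator ran at definition time and registered the view:
print(routes["/"]())  # -> Hello World!
```

The decorator's only job here is the side effect of recording the function in a registry; the function itself is returned untouched.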

In Python 3.5 you can overload @ as an operator. It is named __matmul__, because it is designed to do matrix multiplication, but it can be anything you want. See PEP 465 for details.
This is a simple implementation of matrix multiplication.
class Mat(list):
    def __matmul__(self, B):
        A = self
        return Mat([[sum(A[i][k] * B[k][j] for k in range(len(B)))
                     for j in range(len(B[0]))] for i in range(len(A))])

A = Mat([[1, 3], [7, 5]])
B = Mat([[6, 8], [4, 2]])
print(A @ B)
This code yields:
[[18, 14], [62, 66]]
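Because __matmul__ is an ordinary special method, @ can be repurposed for things other than matrices. A contrived sketch using it for function composition:

```python
# Contrived example: overloading @ as function composition.
class Composable:
    def __init__(self, f):
        self.f = f
    def __matmul__(self, other):
        # (self @ other)(x) == self.f(other.f(x))
        return Composable(lambda x: self.f(other.f(x)))
    def __call__(self, x):
        return self.f(x)

double = Composable(lambda x: 2 * x)
inc = Composable(lambda x: x + 1)
print((double @ inc)(3))  # 2 * (3 + 1) = 8
```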

This code snippet:

def decorator(func):
    return func

@decorator
def some_func():
    pass

Is equivalent to this code:

def decorator(func):
    return func

def some_func():
    pass

some_func = decorator(some_func)
In the definition of a decorator you can add behaviour, or modify the result, in ways the original function would not normally provide.
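For instance, a decorator can return a replacement function that alters the original result:

```python
def shout(func):
    def wrapper(*args, **kwargs):
        # modify the original return value before handing it back
        return func(*args, **kwargs).upper() + "!"
    return wrapper

@shout
def greet():
    return "hello"

print(greet())  # HELLO!
```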

What does the "at" (@) symbol do in Python?
In short, it is used in decorator syntax and for matrix multiplication.
In the context of decorators, this syntax:
@decorator
def decorated_function():
    """this function is decorated"""

is equivalent to this:

def decorated_function():
    """this function is decorated"""
decorated_function = decorator(decorated_function)
In the context of matrix multiplication, a @ b invokes a.__matmul__(b) - making this syntax:

a @ b

equivalent to

dot(a, b)

and

a @= b

equivalent to

a = dot(a, b)
where dot is, for example, the numpy matrix multiplication function and a and b are matrices.
How could you discover this on your own?
I also do not know what to search for, as searching Python docs or Google does not return relevant results when the @ symbol is included.
If you want a rather complete view of what a particular piece of Python syntax does, look directly at the grammar file. For the Python 3 branch:
~$ grep -C 1 "@" cpython/Grammar/Grammar
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
--
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
--
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power

We can see here that @ is used in three contexts:
decorators
an operator between factors
an augmented assignment operator
Decorator Syntax:
A Google search for "decorator python docs" gives, as one of the top results, the "Compound Statements" section of the "Python Language Reference." Scrolling down to the section on function definitions, which we can find by searching for the word "decorator", we see that... there's a lot to read. But the word "decorator" is a link to the glossary, which tells us:
decorator
A function returning another function, usually applied as a function transformation using the @wrapper syntax. Common examples for decorators are classmethod() and staticmethod().
The decorator syntax is merely syntactic sugar, the following two function definitions are semantically equivalent:

def f(...):
    ...
f = staticmethod(f)

@staticmethod
def f(...):
    ...

The same concept exists for classes, but is less commonly used there. See the documentation for function definitions and class definitions for more about decorators.
So, we see that
@foo
def bar():
    pass

is semantically the same as:

def bar():
    pass
bar = foo(bar)
They are not exactly the same because Python evaluates the foo expression (which could be a dotted lookup and a function call) before bar with the decorator (@) syntax, but evaluates the foo expression after bar in the other case.
(If this difference makes a difference in the meaning of your code, you should reconsider what you're doing with your life, because that would be pathological.)
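A minimal way to observe this ordering is to record side effects in a list (names here are made up for the demonstration):

```python
order = []

def make_decorator():
    # this runs when the decorator expression is evaluated
    order.append("decorator expression evaluated")
    def decorator(func):
        # this runs when the freshly created function is passed in
        order.append("function passed to decorator")
        return func
    return decorator

@make_decorator()
def f():
    pass

print(order)
# ['decorator expression evaluated', 'function passed to decorator']
```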
Stacked Decorators
If we go back to the function definition syntax documentation, we see:
@f1(arg)
@f2
def func(): pass

is roughly equivalent to

def func(): pass
func = f1(arg)(f2(func))
This is a demonstration that we can call a function that's a decorator first, as well as stack decorators. Functions, in Python, are first class objects - which means you can pass a function as an argument to another function, and return functions. Decorators do both of these things.
If we stack decorators, the function, as defined, gets passed first to the decorator immediately above it, then the next, and so on.
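This bottom-up order can be observed directly (a small sketch with made-up names):

```python
def wrap(tag):
    def deco(func):
        def inner():
            # label the result with this decorator's tag
            return "{}({})".format(tag, func())
        return inner
    return deco

@wrap("outer")
@wrap("inner")
def core():
    return "core"

print(core())  # outer(inner(core))
```

`core` is first wrapped by `wrap("inner")` (the decorator nearest the definition), and that result is then wrapped by `wrap("outer")`.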
That about sums up the usage of @ in the context of decorators.
The Operator, @
In the lexical analysis section of the language reference, we have a section on operators, which includes @, making it an operator as well:

The following tokens are operators:
+       -       *       **      /       //      %      @
<<      >>      &       |       ^       ~
<       >       <=      >=      ==      !=
and in the next page, the Data Model, we have the section Emulating Numeric Types,
object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__matmul__(self, other)
object.__truediv__(self, other)
object.__floordiv__(self, other)
[...]
These methods are called to implement the binary arithmetic operations (+, -, *, @, /, //, [...])
And we see that __matmul__ corresponds to @. If we search the documentation for "matmul" we get a link to What's New in Python 3.5 with "matmul" under the heading "PEP 465 - A dedicated infix operator for matrix multiplication".
it can be implemented by defining __matmul__(), __rmatmul__(), and
__imatmul__() for regular, reflected, and in-place matrix multiplication.
(So now we learn that @= is the in-place version). It further explains:
Matrix multiplication is a notably common operation in many fields of mathematics, science, engineering, and the addition of @ allows writing cleaner code:

S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

instead of:

S = dot((dot(H, beta) - r).T,
        dot(inv(dot(dot(H, V), H.T)), dot(H, beta) - r))
While this operator can be overloaded to do almost anything, in numpy, for example, we would use this syntax to calculate the inner and outer product of arrays and matrices:
>>> from numpy import array, matrix
>>> array([[1,2,3]]).T @ array([[1,2,3]])
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])
>>> array([[1,2,3]]) @ array([[1,2,3]]).T
array([[14]])
>>> matrix([1,2,3]).T @ matrix([1,2,3])
matrix([[1, 2, 3],
        [2, 4, 6],
        [3, 6, 9]])
>>> matrix([1,2,3]) @ matrix([1,2,3]).T
matrix([[14]])
In-place matrix multiplication: @=
While researching the prior usage, we learn that there is also the in-place matrix multiplication. If we attempt to use it, we may find it is not yet implemented for numpy:

>>> m = matrix([1,2,3])
>>> m @= m.T
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: In-place matrix multiplication is not (yet) supported. Use 'a = a @ b' instead of 'a @= b'.

When it is implemented, I would expect the result to look like this:

>>> m = matrix([1,2,3])
>>> m @= m.T
>>> m
matrix([[14]])
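For a user-defined type, @= dispatches to __imatmul__ when it is defined. A sketch extending the plain-list Mat idea from earlier with an in-place variant:

```python
class Mat(list):
    def __matmul__(self, B):
        A = self
        return Mat([[sum(A[i][k] * B[k][j] for k in range(len(B)))
                     for j in range(len(B[0]))] for i in range(len(A))])

    def __imatmul__(self, B):
        # in-place variant: reuse __matmul__ and replace our own contents
        self[:] = self @ B
        return self

m = Mat([[1, 2], [3, 4]])
m @= Mat([[1, 0], [0, 1]])   # multiply by the identity matrix in place
print(m)  # [[1, 2], [3, 4]]
```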

What does the "at" (@) symbol do in Python?
The @ symbol is syntactic sugar that Python provides for applying a decorator, so to paraphrase the question: what does a decorator do in Python?
Put simply, a decorator allows you to modify a given function's behaviour without touching its definition.
This is the most common case when you import a wonderful package from a third party: you can see it and use it, but you cannot touch its innermost workings.
Here is a quick example.
Suppose I define a read_a_book function in IPython:

In [9]: def read_a_book():
   ...:     return "I am reading the book: "
   ...:
In [10]: read_a_book()
Out[10]: 'I am reading the book: '
You see, I forgot to add a title to it.
How do I solve such a problem? Of course, I could re-define the function as:

def read_a_book():
    return "I am reading the book: 'Python Cookbook'"

Nevertheless, what if I'm not allowed to manipulate the original function, or if there are thousands of such functions to be handled?
Solve the problem by thinking differently and defining a new function:

def add_a_book(func):
    def wrapper():
        return func() + "Python Cookbook"
    return wrapper

Then employ it:

In [14]: read_a_book = add_a_book(read_a_book)
In [15]: read_a_book()
Out[15]: 'I am reading the book: Python Cookbook'

Tada, you see, I amended read_a_book without touching its inner workings. Nothing stops me when equipped with a decorator.
What about @?

@add_a_book
def read_a_book():
    return "I am reading the book: "

In [17]: read_a_book()
Out[17]: 'I am reading the book: Python Cookbook'

@add_a_book is just a fancy and handy way to say read_a_book = add_a_book(read_a_book). It's syntactic sugar; there's nothing fancier about it.

If you are referring to some code in a Python notebook which is using the NumPy library, then the @ operator means matrix multiplication. For example:

import numpy as np
def forward(xi, W1, b1, W2, b2):
    z1 = W1 @ xi + b1
    a1 = sigma(z1)
    z2 = W2 @ a1 + b2
    return z2, a1
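A self-contained version of the sketch, assuming sigma is a logistic sigmoid (the shapes below are invented for illustration):

```python
import numpy as np

def sigma(z):
    # logistic sigmoid, assumed by the snippet above
    return 1.0 / (1.0 + np.exp(-z))

def forward(xi, W1, b1, W2, b2):
    z1 = W1 @ xi + b1      # matrix-vector product via @
    a1 = sigma(z1)
    z2 = W2 @ a1 + b2
    return z2, a1

rng = np.random.default_rng(0)
xi = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
z2, a1 = forward(xi, W1, b1, W2, b2)
print(z2.shape, a1.shape)  # (2,) (4,)
```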

Decorators were added in Python to make function and method wrapping (a function that receives a function and returns an enhanced one) easier to read and understand. The original use case was to be able to define the methods as class methods or static methods on the head of their definition. Without the decorator syntax, it would require a rather sparse and repetitive definition:
class WithoutDecorators:
    def some_static_method():
        print("this is static method")
    some_static_method = staticmethod(some_static_method)

    def some_class_method(cls):
        print("this is class method")
    some_class_method = classmethod(some_class_method)
If the decorator syntax is used for the same purpose, the code is shorter and easier to understand:
class WithDecorators:
    @staticmethod
    def some_static_method():
        print("this is static method")

    @classmethod
    def some_class_method(cls):
        print("this is class method")
General syntax and possible implementations
The decorator is generally a named object (lambda expressions are not allowed) that accepts a single argument when called (it will be the decorated function) and returns another callable object. "Callable" is used here instead of "function" with premeditation. While decorators are often discussed in the scope of methods and functions, they are not limited to them. In fact, anything that is callable (any object that implements the __call__ method is considered callable) can be used as a decorator, and often the objects returned by them are not simple functions but instances of more complex classes implementing their own __call__ method.
The decorator syntax is simply syntactic sugar. Consider the following decorator usage:

@some_decorator
def decorated_function():
    pass

This can always be replaced by an explicit decorator call and function reassignment:

def decorated_function():
    pass
decorated_function = some_decorator(decorated_function)
However, the latter is less readable and also very hard to understand if multiple decorators are used on a single function.
Decorators can be used in multiple different ways as shown below:
As a function
There are many ways to write custom decorators, but the simplest way is to write a function that returns a subfunction that wraps the original function call.
The generic pattern is as follows:

def mydecorator(function):
    def wrapped(*args, **kwargs):
        # do some stuff before the original
        # function gets called
        result = function(*args, **kwargs)
        # do some stuff after function call and
        # return the result
        return result
    # return wrapper as a decorated function
    return wrapped
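In practice it is worth wrapping the inner function with functools.wraps so the decorated function keeps its original name and docstring; a sketch of the pattern above in use:

```python
import functools

def mydecorator(function):
    @functools.wraps(function)  # preserve the wrapped function's metadata
    def wrapped(*args, **kwargs):
        result = function(*args, **kwargs)
        return result
    return wrapped

@mydecorator
def add(a, b):
    """Add two numbers."""
    return a + b

print(add(1, 2))     # 3
print(add.__name__)  # 'add', not 'wrapped', thanks to functools.wraps
```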
As a class
While decorators almost always can be implemented using functions, there are some situations when using user-defined classes is a better option. This is often true when the decorator needs complex parametrization or it depends on a specific state.
The generic pattern for a nonparametrized decorator as a class is as follows:

class DecoratorAsClass:
    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        # do some stuff before the original
        # function gets called
        result = self.function(*args, **kwargs)
        # do some stuff after function call and
        # return the result
        return result
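Applying the class works the same way as a function decorator; a minimal usage sketch:

```python
class DecoratorAsClass:
    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        result = self.function(*args, **kwargs)
        return result

@DecoratorAsClass
def square(x):
    return x * x

# 'square' is now an instance of DecoratorAsClass, called via __call__
print(square(4))  # 16
```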
Parametrizing decorators
In real code, there is often a need to use decorators that can be parametrized. When the function is used as a decorator, then the solution is simple—a second level of wrapping has to be used. Here is a simple example of the decorator that repeats the execution of a decorated function the specified number of times every time it is called:
def repeat(number=3):
    """Cause decorated function to be repeated a number of times.

    Last value of original function call is returned as a result.

    :param number: number of repetitions, 3 if not specified
    """
    def actual_decorator(function):
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(number):
                result = function(*args, **kwargs)
            return result
        return wrapper
    return actual_decorator
The decorator defined this way can accept parameters:
>>> @repeat(2)
... def foo():
...     print("foo")
...
>>> foo()
foo
foo
Note that even if the parametrized decorator has default values for its arguments, the parentheses after its name are required. The correct way to use the preceding decorator with default arguments is as follows:
>>> @repeat()
... def bar():
...     print("bar")
...
>>> bar()
bar
bar
bar
Finally, let's look at decorators used with properties.
Properties
Properties provide a built-in descriptor type that knows how to link an attribute to a set of methods. A property takes four optional arguments: fget, fset, fdel, and doc. The last one can be provided to define a docstring that is linked to the attribute as if it were a method. Here is an example of a Rectangle class that can be controlled either by direct access to attributes that store two corner points or by using the width and height properties:
class Rectangle:
    def __init__(self, x1, y1, x2, y2):
        self.x1, self.y1 = x1, y1
        self.x2, self.y2 = x2, y2

    def _width_get(self):
        return self.x2 - self.x1

    def _width_set(self, value):
        self.x2 = self.x1 + value

    def _height_get(self):
        return self.y2 - self.y1

    def _height_set(self, value):
        self.y2 = self.y1 + value

    width = property(
        _width_get, _width_set,
        doc="rectangle width measured from left"
    )
    height = property(
        _height_get, _height_set,
        doc="rectangle height measured from top"
    )

    def __repr__(self):
        return "{}({}, {}, {}, {})".format(
            self.__class__.__name__,
            self.x1, self.y1, self.x2, self.y2
        )
The best syntax for creating properties is using property as a decorator. This reduces the number of method signatures inside the class and makes the code more readable and maintainable. With decorators the above class becomes:
class Rectangle:
    def __init__(self, x1, y1, x2, y2):
        self.x1, self.y1 = x1, y1
        self.x2, self.y2 = x2, y2

    @property
    def width(self):
        """rectangle width measured from left"""
        return self.x2 - self.x1

    @width.setter
    def width(self, value):
        self.x2 = self.x1 + value

    @property
    def height(self):
        """rectangle height measured from top"""
        return self.y2 - self.y1

    @height.setter
    def height(self, value):
        self.y2 = self.y1 + value
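A short usage sketch (repeating a trimmed version of the class so it runs on its own):

```python
class Rectangle:
    def __init__(self, x1, y1, x2, y2):
        self.x1, self.y1 = x1, y1
        self.x2, self.y2 = x2, y2

    @property
    def width(self):
        """rectangle width measured from left"""
        return self.x2 - self.x1

    @width.setter
    def width(self, value):
        self.x2 = self.x1 + value

r = Rectangle(0, 0, 10, 5)
print(r.width)   # 10 -- computed on the fly by the getter
r.width = 4      # routed through the setter
print(r.x2)      # 4  -- the setter moved the right edge
```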

Starting with Python 3.5, '@' is used as a dedicated infix symbol for matrix multiplication (PEP 465 -- see https://www.python.org/dev/peps/pep-0465/).

@ can be a math operator or a decorator, but what you mean is a decorator.
This code:

def func(f):
    return f

func(lambda: "Hello World")()

using decorators can be written like:

def func(f):
    return f

@func
def name():
    return "Hello World"

name()
Decorators can have arguments.
You can see this GeeksforGeeks post: https://www.geeksforgeeks.org/decorators-in-python/

It indicates that you are using a decorator. Here is Bruce Eckel's example from 2008.

A Python decorator is like a wrapper of a function or a class. That may still sound too conceptual, so here is a definition:
def function_decorator(func):
    def wrapped_func():
        # Do something before the function is executed
        func()
        # Do something after the function has been executed
    return wrapped_func
The above code is a definition of a decorator that decorates a function.
function_decorator is the name of the decorator.
wrapped_func is the name of the inner function, which is only used in this decorator definition. func is the function that is being decorated.
In the inner function wrapped_func, we can do whatever we want before and after func is called. After the decorator is defined, we simply use it as follows.
@function_decorator
def func():
    pass
Then, whenever we call the function func, the behaviours we’ve defined in the decorator will also be executed.
EXAMPLE:

from functools import wraps

def mydecorator(f):
    @wraps(f)
    def wrapped(*args, **kwargs):
        print "Before decorated function"
        r = f(*args, **kwargs)
        print "After decorated function"
        return r
    return wrapped

@mydecorator
def myfunc(myarg):
    print "my function", myarg
    return "return value"

r = myfunc('asdf')
print r

Output:
Before decorated function
my function asdf
After decorated function
return value

To say what others have said in a different way: yes, it is a decorator.
In Python, it's like:
Creating a function (which follows under the @ call)
Calling another function to operate on your created function. This returns a new function. The function that you call is the argument of the @.
Replacing the function defined with the new function returned.
This can be used for all kinds of useful things, made possible because functions are first-class objects, not just sequences of instructions.
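The steps above can be sketched as (names invented for the demonstration):

```python
def emphasize(func):              # step 2: operates on the created function
    def new_func():
        return func().upper()
    return new_func               # ...and returns a new function

@emphasize                        # step 3: replaces 'speak' with the new function
def speak():                      # step 1: the function created under the @
    return "quiet"

print(speak())  # QUIET
```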

The @ symbol is also used to access variables inside a plydata / pandas dataframe query, pandas.DataFrame.query.
Example:

df = pandas.DataFrame({'foo': [1, 2, 15, 17]})
y = 10
df >> query('foo > @y')  # plydata
df.query('foo > @y')     # pandas

Related

how to fit a method belonging to an instance with pymc3?

I failed to fit a method belonging to an instance of a class, as a Deterministic function, with PyMC3. Can you show me how to do that?
For simplicity, my case is summarised below with a simple example. In reality, my constraint is that everything is made through a GUI, and actions like 'find_MAP' should be inside methods linked to PyQt buttons.
I want to fit the function 'FunctionIWantToFit' over the data points. Problem: the following code:
import numpy as np
import pymc3 as pm3
from scipy.interpolate import interp1d
import theano.tensor as tt
import theano.compile

class cprofile:
    def __init__(self):
        self.observed_x = np.array([0.3,1.4,3.1,5,6.8,9,13.4,17.1])
        self.observations = np.array([6.25,2.75,1.25,1.25,1.5,1.75,1.5,1])
        self.x = np.arange(0,18,0.5)

    @theano.compile.ops.as_op(itypes=[tt.dscalar,tt.dscalar,tt.dscalar],
                              otypes=[tt.dvector])
    def FunctionIWantToFit(self,t,y,z):
        # can be complicated but simple in this example
        # among other things, this FunctionIWantToFit depends on a bunch of
        # variables and methods that belong to this instance of the class cprofile,
        # so it cannot simply be put outside the class! (like in the following example)
        val = t + y*self.x + z*self.x**2
        interp_values = interp1d(self.x, val)
        return interp_values(self.observed_x)

    def doMAP(self):
        model = pm3.Model()
        with model:
            t = pm3.Uniform("t",0,5)
            y = pm3.Uniform("y",0,5)
            z = pm3.Uniform("z",0,5)
            MyModel = pm3.Deterministic('MyModel',self.FunctionIWantToFit(t,y,z))
            obs = pm3.Normal('obs',mu=MyModel,sd=0.1,observed=self.observations)
            start = pm3.find_MAP()
            print('start: ',start)

test = cprofile()
test.doMAP()
gives the following error:
Traceback (most recent call last):
  File "<ipython-input-15-3dfb7aa09f84>", line 1, in <module>
    runfile('/Users/steph/work/profiles/GUI/pymc3/so.py', wdir='/Users/steph/work/profiles/GUI/pymc3')
  File "/Users/steph/anaconda/lib/python3.5/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "/Users/steph/anaconda/lib/python3.5/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "/Users/steph/work/profiles/GUI/pymc3/so.py", line 44, in <module>
    test.doMAP()
  File "/Users/steph/work/profiles/GUI/pymc3/so.py", line 38, in doMAP
    MyModel = pm3.Deterministic('MyModel',self.FunctionIWantToFit(x,y,z))
  File "/Users/steph/anaconda/lib/python3.5/site-packages/theano/gof/op.py", line 668, in __call__
    required = thunk()
  File "/Users/steph/anaconda/lib/python3.5/site-packages/theano/gof/op.py", line 912, in rval
    r = p(n, [x[0] for x in i], o)
  File "/Users/steph/anaconda/lib/python3.5/site-packages/theano/compile/ops.py", line 522, in perform
    outs = self.__fn(*inputs)
TypeError: FunctionIWantToFit() missing 1 required positional argument: 'z'
What’s wrong ?
remark 1: I systematically get an error message concerning the last parameter of 'FunctionIWantToFit'. Here it's 'z', but if I remove z from the signature, the error message concerns 'y' (identical except for the name of the variable). If I add a 4th variable 'w' to the signature, the error message concerns 'w' (identical except for the name of the variable).
remark 2: it looks like I missed something very basic in theano or pymc3, because when I put 'FunctionIWantToFit' outside the class, it works. See the following example.
class cprofile:
    def __init__(self):
        self.observations = np.array([6.25,2.75,1.25,1.25,1.5,1.75,1.5,1])

    def doMAP(self):
        model = pm3.Model()
        with model:
            t = pm3.Uniform("t",0,5)
            y = pm3.Uniform("y",0,5)
            z = pm3.Uniform("z",0,5)
            MyModel = pm3.Deterministic('MyModel',FunctionIWantToFit(t,y,z))
            obs = pm3.Normal('obs',mu=MyModel,sd=0.1,observed=self.observations)
            start = pm3.find_MAP()
            print('start: ',start)

@theano.compile.ops.as_op(itypes=[tt.dscalar,tt.dscalar,tt.dscalar],
                          otypes=[tt.dvector])
def FunctionIWantToFit(t,y,z):
    observed_x = np.array([0.3,1.4,3.1,5,6.8,9,13.4,17.1])
    x = np.arange(0,18,0.5)
    val = t + y*x + z*x**2
    interp_values = interp1d(x, val)
    return interp_values(observed_x)

test = cprofile()
test.doMAP()
gives:
Warning: gradient not available.(E.g. vars contains discrete variables). MAP estimates may not be accurate for the default parameters. Defaulting to non-gradient minimization fmin_powell.
WARNING:pymc3:Warning: gradient not available.(E.g. vars contains discrete variables). MAP estimates may not be accurate for the default parameters. Defaulting to non-gradient minimization fmin_powell.
Optimization terminated successfully.
Current function value: 1070.673818
Iterations: 4
Function evaluations: 179
start: {'t_interval_': array(-0.27924150484602733), 'y_interval_': array(-9.940000425802811), 'z_interval_': array(-12.524909223913992)}
Except that I don't know how to do that without big modifications in several modules, since the real 'FunctionIWantToFit' depends on a bunch of variables and methods that belong to this instance of the class cprofile.
In fact, I'm not even sure I know how to do that, since 'FunctionIWantToFit' would then have to take objects as arguments (which I currently use via self), and I'm not sure how to do that with the theano decorator.
So I would prefer to avoid this solution... unless necessary. In that case I need explanations on how to implement it.
Added on April 9, 2017:
Even without the interpolation question, it doesn't work, because I must have missed something obvious with theano and/or pymc3. Please can you explain the problem? I just want to compare model and data. First, it's such a shame being stuck with pymc2; second, I'm sure I'm not the only one with such a basic problem.
For example, let's consider variations around this very basic code:
import numpy as np
import theano
import pymc3
theano.config.compute_test_value = 'ignore'
theano.config.on_unused_input = 'ignore'

class testclass:
    x = np.arange(0,18,0.5)
    observed_x = np.array([0.3,1.4,3.1,5,6.8,9,13.4,17.1])
    observations = np.array([6.25,2.75,1.25,1.25,1.5,1.75,1.5,1])

    def testfunc(self,t,y,z):
        t2 = theano.tensor.dscalar('t2')
        y2 = theano.tensor.dscalar('y2')
        z2 = theano.tensor.dscalar('z2')
        val = t2 + y2 * self.observed_x + z2 * self.observed_x**2
        f = theano.function([t2,y2,z2],val)
        return f

test = testclass()
model = pymc3.Model()
with model:
    t = pymc3.Uniform("t",0,5)
    y = pymc3.Uniform("y",0,5)
    z = pymc3.Uniform("z",0,5)
with model:
    MyModel = pymc3.Deterministic('MyModel',test.testfunc(t,y,z))
with model:
    obs = pymc3.Normal('obs',mu=MyModel,sd=0.1,observed=test.observations)
This code fails at the last line with the error message: TypeError: unsupported operand type(s) for -: 'TensorConstant' and 'Function'
If I change 'testfunc' into:
def testfunc(self,t,y,z):
    t2 = theano.tensor.dscalar('t2')
    y2 = theano.tensor.dscalar('y2')
    z2 = theano.tensor.dscalar('z2')
    val = t2 + y2 * self.observed_x + z2 * self.observed_x**2
    f = theano.function([t2,y2,z2],val)
    fval = f(t,y,z,self.observed_x)
    return fval
The code fails at the 'MyModel =' line with the error: TypeError: ('Bad input argument to theano function with name "/Users/steph/work/profiles/GUI/pymc3/theanotest170409.py:32" at index 0(0-based)', 'Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?')
If I go back to the original 'testfunc' but change the last 'with model' lines to:
with model:
    fval = test.testfunc(t,y,z)
    obs = pymc3.Normal('obs',mu=fval,sd=0.1,observed=test.observations)
the error is the same as the first one.
I presented here only 3 tries, but I would like to underline that I tried many, many combinations, simpler and simpler until these ones, over hours. I have the feeling pymc3 shows a huge change of spirit compared to pymc2, which I didn't get and which is poorly documented...
Ok, let's do this by parts. First I'll explain the error messages that you got, and then I'll tell you how I would proceed.
On the first question, the direct reason why you're getting a complaint on the missing parameters is because your function, defined inside the class, takes as input (self, t, y, z), while you're declaring it in the op decorator as having only three inputs (t, y, z). You would have to declare the inputs as being four in your decorator to account for the class instance itself.
On "added on april 9, 2017:", the first code will not work because the output of test.testfunc(t,y,z) is a theano function itself. pymc3.Deterministic is expecting it to output theano variables (or python variables). Instead, make test.testfun output val = t2 + y2 * self.observed_x + z2 * self.observed_x**2 directly.
Then, on "if I change 'testfunc' into:", you get that error because of the way pymc3 is trying to work with theano functions. Long story short, the problem is that when pymc3 is making use of this function, it will send it theano variables, while fval is expecting numerical variables (numpy arrays or other). As in the previous paragraph, you just need to output val directly: no need to compile any theano function for this.
As for how I would proceed, I would try to declare the class instance as input to the theano decorator. Unfortunately, I can't find any documentation on how to do this and it might actually be impossible (see this old post, for example).
Then I would try to pass everything the function needs as inputs and define it outside of the class. This could be quite cumbersome and if it needs methods as input, then you run into additional problems.
Another way of doing this is to create a child class of theano.gof.Op whose init method takes your class (or rather an instance of it) as input and then define your perform() method. This would look something like this:
class myOp(theano.gof.Op):
    """These are the inputs/outputs you used in your as_op decorator."""
    itypes = [tt.dscalar, tt.dscalar, tt.dscalar]
    otypes = [tt.dvector]

    def __init__(self, myclass):
        """myclass would be the class you had from before, which
        you called cprofile in your first block of code."""
        self.myclass = myclass

    def perform(self, node, inputs, outputs):
        """Here you define your operations, but instead of
        calling everything from that class with self.methods(), you
        just do self.myclass.methods().

        Here, 'inputs' is a list with the three inputs you declared,
        so you need to unpack them. 'outputs' is something similar, so
        the function doesn't actually return anything, but saves all
        to outputs. 'node' is magic juice that keeps the world
        spinning around; you need not do anything with it, but always
        include it.
        """
        t, y, z = inputs[0], inputs[1], inputs[2]
        outputs[0][0] = t + y*self.myclass.x + z*self.myclass.x**2

myop = myOp(myclass)
Once you have done this, you can use myop as your Op for the rest of your code. Note that some parts are missing. You can check my example for more details.
As for the example, you do not need to define the grad() method. Because of this, you can do all operations in perform() in pure python, if that helps.
Alternatively, and I say this with a smirk on my face, if you have access to the definition of the class you're using, you can also make it inherit from theano.gof.Op, create the perform() method (as in my other example, where you left a message) and try to use it like that. It could create conflicts with whatever else you're doing with that class and it's probably quite hard to get right, but might be fun to try.
theano.compile.ops.as_op is just a short-hand for defining simple Theano Ops. If you want to code more involved ones, it is better to define it in a separate class. Objects of this class could of course take a reference to an instance of your cprofile if that really is necessary.
http://deeplearning.net/software/theano/extending/extending_theano.html
I finally converged toward the successful code below:
import numpy as np
import theano
from scipy.interpolate import interp1d
import pymc3 as pm3

theano.config.compute_test_value = 'ignore'
theano.config.on_unused_input = 'ignore'

class cprofile:
    observations = np.array([6.25,2.75,1.25,1.25,1.5,1.75,1.5,1])
    x = np.arange(0,18,0.5)
    observed_x = np.array([0.3,1.4,3.1,5,6.8,9,13.4,17.1])

    def doMAP(self):
        model = pm3.Model()
        with model:
            t = pm3.Uniform("t",0,5)
            y = pm3.Uniform("y",0,5)
            z = pm3.Uniform("z",0,5)
            obs = pm3.Normal('obs',
                             mu=FunctionIWantToFit(self)(t,y,z),
                             sd=0.1, observed=self.observations)
            start = pm3.find_MAP()
            print('start: ',start)

class FunctionIWantToFit(theano.gof.Op):
    itypes = [theano.tensor.dscalar,
              theano.tensor.dscalar,
              theano.tensor.dscalar]
    otypes = [theano.tensor.dvector]

    def __init__(self, cp):
        self.cp = cp  # note cp is an instance of the 'cprofile' class

    def perform(self, node, inputs, outputs):
        t, y, z = inputs[0], inputs[1], inputs[2]
        xxx = self.cp.x
        temp = t + y*xxx + z*xxx**2
        interpolated_concentration = interp1d(xxx, temp)
        outputs[0][0] = interpolated_concentration(self.cp.observed_x)

testcp = cprofile()
testcp.doMAP()
Thanks to the answer by Dario, because I was too slow to understand the first answer by myself. I get it retrospectively, but I strongly think the pymc3 doc is painfully unclear. It should contain very simple and illustrative examples.
However, I didn't succeed in doing anything that works following the comment by Chris. Could anyone explain and/or give an example?
One more thing: I don't know whether my example above is efficient or could be simplified. In particular, it gives me the impression the instance 'testcp' is copied twice in memory. More comments/answers are welcome to go further.

How to convert string to function reference in python

I have a class that transforms some values via a user-specified function. The reference to the function is passed in the constructor and saved as an attribute. I want to be able to pickle or make copies of the class. In the __getstate__() method, I convert the dictionary entry to a string to make it safe for pickling or copying. However, in the __setstate__() method I'd like to convert back from string to function reference, so the new class can transform values.
class transformer(object):
    def __init__(self, values=[1], transform_fn=np.sum):
        self.values = deepcopy(values)
        self.transform_fn = transform_fn

    def transform(self):
        return self.transform_fn(self.values)

    def __getstate__(self):
        obj_dict = self.__dict__.copy()
        # convert function reference to string
        obj_dict['transform_fn'] = str(self.transform_fn)
        return obj_dict

    def __setstate__(self, obj_dict):
        self.__dict__.update(obj_dict)
        # how to convert back from string to function reference?
The function reference that is passed can be any function, so solutions involving a dictionary with a fixed set of function references are not practical/flexible enough. I would use it like the following.
from copy import deepcopy
import numpy as np
my_transformer = transformer(values=[0,1], transform_fn=np.exp)
my_transformer.transform()
This outputs: array([ 1. , 2.71828183])
new_transformer = deepcopy(my_transformer)
new_transformer.transform()
This gives me: TypeError: 'str' object is not callable, as expected.
You could use dir to access names in a given scope, and then getattr to retrieve them.
For example, if you know the function is in numpy:
import numpy
attrs = [x for x in dir(numpy) if '__' not in x]  # I like to ignore private vars
if obj_dict['transform_fn'] in attrs:
    fn = getattr(numpy, obj_dict['transform_fn'])
else:
    print 'uhoh'
This could be extended to look in other modules / scopes.
If you want to search in the current scope, you can do the following (extended from here):
import sys
this = sys.modules[__name__]
attrs = dir(this)
if obj_dict['transform_fn'] in attrs:
    fn = getattr(this, obj_dict['transform_fn'])
else:
    print 'Damn, well that sucks.'
To search submodules / imported modules you could iterate over attrs based on type (potentially recursively, though note that this is an attr of this).
If you're asking the same question I came here for: the answer is simply to use eval() to evaluate the name.
>>> ref = eval('name')
This returns whatever 'name' references in the scope where the eval is executed; you can then check whether that reference is a function.

can django view function override each other?

I'm going through the django tutorials and I was wondering what happens when you have 2 functions with the same name in views.py?
for example:
def results(request, poll_id):
    p = get_object_or_404(Poll, pk=poll_id)
    return render_to_response('polls/results.html', {'poll': p})

def results(request, poll_id):
    return HttpResponse("You're looking at the results of poll %s." % poll_id)
When I ran the code, the bottom function was the one that was called. How does this work?
In Python, methods and functions can take any number of arguments, which removes the need for different function "signatures" to support different kinds of arguments, the common use case for function overloading. See 4.7.3. Arbitrary Argument Lists in the Python documentation.
The reason the second method gets called is that you simply overwrite the method definition when you define it again with the same name (and same argument list). For Python, it is the same as:
>>> x = 1
>>> x = 'Hello'
>>> print x
Hello
You just re-defined the same method again, so it uses the last definition.
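The same rebinding can be demonstrated with plain function definitions, no Django required (a minimal sketch with a simplified signature):

```python
def results(poll_id):
    return "rendered results page for poll %s" % poll_id

def results(poll_id):  # rebinds the name: the first definition is discarded
    return "You're looking at the results of poll %s." % poll_id

print(results(5))  # You're looking at the results of poll 5.
```

By the time the module has finished executing, the name `results` points only at the last definition, which is exactly what Django's URL resolver will see.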
If I'm not mistaken, you need to use classes if you want to extend or override the view method... or use an "if" statement :)
https://docs.djangoproject.com/en/dev/topics/class-based-views/
In your example, that's just Python's normal behaviour: it reads the file from the top, compiles it and uses it.

Extending SWIG builtin classes

The -builtin option of SWIG has the advantage of being faster, and of being exempt of a bug with multiple inheritance.
The drawback is that I can't set any attribute on the generated classes or on any of their subclasses:
- I can extend a Python builtin type like list, without hassle, by subclassing it:

class Thing(list):
    pass

Thing.myattr = 'anything'  # No problem
- However, using the same approach on a SWIG builtin type, the following happens:

class Thing(SWIGBuiltinClass):
    pass

Thing.myattr = 'anything'
AttributeError: type object 'Thing' has no attribute 'myattr'
How could I work around this problem ?
I found a solution quite by accident. I was experimenting with metaclasses, thinking I could manage to override the setattr and getattr functions of the builtin type in the subclass.
Doing this I discovered the builtins already have a metaclass (SwigPyObjectType), so my metaclass had to inherit it.
And that's it. This alone solved the problem. I would be glad if someone could explain why :
SwigPyObjectType = type(SWIGBuiltinClass)

class Meta(SwigPyObjectType):
    pass

class Thing(SWIGBuiltinClass):
    __metaclass__ = Meta

Thing.myattr = 'anything'  # Works fine this time
The problem comes from how swig implemented the classes in "-builtin" to be just like builtin classes (hence the name).
builtin classes are not extensible - try to add or modify a member of "str" and python won't let you modify the attribute dictionary.
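The claim about builtin classes is easy to verify in plain Python: attaching an attribute to `str` itself raises a TypeError, while a subclass (as in the `list` example from the question) accepts it without complaint. The method name `shout` is made up for illustration.

```python
# builtins reject new attributes outright
try:
    str.shout = lambda self: self.upper() + "!"
except TypeError as exc:
    print("str is not extensible:", exc)

# subclassing lifts the restriction, as with the list example above
class MyStr(str):
    pass

MyStr.shout = lambda self: self.upper() + "!"
print(MyStr("hello").shout())  # HELLO!
```

SWIG's `-builtin` classes behave like `str` here, which is why the metaclass trick (or the dictionary hack below) is needed.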
I do have a solution I've been using for several years.
I'm not sure I can recommend it because:
It's arguably evil - the moral equivalent of casting away const-ness in C/C++
It's unsupported and could break in future python releases
I haven't tried it with python3
I would be a bit uncomfortable using "black-magic" like this in production code - it could break and is certainly obscure - but at least one giant corporation IS using this in production code
But.. I love how well it works to solve some obscure features we wanted for debugging.
The original idea is not mine, I got it from:
https://gist.github.com/mahmoudimus/295200 by Mahmoud Abdelkader
The basic idea is to access the const dictionary in the swig-created type object as a non-const dictionary and add/override any desired methods.
FYI, the technique of runtime modification of classes is called monkeypatching, see https://en.wikipedia.org/wiki/Monkey_patch
First - here's "monkeypatch.py":
''' monkeypatch.py:
    I got this from https://gist.github.com/mahmoudimus/295200 by Mahmoud Abdelkader,
    his comment: "found this from Armin R. on Twitter, what a beautiful gem ;)"
    I made a few changes for coding style preferences
    - Rudy Albachten April 30 2015
'''
import ctypes
from types import DictProxyType, MethodType

# figure out the size of _Py_ssize_t
_Py_ssize_t = ctypes.c_int64 if hasattr(ctypes.pythonapi, 'Py_InitModule4_64') else ctypes.c_int

# python without tracing
class _PyObject(ctypes.Structure):
    pass
_PyObject._fields_ = [
    ('ob_refcnt', _Py_ssize_t),
    ('ob_type', ctypes.POINTER(_PyObject))
]

# fixup for python with tracing
if object.__basicsize__ != ctypes.sizeof(_PyObject):
    class _PyObject(ctypes.Structure):
        pass
    _PyObject._fields_ = [
        ('_ob_next', ctypes.POINTER(_PyObject)),
        ('_ob_prev', ctypes.POINTER(_PyObject)),
        ('ob_refcnt', _Py_ssize_t),
        ('ob_type', ctypes.POINTER(_PyObject))
    ]

class _DictProxy(_PyObject):
    _fields_ = [('dict', ctypes.POINTER(_PyObject))]

def reveal_dict(proxy):
    if not isinstance(proxy, DictProxyType):
        raise TypeError('dictproxy expected')
    dp = _DictProxy.from_address(id(proxy))
    ns = {}
    ctypes.pythonapi.PyDict_SetItem(ctypes.py_object(ns), ctypes.py_object(None), dp.dict)
    return ns[None]

def get_class_dict(cls):
    d = getattr(cls, '__dict__', None)
    if d is None:
        raise TypeError('given class does not have a dictionary')
    if isinstance(d, DictProxyType):
        return reveal_dict(d)
    return d

def test():
    import random
    d = get_class_dict(str)
    d['foo'] = lambda x: ''.join(random.choice((c.upper, c.lower))() for c in x)
    print "and this is monkey patching str".foo()

if __name__ == '__main__':
    test()
Here's a contrived example using monkeypatch:
I have a class "myclass" in module "mystuff" wrapped with swig -python -builtin
I want to add an extra runtime method "namelen" that returns the length of the name returned by myclass.getName()
import mystuff
import monkeypatch

# add a "namelen" method to all "myclass" objects
def namelen(self):
    return len(self.getName())
d = monkeypatch.get_class_dict(mystuff.myclass)
d['namelen'] = namelen

x = mystuff.myclass("xxxxxxxx")
print "namelen:", x.namelen()
Note that this can also be used to extend or override methods on builtin python classes, as is demonstrated in the test in monkeypatch.py: it adds a method "foo" to the builtin str class that returns a copy of the original string with random upper/lower case letters
I would probably replace:

# add a "namelen" method to all "myclass" objects
def namelen(self):
    return len(self.getName())
d = monkeypatch.get_class_dict(mystuff.myclass)
d['namelen'] = namelen

with

# add a "namelen" method to all "myclass" objects
monkeypatch.get_class_dict(mystuff.myclass)['namelen'] = lambda self: len(self.getName())

to avoid extra global variables (note that a lambda body is a bare expression, so it takes no return keyword).
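For comparison, here is what the same "namelen" extension looks like on an ordinary (non-builtin) Python class, where a plain assignment suffices and no ctypes tricks are needed. The `myclass` below is a stand-in for the swig-wrapped class, not the real `mystuff` module:

```python
class myclass(object):
    # stand-in for the swig-wrapped class; only getName() matters here
    def __init__(self, name):
        self._name = name

    def getName(self):
        return self._name

# ordinary classes have a writable __dict__, so plain assignment works
myclass.namelen = lambda self: len(self.getName())

x = myclass("xxxxxxxx")
print("namelen:", x.namelen())  # namelen: 8
```

The whole point of the monkeypatch.py hack is to recover this one-line ergonomics for classes whose `__dict__` is a read-only proxy.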

How do I redefine functions in python?

I have a function in a certain module that I want to redefine (mock) at runtime for testing purposes. As far as I understand, a function definition is nothing more than an assignment in Python (the module definition itself is a kind of function being executed). I want to do this in the setUp of a test case, so the function to be redefined lives in another module. What is the syntax for doing this?
For example, 'module1' is my module and 'func1' is my function, in my testcase I have tried this (no success):
import module1
module1.func1 = lambda x: return True
import module1
import unittest

class MyTest(unittest.TestCase):

    def setUp(self):
        # Replace othermod.function with our own mock
        self.old_func1 = module1.func1
        module1.func1 = self.my_new_func1

    def tearDown(self):
        module1.func1 = self.old_func1

    def my_new_func1(self, x):
        """A mock othermod.function just for our tests."""
        return True

    def test_func1(self):
        module1.func1("arg1")
Lots of mocking libraries provide tools for doing this sort of mocking; you should investigate them, as you will likely get a good deal of help from them.
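For instance, the standard library's `unittest.mock` (available as the separate `mock` package on Python 2) swaps the function in and restores the original automatically. Since `module1` is hypothetical, this sketch patches `json.dumps` as a stand-in:

```python
import json
from unittest import mock

# patch json.dumps for the duration of the with-block only
with mock.patch('json.dumps', return_value='mocked!') as fake_dumps:
    print(json.dumps({'a': 1}))  # mocked!
    fake_dumps.assert_called_once_with({'a': 1})

# outside the block the original is automatically restored
print(json.dumps({'a': 1}))  # {"a": 1}
```

This replaces the hand-written setUp/tearDown save-and-restore above; `mock.patch` can also be used as a decorator on individual test methods.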
import foo

def bar(x):
    pass

foo.bar = bar
Just assign a new function or lambda to the old name:
>>> def f(x):
... return x+1
...
>>> f(3)
4
>>> def new_f(x):
... return x-1
...
>>> f = new_f
>>> f(3)
2
It also works when the function is from another module:
### In other.py:
# def f(x):
# return x+1
###
import other
other.f = lambda x: x-1
print other.f(1) # prints 0, not 2
Use redef: http://github.com/joeheyming/redef
import module1
from redef import redef
rd_f1 = redef(module1, 'func1', lambda x: True)
When rd_f1 goes out of scope or is deleted, func1 goes back to normal.
If you want to reload file foo.py that you are editing into the interpreter, you can make a simple-to-type function and use execfile(), but I just learned that it doesn't work without listing all the functions as globals (sadly), unless someone has a better idea:
Somewhere in file foo.py:
def refoo():
    global fooFun1, fooFun2
    execfile("foo.py")
In the python interpreter:
refoo() # You now have your latest edits from foo.py
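On Python 3, where execfile() no longer exists, `importlib.reload()` does the same job without the global-list workaround. This sketch writes a throwaway module to a temp directory so it is self-contained; the module name `foo_mod` is made up:

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale bytecode caches during reload

# create a throwaway module to reload; stands in for the foo.py being edited
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, 'foo_mod.py'), 'w') as f:
    f.write("def fooFun1(x):\n    return x + 1\n")
sys.path.insert(0, moddir)

import foo_mod
print(foo_mod.fooFun1(1))  # 2

# simulate editing the file, then reload to pick up the change
with open(os.path.join(moddir, 'foo_mod.py'), 'w') as f:
    f.write("def fooFun1(x):\n    return x - 1\n")
importlib.reload(foo_mod)
print(foo_mod.fooFun1(1))  # 0
```

Unlike the execfile() trick, reload() re-executes the module in its own namespace, so existing `foo_mod.fooFun1` references pick up the edit automatically.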