Theano scan function - python-2.7

Example taken from: http://deeplearning.net/software/theano/library/scan.html
import theano
import theano.tensor as T

k = T.iscalar("k")
A = T.vector("A")
# Symbolic description of the result
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
                              outputs_info=T.ones_like(A),
                              non_sequences=A,
                              n_steps=k)
# We only care about A**k, but scan has provided us with A**1 through A**k.
# Discard the values that we don't care about. Scan is smart enough to
# notice this and not waste memory saving them.
final_result = result[-1]
# compiled function that returns A**k
power = theano.function(inputs=[A,k], outputs=final_result, updates=updates)
print power(range(10),2)
print power(range(10),4)
What is prior_result? More accurately, where is prior_result defined?
I have the same question for a lot of the examples given on http://deeplearning.net/software/theano/library/scan.html
For example,
components, updates = theano.scan(fn=lambda coefficient, power, free_variable: coefficient * (free_variable ** power),
                                  outputs_info=None,
                                  sequences=[coefficients, theano.tensor.arange(max_coefficients_supported)],
                                  non_sequences=x)
Where are power and free_variable defined?

This is using a Python feature called a "lambda". A lambda is an unnamed (anonymous) Python function consisting of a single expression. It has this form:
lambda [param, ...]: expression
In your example it is:
lambda prior_result, A: prior_result * A
This is a function that takes prior_result and A as inputs. This function is passed to scan() as the fn parameter. scan() will call it with two variables: the first corresponds to what was provided in the outputs_info parameter, and the other is what was provided in the non_sequences parameter.
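If it helps, here is the same example with the lambda rewritten as a named function; this is just an equivalent sketch of the code above, nothing about scan() changes:
import theano
import theano.tensor as T

def power_step(prior_result, A):
    # same body as the lambda: multiply the previous result by A
    return prior_result * A

k = T.iscalar("k")
A = T.vector("A")
result, updates = theano.scan(fn=power_step,
                              outputs_info=T.ones_like(A),  # becomes prior_result on the first step
                              non_sequences=A,              # passed as the last argument on every step
                              n_steps=k)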

Related

How to Use MCMC with a Custom Log-Probability and Solve for a Matrix

The code is in PyMC3, but this is a general problem. I want to find which matrix (combination of variables) gives me the highest probability. Taking the mean of the trace of each element is meaningless because they depend on each other.
Here is a simple case; the code uses a vector rather than a matrix for simplicity. The goal is to find a vector of length 2, where each value is between 0 and 1, such that the sum is 1.
import numpy as np
import theano
import theano.tensor as tt
import pymc3 as mc

# define a theano Op for our likelihood function
class LogLike_Matrix(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)

    def __init__(self, loglike):
        self.likelihood = loglike  # the log-p function

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables
        # call the log-likelihood function
        logl = self.likelihood(theta)
        outputs[0][0] = np.array(logl)  # output the log-likelihood

def logLikelihood_Matrix(data):
    """
    We want sum(data) = 1
    """
    p = 1 - np.abs(np.sum(data) - 1)
    return np.log(p)

logl_matrix = LogLike_Matrix(logLikelihood_Matrix)

# use PyMC3 to sample from the log-likelihood
with mc.Model():
    # the data will be sampled with a uniform distribution,
    # because the log-p doesn't act on it directly
    data_matrix = mc.Uniform('data_matrix', shape=(2), lower=0.0, upper=1.0)
    # convert the parameters to a tensor vector
    theta = tt.as_tensor_variable(data_matrix)
    # use a DensityDist (use a lambda function to "call" the Op)
    mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta})
    trace_matrix = mc.sample(5000, tune=100, discard_tuned_samples=True)
If you only want the highest-likelihood parameter values, then you want the Maximum A Posteriori (MAP) estimate, which can be obtained using pymc3.find_MAP() (see starting.py for method details). If you expect a multimodal posterior, then you will likely need to run this repeatedly with different initializations and select the one that obtains the largest logp value; that increases the chances of finding the global optimum, but cannot guarantee it.
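As a rough sketch (reusing the model from the question; since the black-box Op supplies no gradient, a gradient-free optimizer such as Powell is assumed here):
with mc.Model():
    data_matrix = mc.Uniform('data_matrix', shape=(2), lower=0.0, upper=1.0)
    theta = tt.as_tensor_variable(data_matrix)
    mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta})
    # Powell is derivative-free, so it works without a grad() on the Op
    map_estimate = mc.find_MAP(method='Powell')
print(map_estimate['data_matrix'])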
It should be noted that at high parameter dimensions, the MAP estimate is usually not part of the typical set, i.e., it is not representative of typical parameter values that would lead to the observed data. Michael Betancourt discusses this in A Conceptual Introduction to Hamiltonian Monte Carlo. The fully Bayesian approach is to use posterior predictive distributions, which effectively averages over all the high-likelihood parameter configurations rather than using a single point estimate for parameters.

How to get a value from multiple functions in Pyomo

Let's suppose that the objective function is
max z(x,y) = f1(x) - f2(y)
where f1 is a function of the variables x and f2 is a function of the variables y.
This could be written in Pyomo as
def z(model):
    return f1(model) - f2(model)

def f1(model):
    return [some summation of x variables with some coefficients]

def f2(model):
    return [some summation of y variables with some coefficients]

model.objective = Objective(rule=z)
I know it is possible to get the numeric value of z(x,y) easily by calling (since it is the objective function):
print(model.objective())
but is there a way to get the numeric value of any of these sub-functions separately after the optimization, even if they are not explicitly defined as objectives?
I'll answer your question in terms of a ConcreteModel, since rules in Pyomo, for the most part, are nothing more than a mechanism to delay building a ConcreteModel. For now, they are also required to define indexed objects, but that will likely change soon.
First, there is nothing stopping you from defining those "rules" as standard functions that take in some argument and return a value. E.g.,
def z(x, y):
    return f1(x) - f2(y)

def f1(x):
    return x + 1

def f2(y):
    return y**2
Now if you call any of these functions with a built-in type (e.g., z(1, 5)), you will get a number back. However, if you call them with Pyomo variables (or Pyomo expressions) you will get a Pyomo expression back, which you can assign to an objective or constraint. This works because Pyomo modeling components, such as variables, overload the standard algebraic operators like +, -, *, etc. Here is an example of how you can build an objective with these functions:
import pyomo.environ as aml
m = aml.ConcreteModel()
m.x = aml.Var()
m.y = aml.Var()
m.o = aml.Objective(expr= z(m.x, m.y))
Now if m.x and m.y have a value loaded into them (i.e., the .value attribute is something other than None), then you can call one of the sub-functions with them and evaluate the returned expression (slower)
aml.value(f1(m.x))
aml.value(f2(m.y))
or you can extract the value from them and pass that to the sub-functions (faster)
f1(m.x.value)
f2(m.y.value)
You can also use the Expression object to store sub-expressions that you want to evaluate on the fly or share inside multiple other expressions on a model (all of which you can update by changing what expression is stored under the Expression object).
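A minimal sketch of that pattern (the component names f1_expr and f2_expr are hypothetical):
m = aml.ConcreteModel()
m.x = aml.Var(initialize=2.0)
m.y = aml.Var(initialize=3.0)
# named sub-expressions stored on the model
m.f1_expr = aml.Expression(expr=f1(m.x))
m.f2_expr = aml.Expression(expr=f2(m.y))
m.o = aml.Objective(expr=m.f1_expr - m.f2_expr)
# after a solve (or with initialized values, as here) the pieces
# can be evaluated separately
print(aml.value(m.f1_expr))  # x + 1 -> 3.0
print(aml.value(m.f2_expr))  # y**2 -> 9.0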

How can I improve bulk calculation from file data

I have a file of binary values. The section I am looking at is 4 byte int with the values in the pattern of MW1, MVAR1, MW2, MVAR2,...
I read the values in with
import array
from math import sqrt

temp = array.array("f")
temp.fromfile(file, length * 2)
mw_mvar = temp.tolist()
I then calculate the magnitude like this.
mag = [0] * length
for x in range(0, length * 2, 2):
    a = mw_mvar[x]
    b = mw_mvar[x + 1]
    mag[x // 2] = sqrt(a*a + b*b)
The calculations (not the read) are doubling the total run time of my script. I know there is (theoretically) a way to do this faster, because I am mimicking a script that ultimately calls Fortran (a pyd calling functions in Fortran DLLs, I think), which is able to do this calculation with a negligible effect on run time.
This is the best I can come up with. Any suggestions for improvements?
I have also tried math.pow(), **.5, and **2 with no difference.
Having had no luck improving the calculations themselves, I went around the problem. I realized that I only needed 1% of those calculated values, so I created a class to calculate them on demand. It was important (to me) that the resulting code act as if it were a list of calculated values. A lot of the remainder of the process uses the values, and different versions of the data are pre-calculated; the class means I don't need a set of procedures for each version of the data.
from math import sqrt

class mag:
    def __init__(self, mw_mvar):
        self._mw_mvar = mw_mvar
        #_sgn = sgn

    def __len__(self):
        # each magnitude consumes one (MW, MVAR) pair
        return len(self._mw_mvar) // 2

    def __getitem__(self, item):
        return sqrt(self._mw_mvar[2*item] ** 2 + self._mw_mvar[2*item+1] ** 2)
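For example (hypothetical usage), the object then indexes like a list but only computes the values that are actually asked for:
mags = mag(mw_mvar)
print(len(mags))  # number of (MW, MVAR) pairs
print(mags[0])    # magnitude of the first pair, computed on demand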
P.S. This could also be done in a function and take both versions; I would have had to make more changes to the overall script:
def magnitude(a, b, x):  # the name is arbitrary
    if b[x] == 0:
        return a[x]
    else:
        return sqrt(a[x]**2 + b[x]**2)

Lua functions use "self" in source but no metamethod allows using them

I've been digging into Lua's source code, both the C source from their website and the lua files from Lua on Windows. I found something odd that I can't find any information about, as to why they chose to do this.
There are some methods in the string library that allow OOP-style calling, by attaching the method to the string like this:
string.format(s, e1, e2, ...)
s:format(e1, e2, ...)
So I dug into the source code for the table module, and found that functions like table.remove() also allow for the same thing.
Here's the source code from UnorderedArray.lua:
function add(self, value)
    self[#self + 1] = value
end

function remove(self, index)
    local size = #self
    if index == size then
        self[size] = nil
    elseif (index > 0) and (index < size) then
        self[index], self[size] = self[size], nil
    end
end
This indicates that the functions should support the colon syntax. Lo and behold, when I copy table into my new list, the methods carry over. Here's an example using table.insert as a method:
function copy(obj, seen) -- Recursive function to copy a table with tables
    if type(obj) ~= 'table' then return obj end
    if seen and seen[obj] then return seen[obj] end
    local s = seen or {}
    local res = setmetatable({}, getmetatable(obj))
    s[obj] = res
    for k, v in pairs(obj) do res[copy(k, s)] = copy(v, s) end
    return res
end

function count(list) -- Count a list because #table doesn't work on key-indexed tables
    local sum = 0; for i, v in pairs(list) do sum = sum + 1 end; print("Length: " .. sum)
end

function pts(s) print(tostring(s)) end -- Macro function

local list = {1, 2, 3}
pts(list.insert)     --> nil
pts(table["insert"]) --> function: 0xA682A8
pts(list["insert"])  --> nil
list = copy(_G.table)
pts(table["insert"]) --> function: 0xA682A8
pts(list["insert"])  --> function: 0xA682A8
count(list)          --> Length: 9
list:insert(-1, "test")
count(list)          --> Length: 10
Were Lua 5.1 and newer supposed to support table methods like the string library does, but the developers decided not to implement the metamethod?
EDIT:
I'll explain it a little further so people understand.
Strings have metamethods attached that you can use on strings OOP-style.
s = "test"
s:sub(1,1)
But tables don't, even though the methods in the table library's source code take self as their first parameter. So the following code doesn't work:
t = {1,2,3}
t:remove(#t)
The function has self defined as its first argument (UnorderedArray.lua:25: function remove(self, index)).
You can find the metamethods of strings by using:
for i, v in pairs(getmetatable('').__index) do
    print(i, tostring(v))
end
which prints the list of all methods available for strings:
sub function: 0xB4ABC8
upper function: 0xB4AB08
len function: 0xB4A110
gfind function: 0xB4A410
rep function: 0xB4AD88
find function: 0xB4A370
match function: 0xB4AE08
char function: 0xB4A430
dump function: 0xB4A310
gmatch function: 0xB4A410
reverse function: 0xB4AE48
byte function: 0xB4A170
format function: 0xB4A0F0
gsub function: 0xB4A130
lower function: 0xB4AC28
If you attach the module/library table to a table, as Oka showed in the example, you can use the methods of table in just the same way the string metamethods work.
The question is: why would the Lua developers provide metamethods for strings by default, but not for tables, even though the table library and its methods allow it in the source code?
The question was answered: it would allow the developer of a module or program to alter the metatables of all tables in the program, with the result that a table would behave differently from vanilla Lua when used in that program. It's different if you implement a class for a data type (say, vectors) and change the metamethods of that specific class and table, instead of changing all of Lua's standard table metamethods. This also slightly overlaps with operator overloading.
If I'm understanding your question correctly, you're asking why it is not possible to do the following:
local tab = {}
tab:insert('value')
Having tables spawn with a default metatable and __index breaks some assumptions that one would have about tables.
Mainly, empty tables should be empty. If tables were to spawn with an __index metamethod lookup for the insert, sort, etc., methods, it would break the assumption that an empty table should not respond to any members.
This becomes an issue if you're using a table as a cache or memo, and you need to check if the 'insert', or 'sort' strings exist or not (think arbitrary user input). You'd need to use rawget to solve a problem that didn't need to be there in the first place.
Empty tables should also be orphans, meaning that they should have no relations without the programmer explicitly giving them relations. Tables are the only complex data structure available in Lua, and are the foundation for a lot of programs. They need to be free and flexible. Pairing them with the table table as a default metatable creates some inconsistencies. For example, not all tables can make use of the generic sort function - a weird cruft for dictionary-like tables.
Additionally, consider that you're utilizing a library, and that library's author has told you that a certain function returns a densely packed table (i.e., an array), so you figure that you can call :sort(...) on the returned table. What if the library author has changed the metatable of that return table? Now your code no longer works, and any generic functions built on top of a _:sort(...) paradigm can't accept these tables.
Basically put, strings and tables are two very different beasts. Strings are immutable, static, and their contents are predictable. Tables are mutable, transient, and very unpredictable.
It's much, much easier to add this in when you need it, instead of baking it into the language. A very simple function:
local meta = { __index = table }

_G.T = function (tab)
    if tab ~= nil then
        local tab_t = type(tab)
        if tab_t ~= 'table' then
            error(("`table' expected, got: `%s'"):format(tab_t), 0)
        end
    end
    return setmetatable(tab or {}, meta)
end
Now any time you want a table that responds to functions found in the table table, just prefix it with a T.
local foo = T {}
foo:insert('bar')
print(#foo) --> 1

Python: store the function's return value in a variable if called using thread

I'm using the threading library to perform some parallel data retrieval (then I need to concatenate the results obtained), but I'm not able to store the return value of my function.
Here's a simple example
import threading

def test(i):
    return i + 1

threading.Timer(0, x = test(0))  # this is what I tried, but it is not allowed
print(x)  # should print 1
The problem is that it's not allowed to write x = test(0) inside the call that creates the thread.
Is there a way to store the function's return value in a variable?
Thanks
import threading

def test(i):
    global x
    x = i + 1

x = 0
t = threading.Timer(0, test, args=(0,))  # pass the function and its arguments, don't call it
t.start()
t.join()  # wait for the timer thread to finish before reading x
print(x)
Place a global variable outside of the function, which can then be altered by the individual thread.
If you are doing this to gather data, however, you will need a way of testing whether the threads are complete before concatenating the results (the join() call above is one such way).
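As an alternative sketch (assuming the standard library's concurrent.futures is acceptable), a thread pool returns values directly and waits for completion without globals:
from concurrent.futures import ThreadPoolExecutor

def test(i):
    return i + 1

with ThreadPoolExecutor() as executor:
    futures = [executor.submit(test, i) for i in range(4)]
    results = [f.result() for f in futures]  # result() blocks until that thread finishes

print(results)  # [1, 2, 3, 4]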