I'm new to TensorFlow and have difficulty understanding how the computations work. I could not find the answer to my question on the web.
For the following piece of code, the last time I print "d" in the for loop of the train_neural_net() function, I expect the values to be identical to the values printed by "test_distance.eval". But they are very different. Can anyone tell me why this is happening? Isn't TensorFlow supposed to cache the Variable results learned in the for loop and use them when I run "test_distance.eval"?
def neural_network_model1(data):
    nn1_hidden_1_layer = {'weights': tf.Variable(tf.random_normal([5, n_nodes_hl1])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    nn1_hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    nn1_output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, vector_size])), 'biasses': tf.Variable(tf.random_normal([vector_size]))}

    nn1_l1 = tf.add(tf.matmul(data, nn1_hidden_1_layer["weights"]), nn1_hidden_1_layer["biasses"])
    nn1_l1 = tf.sigmoid(nn1_l1)

    nn1_l2 = tf.add(tf.matmul(nn1_l1, nn1_hidden_2_layer["weights"]), nn1_hidden_2_layer["biasses"])
    nn1_l2 = tf.sigmoid(nn1_l2)

    nn1_output = tf.add(tf.matmul(nn1_l2, nn1_output_layer["weights"]), nn1_output_layer["biasses"])

    return nn1_output

def neural_network_model2(data):
    nn2_hidden_1_layer = {'weights': tf.Variable(tf.random_normal([5, n_nodes_hl1])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    nn2_hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    nn2_output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, vector_size])), 'biasses': tf.Variable(tf.random_normal([vector_size]))}

    nn2_l1 = tf.add(tf.matmul(data, nn2_hidden_1_layer["weights"]), nn2_hidden_1_layer["biasses"])
    nn2_l1 = tf.sigmoid(nn2_l1)

    nn2_l2 = tf.add(tf.matmul(nn2_l1, nn2_hidden_2_layer["weights"]), nn2_hidden_2_layer["biasses"])
    nn2_l2 = tf.sigmoid(nn2_l2)

    nn2_output = tf.add(tf.matmul(nn2_l2, nn2_output_layer["weights"]), nn2_output_layer["biasses"])

    return nn2_output

def train_neural_net():
    prediction1 = neural_network_model1(x1)
    prediction2 = neural_network_model2(x2)
    distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(prediction1, prediction2)), reduction_indices=1))
    cost = tf.reduce_mean(tf.multiply(y, distance))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 500

    test_result1 = neural_network_model1(x3)
    test_result2 = neural_network_model2(x4)
    test_distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(test_result1, test_result2)), reduction_indices=1))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(hm_epochs):
            _, d = sess.run([optimizer, distance], feed_dict={x1: train_x1, x2: train_x2, y: train_y})
            print("Epoch", epoch, "distance", d)

        print("test distance", test_distance.eval({x3: train_x1, x4: train_x2}))

train_neural_net()
Each time you call the functions neural_network_model1() or neural_network_model2(), you create a new set of variables, so there are four sets of variables in total.
The call to sess.run(tf.global_variables_initializer()) initializes all four sets of variables.
When you train in the for loop, you only update the first two sets of variables, created with these lines:
prediction1 = neural_network_model1(x1)
prediction2 = neural_network_model2(x2)
When you evaluate with test_distance.eval(), the tensor test_distance depends only on the last two sets of variables, which were created with these lines:
test_result1 = neural_network_model1(x3)
test_result2 = neural_network_model2(x4)
These variables were never updated in the training loop, so the evaluation results will be based on the random initial values.
TensorFlow does include some code for sharing weights between multiple calls to the same function, using with tf.variable_scope(...): blocks. For more information on how to use these, see the tutorial on variables and sharing on the TensorFlow website.
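For example, here is a minimal sketch of that sharing pattern, assuming TensorFlow 1.x (to match the tf.Session usage above); the layer sizes and scope name are illustrative only:
import tensorflow as tf

def shared_model(data):
    # tf.get_variable returns the existing variables on later calls when the
    # enclosing variable_scope allows reuse, so train and test graphs share weights.
    with tf.variable_scope("model1", reuse=tf.AUTO_REUSE):
        w1 = tf.get_variable("w1", [5, 10], initializer=tf.random_normal_initializer())
        b1 = tf.get_variable("b1", [10], initializer=tf.random_normal_initializer())
        l1 = tf.sigmoid(tf.matmul(data, w1) + b1)
        w2 = tf.get_variable("w2", [10, 3], initializer=tf.random_normal_initializer())
        b2 = tf.get_variable("b2", [3], initializer=tf.random_normal_initializer())
        return tf.matmul(l1, w2) + b2

x_train = tf.placeholder(tf.float32, [None, 5])
x_test = tf.placeholder(tf.float32, [None, 5])

train_out = shared_model(x_train)  # first call creates the variables
test_out = shared_model(x_test)    # second call reuses the same variables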
You don't need to define two functions for generating the models; you can use tf.name_scope and pass a model name to the function to use as a prefix for the variable declarations. On the other hand, you defined two distance tensors: distance and test_distance. But your model learns from the training data to minimize cost, which depends only on the first distance tensor, so test_distance is never used during training and the model behind it never learns anything. Again, there is no need for two distance tensors; you only need one. When you want the training distance, feed it with training data, and when you want the test distance, feed it with test data (a sketch follows below).
Anyway, if you want the second distance to work, you have to declare another optimizer for it and train it just as you did the first one. Also keep in mind that models learn based on their initial values and the training data. Even if you feed both models exactly the same training batches, you cannot expect them to end up with exactly the same characteristics, since the initial weights are different and that can lead them into different local minima of the error surface. Finally, notice that every call to neural_network_model1 or neural_network_model2 generates new weights and biases, because tf.Variable creates new variables each time.
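Concretely, a rough sketch of that single-distance approach, reusing the placeholders and model functions from the question (test_x1 and test_x2 are hypothetical held-out arrays):
# Build each model once and reuse the same distance tensor for training and
# evaluation by feeding it different data.
prediction1 = neural_network_model1(x1)
prediction2 = neural_network_model2(x2)
distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(prediction1, prediction2)), reduction_indices=1))
cost = tf.reduce_mean(tf.multiply(y, distance))
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 500

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(hm_epochs):
        _, d = sess.run([optimizer, distance],
                        feed_dict={x1: train_x1, x2: train_x2, y: train_y})
    # same tensor, different feed: this now uses the trained variables
    print("test distance", distance.eval(feed_dict={x1: test_x1, x2: test_x2}))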
I have a vector of nominal values and I need to know the probability of each nominal value occurring. Basically, I need those probabilities to obtain the min, max, mean, and std of the probability of observing the nominal values, and to compute the class entropy.
For example, assume there is a data set in which the target takes the values 0, 1, or 2. In the training data set we can count the number of records whose target is 1 and call it n_1, and define n_0 and n_2 similarly. The probability of observing class 1 in the training data set is then simply p_1 = n_1 / (n_0 + n_1 + n_2). Once p_0, p_1, and p_2 are obtained, one can get the min, max, mean, and std of these probabilities.
It is easy to get this in Python with pandas, but I want to avoid reading the data set twice. I was wondering if there is any CAS action in SAS that can provide it. Note that I use the Python API of SAS through swat, and I need the solution to be callable from Python.
I found the following solution and it works fine. It uses s.dataPreprocess.highcardinality to get the number of classes and then uses s.dataPreprocess.binning to obtain the number of observations within each class. Then, there is just some straightforward calculation.
import swat
import pandas as pd
import numpy as np

# create a CAS session (server, port and target_var are assumed to be defined)
s = swat.CAS(server, port)

# load the table
tbl_name = 'hmeq'
s.upload("./data/hmeq.csv", casout=dict(name=tbl_name, replace=True))

# call highcardinality to get the number of classes
cardinality_result = s.dataPreprocess.highcardinality(table=tbl_name, vars=[target_var])
cardinality_result_df = pd.DataFrame(cardinality_result["HighCardinalityDetails"])
number_of_classes = int(cardinality_result_df["CardinalityEstimate"])

# call the dataPreprocess.binning action to get the probability of each class
s.loadactionset(actionset="dataPreprocess")
result_binning = s.dataPreprocess.binning(table=tbl_name, vars=[target_var], nBinsArray=[number_of_classes])
result_binning_df = pd.DataFrame(result_binning["BinDetails"])

probs = result_binning_df["NInBin"] / result_binning_df["NInBin"].sum()
prob_min = probs.min()
prob_max = probs.max()
prob_mean = probs.mean()
prob_std = probs.std()
entropy = -sum(probs * np.log2(probs))
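For reference, the same quantities can be cross-checked in pandas once the data is in memory (a sketch assuming the target column lives in a DataFrame called df and target_var holds its name):
import numpy as np

# Equivalent in-memory pandas computation for cross-checking the CAS output;
# df and target_var are assumed to exist as described above.
probs = df[target_var].value_counts(normalize=True)
prob_min, prob_max = probs.min(), probs.max()
prob_mean, prob_std = probs.mean(), probs.std()
entropy = -np.sum(probs * np.log2(probs))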
I have a Pyomo model connected to a Django-created website.
My decision variable has 4 indices and I have a huge amount of constraints running on it.
Since Pyomo takes a ton of time to read in the constraints with so many variables, I want to sparse out the index set so that it only contains variables that could actually be 1 (I have some conditions for that).
I saw this post
Create a variable with sparse index in pyomo
and tried a for loop for all my conditions. I created a set "AllowedVariables" to later use inside my constraints.
But Django's server takes so long to create this set while performing the system check that it never finishes.
Currently I have this model:
model = AbstractModel()

model.x = Var(model.K, model.L, model.F, model.Z, domain=Boolean)

def ObjRule(model):
    # some rule, sense maximize
    ...

model.Obj = pyomo.environ.Objective(rule=ObjRule, sense=maximize)

def ARule(model, l):
    maxA = sum(model.x[k, l, f, z] for k in model.K for f in model.F
               for z in model.Z if (k, l, f, z) in model.AllowedVariables)
    return maxA <= 1

model.maxA = Constraint(model.L, rule=ARule)
The constraint is just an example; I have 15 more similar ones. I currently create "AllowedVariables" this way:
AllowedVariables = []
for k in model.K:
    for l in model.L:
        # ..... check all sorts of conditions, break if not valid
        AllowedVariables.append((k, l, f, z))

model.AllowedVariables = Set(initialize=AllowedVariables)
Using this, the Django server starts checking....and never stops
performing system checks...
Sadly, I somehow need some restriction on the variables, or else reading the model into the solver will take way too long, since the constraints contain so many unnecessary variables that have to be 0 anyway.
Any ideas on how I can make my variable set sparse?
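One common pattern, sketched here only roughly and with is_allowed standing in for the actual conditions, is to declare the sparse index set with dimen=4 and index the variable and constraints directly by it, so the disallowed combinations never become variables at all:
from itertools import product
from pyomo.environ import AbstractModel, Set, Var, Constraint, Boolean

model = AbstractModel()
# model.K, model.L, model.F, model.Z declared as before

def allowed_init(m):
    # is_allowed is a placeholder for the real conditions
    return ((k, l, f, z)
            for k, l, f, z in product(m.K, m.L, m.F, m.Z)
            if is_allowed(k, l, f, z))

model.AllowedVariables = Set(dimen=4, initialize=allowed_init)

# only the allowed combinations become variables
model.x = Var(model.AllowedVariables, domain=Boolean)

def ARule(m, l):
    return sum(m.x[k, ll, f, z]
               for (k, ll, f, z) in m.AllowedVariables if ll == l) <= 1

model.maxA = Constraint(model.L, rule=ARule)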
The code is in PyMC3, but this is a general problem. I want to find which matrix (combination of variables) gives me the highest probability. Taking the mean of the trace of each element is meaningless because they depend on each other.
Here is a simple case; the code uses a vector rather than a matrix for simplicity. The goal is to find a vector of length 2, where each value is between 0 and 1, so that the sum is 1.
import numpy as np
import theano
import theano.tensor as tt
import pymc3 as mc

# define a theano Op for our likelihood function
class LogLike_Matrix(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)

    def __init__(self, loglike):
        self.likelihood = loglike  # the log-p function

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables

        # call the log-likelihood function
        logl = self.likelihood(theta)

        outputs[0][0] = np.array(logl)  # output the log-likelihood

def logLikelihood_Matrix(data):
    """
    We want sum(data) = 1
    """
    p = 1 - np.abs(np.sum(data) - 1)
    return np.log(p)

logl_matrix = LogLike_Matrix(logLikelihood_Matrix)

# use PyMC3 to sample from the log-likelihood
with mc.Model():
    """
    Data will be sampled randomly with uniform distribution
    because the log-p doesn't work on it
    """
    data_matrix = mc.Uniform('data_matrix', shape=(2), lower=0.0, upper=1.0)

    # convert m and c to a tensor vector
    theta = tt.as_tensor_variable(data_matrix)

    # use a DensityDist (use a lambda function to "call" the Op)
    mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta})

    trace_matrix = mc.sample(5000, tune=100, discard_tuned_samples=True)
If you only want the highest-likelihood parameter values, then you want the maximum a posteriori (MAP) estimate, which can be obtained using pymc3.find_MAP() (see starting.py for method details). If you expect a multimodal posterior, then you will likely need to run this repeatedly with different initializations and select the result with the largest logp value, but that still only increases the chances of finding the global optimum; it cannot guarantee it.
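For the model in the question, that would look roughly like the following sketch (since the custom Op defines no gradient, a gradient-free optimizer is requested explicitly):
# Sketch: MAP estimate for the model above; find_MAP returns a dict that maps
# variable names to their optimizing values.
with mc.Model():
    data_matrix = mc.Uniform('data_matrix', shape=(2), lower=0.0, upper=1.0)
    theta = tt.as_tensor_variable(data_matrix)
    mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta})

    # Powell is gradient-free, which suits a black-box likelihood Op
    map_estimate = mc.find_MAP(method='Powell')

print(map_estimate['data_matrix'])  # a point whose entries sum to roughly 1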
It should be noted that at high parameter dimensions, the MAP estimate is usually not part of the typical set, i.e., it is not representative of typical parameter values that would lead to the observed data. Michael Betancourt discusses this in A Conceptual Introduction to Hamiltonian Monte Carlo. The fully Bayesian approach is to use posterior predictive distributions, which effectively averages over all the high-likelihood parameter configurations rather than using a single point estimate for parameters.
I haven't seen this specific scenario in my research on this error in Numba. This is my first time using the package, so it might be something obvious.
I have a function that calculates engineered features in a data set by adding, multiplying and/or dividing each column in a dataframe called data, and I wanted to test whether Numba would speed it up:
from itertools import combinations

import numpy as np
import pandas as pd
from numba import jit

@jit
def engineer_features(engineer_type, features, joined):
    # choose which features to engineer (must be > 1)
    engineered = features
    if len(engineered) > 1:
        if 'Square' in engineer_type:
            sq = data[features].apply(np.square)
            sq.columns = map(lambda s: s + '_^2', features)
        for c1, c2 in combinations(engineered, 2):
            if 'Add' in engineer_type:
                data['{0}+{1}'.format(c1, c2)] = data[c1] + data[c2]
            if 'Multiply' in engineer_type:
                data['{0}*{1}'.format(c1, c2)] = data[c1] * data[c2]
            if 'Divide' in engineer_type:
                data['{0}/{1}'.format(c1, c2)] = data[c1] / data[c2]
        if 'Square' in engineer_type and len(sq) > 0:
            data = pd.merge(data, sq, left_index=True, right_index=True)
    return data
When I call it with lists of features, engineer_type and the dataset:
engineer_type = ['Square','Add','Multiply','Divide']
df = engineer_features(engineer_type,features,joined)
I get the error: Failed at object (analyzing bytecode)
'DataFlowAnalysis' object has no attribute 'op_MAKE_FUNCTION'
Same question here. I think the problem might be the lambda function, since Numba does not support function creation.
I had this same error. Numba doesn't support pandas. I converted the important columns of my pandas DataFrame into a bunch of NumPy arrays and it worked successfully under @jit.
Also, arrays are much faster than pandas DataFrames, in case you need that for processing large data.
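As a rough illustration of that workaround (the column names and operations here are made up), pull the columns out as NumPy arrays, do the numeric work in an @njit function, and assign the results back to the DataFrame afterwards:
import numpy as np
import pandas as pd
from numba import njit

@njit
def add_mul(a, b):
    # plain NumPy arrays compile fine in nopython mode
    out_add = np.empty_like(a)
    out_mul = np.empty_like(a)
    for i in range(a.shape[0]):
        out_add[i] = a[i] + b[i]
        out_mul[i] = a[i] * b[i]
    return out_add, out_mul

df = pd.DataFrame({'c1': np.random.rand(10), 'c2': np.random.rand(10)})
df['c1+c2'], df['c1*c2'] = add_mul(df['c1'].to_numpy(), df['c2'].to_numpy())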
I'm interested in using SymPy to augment my engineering models. Instead of defining a rigid set of inputs and outputs, I'd like for the user to simply provide everything they know about a system, then apply that data to an algebraic model which calculates the unknowns (if it has enough data).
For example, say I have some stuff that has mass, volume, and density. I'd like to define a relationship between those parameters (density = mass / volume) such that when the user has provided enough information (any two of the variables), the third variable is automatically calculated. Finally, if any value is later updated, one of the other values should change to preserve the relationship. One of the challenges with this system is that when there are multiple independent variables, there needs to be a way to specify which independent variable should change to satisfy the requirement.
Here's some working code that I have currently:
from sympy import *

class Stuff(object):
    def __init__(self, *args, **kwargs):
        # String of variables
        varString = 'm v rho'

        # Initialize symbolic variables
        m, v, rho = symbols(varString)

        # Define density equation
        # "rho = m / v" becomes "rho - m / v = 0", which means the eqn. is rho - m / v
        # This is because solve() assumes equation = 0
        density_eq = rho - m / v

        # Store the equation and the variable string for later use
        self._eqn = density_eq
        self._varString = varString

        # Get a list of variable names
        variables = varString.split()

        # Initialize parameters dictionary
        self._params = {name: None for name in variables}

    @property
    def mass(self):
        return self._params['m']

    @mass.setter
    def mass(self, value):
        param = 'm'
        self._params[param] = value
        self.balance(param)

    @property
    def volume(self):
        return self._params['v']

    @volume.setter
    def volume(self, value):
        param = 'v'
        self._params[param] = value
        self.balance(param)

    @property
    def density(self):
        return self._params['rho']

    @density.setter
    def density(self, value):
        param = 'rho'
        self._params[param] = value
        self.balance(param)

    def balance(self, param):
        # Get the list of all variable names
        variables = self._varString.split()

        # Get a copy of the list except for the recently changed parameter
        others = [name for name in variables if name != param]

        # Loop through the less recently changed variables
        for name in others:
            try:
                # Symbolically solve for the current variable
                eq = solve(self._eqn, [symbols(name)])[0]

                # Get a dictionary of variables and values to substitute for a
                # numerical solution (will contain None for unset values)
                indvars = {symbols(n): self._params[n] for n in variables if n != name}

                # Set the parameter with the new numeric solution
                self._params[name] = eq.evalf(subs=indvars)
            except Exception:
                pass
if __name__ == "__main__":
    # Run some examples - need to turn these into actual tests
    stuff = Stuff()
    stuff.density = 0.1
    stuff.mass = 10.0
    print stuff.volume

    stuff = Stuff()
    stuff.mass = 10.0
    stuff.volume = 100.0
    print stuff.density

    stuff = Stuff()
    stuff.density = 0.1
    stuff.volume = 100.0
    print stuff.mass

    # increase volume
    print "Setting Volume to 200"
    stuff.volume = 200.0
    print "Mass changes"
    print "Mass {0}".format(stuff.mass)
    print "Density {0}".format(stuff.density)

    # Setting Mass to 15
    print "Setting Mass to 15.0"
    stuff.mass = 15.0
    print "Volume changes"
    print "Volume {0}".format(stuff.volume)
    print "Density {0}".format(stuff.density)

    # Setting Density to 0.5
    print "Setting Density to 0.5"
    stuff.density = 0.5
    print "Mass changes"
    print "Mass {0}".format(stuff.mass)
    print "Volume {0}".format(stuff.volume)

    print "It is impossible to let mass and volume drive density with this setup since either mass or volume will " \
          "always be recalculated first."
I tried to be as elegant as I could be with the overall approach and layout of the class, but I can't help but wonder if I'm going about it wrong - if I'm using SymPy in the wrong way to accomplish this task. I'm interested in developing a complex aerospace vehicle model with dozens/hundreds of interrelated properties. I'd like to find an elegant and extensible way to use SymPy to govern property relationships across the vehicle before I scale up from this fairly simple example.
I'm also concerned about how/when to rebalance the equations. I'm familiar with PyQt Signals and Slots which is my first thought for how to link dependent things together to trigger a model update (emit a signal when a value updates, which will be received by rebalancing functions for each system of equations that relies on that parameter?). Yeah, I really don't know the best way to do this with SymPy. Might need a bigger example to explore systems of equations.
Here's some thoughts on where I'm headed with this project. Just using mass as an example, I would like to define the entire vehicle mass as the sum of the subsystem masses, and all subsystem masses as the sum of component masses. In addition, mass relationships will exist between certain subsystems and components. These relationships will drive the model until more concrete data is provided. So if the default ratio of fuel mass to total vehicle mass is 50%, then specifying 100lb of fuel will size the vehicle at 200lb. However if I later specify that the vehicle is actually 210lb, I would want the relationship recalculated (let it become dependent since fuel mass and vehicle mass were set more recently, or because I specified that they are independent variables or locked or something). The next problem is iteration. When circular or conflicting relationships exist in a model, the model must be iterated on to hopefully converge on a solution. This is often the case with the vehicle mass model described above. If the vehicle gets heavier, there needs to be more fuel to meet a requirement, which causes the vehicle to get even heavier, etc. I'm not sure how to leverage SymPy in these situations.
Any suggestions?
PS
Good explanation of the challenges associated with space launch vehicle design.
Edit: Changing the code structure based on goncalopp's suggestions...
from functools import partial
from sympy import sympify, symbols, solve

class Balanced(object):
    def __init__(self, variables, equationStr):
        self._variables = variables
        self._equation = sympify(equationStr)

        # Initialize parameters dictionary
        self._params = {name: None for name in self._variables}

        def var_getter(varname, self):
            return self._params[varname]

        def var_setter(varname, self, value):
            self._params[varname] = value
            self.balance(varname)

        for varname in self._variables:
            setattr(Balanced, varname, property(fget=partial(var_getter, varname),
                                                fset=partial(var_setter, varname)))

    def balance(self, recentlyChanged):
        # Get a copy of the list except for the recently changed parameter
        others = [name for name in self._variables if name != recentlyChanged]

        # Loop through the less recently changed variables
        for name in others:
            try:
                eq = solve(self._equation, [symbols(name)])[0]
                indvars = {symbols(n): self._params[n] for n in self._variables if n != name}
                self._params[name] = eq.evalf(subs=indvars)
            except Exception:
                pass

class HasMass(Balanced):
    def __init__(self):
        super(HasMass, self).__init__(variables=['mass', 'volume', 'density'],
                                      equationStr='density - mass / volume')

class Prop(HasMass):
    def __init__(self):
        super(Prop, self).__init__()

if __name__ == "__main__":
    prop = Prop()
    prop.density = 0.1
    prop.mass = 10.0
    print prop.volume

    prop = Prop()
    prop.mass = 10.0
    prop.volume = 100.0
    print prop.density

    prop = Prop()
    prop.density = 0.1
    prop.volume = 100.0
    print prop.mass
What this makes me want to do is use multiple inheritance or decorators to automatically assign physical inter-related properties to things. So I could have another class called "Cylindrical" which defines radius, diameter, and length properties, then I could have like class DowelRod(HasMass, Cylindrical). The really tricky part here is that I would want to define the cylinder volume (volume = length * pi * radius^2) and let that volume interact with the volume defined in the mass balance equations... such that, perhaps mass would respond to a change in length, etc. Not only is the multiple inheritance tricky, but automatically combining relationships will be even worse. This will get tricky very quickly. I don't know how to handle systems of equations yet, and it's clear with lots of parametric relationships that parameter locking or specifying independent/dependent variables will be necessary.
While I have no experience with this kind of model, and little experience with SymPy, here are some tips:
@property
def mass(self):
    return self._params['m']

@mass.setter
def mass(self, value):
    param = 'm'
    self._params[param] = value
    self.balance(param)

@property
def volume(self):
    return self._params['v']

@volume.setter
def volume(self, value):
    param = 'v'
    self._params[param] = value
    self.balance(param)
As you've noticed, you're repeating a lot of code for each variable. This is unnecessary, and since you'll have lots of variables, it eventually leads to a code-maintenance nightmare. You already have your variables neatly arranged in varString = 'm v rho'. My suggestion is to go further and define a dictionary:
my_vars= {"m":"mass", "v":"volume", "rho":"density"}
and then add the properties and setters dynamically to the class (instead of explicitly):
from functools import partial

def var_getter(varname, self):
    return self._params[varname]

def var_setter(varname, self, value):
    self._params[varname] = value
    self.balance(varname)

for k, v in my_vars.items():
    # attribute named "mass"/"volume"/"density", backed by the _params key 'm'/'v'/'rho'
    setattr(Stuff, v, property(fget=partial(var_getter, k), fset=partial(var_setter, k)))
This way, you only need to write the getter and setter once.
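With the attributes wired up that way (the long names as properties, backed by the short keys in _params), usage looks the same as with the hand-written properties; a quick illustrative check:
stuff = Stuff()
stuff.density = 0.1   # goes through var_setter('rho', ...) and calls balance('rho')
stuff.mass = 10.0     # goes through var_setter('m', ...) and calls balance('m')
print stuff.volume    # var_getter('v', ...) reads the value balance() filled in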
If you want to have a couple of different getters, you can still use this technique. Store either the getter for each variable or the variables for each getter - whichever is more convenient.
Another thing that may be useful once equations get complex is that you can keep equations as strings in your source:
density_eq = sympy.sympify( "rho - m / v" )
Using these two "tricks", you may even want to keep your variables and your equations defined in external text files, or maybe CSV.
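For instance, a small sketch of that idea, using a made-up one-relation-per-line text format (name, variable list, and equation separated by colons):
from sympy import sympify

# relations.txt (made-up format), e.g.:
#   density_relation : m v rho : rho - m / v
relations = {}
with open("relations.txt") as f:
    for line in f:
        if not line.strip():
            continue
        name, var_names, equation = [part.strip() for part in line.split(":")]
        relations[name] = (var_names.split(), sympify(equation))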
Viewing your problem as a constraint-solving problem, you may also want to look at python-constraint (http://labix.org/python-constraint) in addition to SymPy.
Just realized that the python-constraint package doesn't apply here since the domain needs to be finite. If the domain were finite though, here's an illustrative example:
import constraint as cst
p = cst.Problem()
p.addVariables(['rho','m', 'v'], range(100))
p.addConstraint(lambda rho,m,v: rho * v == m, ['rho', 'm', 'v'])
p.addConstraint(lambda rho: rho == 2, ['rho']) # setting inputs; for illustration only
p.addConstraint(lambda m: m == 10, ['m'])
print p.getSolutions()
[{'m': 10, 'rho': 2, 'v': 5}]
However, since the real domain is needed here, the package doesn't apply.