How to Use MCMC with a Custom Log-Probability and Solve for a Matrix - pymc3

The code is in PyMC3, but this is a general problem. I want to find which matrix (combination of variables) gives me the highest probability. Taking the mean of the trace of each element is meaningless because they depend on each other.
Here is a simple case; the code uses a vector rather than a matrix for simplicity. The goal is to find a vector of length 2, where each value is between 0 and 1, such that the two values sum to 1.
import numpy as np
import theano
import theano.tensor as tt
import pymc3 as mc

# define a theano Op for our likelihood function
class LogLike_Matrix(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log-likelihood)

    def __init__(self, loglike):
        self.likelihood = loglike  # the log-p function

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables
        # call the log-likelihood function
        logl = self.likelihood(theta)
        outputs[0][0] = np.array(logl)  # output the log-likelihood

def logLikelihood_Matrix(data):
    """
    We want sum(data) = 1
    """
    p = 1 - np.abs(np.sum(data) - 1)
    return np.log(p)

logl_matrix = LogLike_Matrix(logLikelihood_Matrix)
# use PyMC3 to sample from the log-likelihood
with mc.Model():
    # data_matrix gets a uniform prior; the custom log-p is attached
    # through the DensityDist below
    data_matrix = mc.Uniform('data_matrix', shape=2, lower=0.0, upper=1.0)
    # convert the parameters to a tensor vector
    theta = tt.as_tensor_variable(data_matrix)
    # use a DensityDist (with a lambda function to "call" the Op)
    mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta})
    trace_matrix = mc.sample(5000, tune=100, discard_tuned_samples=True)

If you only want the highest-likelihood parameter values, then you want the maximum a posteriori (MAP) estimate, which can be obtained using pymc3.find_MAP() (see starting.py for details of the method). If you expect a multimodal posterior, you will likely need to run it repeatedly with different initializations and select the result with the largest logp value; that increases the chance of finding the global optimum but cannot guarantee it.
It should be noted that in high parameter dimensions, the MAP estimate is usually not part of the typical set, i.e. it is not representative of the parameter values that typically lead to the observed data. Michael Betancourt discusses this in A Conceptual Introduction to Hamiltonian Monte Carlo. The fully Bayesian approach is to use the posterior predictive distribution, which effectively averages over all high-likelihood parameter configurations rather than relying on a single point estimate.
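The multistart idea can be sketched without PyMC3 at all. Below is a minimal, hypothetical example using scipy.optimize.minimize on a toy bimodal log-probability (the function and start points are illustrative, not part of the original model):

```python
import numpy as np
from scipy.optimize import minimize

# toy bimodal log-probability whose taller mode is near x = 2
def logp(x):
    return np.log(0.3 * np.exp(-(x + 2.0)**2) + 0.7 * np.exp(-(x - 2.0)**2))

# run the optimizer from several random initializations and keep the best
rng = np.random.default_rng(0)
starts = rng.uniform(-4, 4, size=10)
results = [minimize(lambda x: -logp(x[0]), [s]) for s in starts]
best = max(results, key=lambda r: -r.fun)  # largest logp wins
```

With a single start near the smaller mode at x = -2, the optimizer would happily report that local optimum; taking the max over restarts is what buys (but does not guarantee) the global one.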

Related

SAS Action to provide the class probability statistics

I have a vector of nominal values and I need to know the probability of occurrence of each nominal value. Basically, I need those probabilities to obtain their min, max, mean, and std, and to compute the class entropy.
For example, let's assume there is a data set in which the target is 0, 1, or 2. In the training data we can count the number of records whose target is 1 and call it n_1, and similarly define n_0 and n_2. Then the probability of observing class 1 in the training data is simply p_1 = n_1/(n_0 + n_1 + n_2). Once p_0, p_1, and p_2 are obtained, one can get the min, max, mean, and std of these probabilities.
It is easy to get this in Python with pandas, but I want to avoid reading the data set twice. I was wondering if there is a CAS action in SAS that can provide it. Note that I use the Python API of SAS through swat, so I need the calls to be available from Python.
I found the following solution and it works fine. It uses s.dataPreprocess.highcardinality to get the number of classes and s.dataPreprocess.binning to obtain the number of observations within each class; the rest is straightforward calculation.
import numpy as np
import pandas as pd
import swat

# create a CAS session (server and port must point at your CAS server)
s = swat.CAS(server, port)

# load the table
tbl_name = 'hmeq'
s.upload("./data/hmeq.csv", casout=dict(name=tbl_name, replace=True))
target_var = 'BAD'  # the target column of the hmeq data set

# call to get the number of classes
cardinality_result = s.dataPreprocess.highcardinality(table=tbl_name, vars=[target_var])
cardinality_result_df = pd.DataFrame(cardinality_result["HighCardinalityDetails"])
number_of_classes = int(cardinality_result_df["CardinalityEstimate"])

# call the dataPreprocess.binning action to get the count of each class
s.loadactionset(actionset="dataPreprocess")
result_binning = s.dataPreprocess.binning(table=tbl_name, vars=[target_var], nBinsArray=[number_of_classes])
result_binning_df = pd.DataFrame(result_binning["BinDetails"])

# turn the counts into probabilities and summarize them
probs = result_binning_df["NInBin"] / result_binning_df["NInBin"].sum()
prob_min = probs.min()
prob_max = probs.max()
prob_mean = probs.mean()
prob_std = probs.std()
entropy = -sum(probs * np.log2(probs))
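For comparison, the same statistics only require a single pass over the target column in pandas; a small sketch with a made-up target vector (the column values are illustrative):

```python
import numpy as np
import pandas as pd

# hypothetical target column with classes 0, 1 and 2
target = pd.Series([0, 1, 1, 2, 1, 0, 2, 1])

# one pass: class counts normalized into class probabilities
probs = target.value_counts(normalize=True)

stats = {
    "min": probs.min(),
    "max": probs.max(),
    "mean": probs.mean(),
    "std": probs.std(),
    "entropy": -np.sum(probs * np.log2(probs)),
}
```

Here class 1 occurs 4 times out of 8, so the probabilities are (0.5, 0.25, 0.25) and the entropy is 1.5 bits.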

determining whether two convex hulls overlap

I'm trying to find an efficient algorithm for determining whether two convex hulls intersect or not. The hulls consist of data points in N-dimensional space, where N is 3 up to 10 or so. One elegant algorithm was suggested here using linprog from scipy, but you have to loop over all points in one hull, and it turns out the algorithm is very slow for low dimensions (I tried it and so did one of the respondents). It seems to me the algorithm could be generalized to answer the question I am posting here, and I found what I think is a solution here. The authors say that the general linear programming problem takes the form Ax + tp >= 1, where the A matrix contains the points of both hulls, t is some constant >= 0, and p = [1,1,1,1...1] (it's equivalent to finding a solution to Ax > 0 for some x). As I am new to linprog() it isn't clear to me whether it can handle problems of this form. If A_ub is defined as on page 1 of the paper, then what is b_ub?
There is a nice explanation of how to do this problem, with an algorithm in R, on this website. My original post referred to scipy.optimize.linprog, but it proved to be insufficiently robust. I found that the SCS solver in the cvxpy library worked very nicely, and based on this I came up with the following Python code:
import numpy as np
import cvxpy

# Determine feasibility of Ax <= b
# cloud1 and cloud2 should be numpy.ndarrays
def clouds_overlap(cloud1, cloud2):
    # build the A matrix
    cloud12 = np.vstack((-cloud1, cloud2))
    vec_ones = np.r_[np.ones((len(cloud1), 1)), -np.ones((len(cloud2), 1))]
    A = np.r_['1', cloud12, vec_ones]

    # make the b vector
    ntot = len(cloud1) + len(cloud2)
    b = -np.ones(ntot)

    # define the x variable and the equation to be solved
    x = cvxpy.Variable(A.shape[1])
    constraints = [A @ x <= b]

    # since we're only determining feasibility there is no minimization,
    # so just set the objective function to a constant
    obj = cvxpy.Minimize(0)

    # SCS was the most accurate/robust of the non-commercial solvers
    # for my application
    problem = cvxpy.Problem(obj, constraints)
    problem.solve(solver=cvxpy.SCS)

    # Any 'inaccurate' status indicates ambiguity, so you can
    # return True or False as you please
    return problem.status == 'infeasible' or problem.status.endswith('inaccurate')

cube = np.array([[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1],
                 [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]])
inside = np.array([[0.49, 0.0, 0.0]])
outside = np.array([[1.01, 0, 0]])
print("Clouds overlap?", clouds_overlap(cube, inside))   # Clouds overlap? True
print("Clouds overlap?", clouds_overlap(cube, outside))  # Clouds overlap? False
The area of numerical instability is when the two clouds just touch, or are arbitrarily close to touching such that it isn't possible to definitively say whether they overlap or not. That is one of the cases where you will see this algorithm report an 'inaccurate' status. In my code I chose to consider such cases overlapping, but since it is ambiguous you can decide for yourself what to do.
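The same feasibility test can also be posed to scipy.optimize.linprog, which incidentally answers the original question about b_ub: with A built as above, b_ub is a vector of -1s and the objective is a zero vector. A sketch under the assumption that the 'highs' method is available (SciPy >= 1.6):

```python
import numpy as np
from scipy.optimize import linprog

def clouds_overlap_lp(cloud1, cloud2):
    # unknowns are x = [w, b]; the rows demand p.w >= b + 1 for cloud1
    # and q.w <= b - 1 for cloud2, i.e. a hyperplane w.x = b that
    # strictly separates the two clouds
    A_ub = np.hstack([np.vstack((-cloud1, cloud2)),
                      np.r_[np.ones((len(cloud1), 1)), -np.ones((len(cloud2), 1))]])
    b_ub = -np.ones(len(A_ub))
    res = linprog(c=np.zeros(A_ub.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * A_ub.shape[1], method='highs')
    # status 2 means infeasible: no separating hyperplane exists,
    # so the convex hulls overlap
    return res.status == 2

cube = np.array([[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1],
                 [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]])
print(clouds_overlap_lp(cube, np.array([[0.49, 0.0, 0.0]])))  # True
print(clouds_overlap_lp(cube, np.array([[1.01, 0.0, 0.0]])))  # False
```

Unlike SCS, an exact LP solver gives a clean feasible/infeasible answer here, but the near-touching case remains numerically delicate with any solver.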

Fitting multiple data sets using lmfit without writing an objective function

This topic describes how to fit multiple data-sets using lmfit:
Python and lmfit: How to fit multiple datasets with shared parameters?
However, it uses a fitting/objective function written by the user.
I was wondering if it's possible to fit multiple data sets with lmfit without writing an objective function, using the model.fit() method of the Model class instead.
As an example, let's say we have multiple data sets of (x, y) coordinates that we want to fit using the same model function, in order to find the set of parameters that on average fits all the data best.
import numpy as np
from lmfit import Model, Parameters
from lmfit.models import GaussianModel

def gauss(x, amp, cen, sigma):
    return amp * np.exp(-(x - cen)**2 / (2. * sigma**2))

x1 = np.arange(0., 100., 0.1)
x2 = np.arange(0., 100., 0.09)
y1 = gauss(x1, 1., 50., 5.) + np.random.normal(size=len(x1), scale=0.1)
y2 = gauss(x2, 0.8, 48.4, 4.5) + np.random.normal(size=len(x2), scale=0.1)

mod = GaussianModel()
params = mod.make_params()
mod.fit([y1, y2], params, x=[x1, x2])
I guess that if this is possible, the data has to be passed to mod.fit in the right form; the documentation only says that mod.fit takes an array-like data input.
I tried to give it lists and arrays. If I pass the different data sets as a list, I get: ValueError: setting an array element with a sequence.
If I pass an array, I get: AttributeError: 'numpy.ndarray' object has no attribute 'exp'.
So am I just trying to do something that isn't possible, or am I doing something wrong?
Well, I think the answer is "sort of". The lmfit.Model class is meant to represent a model for an array of data. So, if you can map your multiple datasets into a numpy ndarray (say, with np.concatenate), you can probably write a Model function to represent this by building sub-models for the different datasets and concatenating them in the same way.
I don't think you could do that with any of the built-in models. I also think that once you start down the road of writing complex model functions, it isn't a very big jump to writing objective functions. That is, what would be
def model_function(x, a, b, c):
    # do some calculation with the x, a, b, c values
    result = a + x*b + x*x*c
    return result
might become
def objective_function(params, x, data):
    vals = params.valuesdict()
    return data - model_function(x, vals['a'], vals['b'], vals['c'])
If that model function is doing anything complex, the additional burden of unpacking the parameters and subtracting the data is pretty small. And, especially if some parameters are used for multiple datasets and some only for particular datasets, you'll have to manage that in either the model function or the objective function. In the example you link to, my answer included a loop over datasets, picking out parameters by name for each dataset. You'll probably want to do something like that. You could do it in a model function by treating it as modeling the concatenated datasets, but I'm not sure you'd gain much by doing that.
I found the problem. model.fit() will actually handle arrays of multiple data sets of equal length just fine and perform a proper fit. The correct call of model.fit() with multiple data sets would be:
import numpy as np
from lmfit import Model, Parameters
from lmfit.models import GaussianModel
import matplotlib.pyplot as plt

def gauss(x, amp, cen, sigma):
    "basic gaussian"
    return amp * np.exp(-(x - cen)**2 / (2. * sigma**2))

x1 = np.arange(0., 100., 0.1)
x2 = np.arange(0., 100., 0.1)
y1 = gauss(x1, 1., 50., 5.) + np.random.normal(size=len(x1), scale=0.01)
y2 = gauss(x2, 0.8, 48.4, 4.5) + np.random.normal(size=len(x2), scale=0.01)

mod = GaussianModel()
params = mod.make_params()
params['amplitude'].set(1., min=0.01, max=100.)
params['center'].set(1., min=0.01, max=100.)
params['sigma'].set(1., min=0.01, max=100.)

result = mod.fit(np.array([y1, y2]), params, method='basinhopping',
                 x=np.array([x1, x2]))
print(result.fit_report(min_correl=0.5))

fig, ax = plt.subplots()
plt.plot(x1, y1, lw=2, color='red')
plt.plot(x2, y2, lw=2, color='orange')
plt.plot(x1, result.eval(x=x1), lw=2, color='black')
plt.show()
The problem in the original code actually lay in the fact that my data sets don't have the same length. However, I'm not sure at all how to handle this in the most elegant way.
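One way to handle data sets of unequal length, following the concatenation idea from the first answer, is to fit the concatenated residual directly. A minimal sketch of that mechanism using scipy.optimize.least_squares rather than lmfit (the data here are synthetic and share one set of true parameters):

```python
import numpy as np
from scipy.optimize import least_squares

def gauss(x, amp, cen, sigma):
    return amp * np.exp(-(x - cen)**2 / (2. * sigma**2))

# two data sets of different lengths generated from the same parameters
rng = np.random.default_rng(1)
x1 = np.arange(0., 100., 0.1)
x2 = np.arange(0., 100., 0.09)
y1 = gauss(x1, 1., 50., 5.) + rng.normal(size=len(x1), scale=0.01)
y2 = gauss(x2, 1., 50., 5.) + rng.normal(size=len(x2), scale=0.01)

def residual(p):
    amp, cen, sigma = p
    # concatenation makes the two sets look like one long data set,
    # so unequal lengths are no problem
    return np.concatenate([y1 - gauss(x1, amp, cen, sigma),
                           y2 - gauss(x2, amp, cen, sigma)])

fit = least_squares(residual, x0=[0.8, 48., 4.5])
```

fit.x should recover roughly (1.0, 50.0, 5.0); lmfit's objective-function interface (lmfit.minimize) accepts the same kind of concatenated residual array.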

Difficulty Understanding TensorFlow Computations

I'm new to TensorFlow and have difficulty understanding how the computations work. I could not find the answer to my question on the web.
For the following piece of code, the last time I print d in the for loop of the train_neural_net() function, I expect the values to be identical to those printed by test_distance.eval. But they are very different. Can anyone tell me why this is happening? Isn't TensorFlow supposed to cache the variable values learned in the for loop and use them when I run test_distance.eval?
def neural_network_model1(data):
    nn1_hidden_1_layer = {'weights': tf.Variable(tf.random_normal([5, n_nodes_hl1])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    nn1_hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    nn1_output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, vector_size])),
                        'biases': tf.Variable(tf.random_normal([vector_size]))}

    nn1_l1 = tf.add(tf.matmul(data, nn1_hidden_1_layer["weights"]), nn1_hidden_1_layer["biases"])
    nn1_l1 = tf.sigmoid(nn1_l1)
    nn1_l2 = tf.add(tf.matmul(nn1_l1, nn1_hidden_2_layer["weights"]), nn1_hidden_2_layer["biases"])
    nn1_l2 = tf.sigmoid(nn1_l2)
    nn1_output = tf.add(tf.matmul(nn1_l2, nn1_output_layer["weights"]), nn1_output_layer["biases"])
    return nn1_output

def neural_network_model2(data):
    nn2_hidden_1_layer = {'weights': tf.Variable(tf.random_normal([5, n_nodes_hl1])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    nn2_hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                          'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    nn2_output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, vector_size])),
                        'biases': tf.Variable(tf.random_normal([vector_size]))}

    nn2_l1 = tf.add(tf.matmul(data, nn2_hidden_1_layer["weights"]), nn2_hidden_1_layer["biases"])
    nn2_l1 = tf.sigmoid(nn2_l1)
    nn2_l2 = tf.add(tf.matmul(nn2_l1, nn2_hidden_2_layer["weights"]), nn2_hidden_2_layer["biases"])
    nn2_l2 = tf.sigmoid(nn2_l2)
    nn2_output = tf.add(tf.matmul(nn2_l2, nn2_output_layer["weights"]), nn2_output_layer["biases"])
    return nn2_output

def train_neural_net():
    prediction1 = neural_network_model1(x1)
    prediction2 = neural_network_model2(x2)
    distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(prediction1, prediction2)), reduction_indices=1))
    cost = tf.reduce_mean(tf.multiply(y, distance))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 500

    test_result1 = neural_network_model1(x3)
    test_result2 = neural_network_model2(x4)
    test_distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(test_result1, test_result2)), reduction_indices=1))

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            _, d = sess.run([optimizer, distance], feed_dict={x1: train_x1, x2: train_x2, y: train_y})
            print("Epoch", epoch, "distance", d)
        print("test distance", test_distance.eval({x3: train_x1, x4: train_x2}))

train_neural_net()
Each time you call the functions neural_network_model1() or neural_network_model2(), you create a new set of variables, so there are four sets of variables in total.
The call to sess.run(tf.global_variables_initializer()) initializes all four sets of variables.
When you train in the for loop, you only update the first two sets of variables, created with these lines:
prediction1 = neural_network_model1(x1)
prediction2 = neural_network_model2(x2)
When you evaluate test_distance.eval(), the tensor test_distance depends only on the last two sets of variables, which were created with these lines:
test_result1 = neural_network_model1(x3)
test_result2 = neural_network_model2(x4)
These variables were never updated in the training loop, so the evaluation results will be based on the random initial values.
TensorFlow does include some code for sharing weights between multiple calls to the same function, using with tf.variable_scope(...): blocks. For more information on how to use these, see the tutorial on variables and sharing on the TensorFlow website.
You don't need to define two functions for generating models; you can use tf.name_scope and pass a model name to the function to use as a prefix for variable declarations. On the other hand, you defined two distance tensors, distance and test_distance. Your model learns from the training data to minimize the cost, which is only related to the first distance tensor, so test_distance is never used for training and the model connected to it never learns anything! Again, there is no need for two distance computations; you only need one. When you want the training distance, feed it the training data, and when you want the test distance, feed it the test data.
Anyway, if you want the second distance to work, you have to declare another optimizer for it and train it as you did for the first one. Also consider that models learn based on their initial values and the training data: even if you feed both models exactly the same training batches, you can't expect the two models to end up with exactly the same characteristics, since the initial weights are different and this can lead to different local minima of the error surface. Finally, notice that whenever you call neural_network_model1 or neural_network_model2 you generate new weights and biases, because tf.Variable creates new variables each time.

Enforce algebraic relationships between object attributes with SymPy

I'm interested in using SymPy to augment my engineering models. Instead of defining a rigid set of inputs and outputs, I'd like for the user to simply provide everything they know about a system, then apply that data to an algebraic model which calculates the unknowns (if it has enough data).
For an example, say I have some stuff which has some mass, volume, and density. I'd like to define a relationship between those parameters (density = mass / volume) such that when the user has provided enough information (any 2 variables), the 3rd variable is automatically calculated. Finally, if any value is later updated, one of the other values should change to preserve the relationship. One of the challenges with this system is that when there are multiple independent variables, there would need to be a way to specify which independent variable should change to satisfy the requirement.
Here's some working code that I have currently:
from sympy import *

class Stuff(object):
    def __init__(self, *args, **kwargs):
        # String of variables
        varString = 'm v rho'
        # Initialize symbolic variables
        m, v, rho = symbols(varString)
        # Define the density equation
        # "rho = m / v" becomes "rho - m / v = 0", so the equation is rho - m / v
        # (solve() assumes equation = 0)
        density_eq = rho - m / v
        # Store the equation and the variable string for later use
        self._eqn = density_eq
        self._varString = varString
        # Get a list of variable names
        variables = varString.split()
        # Initialize the parameters dictionary
        self._params = {name: None for name in variables}

    @property
    def mass(self):
        return self._params['m']

    @mass.setter
    def mass(self, value):
        param = 'm'
        self._params[param] = value
        self.balance(param)

    @property
    def volume(self):
        return self._params['v']

    @volume.setter
    def volume(self, value):
        param = 'v'
        self._params[param] = value
        self.balance(param)

    @property
    def density(self):
        return self._params['rho']

    @density.setter
    def density(self, value):
        param = 'rho'
        self._params[param] = value
        self.balance(param)

    def balance(self, param):
        # Get the list of all variable names
        variables = self._varString.split()
        # Get a copy of the list except for the recently changed parameter
        others = [name for name in variables if name != param]
        # Loop through the less recently changed variables
        for name in others:
            try:
                # Symbolically solve for the current variable
                eq = solve(self._eqn, [symbols(name)])[0]
                # Build a dict of variables and values to substitute for a numerical
                # solution (will contain None for unset values)
                indvars = {symbols(n): self._params[n] for n in variables if n != name}
                # Set the parameter with the new numeric solution
                self._params[name] = eq.evalf(subs=indvars)
            except Exception:
                pass
if __name__ == "__main__":
    # Run some examples - need to turn these into actual tests
    stuff = Stuff()
    stuff.density = 0.1
    stuff.mass = 10.0
    print(stuff.volume)

    stuff = Stuff()
    stuff.mass = 10.0
    stuff.volume = 100.0
    print(stuff.density)

    stuff = Stuff()
    stuff.density = 0.1
    stuff.volume = 100.0
    print(stuff.mass)

    # increase volume
    print("Setting Volume to 200")
    stuff.volume = 200.0
    print("Mass changes")
    print("Mass {0}".format(stuff.mass))
    print("Density {0}".format(stuff.density))

    # set mass to 15
    print("Setting Mass to 15.0")
    stuff.mass = 15.0
    print("Volume changes")
    print("Volume {0}".format(stuff.volume))
    print("Density {0}".format(stuff.density))

    # set density to 0.5
    print("Setting Density to 0.5")
    stuff.density = 0.5
    print("Mass changes")
    print("Mass {0}".format(stuff.mass))
    print("Volume {0}".format(stuff.volume))

    print("It is impossible to let mass and volume drive density with this setup, "
          "since either mass or volume will always be recalculated first.")
I tried to be as elegant as I could be with the overall approach and layout of the class, but I can't help but wonder if I'm going about it wrong - if I'm using SymPy in the wrong way to accomplish this task. I'm interested in developing a complex aerospace vehicle model with dozens/hundreds of interrelated properties. I'd like to find an elegant and extensible way to use SymPy to govern property relationships across the vehicle before I scale up from this fairly simple example.
I'm also concerned about how and when to rebalance the equations. I'm familiar with PyQt signals and slots, which is my first thought for how to link dependent things together to trigger a model update (emit a signal when a value updates, to be received by rebalancing functions for each system of equations that relies on that parameter?). I really don't know the best way to do this with SymPy yet; it might need a bigger example exploring systems of equations.
Here's some thoughts on where I'm headed with this project. Just using mass as an example, I would like to define the entire vehicle mass as the sum of the subsystem masses, and all subsystem masses as the sum of component masses. In addition, mass relationships will exist between certain subsystems and components. These relationships will drive the model until more concrete data is provided. So if the default ratio of fuel mass to total vehicle mass is 50%, then specifying 100lb of fuel will size the vehicle at 200lb. However if I later specify that the vehicle is actually 210lb, I would want the relationship recalculated (let it become dependent since fuel mass and vehicle mass were set more recently, or because I specified that they are independent variables or locked or something). The next problem is iteration. When circular or conflicting relationships exist in a model, the model must be iterated on to hopefully converge on a solution. This is often the case with the vehicle mass model described above. If the vehicle gets heavier, there needs to be more fuel to meet a requirement, which causes the vehicle to get even heavier, etc. I'm not sure how to leverage SymPy in these situations.
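As a sketch of how SymPy might close the circular fuel-mass relationship without manual iteration, solve() can treat the coupled relationships as a system; the dry mass and fuel fraction below are made-up numbers for illustration:

```python
from sympy import symbols, solve

vehicle_mass, fuel_mass = symbols('vehicle_mass fuel_mass')
dry_mass = 100.0       # assumed structure/payload mass, lb
fuel_fraction = 0.5    # assumed fuel-to-vehicle mass ratio

# the "circular" relationships: total mass includes the fuel,
# and the fuel is a fixed fraction of the total mass
equations = [vehicle_mass - (dry_mass + fuel_mass),
             fuel_mass - fuel_fraction * vehicle_mass]

solution = solve(equations, [vehicle_mass, fuel_mass], dict=True)[0]
print(solution)  # vehicle_mass: 200, fuel_mass: 100
```

For genuinely nonlinear loops that solve() cannot close symbolically, sympy.nsolve or an explicit fixed-point iteration would be the fallback.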
Any suggestions?
PS
Good explanation of the challenges associated with space launch vehicle design.
Edit: Changing the code structure based on goncalopp's suggestions...
from functools import partial
from sympy import sympify, symbols, solve

class Balanced(object):
    def __init__(self, variables, equationStr):
        self._variables = variables
        self._equation = sympify(equationStr)
        # Initialize the parameters dictionary
        self._params = {name: None for name in self._variables}

        def var_getter(varname, self):
            return self._params[varname]

        def var_setter(varname, self, value):
            self._params[varname] = value
            self.balance(varname)

        for varname in self._variables:
            setattr(Balanced, varname, property(fget=partial(var_getter, varname),
                                                fset=partial(var_setter, varname)))

    def balance(self, recentlyChanged):
        # Get a copy of the list except for the recently changed parameter
        others = [name for name in self._variables if name != recentlyChanged]
        # Loop through the less recently changed variables
        for name in others:
            try:
                eq = solve(self._equation, [symbols(name)])[0]
                indvars = {symbols(n): self._params[n] for n in self._variables if n != name}
                self._params[name] = eq.evalf(subs=indvars)
            except Exception:
                pass
class HasMass(Balanced):
    def __init__(self):
        super(HasMass, self).__init__(variables=['mass', 'volume', 'density'],
                                      equationStr='density - mass / volume')

class Prop(HasMass):
    def __init__(self):
        super(Prop, self).__init__()

if __name__ == "__main__":
    prop = Prop()
    prop.density = 0.1
    prop.mass = 10.0
    print(prop.volume)

    prop = Prop()
    prop.mass = 10.0
    prop.volume = 100.0
    print(prop.density)

    prop = Prop()
    prop.density = 0.1
    prop.volume = 100.0
    print(prop.mass)
What this makes me want to do is use multiple inheritance or decorators to automatically assign physical inter-related properties to things. So I could have another class called "Cylindrical" which defines radius, diameter, and length properties, then I could have like class DowelRod(HasMass, Cylindrical). The really tricky part here is that I would want to define the cylinder volume (volume = length * pi * radius^2) and let that volume interact with the volume defined in the mass balance equations... such that, perhaps mass would respond to a change in length, etc. Not only is the multiple inheritance tricky, but automatically combining relationships will be even worse. This will get tricky very quickly. I don't know how to handle systems of equations yet, and it's clear with lots of parametric relationships that parameter locking or specifying independent/dependent variables will be necessary.
While I have no experience with this kind of model, and little experience with SymPy, here are some tips:
@property
def mass(self):
    return self._params['m']

@mass.setter
def mass(self, value):
    param = 'm'
    self._params[param] = value
    self.balance(param)

@property
def volume(self):
    return self._params['v']

@volume.setter
def volume(self, value):
    param = 'v'
    self._params[param] = value
    self.balance(param)
As you've noticed, you're repeating a lot of code for each variable. This is unnecessary, and since you'll have lots of variables, it eventually leads to code maintenance nightmares. You have your variables neatly arranged in varString = 'm v rho'. My suggestion is to go further and define a dictionary:
my_vars= {"m":"mass", "v":"volume", "rho":"density"}
and then add the properties and setters dynamically to the class (instead of explicitly):
from functools import partial

def var_getter(varname, self):
    return self._params[varname]

def var_setter(varname, self, value):
    self._params[varname] = value
    self.balance(varname)

for k, v in my_vars.items():
    # the property is named after the long name (e.g. "mass") and
    # reads/writes the short key (e.g. "m") in self._params
    setattr(Stuff, v, property(fget=partial(var_getter, k),
                               fset=partial(var_setter, k)))
This way, you only need to write the getter and setter once.
If you want to have a couple of different getters, you can still use this technique. Store either the getter for each variable or the variables for each getter - whichever is more convenient.
Another thing that may be useful once equations get complex is that you can keep equations as strings in your source:
density_eq = sympy.sympify("rho - m / v")
Using these two "tricks", you may even want to keep your variables and your equations defined in external text files, or maybe CSV.
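As a concrete sketch of that string-based pattern, an equation kept as text can be solved symbolically once and then evaluated numerically (the numbers reuse the density example above):

```python
from sympy import sympify, symbols, solve

# equation kept as a string, as suggested above
density_eq = sympify("rho - m / v")
m, v, rho = symbols("m v rho")

# solve symbolically for the unknown (volume), then substitute numbers
volume_expr = solve(density_eq, v)[0]  # gives m/rho
volume = volume_expr.evalf(subs={m: 10.0, rho: 0.1})
print(volume)  # volume = m/rho = 100 here
```

Solving once and caching volume_expr avoids re-running solve() on every rebalance, which matters once there are dozens of interrelated properties.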
Viewing your problem as a constraint-solving problem, you may also want to look at python-constraint (http://labix.org/python-constraint) in addition to SymPy.
Just realized that the python-constraint package doesn't apply here, since the domain needs to be finite. If the domain were finite, though, here's an illustrative example:
import constraint as cst

p = cst.Problem()
p.addVariables(['rho', 'm', 'v'], range(100))
p.addConstraint(lambda rho, m, v: rho * v == m, ['rho', 'm', 'v'])
p.addConstraint(lambda rho: rho == 2, ['rho'])  # setting inputs; for illustration only
p.addConstraint(lambda m: m == 10, ['m'])
print(p.getSolutions())
# [{'m': 10, 'rho': 2, 'v': 5}]
However, since the real domain is needed here, the package doesn't apply.