We are attempting to debug a Pyomo model. Ipopt with halt_on_ampl_error set to True tells us "Error evaluating constraint 30". Is there an easy way to programmatically look up a constraint in a Pyomo model by its number (using Ipopt's numbering)?
You can just modify your call to solve slightly:
opt = SolverFactory('ipopt')
res = opt.solve(model, symbolic_solver_labels=True)
Then, you should see a more useful error message in the ipopt output.
Let me extend this answer to address the other part of the question: looking up a constraint by its number. Because Pyomo variables and constraints do not themselves carry indices, this is very solver-interface-specific. For PyNumero, you have a couple of options. Suppose you have the following ConcreteModel called m and PyomoNLP called nlp.
import pyomo.environ as pyo
from pyomo.contrib.pynumero.interfaces.pyomo_nlp import PyomoNLP
m = pyo.ConcreteModel()
m.x = pyo.Var()
m.y = pyo.Var()
m.obj = pyo.Objective(expr=m.y)
m.c1 = pyo.Constraint(expr=m.y >= m.x)
m.c2 = pyo.Constraint(expr=m.y >= -m.x)
nlp = PyomoNLP(m)
If you just want to get the index of a few variables or constraints, you can use
var_indices = nlp.get_primal_indices([m.x, m.y])
con_indices = nlp.get_constraint_indices([m.c1, m.c2])
If you want complete maps in both directions, you can use
# ComponentMap lets you key reliably on Pyomo modeling components
con_to_index = pyo.ComponentMap()
index_to_con = dict()
var_to_index = pyo.ComponentMap()
index_to_var = dict()
for ndx, var in enumerate(nlp.get_pyomo_variables()):
    var_to_index[var] = ndx
    index_to_var[ndx] = var
for ndx, con in enumerate(nlp.get_pyomo_constraints()):
    con_to_index[con] = ndx
    index_to_con[ndx] = con
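Once the maps are built, looking up the constraint from the error message is direct. A small sketch (one caveat, which is my assumption to verify: the number Ipopt reports through the AMPL interface may be 0- or 1-based, so check the neighboring index as well):

# e.g. look up "constraint 30" from the Ipopt error message
bad_con = index_to_con[30]
print(bad_con.name)
print(bad_con.expr)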
I will assume that you are using the PyomoNLP interface in PyNumero. If you are using the ASL interface, this is a little more difficult. With PyomoNLP, there are several methods that should be able to do what you need. Have a look at the comments in pyomo_nlp.py. Note that this interface is fairly new, and the API is subject to change.
My question is simple. Given a Pyomo constraint, how can I easily extract the variables that appear in the constraint?
This question has been asked a few times already. I believe the Pyomo internals have been modified and the proposed solutions no longer work.
How to get variables of a constraint in Pyomo
Access all variables occurring in a pyomo constraint
Minimum working test problem:
from pyomo.environ import *
m = ConcreteModel()
m.I = Set(initialize=[i for i in range(5)])
m.x = Var(m.I,bounds=(-10,10),initialize=1.0)
m.z = Var(bounds=(-100,100), initialize=5.0)
m.con1 = Constraint(expr=m.x[0] + m.x[1] - m.x[3] >= 10)
m.con2 = Constraint(expr=m.x[0]*m.x[3] + m.x[1] >= 0)
m.con3 = Constraint(expr=m.x[4]*m.x[3] + m.x[0]*m.x[3] - m.x[4] == 0)
m.obj = Objective(expr=sum(m.x[i]**2 for i in m.I))
m.pprint()
opt = SolverFactory('ipopt')
opt.options['max_iter'] = 0
opt.solve(m, tee=True)
In this example, I would like to programmatically inspect the variables in con1.
The second link has the correct solution: Access all variables occurring in a pyomo constraint
identify_variables() still exists, but it looks like it was moved to pyomo.core.expr.visitor.
It might be worth promoting it into the pyomo.core.expr namespace.
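For completeness, here is a short sketch against the test problem above; as of recent Pyomo versions the import is:

from pyomo.core.expr.visitor import identify_variables

# Collect the variables that appear in con1's body expression
vars_in_con1 = list(identify_variables(m.con1.body))
print([v.name for v in vars_in_con1])  # ['x[0]', 'x[1]', 'x[3]']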
I am implementing a disjunctive programming problem in Pyomo. The disjuncts contain nonconvex nonlinear constraints, so I can't use the BigM transformation directly; I have to pass suffixes with the big-M values. Here is a snippet of my code:
m = ConcreteModel()
m.Block1 = Block()
#... Here some variables and other constraints are defined.
# Defining the rule for the disjunct:
def _d(disjunct, flag):
    m = disjunct.parent_block()
    if flag:
        # Here 2 constraints are valid
        disjunct.c1 = Constraint(expr=x1 == 0)
        disjunct.c2 = Constraint(expr=x2 == 0)
    else:
        # Here 7 constraints are valid
        disjunct.c1 = Constraint(expr=y1 == 0)
        disjunct.c2 = Constraint(expr=y2 == 0)
        disjunct.c3 = Constraint(expr=y3 == 0)
        ...
# Define the disjunct:
m.Block1.d1 = Disjunct([0, 1], rule=_d)
# Define the disjunction
def _c1(model):
    return [model.d1[0], model.d1[1]]
m.Block1.c1 = Disjunction(rule=_c1)
# Applying the transformation:
TransformationFactory('gdp.bigm').apply_to(m)
I have to pass suffixes for the bigM and don't know how to do this. Also, how can this be done with different big-M values for different constraints?
I'm looking forward to your help.
For linear constraints, Pyomo can automatically compute appropriate big-M values for you. For nonlinear constraints, you'll want to add:
m.BigM = Suffix(direction=Suffix.LOCAL)
m.BigM[m.Block1.d1[0].c1] = your_value_here
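If different constraints need different big-M values, add one suffix entry per constraint; my understanding is that an entry under the key None acts as a fallback default. A hedged sketch, where the numeric values are placeholders you would choose from your variable bounds:

m.BigM = Suffix(direction=Suffix.LOCAL)
m.BigM[None] = 1e4                    # fallback default for any constraint not listed
m.BigM[m.Block1.d1[0].c1] = 500.0     # per-constraint values
m.BigM[m.Block1.d1[0].c3] = 2000.0
m.BigM[m.Block1.d1[1].c2] = 300.0
TransformationFactory('gdp.bigm').apply_to(m)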
Note that different ways of expressing disjunctions are also available, which might make things easier for you. I personally like option 2 the best if I want to be clear, and option 3 if I want to be more concise: http://pyomo.readthedocs.io/en/latest/modeling_extensions/gdp.html
I'm new to TensorFlow and have difficulty understanding how the computations work. I could not find the answer to my question on the web.
For the following piece of code, the last time I print "d" in the for loop of the train_neural_net() function, I expect the values to be identical to when I print test_distance.eval. But they are way different. Can anyone tell me why this is happening? Isn't TensorFlow supposed to cache the Variable values learned in the for loop and use them when I run test_distance.eval?
def neural_network_model1(data):
    nn1_hidden_1_layer = {'weights': tf.Variable(tf.random_normal([5, n_nodes_hl1])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    nn1_hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    nn1_output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, vector_size])), 'biasses': tf.Variable(tf.random_normal([vector_size]))}
    nn1_l1 = tf.add(tf.matmul(data, nn1_hidden_1_layer["weights"]), nn1_hidden_1_layer["biasses"])
    nn1_l1 = tf.sigmoid(nn1_l1)
    nn1_l2 = tf.add(tf.matmul(nn1_l1, nn1_hidden_2_layer["weights"]), nn1_hidden_2_layer["biasses"])
    nn1_l2 = tf.sigmoid(nn1_l2)
    nn1_output = tf.add(tf.matmul(nn1_l2, nn1_output_layer["weights"]), nn1_output_layer["biasses"])
    return nn1_output

def neural_network_model2(data):
    nn2_hidden_1_layer = {'weights': tf.Variable(tf.random_normal([5, n_nodes_hl1])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl1]))}
    nn2_hidden_2_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), 'biasses': tf.Variable(tf.random_normal([n_nodes_hl2]))}
    nn2_output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, vector_size])), 'biasses': tf.Variable(tf.random_normal([vector_size]))}
    nn2_l1 = tf.add(tf.matmul(data, nn2_hidden_1_layer["weights"]), nn2_hidden_1_layer["biasses"])
    nn2_l1 = tf.sigmoid(nn2_l1)
    nn2_l2 = tf.add(tf.matmul(nn2_l1, nn2_hidden_2_layer["weights"]), nn2_hidden_2_layer["biasses"])
    nn2_l2 = tf.sigmoid(nn2_l2)
    nn2_output = tf.add(tf.matmul(nn2_l2, nn2_output_layer["weights"]), nn2_output_layer["biasses"])
    return nn2_output

def train_neural_net():
    prediction1 = neural_network_model1(x1)
    prediction2 = neural_network_model2(x2)
    distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(prediction1, prediction2)), reduction_indices=1))
    cost = tf.reduce_mean(tf.multiply(y, distance))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    hm_epochs = 500
    test_result1 = neural_network_model1(x3)
    test_result2 = neural_network_model2(x4)
    test_distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(test_result1, test_result2)), reduction_indices=1))
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            _, d = sess.run([optimizer, distance], feed_dict={x1: train_x1, x2: train_x2, y: train_y})
            print("Epoch", epoch, "distance", d)
        print("test distance", test_distance.eval({x3: train_x1, x4: train_x2}))

train_neural_net()
Each time you call the functions neural_network_model1() or neural_network_model2(), you create a new set of variables, so there are four sets of variables in total.
The call to sess.run(tf.global_variables_initializer()) initializes all four sets of variables.
When you train in the for loop, you only update the first two sets of variables, created with these lines:
prediction1 = neural_network_model1(x1)
prediction2 = neural_network_model2(x2)
When you evaluate test_distance.eval(), the tensor test_distance depends only on the last two sets of variables, which were created with these lines:
test_result1 = neural_network_model1(x3)
test_result2 = neural_network_model2(x4)
These variables were never updated in the training loop, so the evaluation results will be based on the random initial values.
TensorFlow does include some code for sharing weights between multiple calls to the same function, using with tf.variable_scope(...): blocks. For more information on how to use these, see the tutorial on variables and sharing on the TensorFlow website.
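As a hedged illustration of that pattern (not the questioner's exact model), here is one shared layer, reusing the placeholders and sizes from the question; tf.get_variable together with variable-scope reuse is the TF1-era API matching the code above:

def shared_layer(data):
    # tf.get_variable looks the variable up by name within the current scope,
    # creating it only if it does not exist yet
    w = tf.get_variable("w", shape=[5, n_nodes_hl1], initializer=tf.random_normal_initializer())
    b = tf.get_variable("b", shape=[n_nodes_hl1], initializer=tf.random_normal_initializer())
    return tf.sigmoid(tf.add(tf.matmul(data, w), b))

with tf.variable_scope("net1"):
    train_out = shared_layer(x1)
with tf.variable_scope("net1", reuse=True):
    test_out = shared_layer(x3)   # reuses the same w and b trained above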
You don't need to define two functions to generate the models; you can use tf.name_scope and pass a model name into the function to use as a prefix for the variable declarations. On the other hand, you defined two distance tensors, distance and test_distance. But your model learns from the training data to minimize cost, which depends only on the first distance tensor. Therefore test_distance is never trained, and the pair of models feeding it never learns anything! There is no need for two distance tensors; you only need one. When you want the training distance, feed it training data, and when you want the test distance, feed it test data.
Anyway, if you want the second distance to work, you would have to declare another optimizer for it and train it just as you did the first. Also consider that models learn based on their initial values as well as the training data. Even if you feed both models exactly the same training batches, you can't expect them to end up with identical characteristics, since the initial weights are different and this can lead to different local minima of the error surface. Finally, notice that whenever you call neural_network_model1 or neural_network_model2 you generate new weights and biases, because tf.Variable creates new variables each time.
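A minimal sketch of the single-distance approach described above (test_x1 and test_x2 are hypothetical names for your held-out data; they are not defined in the original code):

# Reuse the same `distance` tensor, changing only what is fed in
train_d = sess.run(distance, feed_dict={x1: train_x1, x2: train_x2})
test_d = sess.run(distance, feed_dict={x1: test_x1, x2: test_x2})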
In fact, I'm trying to implement the very simple model formulation:
min sum_i(y_i*f_i) + sum_i(sum_j(x_ij*c_ij))
s.t. sum_i(x_ij) = 1 for all j
x_ij <= y_i for all i,j
x_ij, y_i are binary
But I simply cannot figure out how the Python API works. They suggest creating variables like this:
model.variables.add(obj = fixedcost,
                    lb = [0] * num_facilities,
                    ub = [1] * num_facilities,
                    types = ["B"] * num_facilities)
# Create one binary variable for each facility/client pair. The variables
# model whether a client is served by a facility.
for c in range(num_clients):
    model.variables.add(obj = cost[c],
                        lb = [0] * num_facilities,
                        ub = [1] * num_facilities,
                        types = ["B"] * num_facilities)
# Create corresponding indices for later use
supply = []
for c in range(num_clients):
    supply.append([])
    for f in range(num_facilities):
        supply[c].append((c + 1) * num_facilities + f)
# Constraint no. 1:
for c in range(num_clients):
    assignment_constraint = cplex.SparsePair(ind = [supply[c][f] for f in range(num_facilities)],
                                             val = [1] * num_facilities)
    model.linear_constraints.add(lin_expr = [assignment_constraint],
                                 senses = ["L"],
                                 rhs = [1])
For these constraints I have no idea how the variables from above are referred to, since the code only mentions an auxiliary list of lists. Can anyone explain to me how this should work? The problem is easy; I also know how to do it in C++, but the Python API is a closed book to me.
The problem is the uncapacitated facility location problem and I want to adapt the example file facility.py
EDIT: One idea for constraint no. 2 is to create one-dimensional vectors and use vector addition to build the final constraint. But that gives me an error about an unsupported operand for SparsePairs:
for f in range(num_facilities):
    index = [f]
    value = [-1.0]
    for c in range(num_clients):
        open_constraint = cplex.SparsePair(ind = index, val = value) + cplex.SparsePair(ind = [supply[c][f]], val = [1.0])
        model.linear_constraints.add(lin_expr = [open_constraint],
                                     senses = ["L"],
                                     rhs = [0])
The Python API is closer to the C Callable Library in nature than the C++/Concert API. Variables are indexed from 0 to model.variables.get_num() - 1 and can be referred to by index, for example, when creating constraints. They can also be referred to by name (the add method has an optional names argument). See the documentation for the VariablesInterface here (this is for version 12.5.1, which I believe you are using, given your previous post).
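For instance, constraint no. 2 (x_ij <= y_i) can be written by index alone; a hedged sketch assuming, as in facility.py above, that the facility variables occupy indices 0 through num_facilities - 1 and supply[c][f] is the index of the variable for client c and facility f:

# Constraint no. 2: x_cf - y_f <= 0 for every client/facility pair
for c in range(num_clients):
    for f in range(num_facilities):
        open_constraint = cplex.SparsePair(ind = [supply[c][f], f],
                                           val = [1.0, -1.0])
        model.linear_constraints.add(lin_expr = [open_constraint],
                                     senses = ["L"],
                                     rhs = [0.0])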
It may help to start by looking at the simplest examples, like lpex1.py (and reading the comments). Finally, I highly recommend playing with the Python API from the interactive Python prompt (aka the REPL). You can read the help there and type things in to see what they do.
You also might want to take a look at the docplex package. This is a modeling layer built on top of the CPLEX Python API (and it can also solve on the cloud if you don't have CPLEX installed locally).
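For a taste of docplex, here is a hedged sketch of the same facility model (assuming, as in the example above, that fixedcost is the list of per-facility fixed costs and cost[c] is the list of costs for serving client c from each facility):

from docplex.mp.model import Model

mdl = Model(name='facility')
y = mdl.binary_var_list(num_facilities, name='open')
x = mdl.binary_var_matrix(range(num_clients), range(num_facilities), name='serve')
# Each client is assigned to exactly one facility
mdl.add_constraints(mdl.sum(x[c, f] for f in range(num_facilities)) == 1
                    for c in range(num_clients))
# Clients can only be assigned to open facilities
mdl.add_constraints(x[c, f] <= y[f]
                    for c in range(num_clients) for f in range(num_facilities))
mdl.minimize(mdl.sum(fixedcost[f] * y[f] for f in range(num_facilities))
             + mdl.sum(cost[c][f] * x[c, f]
                       for c in range(num_clients) for f in range(num_facilities)))
mdl.solve()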
I'm interested in using SymPy to augment my engineering models. Instead of defining a rigid set of inputs and outputs, I'd like for the user to simply provide everything they know about a system, then apply that data to an algebraic model which calculates the unknowns (if it has enough data).
As an example, say I have some stuff which has some mass, volume, and density. I'd like to define a relationship between those parameters (density = mass / volume) such that once the user has provided any two of the variables, the third is automatically calculated. Finally, if any value is later updated, one of the other values should change to preserve the relationship. One challenge with this system is that when there are multiple independent variables, there needs to be a way to specify which independent variable should change to satisfy the requirement.
Here's some working code that I have currently:
from sympy import *

class Stuff(object):
    def __init__(self, *args, **kwargs):
        # String of variables
        varString = 'm v rho'
        # Initialize symbolic variables
        m, v, rho = symbols(varString)
        # Define density equation
        # "rho = m / v" becomes "rho - m / v = 0", so the expression is rho - m / v
        # This is because solve() assumes equation = 0
        density_eq = rho - m / v
        # Store the equation and the variable string for later use
        self._eqn = density_eq
        self._varString = varString
        # Get a list of variable names
        variables = varString.split()
        # Initialize parameters dictionary
        self._params = {name: None for name in variables}

    @property
    def mass(self):
        return self._params['m']

    @mass.setter
    def mass(self, value):
        param = 'm'
        self._params[param] = value
        self.balance(param)

    @property
    def volume(self):
        return self._params['v']

    @volume.setter
    def volume(self, value):
        param = 'v'
        self._params[param] = value
        self.balance(param)

    @property
    def density(self):
        return self._params['rho']

    @density.setter
    def density(self, value):
        param = 'rho'
        self._params[param] = value
        self.balance(param)

    def balance(self, param):
        # Get the list of all variable names
        variables = self._varString.split()
        # Get a copy of the list except for the recently changed parameter
        others = [name for name in variables if name != param]
        # Loop through the less recently changed variables
        for name in others:
            try:
                # Symbolically solve for the current variable
                eq = solve(self._eqn, [symbols(name)])[0]
                # Get a dictionary of variables and values to substitute for a
                # numerical solution (will contain None for unset values)
                indvars = {symbols(n): self._params[n] for n in variables if n != name}
                # Set the parameter with the new numeric solution
                self._params[name] = eq.evalf(subs=indvars)
            except Exception, e:
                pass
if __name__ == "__main__":
    # Run some examples - need to turn these into actual tests
    stuff = Stuff()
    stuff.density = 0.1
    stuff.mass = 10.0
    print stuff.volume

    stuff = Stuff()
    stuff.mass = 10.0
    stuff.volume = 100.0
    print stuff.density

    stuff = Stuff()
    stuff.density = 0.1
    stuff.volume = 100.0
    print stuff.mass

    # Increase volume
    print "Setting Volume to 200"
    stuff.volume = 200.0
    print "Mass changes"
    print "Mass {0}".format(stuff.mass)
    print "Density {0}".format(stuff.density)

    # Setting mass to 15
    print "Setting Mass to 15.0"
    stuff.mass = 15.0
    print "Volume changes"
    print "Volume {0}".format(stuff.volume)
    print "Density {0}".format(stuff.density)

    # Setting density to 0.5
    print "Setting Density to 0.5"
    stuff.density = 0.5
    print "Mass changes"
    print "Mass {0}".format(stuff.mass)
    print "Volume {0}".format(stuff.volume)

    print "It is impossible to let mass and volume drive density with this setup since either mass or volume will " \
          "always be recalculated first."
I tried to be as elegant as I could be with the overall approach and layout of the class, but I can't help but wonder if I'm going about it wrong - if I'm using SymPy in the wrong way to accomplish this task. I'm interested in developing a complex aerospace vehicle model with dozens/hundreds of interrelated properties. I'd like to find an elegant and extensible way to use SymPy to govern property relationships across the vehicle before I scale up from this fairly simple example.
I'm also concerned about how and when to rebalance the equations. I'm familiar with PyQt signals and slots, which are my first thought for linking dependent things together to trigger a model update (emit a signal when a value updates, to be received by the rebalancing functions of each system of equations that relies on that parameter?). Yeah, I really don't know the best way to do this with SymPy. I might need a bigger example to explore systems of equations.
Here are some thoughts on where I'm headed with this project. Just using mass as an example, I would like to define the entire vehicle mass as the sum of the subsystem masses, and all subsystem masses as the sum of component masses. In addition, mass relationships will exist between certain subsystems and components. These relationships will drive the model until more concrete data is provided. So if the default ratio of fuel mass to total vehicle mass is 50%, then specifying 100 lb of fuel will size the vehicle at 200 lb. However, if I later specify that the vehicle is actually 210 lb, I would want the relationship recalculated (let it become dependent since fuel mass and vehicle mass were set more recently, or because I specified that they are independent variables or locked or something). The next problem is iteration. When circular or conflicting relationships exist in a model, the model must be iterated on to hopefully converge on a solution. This is often the case with the vehicle mass model described above: if the vehicle gets heavier, there needs to be more fuel to meet a requirement, which causes the vehicle to get even heavier, etc. I'm not sure how to leverage SymPy in these situations.
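To make the circular case concrete, here is a minimal sketch (with invented numbers) of closing the fuel/vehicle mass loop by solving the coupled equations simultaneously with SymPy rather than iterating:

from sympy import symbols, solve

m_fuel, m_vehicle = symbols('m_fuel m_vehicle', positive=True)
m_dry = 100.0        # assumed non-fuel mass, lb
fuel_fraction = 0.5  # fuel mass is 50% of total vehicle mass
system = [m_fuel - fuel_fraction * m_vehicle,
          m_vehicle - (m_dry + m_fuel)]
print solve(system, [m_fuel, m_vehicle])  # -> {m_fuel: 100, m_vehicle: 200}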
Any suggestions?
PS
Good explanation of the challenges associated with space launch vehicle design.
Edit: Changing the code structure based on goncalopp's suggestions...
from functools import partial
from sympy import sympify, symbols, solve

class Balanced(object):
    def __init__(self, variables, equationStr):
        self._variables = variables
        self._equation = sympify(equationStr)
        # Initialize parameters dictionary
        self._params = {name: None for name in self._variables}

        def var_getter(varname, self):
            return self._params[varname]

        def var_setter(varname, self, value):
            self._params[varname] = value
            self.balance(varname)

        for varname in self._variables:
            setattr(Balanced, varname, property(fget=partial(var_getter, varname),
                                                fset=partial(var_setter, varname)))

    def balance(self, recentlyChanged):
        # Get a copy of the list except for the recently changed parameter
        others = [name for name in self._variables if name != recentlyChanged]
        # Loop through the less recently changed variables
        for name in others:
            try:
                eq = solve(self._equation, [symbols(name)])[0]
                indvars = {symbols(n): self._params[n] for n in self._variables if n != name}
                self._params[name] = eq.evalf(subs=indvars)
            except Exception, e:
                pass

class HasMass(Balanced):
    def __init__(self):
        super(HasMass, self).__init__(variables=['mass', 'volume', 'density'],
                                      equationStr='density - mass / volume')

class Prop(HasMass):
    def __init__(self):
        super(Prop, self).__init__()
if __name__ == "__main__":
    prop = Prop()
    prop.density = 0.1
    prop.mass = 10.0
    print prop.volume

    prop = Prop()
    prop.mass = 10.0
    prop.volume = 100.0
    print prop.density

    prop = Prop()
    prop.density = 0.1
    prop.volume = 100.0
    print prop.mass
What this makes me want to do is use multiple inheritance or decorators to automatically assign physical inter-related properties to things. So I could have another class called "Cylindrical" which defines radius, diameter, and length properties, then I could have like class DowelRod(HasMass, Cylindrical). The really tricky part here is that I would want to define the cylinder volume (volume = length * pi * radius^2) and let that volume interact with the volume defined in the mass balance equations... such that, perhaps mass would respond to a change in length, etc. Not only is the multiple inheritance tricky, but automatically combining relationships will be even worse. This will get tricky very quickly. I don't know how to handle systems of equations yet, and it's clear with lots of parametric relationships that parameter locking or specifying independent/dependent variables will be necessary.
While I have no experience with this kind of model, and little experience with SymPy, here are some tips:
@property
def mass(self):
    return self._params['m']

@mass.setter
def mass(self, value):
    param = 'm'
    self._params[param] = value
    self.balance(param)

@property
def volume(self):
    return self._params['v']

@volume.setter
def volume(self, value):
    param = 'v'
    self._params[param] = value
    self.balance(param)
As you've noticed, you're repeating a lot of code for each variable. This is unnecessary, and since you'll have lots of variables, it eventually leads to code-maintenance nightmares. You have your variables neatly arranged in varString = 'm v rho'. My suggestion is to go further and define a dictionary:
my_vars= {"m":"mass", "v":"volume", "rho":"density"}
and then add the properties and setters dynamically to the class (instead of explicitly):
from functools import partial

def var_getter(varname, self):
    return self._params[varname]

def var_setter(varname, self, value):
    self._params[varname] = value
    self.balance(varname)

# Attach each property under its long name (e.g. Stuff.mass), binding the
# short name that _params and balance() use (e.g. 'm') into the accessors
for k, v in my_vars.items():
    setattr(Stuff, v, property(fget=partial(var_getter, k),
                               fset=partial(var_setter, k)))
This way, you only need to write the getter and setter once.
If you want to have a couple of different getters, you can still use this technique. Store either the getter for each variable or the variables for each getter - whichever is more convenient.
Another thing that may be useful once equations get complex is that you can keep equations as strings in your source:
density_eq = sympy.sympify( "rho - m / v" )
Using these two "tricks", you may even want to keep your variables and your equations defined in external text files, or maybe CSV.
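For instance, a minimal sketch under an invented one-relation-per-line file format (the variable names, a semicolon, then the expression; the format is purely illustrative):

# relations.txt (hypothetical):
# m v rho ; rho - m / v
from sympy import sympify

with open('relations.txt') as f:
    for line in f:
        var_part, expr_part = line.split(';')
        variables = var_part.split()   # ['m', 'v', 'rho']
        equation = sympify(expr_part)  # rho - m/v as a SymPy expression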
Viewing your problem as a constraint-solving problem, you may also want to look at python-constraint (http://labix.org/python-constraint) in addition to SymPy.
Just realized that the python-constraint package doesn't apply here since the domain needs to be finite. If the domain were finite though, here's an illustrative example:
import constraint as cst
p = cst.Problem()
p.addVariables(['rho','m', 'v'], range(100))
p.addConstraint(lambda rho,m,v: rho * v == m, ['rho', 'm', 'v'])
p.addConstraint(lambda rho: rho == 2, ['rho']) # setting inputs; for illustration only
p.addConstraint(lambda m: m == 10, ['m'])
print p.getSolutions()
[{'m': 10, 'rho': 2, 'v': 5}]
However, since the real domain is needed here, the package doesn't apply.