PYOMO: How to use abstract models with internal data - pyomo

Hei all,
I am trying to set up an abstract model for a very simple QP of the form
min (x-x0)^2
s.t.
A x = b
C x <= d
I would like to use an abstract model, as I need to re-solve with changing parameters (mainly x0, but potentially also A, b, C, d). Right now I am struggling with simply setting the parameters in the model instance. I do not want to use an external data file, but rather internal Python variables. All the examples I find online use AMPL-formatted data files.
This is the code I have right now
import pyomo.environ as pe
model = pe.AbstractModel()
# the sets
model.n = pe.Param(within=pe.NonNegativeIntegers)
model.m = pe.Param(initialize = 1)
model.ss = pe.RangeSet(1, model.n)
model.os = pe.RangeSet(1, model.m)
# the starting point and the constraint parameters
model.x_hat = pe.Param(model.ss)
model.A = pe.Param(model.os, model.ss)
model.b = pe.Param(model.os)
model.C = pe.Param(model.os, model.os)
model.d = pe.Param(model.ss, model.os)
# the decision variables
model.x_projected = pe.Var(model.ss)
# the constraints
# A x = b
def sum_of_elements_rule(model):
    value = model.A * model.x_projected
    return value == model.b
model.sumelem = pe.Constraint(model.os, rule=sum_of_elements_rule)
# C x <= d
def positivity_constraint(model):
    return model.C*model.x_projected <= model.d
model.bounds = pe.Constraint(model.ss, rule=positivity_constraint)
# the cost
def cost_rule(model):
    return sum((model.x_projected[i] - model.x_hat[i])**2 for i in model.ss)
model.cost = pe.Objective(rule=cost_rule)
instance = model.create_instance()
And somehow here I am stuck. How do I set the parameters now?
Thanks and best, Theo

I know this is an old post, but a solution to this would have helped me, so here it is:
## TEST
data_init = {None: dict(
    n={None: 3},
    d={0: 0, 1: 1, 2: 2},
    x_hat={0: 10, 1: -1, 2: -100},
    b={None: 10}
)}
# create instance
instance = model.create_instance(data_init)
This creates the instance in an equivalent way to what you did, but more formally. The outer None key stands for the single, unnamed instance; inside it, scalar parameters are keyed by None and indexed parameters by their index values.
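For reference, here is a minimal self-contained sketch of the same pattern with a toy model (not the model from the question), just to show how the data dictionary is wired up:
import pyomo.environ as pe
m = pe.AbstractModel()
m.n = pe.Param(within=pe.NonNegativeIntegers)
m.ss = pe.RangeSet(1, m.n)
m.c = pe.Param(m.ss)                                # an indexed parameter
m.x = pe.Var(m.ss, domain=pe.NonNegativeReals)
def obj_rule(m):
    return sum(m.c[i] * m.x[i] for i in m.ss)
m.obj = pe.Objective(rule=obj_rule)
# scalar params keyed by None, indexed params by their indices
data = {None: {
    'n': {None: 3},
    'c': {1: 2.0, 2: 0.5, 3: 1.0},
}}
inst = m.create_instance(data)
inst.pprint()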

OK, I seem to have figured out what the problem is. If I want to set a parameter after I create an instance, I need the
mutable=True
flag. Then I can set the parameter with something like
for i in range(model_dimension):
    getattr(instance, 'd')[i] = i
I need to choose the model dimension before I create an instance (which is OK for my case). The instance can then be reused with different parameters for the constraints (see the re-solve snippet after the listing).
The code below should work for the problem
min (x-x_hat)' * (x-x_hat)
s.t.
sum(x) = b
x[i] >= d[i]
with x_hat, b, d as parameters.
import pyomo.environ as pe
model = pe.AbstractModel()
# model dimension
model.n = pe.Param(default=2)
# state space set
model.ss = pe.RangeSet(0, model.n-1)
# equality
model.b = pe.Param(default=5, mutable=True)
# inequality
model.d = pe.Param(model.ss, default=0.0, mutable=True)
# decision var
model.x = pe.Var(model.ss)
model.x_hat = pe.Param(model.ss, default=0.0, mutable=True)
# the cost
def cost_rule(model):
    return sum((model.x[i] - model.x_hat[i])**2 for i in model.ss)
model.cost = pe.Objective(rule=cost_rule)
# CONSTRAINTS
# each x_i bigger than d_i
def lb_rule(model, i):
    return (model.x[i] >= model.d[i])
model.state_bound = pe.Constraint(model.ss, rule=lb_rule)
# sum of x == b
def sum_rule(model):
    return (sum(model.x[i] for i in model.ss) == model.b)
model.state_sum = pe.Constraint(rule=sum_rule)
## TEST
# define model dimension
model_dimension = 3
model.n = model_dimension
# create instance
instance = model.create_instance()
# set d
for i in range(model_dimension):
    getattr(instance, 'd')[i] = i
# set x_hat
xh = (10,1,-100)
for i in range(model_dimension):
    getattr(instance, 'x_hat')[i] = xh[i]
# set b
instance.b = 10
# solve
solver = pe.SolverFactory('ipopt')
result = solver.solve(instance)
instance.display()
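Since b, d and x_hat are declared mutable, the same instance can then be re-solved with new data without rebuilding it. Continuing from the script above (a quick sketch, values picked arbitrarily):
# re-use the instance with a different target point and right-hand side
new_xh = (0.0, 2.0, 4.0)
for i in range(model_dimension):
    instance.x_hat[i] = new_xh[i]
instance.b = 7
result = solver.solve(instance)
print([pe.value(instance.x[i]) for i in range(model_dimension)])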

Related

Is it possible to find all integer solutions?

I want to get all integer solutions within a limited time; is that possible?
This is a linear, integer constraint satisfaction problem, which can be solved efficiently by OR Tools' CP-SAT. I've modified their example to solve your problem in Python:
from ortools.sat.python import cp_model

class VarArraySolutionPrinter(cp_model.CpSolverSolutionCallback):
    """Print intermediate solutions."""

    def __init__(self, variables):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self.__variables = variables
        self.__solution_count = 0

    def on_solution_callback(self):
        self.__solution_count += 1
        for v in self.__variables:
            print('%s=%i' % (v, self.Value(v)), end=' ')
        print()

    def solution_count(self):
        return self.__solution_count

def SearchForAllSolutionsSampleSat():
    """Showcases calling the solver to search for all solutions."""
    # Creates the model.
    model = cp_model.CpModel()
    p = [1, 2, 3, 4]
    ceq = 30
    cgeq = 2
    N = len(p)
    # Creates the variables.
    x = [model.NewIntVar(0, 100, f'x{i}') for i in range(N)]
    # Creates the constraints.
    model.Add(sum([xi*pi for xi, pi in zip(x, p)]) == ceq)
    model.Add(sum(x) >= cgeq)
    # Creates a solver and solves.
    solver = cp_model.CpSolver()
    solution_printer = VarArraySolutionPrinter(x)
    status = solver.SearchForAllSolutions(model, solution_printer)
    print('Status = %s' % solver.StatusName(status))
    print('Number of solutions found: %i' % solution_printer.solution_count())

SearchForAllSolutionsSampleSat()
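Note that recent OR-Tools releases deprecate SearchForAllSolutions in favour of a solver parameter; assuming OR-Tools 9.x, the equivalent (including a time limit, since you want the solutions within a limited time) would be:
solver = cp_model.CpSolver()
# enumerate every feasible solution instead of stopping at the first one
solver.parameters.enumerate_all_solutions = True
# stop after 10 seconds and report whatever has been found so far
solver.parameters.max_time_in_seconds = 10.0
status = solver.Solve(model, solution_printer)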

Change constraints based on variable value in Pyomo

Is there a way of changing the values of a constraint as the solver is running?
Basically, I have a constraint that depends on the value of a variable. The problem is that the constraint is evaluated based on the initial value of the variable, but isn't updated as the variable changes.
Here's a simple example:
from pyomo.environ import *
from pyomo.opt import SolverFactory
import numpy as np
# Setup
model = ConcreteModel()
model.A = Set(initialize = [0,1,2])
model.B = Set(initialize = [0,1,2])
model.x = Var(model.A, model.B, initialize=0)
# A constraint that I'd like to keep updating, based on the value of x
def changing_constraint_rule(model, a):
    x_values = list((model.x[a, b].value for b in model.B))
    if np.max(x_values) == 0:
        return Constraint.Skip
    else:
        # Not really important what goes here, just as long as it updates the constraint list
        if a == 1: return sum(model.x[a,b] for b in model.B) == 0
        else: return sum(model.x[a,b] for b in model.B) == 1
model.changing_constraint = Constraint(model.A, rule = changing_constraint_rule)
# Another constraint that changes the value of x
def bounding_constraint_rule(model, a):
    return sum(model.x[a, b] for b in model.B) == 1
model.bounding_constraint = Constraint(
    model.A,
    rule=bounding_constraint_rule)
# Some objective function
def obj_rule(model):
    return sum(model.x[a,b] for a in model.A for b in model.B)
model.objective = Objective(rule=obj_rule)
# Results
opt = SolverFactory("glpk")
results = opt.solve(model)
results.write()
model.x.display()
If I run model.changing_constraint.pprint() I can see that no constraints have been made, since the initial value of the variable model.x was set to 0.
If it's not possible to change the constraint values while solving, how could I formulate this problem differently to achieve what I'm looking for? I've read this other post but couldn't figure it out from the instructions.
I am giving you the same answer as in the other question, quoting @Gabe:
Any if-logic you use inside of rules should not involve the values of
variables (unless it is based on the initial value of a variable, in
which case you would wrap the variable in value() wherever you use it
outside of the main expression that is returned).
For example: model.x[a, b].value becomes value(model.x[a, b]).
But this still might not give you the solution you are looking for.
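To make that concrete, here is a sketch of the rule with the variable wrapped in value(). Keep in mind it is still evaluated only once, when the constraint is constructed, not while the solver is running:
# value() is already in scope through "from pyomo.environ import *"
def changing_constraint_rule(model, a):
    # read the current (initial) values of x; this happens once, at
    # construction time, not during the solve
    x_values = [value(model.x[a, b]) for b in model.B]
    if max(x_values) == 0:
        return Constraint.Skip
    if a == 1:
        return sum(model.x[a, b] for b in model.B) == 0
    return sum(model.x[a, b] for b in model.B) == 1
model.changing_constraint = Constraint(model.A, rule=changing_constraint_rule)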

How can I make an indicator function with Pyomo?

I'm looking to create a simple indicator variable in Pyomo. Assuming I have a variable x, this indicator function would take the value 1 if x > 0, and 0 otherwise.
Here's how I've tried to do it:
model = ConcreteModel()
model.A = Set(initialize=[1,2,3])
model.B = Set(initialize=['J', 'K'])
model.x = Var(model.A, model.B, domain = NonNegativeIntegers)
model.ix = Var(model.A, model.B, domain = Binary)
def ix_indicator_rule(model, a, b):
    return model.ix[a, b] == int(model.x[a, b] > 0)
model.ix_constraint = Constraint(model.A, model.B,
                                 rule=ix_indicator_rule)
The error message I get is along the lines of "Avoid this error by using Pyomo-provided math functions", which according to this link are found in pyomo.environ... but I'm not sure how to do this. I've tried using validate_PositiveValues(), like this:
def ix_indicator_rule(model, a, b):
    return model.ix[a, b] == validate_PositiveValues(model.x[a, b])
model.ix_constraint = Constraint(model.A, model.B,
                                 rule=ix_indicator_rule)
with no luck. Any help is appreciated!
You can achieve this with a "big-M" constraint, like this:
model = ConcreteModel()
model.A = Set(initialize=[1, 2, 3])
model.B = Set(initialize=['J', 'K'])
# m must be larger than largest allowed value of x, but it should
# also be as small as possible to improve numerical stability
model.m = Param(initialize=1e9)
model.x = Var(model.A, model.B, domain=NonNegativeIntegers)
model.ix = Var(model.A, model.B, domain=Binary)
# force model.ix to be 1 if model.x > 0
def ix_indicator_rule(model, a, b):
    return model.x[a, b] <= model.ix[a, b] * model.m
model.ix_constraint = Constraint(
    model.A, model.B, rule=ix_indicator_rule
)
But note that the big-M constraint is one-sided. In this example it forces model.ix on when model.x > 0, but doesn't force it off when model.x == 0. You can achieve the latter (but not the former) by flipping the inequality to model.x >= model.ix[a, b] * model.m. But you can't do both in the same model. Generally you just pick the version that suits your model, e.g., if setting model.ix to 1 worsens your objective function, then you would pick the version shown above, and the solver will take care of setting model.ix to 0 whenever it can.
Pyomo also offers disjunctive programming features (see here and here) which may suit your needs. And the cplex solver offers indicator constraints, but I don't know whether Pyomo supports them.
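As a side note (this goes beyond the answer above): if model.x is a nonnegative integer and the indicator has to be exact in both directions, you can pair the big-M constraint with a second linking constraint that uses a coefficient of 1 instead of m, for example:
# force model.ix to be 0 if model.x == 0 (valid because x is a
# nonnegative integer): x == 0 implies ix <= 0, while x >= 1 leaves ix
# free and the big-M constraint above then forces ix to 1
def ix_off_rule(model, a, b):
    return model.x[a, b] >= model.ix[a, b]
model.ix_off_constraint = Constraint(model.A, model.B, rule=ix_off_rule)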
I ended up using the Piecewise function and doing something like this:
DOMAIN_PTS = [0, 0, 1, 1000000000]
RANGE_PTS = [0, 0, 1, 1]
model.ix_constraint = Piecewise(
    model.A, model.B,
    model.ix, model.x,
    pw_pts=DOMAIN_PTS,
    pw_repn='INC',
    pw_constr_type='EQ',
    f_rule=RANGE_PTS,
    unbounded_domain_var=True)
def objective_rule(model):
    return sum(model.ix[a,b] for a in model.A for b in model.B)
model.objective = Objective(rule=objective_rule, sense=minimize)
It seems to work okay.

AttributeError: GaussianMixture instance has no attribute 'loglike'

I'm trying to implement Gaussian Mixture Model with Expectation–Maximization algorithm and I get this error.
This is the Gaussian Mixture model that I used:
class GaussianMixture:
    "Model mixture of two univariate Gaussians and their EM estimation"

    def __init__(self, data, mu_min=min(data), mu_max=max(data), sigma_min=.1, sigma_max=1, mix=.5):
        self.data = data
        # init with multiple gaussians
        self.one = Gaussian(uniform(mu_min, mu_max),
                            uniform(sigma_min, sigma_max))
        self.two = Gaussian(uniform(mu_min, mu_max),
                            uniform(sigma_min, sigma_max))
        # as well as how much to mix them
        self.mix = mix

    def Estep(self):
        "Perform an E(stimation)-step, freshening up self.loglike in the process"
        # compute weights
        self.loglike = 0.  # = log(p = 1)
        for datum in self.data:
            # unnormalized weights
            wp1 = self.one.pdf(datum) * self.mix
            wp2 = self.two.pdf(datum) * (1. - self.mix)
            # compute denominator
            den = wp1 + wp2
            # normalize
            wp1 /= den
            wp2 /= den
            # add into loglike
            self.loglike += log(wp1 + wp2)
            # yield weight tuple
            yield (wp1, wp2)

    def Mstep(self, weights):
        "Perform an M(aximization)-step"
        # compute denominators
        (left, rigt) = zip(*weights)
        one_den = sum(left)
        two_den = sum(rigt)
        # compute new means
        self.one.mu = sum(w * d / one_den for (w, d) in zip(left, data))
        self.two.mu = sum(w * d / two_den for (w, d) in zip(rigt, data))
        # compute new sigmas
        self.one.sigma = sqrt(sum(w * ((d - self.one.mu) ** 2)
                                  for (w, d) in zip(left, data)) / one_den)
        self.two.sigma = sqrt(sum(w * ((d - self.two.mu) ** 2)
                                  for (w, d) in zip(rigt, data)) / two_den)
        # compute new mix
        self.mix = one_den / len(data)

    def iterate(self, N=1, verbose=False):
        "Perform N iterations, then compute log-likelihood"

    def pdf(self, x):
        return (self.mix)*self.one.pdf(x) + (1-self.mix)*self.two.pdf(x)

    def __repr__(self):
        return 'GaussianMixture({0}, {1}, mix={2:.03})'.format(self.one,
                                                               self.two,
                                                               self.mix)

    def __str__(self):
        return 'Mixture: {0}, {1}, mix={2:.03}'.format(self.one,
                                                       self.two,
                                                       self.mix)
And then, while training, I get that error in the conditional statement.
# Check out the fitting process
n_iterations = 5
best_mix = None
best_loglike = float('-inf')
mix = GaussianMixture(data)
for _ in range(n_iterations):
    try:
        # train!
        mix.iterate(verbose=True)
        if mix.loglike > best_loglike:
            best_loglike = mix.loglike
            best_mix = mix
    except (ZeroDivisionError, ValueError, RuntimeWarning):
        # Catch division errors from bad starts, and just throw them out...
        pass
Any ideas why I have the following error?
AttributeError: GaussianMixture instance has no attribute 'loglike'
The loglike attribute is only created when you call the Estep method.
def Estep(self):
    "Perform an E(stimation)-step, freshening up self.loglike in the process"
    # compute weights
    self.loglike = 0.  # = log(p = 1)
You didn't call Estep between creating the GaussianMixture instance and accessing mix.loglike:
mix = GaussianMixture(data)
for _ in range(n_iterations):
    try:
        # train!
        mix.iterate(verbose=True)
        if mix.loglike > best_loglike:
And the iterate method is empty (It looks like you forgot some code here).
def iterate(self, N=1, verbose=False):
    "Perform N iterations, then compute log-likelihood"
So, there'll be no loglike attribute set on the mix instance by the time you do if mix.loglike. Hence, the AttributeError.
You need to do one of the following:
Call the Estep method (since you set self.loglike there)
Define a loglike attribute in __init__
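For example, here is a sketch of what iterate was probably meant to look like, judging from how Estep and Mstep are written (this is a guess at the missing code): consume the Estep generator so self.loglike gets set, then feed the weights to Mstep.
def iterate(self, N=1, verbose=False):
    "Perform N iterations, then compute log-likelihood"
    for i in range(1, N + 1):
        # Estep is a generator; turning it into a list runs it fully,
        # which also sets self.loglike as a side effect
        self.Mstep(list(self.Estep()))
        if verbose:
            print('{0:2} {1}'.format(i, self))
    # refresh self.loglike for the final parameter values
    list(self.Estep())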

Reformulating the AMPL car example

I am trying to migrate the AMPL car problem that comes as an example in the Ipopt source code tarball. I am having problems with the end condition (reaching a given position with zero speed at the final time) and with the cost function (minimizing the final time).
Can someone help me revise the following model?
# min tf
# dx/dt = v
# dv/dt = a - R*v^2
# x(0) = 0; x(tf) = 100
# v(0) = 0; v(tf) = 0
# -3 <= a <= 1 (a is the control variable)
#!Python3.5
from pyomo.environ import *
from pyomo.dae import *
N = 20
T = 10
L = 100
m = ConcreteModel()
# Parameters
m.R = Param(initialize=0.001)
# Variables
def x_init(m, i):
    return i*L/N
m.t = ContinuousSet(bounds=(0,1000))
m.x = Var(m.t, bounds=(0,None), initialize=x_init)
m.v = Var(m.t, bounds=(0,None), initialize=L/T)
m.a = Var(m.t, bounds=(-3.0,1.0), initialize=0)
# Derivatives
m.dxdt = DerivativeVar(m.x, wrt=m.t)
m.dvdt = DerivativeVar(m.v, wrt=m.t)
# Objectives
m.obj = Objective(expr=m.t[N])
# DAE
def _ode1(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dxdt[i] == m.v[i]
m.ode1 = Constraint(m.t, rule=_ode1)
def _ode2(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dvdt[i] == m.a[i] - m.R*m.v[i]**2
m.ode2 = Constraint(m.t, rule=_ode2)
# Constraints
def _init(m):
    yield m.x[0] == 0
    yield m.v[0] == 0
    yield ConstraintList.End
m.init = ConstraintList(rule=_init)
'''
def _end(m, i):
    if i == N:
        return m.x[i] == L and m.v[i] == 0
    return Constraint.Skip
m.end = ConstraintList(rule=_end)
'''
# Discretize
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=N, wrt=m.t, scheme='BACKWARD')
# Solve
solver = SolverFactory('ipopt', executable='C:\\EXTERNOS\\COIN-OR\\win32-msvc12\\bin\\ipopt')
results = solver.solve(m, tee=True)
Currently, a ContinuousSet in Pyomo has to be bounded. This means that in order to solve a minimum time optimal control problem using this tool, the problem must be reformulated to remove the time scaling from the ContinuousSet. In addition, you have to introduce an extra variable to represent the final time. I've added an example to the Pyomo github repository showing how this can be done for your problem.
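For reference, here is a minimal sketch of that reformulation applied to this model (my own transcription of the idea, not the exact example from the Pyomo repository): scale time to tau in [0, 1], introduce a variable tf for the final time, multiply the right-hand sides of the ODEs by tf, and minimize tf.
from pyomo.environ import (ConcreteModel, Param, Var, Objective, Constraint,
                           ConstraintList, NonNegativeReals, SolverFactory,
                           TransformationFactory)
from pyomo.dae import ContinuousSet, DerivativeVar
m = ConcreteModel()
m.R = Param(initialize=0.001)                 # resistance coefficient
m.L = Param(initialize=100.0)                 # final position
m.tau = ContinuousSet(bounds=(0, 1))          # scaled time
m.tf = Var(domain=NonNegativeReals, initialize=20.0)  # free final time
m.x = Var(m.tau, bounds=(0, None))
m.v = Var(m.tau, bounds=(0, None))
m.a = Var(m.tau, bounds=(-3.0, 1.0), initialize=0)
m.dxdtau = DerivativeVar(m.x, wrt=m.tau)
m.dvdtau = DerivativeVar(m.v, wrt=m.tau)
# dx/dtau = tf * v and dv/dtau = tf * (a - R*v^2)
def _ode1(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dxdtau[i] == m.tf * m.v[i]
m.ode1 = Constraint(m.tau, rule=_ode1)
def _ode2(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dvdtau[i] == m.tf * (m.a[i] - m.R * m.v[i]**2)
m.ode2 = Constraint(m.tau, rule=_ode2)
# boundary conditions: start at rest at x=0, finish at rest at x=L
def _bcs(m):
    yield m.x[0] == 0
    yield m.v[0] == 0
    yield m.x[1] == m.L
    yield m.v[1] == 0
    yield ConstraintList.End
m.bcs = ConstraintList(rule=_bcs)
# minimize the final time
m.obj = Objective(expr=m.tf)
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=20, wrt=m.tau, scheme='BACKWARD')
results = SolverFactory('ipopt').solve(m, tee=True)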