I am using Pyomo and I want to define a general equation (with generic variables) and then substitute the specific variables, something like this:
def Variable_trap_eq(model, variable, f_variable, i):
    return 0 == variable[i] - variable[i+1] + (m.step/2.0)*(f_variable[i] + f_variable[i+1])

m.Variable_trap_eq_const = Constraint(m.N1, rule=Variable_trap_eq(x, f_x))
m.Variable_trap_eq_const = Constraint(m.N1, rule=Variable_trap_eq(y, f_y))
Something like that, where in the first constraint: variable= model.x, and f_variable = model.f_x, and in the second one: variable= model.y, and f_variable = model.f_y.
Any help?
Thanks,
María
I think the simplest way to do this would be to define two constraint rules and a third function that those two rules then call. E.g.,
def Variable_trap_eq(model, variable, f_variable, i):
    return 0 == variable[i] - variable[i+1] + (model.step/2.0)*(f_variable[i] + f_variable[i+1])

def Variable_trap_eq_const1_rule(model, i):
    return Variable_trap_eq(model, model.x, model.f_x, i)
m.Variable_trap_eq_const1 = Constraint(m.N1, rule=Variable_trap_eq_const1_rule)

def Variable_trap_eq_const2_rule(model, i):
    return Variable_trap_eq(model, model.y, model.f_y, i)
m.Variable_trap_eq_const2 = Constraint(m.N1, rule=Variable_trap_eq_const2_rule)
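A slightly more compact alternative, sketched under the same assumptions about the model's components, is to bind the specific variables with lambdas instead of writing named wrapper rules:
# Same idea as above, but the specific variables are bound inline with lambdas.
m.Variable_trap_eq_const1 = Constraint(
    m.N1, rule=lambda model, i: Variable_trap_eq(model, model.x, model.f_x, i))
m.Variable_trap_eq_const2 = Constraint(
    m.N1, rule=lambda model, i: Variable_trap_eq(model, model.y, model.f_y, i))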
I am working on a small optimization model with some disjunctions. The way I did it in a concrete model worked well:
from pyomo.environ import *
m = ConcreteModel()
m.d1 = Disjunct()
m.d2 = Disjunct()
m.d1.sub1 = Disjunct()
m.d1.sub2 = Disjunct()
m.d1.disj = Disjunction(expr=[m.d1.sub1, m.d1.sub2])
m.disj = Disjunction(expr=[m.d1, m.d2])
But now I have transferred the concrete model into an abstract formulation. I was able to get everything working except for nesting the disjunctions. The way I did it was like this:
# Disjunct 1
def _op_mode1(self, op_mode, t):
    m = op_mode.model()
    op_mode.c1 = po.Constraint(expr=m.x[t] == True)

# Disjunct 2
def _op_mode2(self, op_mode, t):
    m = op_mode.model()
    op_mode.c1 = po.Constraint(expr=m.x[t] == False)

# Disjunction 1
def _op_modes(self, m, t):
    return [m.mode1[t], m.mode2[t]]

# Adding components
self.model.del_component("mode1")
self.model.del_component("mode1_index")
self.model.add_component("mode1", pogdp.Disjunct(self.model.T, rule=self._op_mode1))
self.model.del_component("mode2")
self.model.del_component("mode2_index")
self.model.add_component("mode2", pogdp.Disjunct(self.model.T, rule=self._op_mode2))
self.model.del_component("modes")
self.model.del_component("modes_index")
self.model.add_component("modes", pogdp.Disjunction(self.model.T, rule=self._op_modes))
As I previously mentioned, this works fine. But I haven't found any way to nest the disjunctions. Pyomo always complains about the second layer of disjuncts, such as "sub1".
Could anybody give me a hint?
Many greetings
Joerg
The issue with the latest model above is that you are declaring m.d1 and m.d2 for each element of m.T, but they overwrite each other each time since they have the same name. You should be seeing warning messages logged for this. So if you uncomment your pprint of the model, you'll see that you only have the last ones you declared (with constraints on x[10]). So the first 9 Disjunctions in m.disjunction_ are disjunctions of Disjuncts that do not exist. The simplest fix for this is to give the disjuncts unique names when you declare them:
import pyomo.environ as pyo
import pyomo.gdp as pogdp

model = pyo.ConcreteModel()
model.T = pyo.RangeSet(0, 10)
model.x = pyo.Var(model.T, bounds=(-2, 10))
model.y = pyo.Var(model.T, bounds=(20, 30))

# This was also a duplicate declaration:
#model.disjunction_ = pogdp.Disjunction(model.T)

def d1(m, t):
    disj = pogdp.Disjunct()
    disj.c1 = pyo.Constraint(expr=m.x[t] <= 10)
    m.add_component('d1_%s' % t, disj)
    return disj

def d2(m, t):
    disj = pogdp.Disjunct()
    disj.c1 = pyo.Constraint(expr=m.x[t] >= 10)
    m.add_component('d2_%s' % t, disj)
    return disj

# sum x,y
def obj_rule(m):
    return pyo.quicksum(pyo.quicksum([m.x[t] + m.y[t]], linear=False) for t in m.T)
model.obj = pyo.Objective(rule=obj_rule)

def _op_mode_test(m, t):
    disj1 = d1(m, t)
    disj2 = d2(m, t)
    return [disj1, disj2]
model.disjunction_ = pogdp.Disjunction(model.T, rule=_op_mode_test)
However, it would be cleaner (and probably easier down the line) to index the Disjuncts by m.T as well, since that's basically what the unique names are doing.
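For illustration, a rough sketch of that indexed variant (same model.T, model.x, and imports assumed as above; untested against the rest of your model). The Disjunct rule receives the disjunct block itself as its first argument, as explained next:
# Indexed Disjuncts instead of uniquely named ones (sketch).
def d1_rule(disj, t):
    m = disj.model()
    disj.c1 = pyo.Constraint(expr=m.x[t] <= 10)
model.d1 = pogdp.Disjunct(model.T, rule=d1_rule)

def d2_rule(disj, t):
    m = disj.model()
    disj.c1 = pyo.Constraint(expr=m.x[t] >= 10)
model.d2 = pogdp.Disjunct(model.T, rule=d2_rule)

def disjunction_rule(m, t):
    return [m.d1[t], m.d2[t]]
model.disjunction_ = pogdp.Disjunction(model.T, rule=disjunction_rule)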
Block (and hence Disjunct) rules are passed the block (or disjunct) to be populated as the first argument. So, an "abstract" equivalent to your concrete model might look something like this:
model = AbstractModel()

@model.Disjunct()
def d1(d):
    # populate the `d` disjunct (i.e., `model.d1`) here
    pass

@model.Disjunct()
def d2(d):
    @d.Disjunct()
    def sub1(sd):
        # populate the 'sub1' disjunct here
        pass

    @d.Disjunct()
    def sub2(sd):
        # populate the 'sub2' disjunct here
        pass

    d.disj = Disjunction(expr=[d.sub1, d.sub2])

model.disj = Disjunction(expr=[model.d1, model.d2])
There is a more fundamental question as to why you are converting your model over to "abstract" form. Pyomo Abstract models were mostly devised to be familiar to people coming from modeling in AMPL. They do work with block-structured models, but since AMPL was never really designed with blocks in mind, block-oriented Abstract models tend to be unnecessarily cumbersome.
Here is our new model:
import pyomo.environ as pyo
import pyomo.gdp as pogdp

model = pyo.ConcreteModel()
model.T = pyo.RangeSet(0, 10)
model.x = pyo.Var(model.T, bounds=(-2, 10))
model.y = pyo.Var(model.T, bounds=(20, 30))
model.disjunction_ = pogdp.Disjunction(model.T)

def d1(m, t):
    m.d1 = pogdp.Disjunct()
    m.d1.c1 = pyo.Constraint(expr=m.x[t] <= 10)

def d2(m, t):
    m.d2 = pogdp.Disjunct()
    m.d2.c1 = pyo.Constraint(expr=m.x[t] >= 10)

# sum x,y
def obj_rule(m):
    return pyo.quicksum(pyo.quicksum([m.x[t] + m.y[t]], linear=False) for t in m.T)
model.obj = pyo.Objective(rule=obj_rule)

def _op_mode_test(m, t):
    d1(m, t)
    d2(m, t)
    return [m.d1, m.d2]
model.disjunction_ = pogdp.Disjunction(model.T, rule=_op_mode_test)

#model.pprint()
pyo.TransformationFactory('gdp.bigm').apply_to(model)
solver = pyo.SolverFactory('baron')
solver.solve(model)
print(pyo.value(model.obj))
I think it has something to do with the RangeSet. For a single step it works, but with more than one step it throws an error: AttributeError: 'NoneType' object has no attribute 'component'
It would be great if you could have a look at it.
Many thanks
Here is the code, which works fine with the bigm transformation but not with mbigm or hull:
import pyomo.environ as pyo
import pyomo.gdp as pogdp

model = pyo.ConcreteModel()
model.T = pyo.RangeSet(2)
model.x = pyo.Var(model.T, bounds=(1, 10))
model.y = pyo.Var(model.T, bounds=(1, 100))

def _op_mode_sub(m, t):
    m.disj1[t].sub1 = pogdp.Disjunct()
    m.disj1[t].sub1.c1 = pyo.Constraint(expr=m.y[t] == 60)
    m.disj1[t].sub2 = pogdp.Disjunct()
    m.disj1[t].sub2.c1 = pyo.Constraint(expr=m.y[t] == 100)
    return [m.disj1[t].sub1, m.disj1[t].sub2]

def _op_mode(m, t):
    m.disj2[t].c1 = pyo.Constraint(expr=m.y[t] >= 3)
    m.disj2[t].c2 = pyo.Constraint(expr=m.y[t] <= 5)
    return [m.disj1[t], m.disj2[t]]

model.disj1 = pogdp.Disjunct(model.T)
model.disj2 = pogdp.Disjunct(model.T)
model.disjunction1sub = pogdp.Disjunction(model.T, rule=_op_mode_sub)
model.disjunction1 = pogdp.Disjunction(model.T, rule=_op_mode)

def obj_rule(m, t):
    return pyo.quicksum(pyo.quicksum([m.x[t] + m.y[t]], linear=False) for t in m.T)
model.obj = pyo.Objective(rule=obj_rule)

model.pprint()
gdp_relax = pyo.TransformationFactory('gdp.bigm')
gdp_relax.apply_to(model)
solver = pyo.SolverFactory('glpk')
solver.solve(model)
print(pyo.value(model.obj))
Is there a way of changing the values of a constraint as the solver is running?
Basically, I have a constraint that depends on the value of a variable. The problem is that the constraint is evaluated based on the initial value of the variable, but isn't updated as the variable changes.
Here's a simple example:
from pyomo.environ import *
from pyomo.opt import SolverFactory
import numpy as np

# Setup
model = ConcreteModel()
model.A = Set(initialize=[0, 1, 2])
model.B = Set(initialize=[0, 1, 2])
model.x = Var(model.A, model.B, initialize=0)

# A constraint that I'd like to keep updating, based on the value of x
def changing_constraint_rule(model, a):
    x_values = list((model.x[a, b].value for b in model.B))
    if np.max(x_values) == 0:
        return Constraint.Skip
    else:
        # Not really important what goes here, just as long as it updates the constraint list
        if a == 1:
            return sum(model.x[a, b] for b in model.B) == 0
        else:
            return sum(model.x[a, b] for b in model.B) == 1

model.changing_constraint = Constraint(model.A, rule=changing_constraint_rule)

# Another constraint that changes the value of x
def bounding_constraint_rule(model, a):
    return sum(model.x[a, b] for b in model.B) == 1

model.bounding_constraint = Constraint(model.A, rule=bounding_constraint_rule)

# Some objective function
def obj_rule(model):
    return sum(model.x[a, b] for a in model.A for b in model.B)

model.objective = Objective(rule=obj_rule)

# Results
opt = SolverFactory("glpk")
results = opt.solve(model)
results.write()
model.x.display()
If I run model.changing_constraint.pprint() I can see that no constraints have been made, since the initial value of the variable model.x was set to 0.
If it's not possible to change the constraint values while solving, how could I formulate this problem differently to achieve what I'm looking for? I've read this other post but couldn't figure it out from the instructions.
I am giving you the same answer that @Gabe gave in the other question:
Any if-logic you use inside of rules should not involve the values of variables (unless it is based on the initial value of a variable, in which case you would wrap the variable in value() wherever you use it outside of the main expression that is returned).
For example, model.x[a, b].value should be value(model.x[a, b]).
But this still might not give you the solution you are looking for.
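For concreteness, here is a sketch of what "wrapping the variable in value()" would look like for the rule in the question. Note that this only changes how the initial values are read; the if-logic is still evaluated once, at construction time, and will not update while the solver runs:
def changing_constraint_rule(model, a):
    # value() reads the current (here: initial) value of each x[a, b]
    x_values = [value(model.x[a, b]) for b in model.B]
    if max(x_values) == 0:
        return Constraint.Skip
    if a == 1:
        return sum(model.x[a, b] for b in model.B) == 0
    return sum(model.x[a, b] for b in model.B) == 1

model.changing_constraint = Constraint(model.A, rule=changing_constraint_rule)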
So I have this class:
#!/usr/bin/python3

class MyClass(object):
    def __init__(self, length):
        self._list = length

    def get(self, index):
        try:
            return self._list[index]
        except IndexError:
            return None
which takes in a list and returns a value, a list index I think. I am trying to get that value:
def my_function(a_list):
    a_list = MyClass
    for x in (10**p for p in range(1, 9)):
        if a_list:
            print(a_list)

def main():
    length = my_function(MyClass([i for i in range(0, 543)]))
but I keep getting only the memory location of the object, and I think this is supposed to return an int.
I am hoping this is a workable bit of code, but I am struggling with the concept of passing an "object" to a class; it doesn't make any sense to me.
Here is a test I am supposed to use:
def test_large_list():
    s_list = My_Class([i for i in xrange(0, 100000)])
    assert len(s_list._list) == list_length(s_list)
OK, here is my full function that works; it is done. How do I do this so that the first line takes an argument?
#!/usr/bin/python3

#def list_length(single_method_list):   # This is what I am supposed to work with
from single_method_list import SingleMethodList

def my_function():  # This is how I have done it and it works.
    a_list = MyClass([i for i in range(0, 234589)])
    for x in (10**p for p in range(1, 8)):
        if a_list.get(x):
            print("More than", x)
            first = x
        else:
            print("Less than", x)
            last = x
            break

    answer = False
    while not answer:
        result = (first + last) / 2
        result = int(round(result))
        print(result)
        if a_list.get(result):
            first = result
            print('first', result)
        else:
            last = result
            print('last', result)
        if a_list.get(result) and not a_list.get(result + 1):
            answer = True
            print(result + 1)

my_function()
I don't know what more I can give to explain where I am stuck; it is the OOP part of this that I don't understand. I need the same results here, just passing the object to the function instead of creating it inside the function, which I did in order to do the algorithm.
Well, your class does something else. MyClass is designed to take a list at initialization, so the name length is not a good idea.
The get() method of this class takes in a number and returns the element located at that particular index in the initialized self._list.
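A quick illustration of that behaviour, using the MyClass definition from the question:
obj = MyClass([10, 20, 30])
print(obj.get(1))    # prints 20
print(obj.get(99))   # prints None, because the index is out of range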
Your logic should be like:
def my_function(a_list):
    a_list = MyClass(a_list)
    ...

def main():
    length = my_function([i for i in range(0, 543)])
Just to clarify some misunderstanding that you might have.
A class does not return anything. It is a blueprint for creating objects.
What can return a value is a method (function). For instance, if you want to write a function which returns the length of some list:
def my_function(some_list):
    return len(some_list)
Or in your case:
def my_function(a_list):
    return len(a_list._list)
Note that you should not call your variables list. It's a built-in function in Python which creates lists.
And as you can see, there is another built-in function, len, in Python which returns the length of a list, tuple, dictionary, etc.
Hope this helps, although it's still a bit unclear what you're trying to achieve.
After having seen the nice implementation of the "ampl car example" in the Pyomo repository, I would like to keep extending the problem with new features and constraints, but I have run into the following problems during development. Is someone able to fix them?
1) Added a new "electric car" constraint: the acceleration is now limited by adherence up to a certain speed, and then a constant-power model is used. I am not able to implement this constraint the way I would like. It is commented out in the code below, but Pyomo complains that a constraint involves a variable (now Umax depends on the car speed).
2) Added new comfort acceleration and jerk constraints. They seem to be working, but it would be nice if a Pyomo guru could review them and tell me whether they are really implemented in the correct way.
3) About the last one, in order to reduce verbosity: is there any way to combine accelerationL and accelerationU into a single constraint? The same for jerkL and jerkU.
4) The last feature is a speed limit constraint divided into two steps. Again, I am not able to get it working, so it is commented out in the code. Does anybody dare to fix it?
# Ampl Car Example (Extended)
#
# Shows how to convert a minimize final time optimal control problem
# to a format pyomo.dae can handle by removing the time scaling from
# the ContinuousSet.
#
# min tf
# dx/dt = v
# dv/dt = u - R*v^2
# x(0)=0; x(tf)=L
# v(0)=0; v(tf)=0
# -3 <= u <= 1 (engine constraint)
#
#         { v <= 7 m/s  ===>  u < 1
#  u <=   {                              (electric car constraint)
#         { v >  7 m/s  ===>  u < 1*7/v
#
# -1.5 <= dv/dt <= 0.8 (comfort constraint -> smooth driving)
# -0.5 <= d2v/dt2 <= 0.5 (comfort constraint -> jerk)
# v <= Vmax (40 kmh[0-500m] + 25 kmh(500-1000m])
from pyomo.environ import *
from pyomo.dae import *
m = ConcreteModel()
m.R = Param(initialize=0.001) # Friction factor
m.L = Param(initialize=1000.0) # Final position
m.T = Param(initialize=50.0) # Estimated time
m.aU = Param(initialize=0.8) # Acceleration upper bound
m.aL = Param(initialize=-1.5) # Acceleration lower bound
m.jU = Param(initialize=0.5) # Jerk upper bound
m.jL = Param(initialize=-0.5) # Jerk lower bound
m.NFE = Param(initialize=100) # Number of finite elements
'''
def _initX(m, i):
    return m.x[i] == i*m.L/m.NFE

def _initV(m):
    return m.v[i] == m.L/50
'''
m.tf = Var()
m.tau = ContinuousSet(bounds=(0,1)) # Unscaled time
m.t = Var(m.tau) # Scaled time
m.x = Var(m.tau, bounds=(0,m.L))
m.v = Var(m.tau, bounds=(0,None))
m.u = Var(m.tau, bounds=(-3,1), initialize=0)
m.dt = DerivativeVar(m.t)
m.dx = DerivativeVar(m.x)
m.dv = DerivativeVar(m.v)
m.da = DerivativeVar(m.v, wrt=(m.tau, m.tau))
m.obj = Objective(expr=m.tf)
def _ode1(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dt[i] == m.tf
m.ode1 = Constraint(m.tau, rule=_ode1)

def _ode2(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dx[i] == m.tf * m.v[i]
m.ode2 = Constraint(m.tau, rule=_ode2)

def _ode3(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dv[i] == m.tf*(m.u[i] - m.R*m.v[i]**2)
m.ode3 = Constraint(m.tau, rule=_ode3)

def _accelerationL(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dv[i] >= m.aL*m.tf
m.accelerationL = Constraint(m.tau, rule=_accelerationL)

def _accelerationU(m, i):
    if i == 0:
        return Constraint.Skip
    return m.dv[i] <= m.aU*m.tf
m.accelerationU = Constraint(m.tau, rule=_accelerationU)

def _jerkL(m, i):
    if i == 0:
        return Constraint.Skip
    return m.da[i] >= m.jL*m.tf**2
m.jerkL = Constraint(m.tau, rule=_jerkL)

def _jerkU(m, i):
    if i == 0:
        return Constraint.Skip
    return m.da[i] <= m.jU*m.tf**2
m.jerkU = Constraint(m.tau, rule=_jerkU)
'''
def _electric(m, i):
    if i == 0:
        return Constraint.Skip
    elif value(m.v[i]) <= 7:
        return m.a[i] <= 1
    else:
        return m.v[i] <= 1*7/m.v[i]
m.electric = Constraint(m.tau, rule=_electric)
'''

'''
def _speed(m, i):
    if i == 0:
        return Constraint.Skip
    elif value(m.x[i]) <= 500:
        return m.v[i] <= 40/3.6
    else:
        return m.v[i] <= 25/3.6
m.speed = Constraint(m.tau, rule=_speed)
'''

def _initial(m):
    yield m.x[0] == 0
    yield m.x[1] == m.L
    yield m.v[0] == 0
    yield m.v[1] == 0
    yield m.t[0] == 0
m.initial = ConstraintList(rule=_initial)
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(m, nfe=value(m.NFE), wrt=m.tau, scheme='BACKWARD')
#discretizer = TransformationFactory('dae.collocation')
#discretizer.apply_to(m, nfe=value(m.NFE), ncp=4, wrt=m.tau, scheme='LAGRANGE-RADAU')
solver = SolverFactory('ipopt')
solver.solve(m,tee=True)
print("final time = %6.2f" %(value(m.tf)))
t = []
x = []
v = []
a = []
u = []
for i in m.tau:
    t.append(value(m.t[i]))
    x.append(value(m.x[i]))
    v.append(3.6*value(m.v[i]))
    a.append(10*value(m.u[i] - m.R*m.v[i]**2))
    u.append(10*value(m.u[i]))
import matplotlib.pyplot as plt
plt.plot(x, v, label='v (km/h)')
plt.plot(x, a, label='a (dm/s2)')
plt.plot(x, u, label='u (dm/s2)')
plt.xlabel('distance')
plt.grid('on')
plt.legend()
plt.show()
Thanks a lot in advance,
Pablo
(1) You should not think of Pyomo constraint rules as callbacks that are used by the solver. Think of them instead as functions that are called once per index when the model is constructed, in order to generate the constraint objects. That means it is invalid to use a variable in an if statement unless you really only want its initial value to determine the constraint expression. There are ways to express what I think you are trying to do, but they involve introducing binary variables into the problem, in which case you can no longer use Ipopt.
(2) Can't really provide any help. Syntax looks fine.
(3) Pyomo allows you to return double-sided inequality expressions (e.g., L <= f(x) <= U) from constraint rules, but they cannot involve variable expressions in the L and U locations. It doesn't look like the constraints you are referring to can be combined into this form (a small sketch of the allowed syntax follows this list).
(4) See (1)
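To illustrate the ranged form mentioned in (3), here is a small hedged sketch that uses the constant Params directly. It is shown only for the syntax; it is not equivalent to the jerkL/jerkU constraints above, because those bounds are multiplied by the variable m.tf and therefore cannot sit in the L and U positions:
# Hypothetical illustration of the ranged (lower, body, upper) form with constant bounds only.
def _jerk_range(m, i):
    if i == 0:
        return Constraint.Skip
    # valid only because m.jL and m.jU are Params, not variable expressions
    return (m.jL, m.da[i], m.jU)
m.jerk_range = Constraint(m.tau, rule=_jerk_range)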
What would be the best approach to create a type that is a saturated integer in Python?
i.e.:
v = SaturatedInteger(0, 100)
# That should create an integer that will always be in between 0 and 100,
# and have all default operations
v += 68719
print v #Should print '100'.
I can think of inheriting from the int type, but where should the saturating logic be implemented then?
If you need a new (quick and dirty) class for it, I would implement it as follows.
class SaturatedInteger:
    def __init__(self, val, lo, hi):
        self.real, self.lo, self.hi = val, lo, hi

    def __add__(self, other):
        return min(self.real + other.real, self.hi)

    def __sub__(self, other):
        return max(self.real - other.real, self.lo)
    ...
Add as many of the other operators in the docs as you feel you will need (and their 'r' variants).
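For instance, a minimal sketch of one such 'r' variant, added to the same quick-and-dirty class above (like __add__, it only handles the upper saturation):
    def __radd__(self, other):
        # called for e.g. `5 + SaturatedInteger(...)`, after int.__add__ returns NotImplemented
        return min(other.real + self.real, self.hi)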
By storing the value in the instance name real, you can do your arithmetic with regular integers, floats, etc. too:
a = SaturatedInteger(60, 0, 100)
print(a)
60
print(a+30)
90
print(a+40)
100
print(a+50.)
100
print(a-70.)
0
print(a+a)
100
Though, of course you only add the real part if you're adding a complex number to your SaturatedInteger, so watch out. (For a much more complete and robust version, @jonrsharpe's answer is the way to go).
In general, I would implement using a @property to protect an instance's value attribute, then emulate a numeric type, rather than inheriting from int:
class SaturatedInteger(object):
    """Emulates an integer, but with a built-in minimum and maximum."""
    def __init__(self, min_, max_, value=None):
        self.min = min_
        self.max = max_
        self.value = min_ if value is None else value

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_val):
        self._value = min(self.max, max(self.min, new_val))

    @staticmethod
    def _other_val(other):
        """Get the value from the other object."""
        if hasattr(other, 'value'):
            return other.value
        return other

    def __add__(self, other):
        new_val = self.value + self._other_val(other)
        return SaturatedInteger(self.min, self.max, new_val)

    __radd__ = __add__

    def __eq__(self, other):
        return self.value == self._other_val(other)


if __name__ == '__main__':
    v = SaturatedInteger(0, 100)
    v += 68719
    assert v == 100
    assert 123 + v == 100
I've only implemented __add__, __radd__ and __eq__, but you can probably see how the rest could be built out as required. You might want to think about what happens when two SaturatedIntegers are used together - should the result have e.g. min(self.min, other.min) as its own min?
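One possible convention for that open question, sketched purely as an assumption rather than a recommendation: if __add__ were extended so that, when both operands are SaturatedIntegers, the result takes the widest combined range, it might look like this (indented as a method of the class above):
    def __add__(self, other):
        if isinstance(other, SaturatedInteger):
            # widest combined range: loosest lower and upper bounds of the two operands
            return SaturatedInteger(min(self.min, other.min),
                                    max(self.max, other.max),
                                    self.value + other.value)
        return SaturatedInteger(self.min, self.max, self.value + self._other_val(other))
Whether the widest or the tightest bounds make more sense will depend on the application.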
I wrote a sample class that has an add function:
class SatInt:
    def __init__(self, low, up):
        self.lower = low
        self.upper = up
        self.value = 0

    def add(self, n):
        if n + self.value > self.upper:
            self.value = self.upper
        else:
            self.value = self.value + n

x = SatInt(0, 100)
x.add(32718)
print(x.value)
100