Changing index reference for Set in Pyomo

Is there a way to change Set indexing from 1-based to 0-based in Pyomo? It's very difficult to keep everything straight when you are dealing with multiple objects where Pyomo is 1-indexed and everything else in Python is 0-indexed.
The reason for this is to build a model-fitting routine for multiple circuit devices. Instead of recreating the entire model over and over, I want to define it once with an AbstractModel, then just reload the data and re-solve for each device.
In my objective function, I'm defining intermediate values using list comprehensions. Once these intermediate values are generated, they are 0-indexed. An example of what I'm doing is below. As you can see, I have to access some values with [i] and others with [i-1]. It just becomes difficult and confusing when the functions get large. It would make a whole lot more sense if everything were 0-indexed, consistent with standard Python code. I was hoping there was some easy option or setting to declare whether a Set is 0- or 1-indexed.
# Pyomo components (m.Ra, m.w, ...) are indexed over m.n starting at 1,
# while the intermediate results are plain Python lists starting at 0:
y11intre = [1 / m.Ra[1] + 1 / m.Rb[1] for i in m.n]
y11intim = [m.w[i] * (m.Ca[1] + m.Cb[1]) for i in m.n]
y12intre = ...
...
z11intre = [-y22intim[i-1] * ... for i in m.n]
...
z11re = [m.Rae[1] + z11intre[i-1] for i in m.n]
z11im = [m.w[i] * m.Lae[1] + z11intim[i-1] for i in m.n]

You can provide the starting and stopping point to RangeSet to give you the values you want:
m.r = RangeSet(0,5) # [0,1,2,3,4,5]
m.s = RangeSet(0,4) # [0,1,2,3,4]
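If you declare every index set 0-based like this, the Pyomo components and any plain Python lists end up sharing the same indices, so the [i-1] shifting goes away. A minimal sketch of the pattern (the component names are made up for illustration):
from pyomo.environ import AbstractModel, Param, RangeSet

m = AbstractModel()
m.n = RangeSet(0, 4)  # 0-based index set: [0,1,2,3,4]
m.w = Param(m.n)      # parameter indexed 0..4, like a Python list
# A comprehension over m.n now lines up with plain lists, no [i-1] shift:
# y11intim = [m.w[i] * 2.0 for i in m.n]
# z11im    = [m.w[i] + y11intim[i] for i in m.n]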

Related

Sparse index use for optimization with Pyomo model

I have a Pyomo model connected to a Django-created website.
My decision variable has 4 indices, and there is a huge number of constraints on it.
Since Pyomo takes a ton of time to read in the constraints with so many variables, I want to make the index set sparse so that it only contains variables that could actually be 1 (I have some conditions for that).
I saw this post
Create a variable with sparse index in pyomo
and tried a for loop over all my conditions. I created a set "AllowedVariables" to use later inside my constraints.
But Django's server takes so long to create this set while performing the system check that it never finishes.
Currently I have this model:
model = AbstractModel()
model.x = Var(model.K, model.L, model.F, model.Z, domain=Boolean)

def ObjRule(model):
    ...  # some rule, sense maximize
model.Obj = pyomo.environ.Objective(rule=ObjRule, sense=maximize)

def ARule(model, l):
    maxA = sum(model.x[k, l, f, z] for k in model.K for f in model.F
               for z in model.Z if (k, l, f, z) in model.AllowedVariables)
    return maxA <= 1
model.maxA = Constraint(model.L, rule=ARule)
This constraint is just an example; I have 15 more similar ones. I currently create "AllowedVariables" this way:
AllowedVariables = []
for k in model.K:
    for l in model.L:
        # ..... check all sorts of conditions, break if not valid
        AllowedVariables.append((k, l, f, z))
model.AllowedVariables = Set(initialize=AllowedVariables)
Using this, the Django server starts checking... and never stops:
performing system checks...
Sadly, I need some restriction on the variables, or else reading the model into the solver will take way too long, since the constraints contain so many unnecessary variables that have to be 0 anyway.
Any ideas on how I can make my variable set sparse?
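Not tested against your full model, but one direction is to build the allowed tuples with a generator and prune invalid combinations as far out in the loop nest as possible, so most (k, l, f, z) tuples are never produced at all. A sketch, where valid_k and valid are hypothetical stand-ins for your conditions:
from itertools import product
from pyomo.environ import Set

def allowed_init(model):
    for k in model.K:
        if not valid_k(k):  # hypothetical check that depends on k alone
            continue        # skip the whole inner product early
        for l, f, z in product(model.L, model.F, model.Z):
            if valid(k, l, f, z):  # hypothetical full condition
                yield (k, l, f, z)

model.AllowedVariables = Set(dimen=4, initialize=allowed_init)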

Creating an if..then type rule (constraint) in Pyomo

I am new to Pyomo. I want to add an if..then type constraint to my linear programming problem. I have an abstract model, and this is an example of what I'd like to do:
if node j1 is receiving less than half of its water demand, the minimum flow in the link between j2 and j1 must be set to the demand value at j1 (A and B are model variables, d is a known parameter).
if A(j1)<0.5 then B(j2,j1)>=d(j1)
I tried the following when defining the model constraints. But since the model has not yet created the instance from its data file, it doesn't recognize j1 and j2.
def rule_(model):
    term1 = floor(model.A[j1] / 0.5)
    return (term1 * model.B[j1, j2] >= term1 * model.demand[j1])
model.rule = Constraint(rule=rule_)
If I move these lines to after instantiating the model with the data file, I think the constraint will not be applied at all.
Can anyone help with this, please? Thanks.
"If/then" expressions and floor() are not linear, so they can't be inserted directly into a linear program. However, you can get the same effect by setting a binary flag and using that to activate and deactivate the constraint. Note that binary variables are also not linear, but they are commonly handled by mixed-integer solvers.
model.flag = Var(within=Binary)

def set_flag_rule(model):
    # force the flag to be set if A[j1] < 0.5
    return ((1 - model.flag) * 0.5 <= model.A[j1])
model.set_flag = Constraint(rule=set_flag_rule)

def rule(model):
    # force B[j1, j2] to meet demand if the flag is set
    return (model.B[j1, j2] >= model.flag * model.demand[j1])
model.rule = Constraint(rule=rule)
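If the rule has to hold for every link rather than one hard-coded pair, the same flag trick can be indexed over the node set, which also avoids the problem of j1 and j2 not being recognized before instantiation, since the rule receives its indices when the instance is created. A sketch, assuming model.J is the node set:
model.flag = Var(model.J, within=Binary)

def set_flag_rule(model, j):
    # force flag[j] to be set if A[j] < 0.5
    return (1 - model.flag[j]) * 0.5 <= model.A[j]
model.set_flag = Constraint(model.J, rule=set_flag_rule)

def min_flow_rule(model, j2, j1):
    # force B[j2, j1] to meet the demand at j1 if flag[j1] is set
    return model.B[j2, j1] >= model.flag[j1] * model.demand[j1]
model.min_flow = Constraint(model.J, model.J, rule=min_flow_rule)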

How can I improve bulk calculation from file data

I have a file of binary values. The section I am looking at is 4-byte values in the pattern MW1, MVAR1, MW2, MVAR2, ...
I read the values in with
import array

temp = array.array("f")          # 4-byte floats
temp.fromfile(file, length * 2)  # MW/MVAR pairs, so 2 values per point
mw_mvar = temp.tolist()
I then calculate the magnitude like this.
from math import sqrt

mag = [0] * length
for x in range(0, length * 2, 2):
    a = mw_mvar[x]
    b = mw_mvar[x + 1]
    mag[x // 2] = sqrt(a * a + b * b)  # integer division for the list index
The calculations (not the read) are doubling the total run time of my script. I know there is (theoretically) a way to do this faster, because I am mimicking a script that ultimately calls Fortran (a pyd calling function DLLs written in Fortran, I think), which does this calculation with a negligible effect on run time.
This is the best I can come up with. Any suggestions for improvements?
I have also tried math.pow(), **.5 and **2, with no difference.
With no luck improving the calculations, I went around the problem. I realised that I only needed 1% of those calculated values, so I created a class to calculate them on demand. It was important (to me) that the resulting code behave as if it were a list of calculated values. A lot of the remainder of the process uses the values, and different versions of the data are pre-calculated; the class means I don't need a separate set of procedures for each version of the data.
class mag:
    def __init__(self, mw_mvar):
        self._mw_mvar = mw_mvar
        #_sgn = sgn

    def __len__(self):
        return len(self._mw_mvar) // 2

    def __getitem__(self, item):
        return sqrt(self._mw_mvar[2 * item] ** 2 + self._mw_mvar[2 * item + 1] ** 2)
P.S. This could also be done in a function that takes both versions; I would have had to make more changes to the overall script.
def magnitude(a, b, x):  # name is arbitrary
    if b[x] == 0:
        return a[x]
    else:
        return sqrt(a[x] ** 2 + b[x] ** 2)
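If pulling in numpy is an option, the whole magnitude array can also be computed in one vectorized pass instead of a Python loop. A sketch, assuming mw_mvar is the interleaved list read above:
import numpy as np

vals = np.asarray(mw_mvar, dtype=np.float64).reshape(-1, 2)  # rows of (MW, MVAR)
mag = np.hypot(vals[:, 0], vals[:, 1])  # elementwise sqrt(mw**2 + mvar**2)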

'DataFlowAnalysis' object has no attribute 'op_MAKE_FUNCTION' in Numba

I haven't seen this specific scenario in my research on this error in Numba. This is my first time using the package, so it might be something obvious.
I have a function that calculates engineered features in a data set by adding, multiplying and/or dividing each column in a dataframe called data, and I wanted to test whether Numba would speed it up.
from itertools import combinations
import numpy as np
import pandas as pd
from numba import jit

@jit
def engineer_features(engineer_type, features, joined):
    # choose which features to engineer (must be > 1)
    engineered = features
    if len(engineered) > 1:
        if 'Square' in engineer_type:
            sq = data[features].apply(np.square)
            sq.columns = map(lambda s: s + '_^2', features)
        for c1, c2 in combinations(engineered, 2):
            if 'Add' in engineer_type:
                data['{0}+{1}'.format(c1, c2)] = data[c1] + data[c2]
            if 'Multiply' in engineer_type:
                data['{0}*{1}'.format(c1, c2)] = data[c1] * data[c2]
            if 'Divide' in engineer_type:
                data['{0}/{1}'.format(c1, c2)] = data[c1] / data[c2]
        if 'Square' in engineer_type and len(sq) > 0:
            data = pd.merge(data, sq, left_index=True, right_index=True)
    return data
When I call it with lists of features, engineer_type and the dataset:
engineer_type = ['Square','Add','Multiply','Divide']
df = engineer_features(engineer_type,features,joined)
I get the error: Failed at object (analyzing bytecode)
'DataFlowAnalysis' object has no attribute 'op_MAKE_FUNCTION'
I had the same question. I think the problem might be the lambda function, since Numba does not support function creation.
I had this same error. Numba doesn't support pandas. I converted the important columns of my pandas df into a bunch of numpy arrays, and it worked successfully under @jit.
Also, arrays are much faster than a pandas df, in case you need that for processing large data.
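A sketch of that workaround (the column names and the nopython flag are illustrative, not from the question):
import numpy as np
from numba import jit

@jit(nopython=True)
def add_columns(a, b):
    # plain-array loop that Numba can compile; no pandas objects inside
    out = np.empty(a.shape[0])
    for i in range(a.shape[0]):
        out[i] = a[i] + b[i]
    return out

# pull the columns out as numpy arrays, compute, then assign back
data['c1+c2'] = add_columns(data['c1'].to_numpy(), data['c2'].to_numpy())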

Set up model in CPLEX using Python API

I'm trying to implement this very simple model formulation:
min   sum_i(y_i * f_i) + sum_i(sum_j(x_ij * c_ij))
s.t.  sum_i(x_ij) = 1    for all j
      x_ij <= y_i        for all i, j
      x_ij, y_i binary
But I simply cannot figure out how the Python API works. They suggest creating variables like this:
model.variables.add(obj = fixedcost,
                    lb = [0] * num_facilities,
                    ub = [1] * num_facilities,
                    types = ["B"] * num_facilities)

# Create one binary variable for each facility/client pair. The variables
# model whether a client is served by a facility.
for c in range(num_clients):
    model.variables.add(obj = cost[c],
                        lb = [0] * num_facilities,
                        ub = [1] * num_facilities,
                        types = ["B"] * num_facilities)

# Create corresponding indices for later use
supply = []
for c in range(num_clients):
    supply.append([])
    for f in range(num_facilities):
        supply[c].append((c + 1) * num_facilities + f)

# Constraint no. 1:
for c in range(num_clients):
    assignment_constraint = cplex.SparsePair(ind = [supply[c][f]
                                                    for f in range(num_facilities)],
                                             val = [1] * num_facilities)
    model.linear_constraints.add(lin_expr = [assignment_constraint],
                                 senses = ["L"],
                                 rhs = [1])
For these constraints I have no idea how the variables from above are referred to, since the code only mentions an auxiliary list of lists. Can anyone explain to me how that is supposed to work? The problem itself is easy, and I know how to do it in C++, but the Python API is a closed book to me.
The problem is the uncapacitated facility location problem, and I want to adapt the example file facility.py.
EDIT: One idea for constraint no. 2 is to create one-dimensional vectors and use vector addition to build the final constraint. But that gives me an "unsupported operand" error for SparsePair:
for f in range(num_facilities):
    index = [f]
    value = [-1.0]
    for c in range(num_clients):
        open_constraint = cplex.SparsePair(ind = index, val = value) + \
                          cplex.SparsePair(ind = [supply[c][f]], val = [1.0])
        model.linear_constraints.add(lin_expr = [open_constraint],
                                     senses = ["L"],
                                     rhs = [0])
The Python API is closer to the C Callable Library in nature than the C++/Concert API. Variables are indexed from 0 to model.variables.get_num() - 1 and can be referred to by index, for example, when creating constraints. They can also be referred to by name (the add method has an optional names argument). See the documentation for the VariablesInterface here (this is for version 12.5.1, which I believe you are using, given your previous post).
It may help to start looking at the most simple examples like lpex1.py (and read the comments). Finally, I highly recommend playing with the Python API from the interactive Python prompt (aka the REPL). You can read the help there and type things in to see what they do.
You also might want to take a look at the docplex package. This is a modeling layer built on top of the CPLEX Python API (it can also solve on the cloud if you don't have a local installation of CPLEX).
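As an illustration of referring to variables by index, constraint no. 2 (x_ij <= y_i) could be built roughly like this. This is a sketch that assumes, as in the code above, that the y variables occupy indices 0..num_facilities-1 and that supply[c][f] holds the index of the corresponding x variable:
# Constraint no. 2: a client can only be assigned to an open facility.
for c in range(num_clients):
    for f in range(num_facilities):
        open_constraint = cplex.SparsePair(ind = [supply[c][f], f],
                                           val = [1.0, -1.0])
        model.linear_constraints.add(lin_expr = [open_constraint],
                                     senses = ["L"],
                                     rhs = [0.0])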