Pyomo integral over range?

I am looking to write a Pyomo constraint that the integral of a variable (with respect to time) cannot exceed a given value over any 80-day period.
# Integral
# m.Integral(m.t)
def power(m, t):
    return m.E[t]

# Constraint
# m.Constraint()
def total_power(m):
    return m.power <= 80
This only works for the integral on domain [t0 , tf]. How can I specifically constrain the integral over every 80-day period?
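One possible direction, not from the original thread: if time is discretized into daily steps (rather than using pyomo.dae's Integral), the integral can be approximated by a rolling sum with one constraint per 80-day window. The horizon length, variable names, and limit below are hypothetical:

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.t = pyo.RangeSet(0, 364)                       # hypothetical daily time grid
m.E = pyo.Var(m.t, within=pyo.NonNegativeReals)  # the quantity being integrated

E_limit = 80  # hypothetical limit on the 80-day integral

# every start day for which a full 80-day window fits inside the horizon
m.window_starts = pyo.RangeSet(0, 364 - 79)

def rolling_limit_rule(m, t0):
    # rectangular (one-day-step) approximation of the integral over [t0, t0 + 80)
    return sum(m.E[t] for t in range(t0, t0 + 80)) <= E_limit

m.rolling_limit = pyo.Constraint(m.window_starts, rule=rolling_limit_rule)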

Related

Variable Pyomo bounds

I'm modelling a 100 kWh battery. I have a set containing 24 hours.
Is it possible to have different upper bounds for charging the battery based on the battery's state of charge (SOC), which I don't know yet?
The SOC is decided later by the solver.
E.g. at 50 % SOC the charging has an upper bound of 30 kW;
at 20 % SOC the charging has an upper bound of 10 kW.
The upper bound for charging is different for each SOC.
def battery_upper_bound(m, t):
    return m.bat_charging[t] <= max(pyo.value(m.SOC[t] * 4.18), 5)

m.battery_Max_charge_c = pyo.Constraint(m.t, rule=battery_upper_bound)
My approach with pyo.value(m.SOC[t]) uses the initialized SOC value and is therefore equal in all timesteps and obviously not a feasible solution.
If the relationship between these two variables is linear (or if that is a good enough approximation), as you have coded, then this is easy. All you need to do is get rid of the max() function, because max is a non-linear operation. This can be done by just breaking your constraint into two constraints:
def battery_charge_lim(m, t):
    return m.bat_charging[t] <= m.SOC[t] * 4.18

m.batt_charge_lim = pyo.Constraint(m.t, rule=battery_charge_lim)

def battery_max_rate(m, t):
    return m.bat_charging[t] <= 5

m.battery_max_charge = pyo.Constraint(m.t, rule=battery_max_rate)

SymPy TypeError: cannot determine truth value of Relational (How to make sure x > 0)

I want to check if 6**x1 is bigger than 0 for every positive value of x1. I am using sympy.
I've done the following:
import sympy as sm

x1 = sm.symbols('x_1', nonnegative=True)
u = 6**x1

def checker(func):
    if u > 0:
        return True
    else:
        return False
However, I get the error:
TypeError: cannot determine truth value of Relational
I think this is because the function does not know that x1 is positive. But how do I make sure it knows this? Apparently, it's not enough with the sm.symbols definition.
SymPy only lets you compare things that can be computed to a number with a literal > (and similar). If you want to make a query about a symbolic expression, use expr.is_positive (or expr.is_extended_positive if you want oo to be considered positive, too).
>>> u.is_positive
True
This can also be None or False. None is returned when a definitive determination cannot be made, e.g. symbols('x').is_positive is None evaluates to True.
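A minimal sketch of the rewritten check from the question, using the assumptions system (treating a None result as "not proven positive" is an assumption of this sketch):

import sympy as sm

x1 = sm.symbols('x_1', nonnegative=True)
u = 6**x1

def checker(expr):
    # query SymPy's assumptions system instead of a literal comparison;
    # expr.is_positive is True, False, or None (undetermined)
    return bool(expr.is_positive)

print(checker(u))  # True: 6**x_1 is positive for nonnegative x_1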

Evaluate constraint expression as boolean

I want to evaluate whether a constraint is respected or not in Pyomo when the values of the variables contained in the constraint expression are known.
Use case: we know that one particular constraint sometimes makes the problem infeasible, depending on the value of the variable. Instead of sending the problem to the solver to test whether it is feasible, converting the constraint expression to a boolean would be enough to determine whether that constraint is the culprit.
For the sake of providing a feasible example, here would be the code:
from pyomo.environ import ConcreteModel, Var, Constraint
model = ConcreteModel()
model.X = Var()
def constraint_rule(model):
    return model.X <= 1

model.a_constraint = Constraint(rule=constraint_rule)
Now, let's try to work with the expression to evaluate:
# Let's define the expression in this way:
expression = constraint_rule(model)
# Let's show that the expression is what we expected:
print(str(expression))
The previous statement should print X <= 1.0.
Now, the tricky part is how to evaluate the expression.
if expression == True:
    print("feasible")
else:
    print("infeasible")
raises a TypeError (TypeError: Cannot create an EqualityExpression where one of the sub-expressions is a relational expression: X <= 1.0).
The last example doesn't work because constraint_rule doesn't return a boolean but a Pyomo expression.
Finally, I know that something like
def evaluate_constraint_a_expression(model):
    return value(model.X) <= 1
would work, but I can't assume that I will always know the content of my constraint expression, so I need a robust way of evaluating it.
Is there a clever way of achieving this? Like, evaluating the expression as a boolean and evaluating the left hand side and right hand side of the expression at the same time?
The solution is to use the value function. Even though it is documented as evaluating an expression to a numeric value, it also converts the expression to a boolean value if it is an equality/inequality expression, like the rule of a constraint.
Let's suppose that the model is defined the way it is in the question, then the rest of the code should be:
from pyomo.environ import value
if value(expression) == True:
    print("feasible")
else:
    print("infeasible")
where expression is defined as written in the question.
However, be advised that the numeric precision obtained in Python with this method can differ from that of the solver. It is therefore possible that this method reports a constraint as infeasible when the violation is only a matter of numeric imprecision below 1e-10. So, while it is useful for checking whether most constraints are feasible, it also generates some false positives.
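Putting the question's model and the value-based check together, a minimal self-contained sketch (the test values 0.5 and 2.0 are only illustrative):

from pyomo.environ import ConcreteModel, Var, Constraint, value

model = ConcreteModel()
model.X = Var()

def constraint_rule(model):
    return model.X <= 1

model.a_constraint = Constraint(rule=constraint_rule)
expression = constraint_rule(model)

model.X.set_value(0.5)
print(value(expression))  # True: 0.5 <= 1 holds

model.X.set_value(2.0)
print(value(expression))  # False: 2.0 <= 1 is violated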

How to Use MCMC with a Custom Log-Probability and Solve for a Matrix

The code is in PyMC3, but this is a general problem. I want to find which matrix (combination of variables) gives me the highest probability. Taking the mean of the trace of each element is meaningless because they depend on each other.
Here is a simple case; the code uses a vector rather than a matrix for simplicity. The goal is to find a vector of length 2, where each value is between 0 and 1, so that the sum is 1.
import numpy as np
import theano
import theano.tensor as tt
import pymc3 as mc
# define a theano Op for our likelihood function
class LogLike_Matrix(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log-likelihood)

    def __init__(self, loglike):
        self.likelihood = loglike  # the log-p function

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables
        # call the log-likelihood function
        logl = self.likelihood(theta)
        outputs[0][0] = np.array(logl)  # output the log-likelihood

def logLikelihood_Matrix(data):
    """
    We want sum(data) = 1
    """
    p = 1 - np.abs(np.sum(data) - 1)
    return np.log(p)

logl_matrix = LogLike_Matrix(logLikelihood_Matrix)

# use PyMC3 to sample from the log-likelihood
with mc.Model():
    # data will be sampled randomly with a uniform distribution
    # because the log-p doesn't act on it directly
    data_matrix = mc.Uniform('data_matrix', shape=(2), lower=0.0, upper=1.0)

    # convert the parameters to a tensor vector
    theta = tt.as_tensor_variable(data_matrix)

    # use a DensityDist (use a lambda function to "call" the Op)
    mc.DensityDist('likelihood_matrix', lambda v: logl_matrix(v), observed={'v': theta})

    trace_matrix = mc.sample(5000, tune=100, discard_tuned_samples=True)
If you only want the highest-likelihood parameter values, then you want the Maximum A Posteriori (MAP) estimate, which can be obtained using pymc3.find_MAP() (see starting.py for details on the method). If you expect a multimodal posterior, you will likely need to run it repeatedly with different initializations and select the result with the largest logp value; that increases the chance of finding the global optimum but cannot guarantee it.
It should be noted that at high parameter dimensions, the MAP estimate is usually not part of the typical set, i.e., it is not representative of typical parameter values that would lead to the observed data. Michael Betancourt discusses this in A Conceptual Introduction to Hamiltonian Monte Carlo. The fully Bayesian approach is to use posterior predictive distributions, which effectively averages over all the high-likelihood parameter configurations rather than using a single point estimate for parameters.
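As a minimal sketch, assuming the model above was created with a name, e.g. with mc.Model() as model:, the MAP estimate could then be obtained like this:

import pymc3 as mc

# assumes the question's model was built as `with mc.Model() as model:`
with model:
    map_estimate = mc.find_MAP()

# dict of parameter values at the (possibly local) posterior mode
print(map_estimate['data_matrix'])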

How to get a value from multiple functions in Pyomo

Let's suppose that the objective function is
max z(x,y) = f1(x) - f2(y)
where f1 is a function of the variables x and f2 is a function of the variables y.
This could be written in Pyomo as
def z(model):
    return f1(model) - f2(model)

def f1(model):
    return [some summation of x variables with some coefficients]

def f2(model):
    return [some summation of y variables with some coefficients]

model.objective = Objective(rule=z)
I know it is possible to get the numeric value of z(x,y) easily by calling (since it is the objective function) :
print(model.objective())
but is there a way to get the numeric value of any of these sub-functions separately after the optimization, even if they are not explicitly defined as objectives?
I'll answer your question in terms of a ConcreteModel, since rules in Pyomo, for the most part, are nothing more than a mechanism to delay building a ConcreteModel. For now, they are also required to define indexed objects, but that will likely change soon.
First, there is nothing stopping you from defining those "rules" as standard functions that take in some argument and return a value. E.g.,
def z(x, y):
    return f1(x) - f2(y)

def f1(x):
    return x + 1

def f2(y):
    return y**2
Now if you call any of these functions with a built-in type (e.g., z(1, 5)), you will get a number back. However, if you call them with Pyomo variables (or Pyomo expressions), you will get a Pyomo expression back, which you can assign to an objective or constraint. This works because Pyomo modeling components, such as variables, overload the standard algebraic operators like +, -, *, etc. Here is an example of how you can build an objective with these functions:
import pyomo.environ as aml
m = aml.ConcreteModel()
m.x = aml.Var()
m.y = aml.Var()
m.o = aml.Objective(expr= z(m.x, m.y))
Now if m.x and m.y have a value loaded into them (i.e., the .value attribute is something other than None), then you can call one of the sub-functions with them and evaluate the returned expression (slower)
aml.value(f1(m.x))
aml.value(f2(m.y))
or you can extract the value from them and pass that to the sub-functions (faster)
f1(m.x.value)
f2(m.y.value)
You can also use the Expression object to store sub-expressions that you want to evaluate on the fly or share across multiple other expressions on a model (all of which you can update by changing what expression is stored under the Expression object).
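A minimal sketch of that Expression-based approach, reusing the same toy f1/f2 from above (the component names f1_expr and f2_expr and the initial values are made up for illustration):

import pyomo.environ as aml

def f1(x):
    return x + 1

def f2(y):
    return y**2

m = aml.ConcreteModel()
m.x = aml.Var(initialize=1.0)
m.y = aml.Var(initialize=2.0)

# named Expression components hold the sub-expressions on the model
m.f1_expr = aml.Expression(expr=f1(m.x))
m.f2_expr = aml.Expression(expr=f2(m.y))
m.o = aml.Objective(expr=m.f1_expr - m.f2_expr)

# ... solve the model here ...

# once the variables hold values, each sub-expression can be evaluated on its own
print(aml.value(m.f1_expr))  # 2.0 with the initial values above
print(aml.value(m.f2_expr))  # 4.0 with the initial values above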