How to Formulate a Piecewise Step Function in Pyomo

I have a question regarding the correct formulation of a piecewise step function in pyomo. I want to include in my model a single piecewise function of the form:
Z(X) = 1,  0 <= X(t) <= 1
Z(X) = 0,  1 <= X(t) <= 2
Here X is being fit to data taken over a time domain, and Z acts like a binary variable. The most similar example in the Pyomo documentation is the step.py example using the 'INC' representation. However, when solving with this formulation I observe the domain variable x 'sticking' to the breakpoint at x = 1. I assume this is because (as noted in the documentation) Z can take any value on the vertical segment at the breakpoint if continuous, or is feasible at both 0 and 1 there if binary. Other representations offered via the Piecewise component (e.g. DLOG, DCC, LOG) exhibit similar issues (in fact, based on the output to GAMS, I'm fairly sure they don't support binary/integer variables at all).
Is there a 'correct' way to formulate a piecewise step function in Pyomo that avoids the multiple-feasibility issue at the breakpoint, and thus keeps the domain variable from converging to it? I am using BARON with CPLEX and IPOPT as subsolvers, but my gut tells me this formulation issue can't be solved by simply changing solvers.
I can also send a document illustrating my observations on why the current pyomo piecewise formulations don’t support binary variables, if it would help.

Here's some sample code where we try to minimise the sum of the step function Z.
from pyomo.environ import (ConcreteModel, Set, Var, Binary, Piecewise,
                           Objective, minimize)

model = ConcreteModel()
model.A = Set(initialize=[1, 2, 3])
model.B = Set(initialize=['J', 'K'])
model.x = Var(model.A, model.B, bounds=(0, 2))
model.z = Var(model.A, model.B, domain=Binary)

# the duplicated breakpoint at x = 1 encodes the jump of the step function
DOMAIN_PTS = [0, 1, 1, 2]
RANGE_PTS = [1, 1, 0, 0]

model.z_constraint = Piecewise(
    model.A, model.B,
    model.z, model.x,
    pw_pts=DOMAIN_PTS,
    pw_repn='INC',
    pw_constr_type='EQ',
    f_rule=RANGE_PTS,
    unbounded_domain_var=True)

def objective_rule(model):
    return sum(model.z[a, b] for a in model.A for b in model.B)

model.objective = Objective(rule=objective_rule, sense=minimize)
If you set sense = minimize above, the program will solve and give x = 1 for each index value. If you set sense = maximize, the program will solve and give x = 0 for each index value. I'm not too sure what you mean by stickiness, but I don't think this program exhibits it, and it does implement the step function.
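For completeness, a minimal sketch of actually solving the model above; any MIP-capable solver you have installed will do (BARON is what the question used):

from pyomo.environ import SolverFactory

results = SolverFactory('baron').solve(model)  # or 'cplex', 'gurobi', ...
model.z.display()  # inspect the solved step values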

This assumes that your z is not also indexed by time; if it is, I would need to edit this answer:
from pyomo.environ import (ConcreteModel, RangeSet, Var, Binary,
                           TransformationFactory, SolverFactory)
from pyomo.gdp import Disjunction

model = ConcreteModel()
model.t = RangeSet(time)  # 'time' is a placeholder for your horizon
model.x = Var(model.t, bounds=(0, 2))
model.z = Var(domain=Binary)

# either every x[t] lies in [0, 1], or every x[t] lies in [1, 2]
model.d = Disjunction(expr=[
    [0 <= model.x[t] for t in model.t] + [model.x[t] <= 1 for t in model.t],
    [1 <= model.x[t] for t in model.t] + [model.x[t] <= 2 for t in model.t]
])

TransformationFactory('gdp.bigm').apply_to(model)
SolverFactory('baron').solve(model)
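Note that model.z above is declared but never tied to the disjunction. A minimal sketch of one way to expose the binary explicitly, using Disjunct blocks (the low/high names and the 10-period horizon are illustrative, not from the original answer):

from pyomo.environ import (ConcreteModel, RangeSet, Var, Constraint,
                           TransformationFactory)
from pyomo.gdp import Disjunct, Disjunction

model = ConcreteModel()
model.t = RangeSet(10)  # illustrative horizon
model.x = Var(model.t, bounds=(0, 2))
model.low = Disjunct()   # branch where every x[t] is in [0, 1], i.e. z = 1
model.low.c = Constraint(model.t, rule=lambda d, t: model.x[t] <= 1)
model.high = Disjunct()  # branch where every x[t] is in [1, 2], i.e. z = 0
model.high.c = Constraint(model.t, rule=lambda d, t: model.x[t] >= 1)
model.d = Disjunction(expr=[model.low, model.high])
TransformationFactory('gdp.bigm').apply_to(model)
# after the transformation, model.low.indicator_var plays the role of the binary z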

Related

Using gradient descent to solve a nonlinear system

I have the following code, which uses gradient descent to find the global minimum of y = (x+5)^2:
cur_x = 3  # the algorithm starts at x = 3
rate = 0.01  # learning rate
precision = 0.000001  # this tells us when to stop the algorithm
previous_step_size = 1
max_iters = 10000  # maximum number of iterations
iters = 0  # iteration counter
df = lambda x: 2*(x + 5)  # gradient of our function

while previous_step_size > precision and iters < max_iters:
    prev_x = cur_x  # store current x value in prev_x
    cur_x = cur_x - rate * df(prev_x)  # gradient descent step
    previous_step_size = abs(cur_x - prev_x)  # change in x
    iters = iters + 1  # iteration count
    print("Iteration", iters, "\nX value is", cur_x)  # print iterations

print("The local minimum occurs at", cur_x)
The procedure is fairly simple, and among the most intuitive and brief approaches for solving such a problem (at least that I'm aware of).
I'd now like to apply this to solving a system of nonlinear equations. Namely, I want to use this to solve the Time Difference of Arrival problem in three dimensions. That is, given the coordinates of 4 observers (or, in general, n+1 observers for an n-dimensional solution), the speed s of some signal, and the time of arrival at each observer, I want to reconstruct the source (determine its coordinates [x, y, z]).
I've already accomplished this using approximation search (see this excellent post on the matter), and I'd now like to try doing so with gradient descent (really, just as an interesting exercise). I know that the problem in two dimensions can be described by the following nonlinear system:
\sqrt{(x-x_1)^2 + (y-y_1)^2} + s(t_2 - t_1) = \sqrt{(x-x_2)^2 + (y-y_2)^2}
\sqrt{(x-x_2)^2 + (y-y_2)^2} + s(t_3 - t_2) = \sqrt{(x-x_3)^2 + (y-y_3)^2}
\sqrt{(x-x_3)^2 + (y-y_3)^2} + s(t_1 - t_3) = \sqrt{(x-x_1)^2 + (y-y_1)^2}
I know that it can be done, but I cannot determine how. How might I go about applying this to three dimensions, or to a nonlinear system in general?
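For what it's worth, here is a minimal sketch of how the same loop generalizes: collect the equations as residuals, minimise the sum of squared residuals, and use finite differences for the gradient so no hand derivation is needed. The observer positions, arrival times, and signal speed below are made-up example values, not data from the question:

import numpy as np

obs = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])  # 4 observers
times = np.array([0.0, 0.2, 0.4, 0.5])  # made-up arrival times
s = 1.0  # signal speed

def loss(p):
    # sum of squared TDOA residuals, the 3-D analogue of the system above
    d = np.linalg.norm(obs - p, axis=1)
    r = (d[1:] - d[0]) - s * (times[1:] - times[0])
    return np.sum(r**2)

def grad(p, h=1e-6):
    # central finite differences instead of a hand-derived gradient
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (loss(p + e) - loss(p - e)) / (2 * h)
    return g

cur = np.array([0.5, 0.5, 0.5])  # initial guess
for _ in range(10000):
    step = 0.01 * grad(cur)
    cur = cur - step
    if np.linalg.norm(step) < 1e-9:
        break
print("estimated source:", cur)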

Pyomo and Gurobi: Does Pyomo support solver callbacks to Gurobi?

I have started using Pyomo for modelling of MILPs and need to add problem specific cutting planes at certain MILP-feasible solutions. I know that it is possible to do that via callbacks in Gurobi's own gurobipy API. However, since I am using Pyomo atm I would like to stick to it if possible. I have seen that there exist persistent/direct solver IO options, however, I could not figure out how to make use of those options for my purposes.
Any help is appreciated.
Callbacks and lazy constraints are currently supported by Pyomo for the Gurobi Persistent solver interface. Here is a small example from the documentation of that interface:
from gurobipy import GRB
import pyomo.environ as pe
from pyomo.core.expr.taylor_series import taylor_series_expansion

m = pe.ConcreteModel()
m.x = pe.Var(bounds=(0, 4))
m.y = pe.Var(within=pe.Integers, bounds=(0, None))
m.obj = pe.Objective(expr=2 * m.x + m.y)
m.cons = pe.ConstraintList()  # for the cutting planes

def _add_cut(xval):
    # a function to generate the cut
    m.x.value = xval
    return m.cons.add(m.y >= taylor_series_expansion((m.x - 2) ** 2))

_add_cut(0)  # start with 2 cuts at the bounds of x
_add_cut(4)  # this is an arbitrary choice

opt = pe.SolverFactory('gurobi_persistent')
opt.set_instance(m)
opt.set_gurobi_param('PreCrush', 1)
opt.set_gurobi_param('LazyConstraints', 1)

def my_callback(cb_m, cb_opt, cb_where):
    if cb_where == GRB.Callback.MIPSOL:
        cb_opt.cbGetSolution(vars=[m.x, m.y])
        if m.y.value < (m.x.value - 2) ** 2 - 1e-6:
            print('adding cut')
            cb_opt.cbLazy(_add_cut(m.x.value))

opt.set_callback(my_callback)
opt.solve()

assert abs(m.x.value - 1) <= 1e-6
assert abs(m.y.value - 1) <= 1e-6
Changed value of parameter PreCrush to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Changed value of parameter LazyConstraints to 1
Prev: 0 Min: 0 Max: 1 Default: 0
adding cut
adding cut
adding cut
adding cut
adding cut
adding cut
adding cut
adding cut
For further details on the methods available within this Pyomo callback interface, see the GurobiPersistent class.

How to set gradients to zero without an optimizer?

Between multiple .backward() passes I'd like to set the gradients to zero. Right now I have to do this for every component separately (here these are x and t); is there a way to do this "globally" for all affected variables? (I imagine something like z.set_all_gradients_to_zero().)
I know there is optimizer.zero_grad() if you use an optimizer, but is there also a direct way without using an optimizer?
import torch
x = torch.randn(3, requires_grad = True)
t = torch.randn(3, requires_grad = True)
y = x + t
z = y + y.flip(0)
z.backward(torch.tensor([1., 0., 0.]), retain_graph = True)
print(x.grad)
print(t.grad)
x.grad.data.zero_() # both gradients need to be set to zero
t.grad.data.zero_()
z.backward(torch.tensor([0., 1., 0.]), retain_graph = True)
print(x.grad)
print(t.grad)
You can also use nn.Module.zero_grad(). In fact, optim.zero_grad() just calls nn.Module.zero_grad() on all parameters which were passed to it.
There is no reasonable way to do it globally. You can collect your variables in a list
grad_vars = [x, t]
for var in grad_vars:
    var.grad = None
or create some hacky function based on vars(). Perhaps it's also possible to inspect the computation graph and zero the gradient of all leaf nodes, but I am not familiar with the graph API. Long story short, you're expected to use the object-oriented interface of torch.nn instead of manually creating tensor variables.
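To illustrate the nn.Module route mentioned above, here is a minimal sketch: wrapping the tensors as Parameters of a Module makes a single zero_grad() call clear all of them (the Pair class name is just for illustration):

import torch
import torch.nn as nn

class Pair(nn.Module):
    def __init__(self):
        super().__init__()
        self.x = nn.Parameter(torch.randn(3))
        self.t = nn.Parameter(torch.randn(3))

pair = Pair()
y = pair.x + pair.t
z = y + y.flip(0)
z.backward(torch.tensor([1., 0., 0.]), retain_graph=True)
pair.zero_grad()  # clears pair.x.grad and pair.t.grad in one call
z.backward(torch.tensor([0., 1., 0.]))
print(pair.x.grad, pair.t.grad)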

Plot variables out of differential equations system function

I have a system of four differential equations in a function (subsystem4), and I solved it with the odeint function. I managed to plot the results of the system. My problem is that I also want to plot some other quantities (e.g. x, y, vcxdot...) which are computed inside the same function (subsystem4), but I get NameError: name 'vcxdot' is not defined. Also, I want to use some of these quantities (not only the results of the equation system) as inputs to a subsequent differential equation system, and plot all of them over the same period of time (t). I have done this using Matlab-Simulink, but it was much easier because of Simulink blocks. How can I access and plot all the quantities of a function (subsystem4) and use them as input to a following system? I am new to Python and I use Python 2.7.12. Thank you in advance!
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def subsystem4(u, t):
    added_mass_x = 0.03  # kg
    added_mass_y = 0.04
    mb = 0.3  # kg
    m1 = mb - added_mass_x
    m2 = mb - added_mass_y
    l1 = 0.07  # m
    l2 = 0.05  # m
    J = 0.00050797  # kgm^2
    Sa = 0.0110  # m^2
    Cd = 2.44
    Cl = 3.41
    Kd = 0.000655  # kgm^2
    r = 1000  # kg/m^3
    f = 2  # Hz
    c1 = 0.5*r*Sa*Cd
    c2 = 0.5*r*Sa*Cl
    c3 = 0.5*mb*(l1**2)
    c4 = Kd/J
    c5 = (1/(2*J))*(l1**2)*mb*l2
    c6 = (1/(3*J))*(l1**3)*mb
    vcx = u[0]
    vcy = u[1]
    psi = u[2]
    wz = u[3]
    x = 3 + 0.3*np.cos(t)
    y = 0.5 + 0.3*np.sin(t)
    xdot = -0.3*np.sin(t)
    ydot = 0.3*np.cos(t)
    xdotdot = -0.3*np.cos(t)
    ydotdot = -0.3*np.sin(t)
    vcx = xdot*np.cos(psi) - ydot*np.sin(psi)
    vcy = ydot*np.cos(psi) + xdot*np.sin(psi)
    psidot = wz
    vcxdot = xdotdot*np.cos(psi) - xdot*np.sin(psi)*psidot - ydotdot*np.sin(psi) - ydot*np.cos(psi)*psidot
    vcydot = ydotdot*np.cos(psi) - ydot*np.sin(psi)*psidot + xdotdot*np.sin(psi) + xdot*np.cos(psi)*psidot
    g1 = -(m1/c3)*vcxdot + (m2/c3)*vcy*wz - (c1/c3)*vcx*np.sqrt((vcx**2)+(vcy**2)) + (c2/c3)*vcy*np.sqrt((vcx**2)+(vcy**2))*np.arctan2(vcy, vcx)
    g2 = (m2/c3)*vcydot + (m1/c3)*vcx*wz + (c1/c3)*vcy*np.sqrt((vcx**2)+(vcy**2)) + (c2/c3)*vcx*np.sqrt((vcx**2)+(vcy**2))*np.arctan2(vcy, vcx)
    A = 12*np.sin(2*np.pi*f*t + np.pi)
    if A >= 0.1:
        wzdot = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2 - c6*np.sqrt((g1**2)+(g2**2))
    elif A < -0.1:
        wzdot = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2 + c6*np.sqrt((g1**2)+(g2**2))
    else:
        wzdot = ((m1-m2)/J)*vcx*vcy - c4*wz**2*np.sign(wz) - c5*g2
    return [vcxdot, vcydot, psidot, wzdot]

u0 = [0, 0, 0, 0]
t = np.linspace(0, 15, 1000)
u = odeint(subsystem4, u0, t)
vcx = u[:, 0]
vcy = u[:, 1]
psi = u[:, 2]
wz = u[:, 3]

plt.figure(1)
plt.subplot(211)
plt.plot(t, vcx, 'r-', linewidth=2, label='vcx')
plt.plot(t, vcy, 'b--', linewidth=2, label='vcy')
plt.plot(t, psi, 'g:', linewidth=2, label='psi')
plt.plot(t, wz, 'c', linewidth=2, label='wz')
plt.xlabel('time')
plt.legend()
plt.show()
To the immediate question of plotting the derivatives, you can get the velocities by directly calling the ODE function again on the solution,
u = odeint(subsystem4, u0, t)
udot = subsystem4(u.T, t)
and get the separate velocity arrays via
vcxdot, vcydot, psidot, wzdot = udot
In this case the function involves branching, which is not very friendly to vectorized calls. There are ways to vectorize branching, but the easiest work-around is to loop manually through the solution points, which is slower than a working vectorized implementation. This will again produce a list of tuples, like odeint does, so the result has to be transposed into a tuple of lists for "easy" assignment to the single array variables:
udot = [subsystem4(uk, tk) for uk, tk in zip(u, t)]
vcxdot, vcydot, psidot, wzdot = np.asarray(udot).T
This may appear to nearly double the computation, but it does not really, since the returned solution points are usually interpolated from the solver's internal step points; the ODE-function evaluations during integration happen at points that are generally different from the solution points.
For the other variables, extract the computation of position and velocities into functions to have the constant and composition in one place only:
def xy_pos(t): return 3 + 0.3*np.cos(t), 0.5 + 0.3*np.sin(t)
def xy_vel(t): return -0.3*np.sin(t), 0.3*np.cos(t)
def xy_acc(t): return -0.3*np.cos(t), -0.3*np.sin(t)
or similar that you can then use both inside the ODE function and in preparing the plots.
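For example, a short sketch of reusing those helpers for the extra plots (self-contained, repeating the definitions from above):

import numpy as np
import matplotlib.pyplot as plt

def xy_pos(t): return 3 + 0.3*np.cos(t), 0.5 + 0.3*np.sin(t)
def xy_vel(t): return -0.3*np.sin(t), 0.3*np.cos(t)

t = np.linspace(0, 15, 1000)
x, y = xy_pos(t)
xdot, ydot = xy_vel(t)
plt.plot(t, x, 'r-', linewidth=2, label='x')
plt.plot(t, xdot, 'b--', linewidth=2, label='xdot')
plt.xlabel('time')
plt.legend()
plt.show()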
What Simulink most likely does is collect all the equations of all the blocks into one big ODE system, which is then solved for the whole state at once. You will need to implement something similar: one big state vector, where each subsystem knows which slice of the state (and of the derivatives vector) holds its specific state variables, reading from the one and writing to the other. The computation of the derivatives can then use values communicated among the subsystems.
What you are trying to do, solving the subsystems separately, only works for (or will effectively degrade to) an order-1 integration method. All higher-order methods need to be able to simultaneously shift the whole state in a direction computed from previous stages of the method and evaluate the whole system there.
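A minimal sketch of that coupling pattern (the subsystem names and toy right-hand sides are invented for illustration): each subsystem owns a slice of one big state vector, and the combined function is what gets passed to the solver:

import numpy as np
from scipy.integrate import odeint

def subsystem_a(ua, ub, t):
    # toy dynamics that read a value communicated from subsystem b
    return np.array([-ua[0] + ub[0]])

def subsystem_b(ub, ua, t):
    return np.array([-2.0*ub[0] + ua[0], ub[0]])

def full_system(u, t):
    ua, ub = u[:1], u[1:]  # slices owned by each subsystem
    return np.concatenate([subsystem_a(ua, ub, t), subsystem_b(ub, ua, t)])

t = np.linspace(0, 10, 200)
u = odeint(full_system, [1.0, 0.0, 0.0], t)  # one solve for the whole state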

Parameter optimization using a genetic algorithm?

I am trying to optimize parameters for a known function to fit an experimental data plot. The function is fairly involved:
y = x^2 p^2 g / ((c^2 - x^2)^2 + x^2 g^2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms; instead, go for straightforward optimization.
Scipy has some resources.
You haven't provided any data, so I'll just go for something that should run. Below is something to get you started; I can't know whether it works without seeing the data. Also, there is probably a way to dynamically feed objectivefunc your x and y data; that should be covered in the docs for scipy.optimize.minimize.
What I've done: create a function to minimise, here called objectivefunc. For that I've taken your function y = x^2 p^2 g / (...) and transformed it into the form x^2 p^2 g - y * ((c^2 - x^2)^2 + x^2 g^2) = 0 (multiplying through by the denominator). Then square the left-hand side and try to minimise that. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from scipy.
import numpy as np
from scipy.optimize import minimize

def objectivefunc(pgc):
    """Your function transformed so that it can be minimised.
    I've renamed the input pgc, so that pgc[0] is p, pgc[1] is g, etc.
    """
    p = pgc[0]
    g = pgc[1]
    c = pgc[2]
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        sum_ += (x[i]**2 * p**2 * g
                 - y[i] * ((c**2 - x[i]**2)**2 + x[i]**2 * g**2))**2
    return sum_

pgc = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values.
res = minimize(objectivefunc, pgc, method='nelder-mead',
               options={'xtol': 1e-8, 'disp': True})
Have you tried good old Levenberg-Marquardt, as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW, which has several genetic algorithms implemented.
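In Python, the same Levenberg-Marquardt idea is available without LabVIEW through scipy.optimize.curve_fit (which uses LM for unconstrained problems). A minimal sketch, reusing the placeholder data from the answer above:

import numpy as np
from scipy.optimize import curve_fit

def model(x, p, g, c):
    # the function from the question, solved explicitly for y
    return x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)

xdata = np.array([10, 9.4, 17])  # same placeholder data as above
ydata = np.array([12, 42, 0.8])
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.3, 0.7, 0.5], method='lm')
print("fitted p, g, c:", popt)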