Linear Programming Question - Python Gekko

I am currently learning Python Gekko and I am very new to linear programming, so excuse my ignorance on certain topics.
I have a variable which should either have a value of 0 or a value greater than 20.
I later learnt that this is called a semi-continuous variable. My questions are as below:
Is it possible to convert the above condition into a linear equation?
Does Gekko support semi-continuous variables? I could not find anything about them in the documentation.

You can use the if3() function to enforce that constraint. That function uses a binary variable for the switch condition so it transforms the problem from a linear programming (LP) problem to a mixed integer linear programming (MILP) problem.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
m = GEKKO()
p = m.Param(np.linspace(0,50))
# y = 0 when p < 20, otherwise y = p
y = m.if3(p-20,0,p)
m.options.IMODE = 2  # evaluate the model at every value of p
m.solve()
# plot solution
plt.plot(p.value,'r-',lw=3)
plt.plot(y.value,'b.-')
plt.show()
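If you want to keep the problem as an explicit MILP instead of using if3(), the same semi-continuous condition (x = 0 or x >= 20) can also be written as two linear constraints with a binary variable. Below is a minimal sketch; the upper bound of 100 and the objective are placeholders chosen only to make the example solvable:
from gekko import GEKKO
m = GEKKO(remote=False)
b = m.Var(integer=True, lb=0, ub=1)   # binary switch
x = m.Var(lb=0, ub=100)               # semi-continuous variable
m.Equation(x >= 20*b)                 # b=1 forces x >= 20
m.Equation(x <= 100*b)                # b=0 forces x = 0
m.Maximize(x)                         # placeholder objective
m.options.SOLVER = 1                  # APOPT handles integer variables
m.solve(disp=False)
print(b.value[0], x.value[0])
With b = 0 both constraints collapse to x = 0, and with b = 1 they become 20 <= x <= 100, which is exactly the semi-continuous behaviour you described.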

Related

Computing the gradient of an objective function evaluated at the optimum in a dynamic optimization problem, pyomo

I am computing the solution to a dynamic non-linear optimization problem that I set up using the pyomo library. I use a ConcreteModel, with an objective function and several constraints, all time-indexed.
My objective function takes the form of a ScalarObjective (I am solving a dynamic general equilibrium problem in which I seek to maximize total welfare). I would like to compute the gradient of the objective, evaluated at the optimum, with respect to one of the model's variables at a given period t. My problem is a discrete-time problem.
I have tried many different options, asking AI chatbots for help (both You Chat and ChatGPT), but every solution I'm given is incorrect -- on this topic the AI chatbots seem to know very little.
I feel that some method in the library pyomo.dae could be of help, but I haven't found a solution yet. Could anyone help me, please?
You can do this using Pyomo's differentiate function. Here is a toy example:
import pyomo.environ as pyo
from pyomo.core.expr.calculus.derivatives import differentiate
m = pyo.ConcreteModel()
m.x = pyo.Var()
m.con = pyo.Constraint(expr=m.x<=10)
m.obj = pyo.Objective(expr=m.x**2)
pyo.SolverFactory('ipopt').solve(m)
print(pyo.value(m.x))
# -1.2528349584581178e-10
# Evaluate the derivative at current value of m.x
ddx = differentiate(m.obj, wrt=m.x)
print(ddx)
# -2.5056699169162357e-10
# Return derivative expression
ddx2 = differentiate(m.obj, wrt=m.x, mode='sympy')
print(ddx2)
# 2.0*x
You can read more about this function here: https://github.com/Pyomo/pyomo/blob/main/pyomo/core/expr/calculus/derivatives.py#L31
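For a time-indexed model like yours, the same function can be pointed at a single indexed variable to get one component of the gradient. Here is a small sketch under the assumption that the model has already been solved; the set T, the variable x, and the objective below are made up purely for illustration:
import pyomo.environ as pyo
from pyomo.core.expr.calculus.derivatives import differentiate
m = pyo.ConcreteModel()
m.T = pyo.RangeSet(0, 3)                              # discrete time periods
m.x = pyo.Var(m.T, initialize=1.0)
m.obj = pyo.Objective(expr=sum((m.x[t] - t)**2 for t in m.T))
# numeric derivative of the objective with respect to x[2],
# evaluated at the current values of the variables
g2 = differentiate(m.obj.expr, wrt=m.x[2])
print(g2)   # 2*(x[2]-2) at x[2]=1.0 -> -2.0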

PyMC3 autocorrplot() for any given array

The autocorrplot() function gives the autocorrelation plot for the sampled data from the trace.
If I already have a sample of data in the form of an array or list, can I use autocorrplot() to do the same?
Is there any alternative to generate autocorrelation plots given a sequence of data?
Please help.
autocorrplot is a wrapper around matplotlib's acorr. To get a similar look to pymc3's version, you can use something like
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
my_array = np.random.normal(size=1000)
plt.acorr(my_array, detrend=mlab.detrend_mean, maxlags=100)
plt.xlim(0, 100)
Note the call to xlim at the end: acorr shows negative lags as well, while PyMC3's plot does not, so the limits are set to match.
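If you would rather not rely on acorr at all, you can also compute the autocorrelation yourself with numpy and plot whichever lags you like. The helper function below is only a sketch, not part of PyMC3:
import numpy as np
import matplotlib.pyplot as plt

def autocorr(x, max_lag=100):
    # normalized autocorrelation for lags 0..max_lag
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode='full')[len(x)-1:]
    acf /= acf[0]
    return acf[:max_lag+1]

my_array = np.random.normal(size=1000)
plt.bar(np.arange(101), autocorr(my_array, 100), width=1.0)
plt.xlabel('lag')
plt.ylabel('autocorrelation')
plt.show()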

Edit tick labels in logarithmic axis

I'm trying to edit the tick labels but I keep getting scientific notation, even after setting the ticks. Here is a MWE:
import matplotlib.ticker
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(9, 7))
fig.subplots_adjust(left=0.11, right=0.95, top=0.94)
ax.ticklabel_format(style='plain')
plt.plot([1,4],[3,6] )
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xticks([0.7,1,1.5,2,2.5,3,4,5])
ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
which produces a plot where the x tick labels are still shown in scientific notation.
As you can see, ax.ticklabel_format(style='plain') doesn't seem to work, since I keep getting tick labels in scientific notation, and when using ax.set_xticks the old tick labels are still present. I took a look at this topic and it seems like the problem is the choice of the ticks: if I use, for example, 0.3 instead of 0.7 as the first tick it works, but I need the plot in this specific range and with a log scale.
Any workaround?
Actually, your code is doing what you need; the problem is the labels of the minor ticks, which remain unaffected and overlap with the major tick labels.
You can simply add the line:
ax.get_xaxis().set_minor_formatter(matplotlib.ticker.NullFormatter())
full code:
import matplotlib.ticker
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(9, 7))
fig.subplots_adjust(left=0.11, right=0.95, top=0.94)
ax.ticklabel_format(style='plain')
plt.plot([1,4],[3,6] )
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xticks([0.7,1,1.5,2,2.5,3,4,5])
ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.get_xaxis().set_minor_formatter(matplotlib.ticker.NullFormatter())
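On recent matplotlib versions the same fix can also be written with the per-axis formatter methods; a short sketch of that variant (assuming matplotlib 3.x):
import matplotlib.ticker as mticker
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(9, 7))
ax.plot([1, 4], [3, 6])
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xticks([0.7, 1, 1.5, 2, 2.5, 3, 4, 5])
ax.xaxis.set_major_formatter(mticker.ScalarFormatter())   # plain numbers on the major ticks
ax.xaxis.set_minor_formatter(mticker.NullFormatter())     # suppress the overlapping minor labels
plt.show()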

Interpolation between two xarray datasets with Basemap

I have two different xarray datasets with different latitude/longitude grid resolutions. I want to regrid the xarray with the lower resolution to the same resolution as the xarray with the higher resolution. I found some examples (e.g., http://earthpy.org/interpolation_between_grids_with_basemap.html), but they do not work for me. Here is one example that I made for testing:
import numpy as np
import xarray as xray
import mpl_toolkits.basemap
var1=xray.DataArray(np.random.randn(len(np.linspace(40.5,49.5,10)),len(np.linspace(-39.5,-20.5,20))),coords=[np.linspace(40.5,49.5,10), np.linspace(-39.5,-20.5,20)],dims=['lat','lon'])
(xlon, xlat)=np.meshgrid(np.linspace(-39.875,-20.125,80),np.linspace(40.125,49.875,40))
var2=xray.DataArray(-xlon**2+xlat**2,coords=[np.linspace(40.125,49.875,40),np.linspace(-39.875,-20.125,80)],dims=['lat','lon'])
mpl_toolkits.basemap.interp(var1,var1.lon,var1.lat,var2.lon,var2.lat,checkbounds=False,masked=False,order=0)
I get the following error:
ValueError: xout and yout must have same shape!
Does basemap.interp() require xout and yout to be the same shape? So var2 needs to be a square? This is almost never the case with any of my datasets! How can I regrid var1 to be the same resolution as var2?
Note: After regridding, I want to subsample var1 given some condition related to var2. For example:
var1_subset = var1.where(var2>1000)
So I want to minimize any loss of grid points during the interpolation.
basemap.interp will only work when xout and yout have the same shape, i.e. when the number of output longitudes and latitudes is the same.
Why not generate output nlats and nlons of the same length and subset the result later?
For example:
import numpy as np
import xarray as xray
import mpl_toolkits.basemap
var1=xray.DataArray(np.random.randn(len(np.linspace(40.5,49.5,10)),len(np.linspace(-39.5,-20.5,20))),coords=[np.linspace(40.5,49.5,10), np.linspace(-39.5,-20.5,20)],dims=['lat','lon'])
(xlon,xlat)=np.meshgrid(np.linspace(-39.875,-20.125,80),np.linspace(40.125,49.875,80))
var2=xray.DataArray(-xlon**2+xlat**2,coords=[np.linspace(40.125,49.875,80),np.linspace(-39.875,-20.125,80)],dims=['lat','lon'])
mpl_toolkits.basemap.interp(var1,var1.lon,var1.lat,var2.lon,var2.lat,checkbounds=False,masked=False,order=0)
Here is another cool trick with xarray.
lonreg=var1.groupby_bins('lon',np.linspace(-39.875,-20.125,80)).mean(dim='lon')
regridded=lonreg.groupby_bins('lat',np.linspace(40.125,49.875,20)).mean(dim='lat')
If you want area-weighted regridding, it is easy to extend this by using weights and the sum function on the groupby object.
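If a reasonably recent xarray is available, you can also skip basemap entirely and let xarray do the interpolation, which keeps the lat/lon coordinates aligned for the later where() step. A minimal sketch along those lines (bilinear interpolation; points of var2's grid that fall outside var1's coordinate range come back as NaN):
import numpy as np
import xarray as xr
var1 = xr.DataArray(np.random.randn(10, 20),
                    coords=[np.linspace(40.5, 49.5, 10), np.linspace(-39.5, -20.5, 20)],
                    dims=['lat', 'lon'])
xlon, xlat = np.meshgrid(np.linspace(-39.875, -20.125, 80), np.linspace(40.125, 49.875, 40))
var2 = xr.DataArray(-xlon**2 + xlat**2,
                    coords=[np.linspace(40.125, 49.875, 40), np.linspace(-39.875, -20.125, 80)],
                    dims=['lat', 'lon'])
var1_regridded = var1.interp(lat=var2.lat, lon=var2.lon)   # regrid var1 onto var2's grid
var1_subset = var1_regridded.where(var2 > 1000)            # then apply the condition on var2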

Using python/numpy to create a matrix?

I have a basic understanding using numpy to create a matrix, but the context in which I have to create one confuses me. For example, I need to create a 2X1000 matrix with normally distributed values with mean 0 and standard deviation of 1. I'm not sure what it means to make a matrix with these conditions.
Besides what was written above by CoDEmanX, from the numpy.random.normal documentation we can read about the generic normal distribution in numpy:
numpy.random.normal(loc=0.0, scale=1.0, size=None)
Where:
loc is the mean of the distribution and scale is the standard deviation (the square root of the variance).
import numpy as np
A = np.random.normal(loc =0, scale =1, size=(2, 1000))
print(A)
But if these examples are confusing, then consider that
np.random.normal()
simply gives you a random number, and you can create your own custom matrix:
import numpy as np
A = [ [np.random.normal() for i in range(1000)] for j in range(2) ]
A = np.array(A)
If you check the numpy docs for utility functions that help you reach your goal, you'll come across a normal distribution function:
numpy.random.standard_normal(size=None)
Returns samples from a Standard Normal distribution (mean=0, stdev=1).
The mean is 0 and the standard deviation is 1.
import numpy
arr = numpy.random.standard_normal((2, 1000))
print(arr.mean()) # -0.027...
print(arr.std()) # 1.0272...
Note that it's not exactly 0 or 1.
I still recommend reading about the normal distribution and standard deviation / variance, even though numpy offers a simple solution.
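As a side note, if you are on NumPy 1.17 or newer, the currently recommended way to draw the same matrix is through the Generator API; a short sketch:
import numpy as np
rng = np.random.default_rng(seed=0)    # seeded only for reproducibility
A = rng.standard_normal((2, 1000))     # 2 x 1000 matrix, mean 0, std 1
# equivalently: rng.normal(loc=0.0, scale=1.0, size=(2, 1000))
print(A.shape, A.mean(), A.std())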