GAMS translated nonlinear objective function looks different from the defined objective - pyomo

I am studying the batchdes.lst file for the MINLP model batchdes in the GAMS library. The objective function is defined as
obj.. cost =g= sum(j, alpha(j)*(exp(n(j) + beta(j)*v(j))));
However, in the equation listing of the .lst file it is presented as
---- obj =G= objective function definition
obj.. - (25141.1498186984)*v(mixer) - (64131.2769053431)*v(reactor) - (49066.7923833869)*v(centrifuge) - (41901.9163644973)*n(mixer) - (106885.461508905)*n(reactor) - (81777.9873056449)*n(centrifuge) + cost =G= 0 ; (LHS = -230565.365179047, INFES = 230565.365179047 ****)
What kind of operation has been applied here? How is the exp() translated? Is this a feature of GAMS or of the selected solver?
I implemented the same model in Pyomo and solved it with the same solver as in GAMS; however, the objective does not look the same in the .lst file.
Thanks!

What you see here are the partial derivatives with respect to each variable, evaluated at the variables' current level values. This is explained in the GAMS documentation:
Nonlinear equations are treated differently. If the coefficient of a variable in the equation listing is enclosed in parentheses, then the corresponding constraint is nonlinear, and the value of the coefficient depends on the activity levels of one or more of the variables. The listing is not algebraic, but shows the partial derivative of each variable evaluated at their current level values.
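As a concrete illustration, here is a minimal SymPy sketch of how one of those parenthesized coefficients arises; the alpha, beta, and level values below are made up for illustration and are not the batchdes data:

import sympy as sp

v, n = sp.symbols('v n')
alpha, beta = sp.symbols('alpha beta', positive=True)

# One term of the objective: alpha(j)*exp(n(j) + beta(j)*v(j))
term = alpha * sp.exp(n + beta * v)

# GAMS prints the partial derivative of the nonlinear term with respect
# to each variable, evaluated at the current level values.
d_dv = sp.diff(term, v)  # alpha*beta*exp(n + beta*v)
d_dn = sp.diff(term, n)  # alpha*exp(n + beta*v)

# Hypothetical data and level values:
levels = {alpha: 250.0, beta: 0.6, n: 0.5, v: 1.0}
print(d_dv.subs(levels))  # the kind of number GAMS shows in parentheses
print(d_dn.subs(levels))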

Related

Computing the gradient of an objective function evaluated at the optimum in a dynamic optimization problem, pyomo

I am computing the solution to a dynamic non-linear optimization problem that I set up using the pyomo library. I use a ConcreteModel, with an objective function and several constraints, all time-indexed.
My objective function takes the form of a ScalarObjective (I am solving a dynamic general equilibrium problem in which I seek to maximize total welfare). I would like to compute the gradient of the objective, evaluated at the optimum, with respect to one of the model's variables at a given period t. My problem is a discrete-time problem.
I have tried many different options, asking AI chatbots for help (both You Chat and ChatGPT), but every solution I'm given is incorrect; on this topic the AI chatbots seem to know very little.
I feel that some method in the library pyomo.dae could be of help, but I haven't found a solution yet. Could anyone help me, please?
You can do this using Pyomo's differentiate function. Here is a toy example:
import pyomo.environ as pyo
from pyomo.core.expr.calculus.derivatives import differentiate
m = pyo.ConcreteModel()
m.x = pyo.Var()
m.con = pyo.Constraint(expr=m.x <= 10)
m.obj = pyo.Objective(expr=m.x**2)
pyo.SolverFactory('ipopt').solve(m)
print(pyo.value(m.x))
# -1.2528349584581178e-10
# Evaluate the derivative at current value of m.x
ddx = differentiate(m.obj, wrt=m.x)
print(ddx)
# -2.5056699169162357e-10
# Return derivative expression
ddx2 = differentiate(m.obj, wrt=m.x, mode='sympy')
print(ddx2)
# 2.0*x
You can read more about this function here: https://github.com/Pyomo/pyomo/blob/main/pyomo/core/expr/calculus/derivatives.py#L31
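Since the original question involves a time-indexed model, the same function also works with an indexed variable at a given period. A minimal sketch with hypothetical names (a toy discounted log-consumption objective, not the question's model):

import pyomo.environ as pyo
from pyomo.core.expr.calculus.derivatives import differentiate

m = pyo.ConcreteModel()
m.T = pyo.RangeSet(0, 3)
m.c = pyo.Var(m.T, initialize=1.0, bounds=(0.5, 2.0))
m.obj = pyo.Objective(expr=sum(0.95**t * pyo.log(m.c[t]) for t in m.T),
                      sense=pyo.maximize)

# Derivative of the objective with respect to consumption at period t=2,
# evaluated at the current variable values (c[2] = 1.0 here)
ddc = differentiate(m.obj.expr, wrt=m.c[2])
print(ddc)  # 0.95**2 / c[2] -> 0.9025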

Gurobi shadow price of a variable without generating a separate constraint

Gurobi 9.0.0 // C++
I am trying to get the shadow price of variables without explicitly generating a constraint for them.
I am generating variables the following way:
GRBModel* milp_model;
milp_model->addVar(lb, ub, 0.0, type, name); // GRBModel::addVar(lb, ub, obj, type, name)
Now I would like to get the shadow price (dual) for these variables.
I found this article which says that for "a linear program with lower and upper bounds on a variable, i.e., l ≤ x ≤ u" [...] "Gurobi gives access to the reduced cost of x, which corresponds to s_l + s_u".
To get the shadow price of a constraint, one would use the GRB functions according to the following answer (Python, but the same idea), using the Pi constraint attribute.
What would be the GRB function that returns the previously mentioned reduced cost of x / shadow price of a variable?
I tried gurobi_var.get(GRB_DoubleAttr_Pi), analogous to gurobi_constr.get(GRB_DoubleAttr_Pi), which works for constraints; but for a variable it returns: Not right attribute. Error code = 10003
Can anyone help me with this?
I suppose you are referring to the reduced costs of the variables. You can get them via the variable attribute RC, as explained here. Then you need to figure out whether these dual values correspond to the upper or the lower bound, as discussed here.
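For completeness, a minimal sketch of reading RC, written in gurobipy for brevity (in the C++ API the same attribute is queried with gurobi_var.get(GRB_DoubleAttr_RC)); the toy model is made up for illustration:

import gurobipy as gp
from gurobipy import GRB

m = gp.Model('rc_demo')
x = m.addVar(lb=0.0, ub=4.0, name='x')
m.setObjective(3 * x, GRB.MAXIMIZE)
m.optimize()

# x ends up at its upper bound, so its reduced cost equals the shadow
# price of that bound constraint.
print(x.X)   # 4.0
print(x.RC)  # 3.0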

How to specify a GP dependent on a non-observed random walk

I have a cyclical signal I would like to model. I would like to allow the signal to stretch and compress in time, and I do not know the exact profile.
At the moment, I am modelling the phase progression as a random walk, and capturing the cyclical nature by defining the likelihood mean as a sum of sines and cosines of the phase, where the weights on the sines and cosines are parameters to be fitted.
i.e.
y ~ N(f(phase), sigma), where f(phase) = sum_i (a_i*sin(i*phase) + b_i*cos(i*phase))
This seems to work to some extent, but I would like to change the definition of f so that it does not rely on sums of sin and cos.
I was looking at Gaussian Processes, and thinking that there could be a solution there, but I can't figure out how (or whether it is possible) to define y in terms of phase when using a GP.
There is an example on the pymc github site:
y_obs = pm.gp.GP('y_obs', cov_func=f_cov, sigma=s2_n, observed={'X':X, 'Y':y})
The problem here is that X is defined as observed, while I need to model it as a random variable.
I tried this form:
y_obs = pm.gp.GP('y_obs', X=phase, cov_func=f_cov, sigma=s2_n, observed={'Y': y})
But that leads to an error:
File "/home/person/.conda/envs/mcmcx/lib/python3.6/site-packages/pymc3/distributions/distribution.py", line 56, in __init__
raise TypeError("Expected int elements in shape")
I am new to HB/GP/pymc3... and even to Stack Overflow. Apologies if the question is off.

Calculation where output is square polynomial plus remainder

My son is learning how to calculate the formula for a parabola using a directrix and focus point on his Khan Academy course. (a,b) is the focus point, k is the parameter for the directrix as y=k. I wanted to show him a simple way to check his results using Sympy; programming helps hugely in solidifying internal algorithms. Step 1 is clearly to set the equation out.
Parabola = Eq(sqrt((y-k)**2),sqrt((x-a)**2+(y-b)**2))
I first solved this for y, intending then to show how to substitute values and derive the equation, thus:
Y = solve(Parabola,y)
This was in a reasonable form, having collected the 1/(2b-2k) to the outside.
Next, I substituted the value of the focus and directrix into the equation, obtaining the equation y= 1/6*(x**2+16*x+49), which is correct.
He next needed to resolve this into the form (x+c1)(x+c2) + remainder. There does not seem to be a direct way to factor the equation above into this form, at least not from an hour of searching the docs.
Answer = Y[0].subs({a:-8,b:-1,k:-4})
factor(Answer,deep=True)
Of course I understand how to reduce to a square factorization plus remainder; my question is solely whether this is possible in sympy and, if so, how?
A second, perhaps trivial, question is why Sympy returns some factorizations as (constant - x) where (x -constant) is preferred: is there a way of specifying the form?
Thanks for any help, on behalf of my son, to whom I am showing the wonders of Sympy.
The process is usually called "completing the square". It is not implemented as a single SymPy method, but one can use the SymPy equation solver to find the coefficients of such a form of the polynomial:
>>> var('A B C')
>>> solve(Eq(Answer, A*(x-B)**2 + C), [A, B, C])
[(1/6, -8, -5/2)]
So the parabola vertex is at (-8, -5/2), and the polynomial can be written as 1/6*(x+8)**2 - 5/2
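Putting the steps from the question and the answer together, a minimal end-to-end sketch:

from sympy import symbols, Eq, sqrt, solve

x, y, a, b, k, A, B, C = symbols('x y a b k A B C')

# Parabola as the locus equidistant from the directrix y=k and the focus (a, b)
parabola = Eq(sqrt((y - k)**2), sqrt((x - a)**2 + (y - b)**2))
Y = solve(parabola, y)
answer = Y[0].subs({a: -8, b: -1, k: -4})  # (x**2 + 16*x + 49)/6

# Complete the square by matching coefficients of A*(x - B)**2 + C
print(solve(Eq(answer, A*(x - B)**2 + C), [A, B, C]))  # [(1/6, -8, -5/2)]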

Dummy and Heckman

I'm using a Heckman selection model, which consists of two equations: a probit selection equation and a multiple-regression outcome equation.
How can I put dummy variables into those equations?
Do the variables have to be transformed into logarithmic form?
How can I create logarithmic variables in Stata?
Thank you.
Here's an example of how you might do what you ask. The example looks at the effect of being a union member on log wages:
webuse union3
gen log_wage = ln(wage)
etregress log_wage age grade i.smsa i.black tenure, treat(union = i.south i.black tenure) twostep
etregress estimates an average treatment effect of an endogenous binary-treatment variable. In plain English, that means the "first-stage" is a probit. Estimation is by either full maximum likelihood or a two-step consistent estimator, as above.
The dummies are created on the fly by putting an i. in front of the covariates. This is called factor variable notation, and it also makes interactions a breeze. You can also do tab race, gen(d_) to create d_1, d_2, and d_3 (3 race dummies, one of which you can drop).