Must a Pyomo model's objective function return a Pyomo Var object? - pyomo

I have an objective function in a model that returns a regular Python float, which must be minimized.
But when I try to solve the model, the solver doesn't search for a solution and I get the following warning:
WARNING: Constant objective detected, replacing with a placeholder to prevent
solver failure.
I also get a second warning:
WARNING: Empty constraint block written in LP format - solver may error
Do I have to add constraints to the model? I already set bounds on the variables to be optimized.
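A minimal sketch of what typically triggers this warning (the variable x and the expression are hypothetical, not the asker's model): if the objective rule evaluates the variables to plain floats, Pyomo sees only a constant.
from pyomo.environ import ConcreteModel, Var, Objective, minimize, value

m = ConcreteModel()
m.x = Var(bounds=(0, 10), initialize=0)

# Wrong: value(m.x) collapses the Var to a float, so the whole objective
# is a constant and Pyomo emits "Constant objective detected".
# m.obj = Objective(expr=(value(m.x) - 3) ** 2, sense=minimize)

# Right: build the expression from the Var itself so the solver can vary it.
m.obj = Objective(expr=(m.x - 3) ** 2, sense=minimize)
The second warning is consistent with this: variable bounds are written to the LP file's bounds section, not as constraints, so a model with no explicit Constraint components produces an empty constraint block.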

Related

How to get constraint matrix after presolving in PySCIPOpt

I have a pyscipopt.Model instance, model, encoding an integer program with linear constraints.
I want to evaluate some solution on all rows of the constraint matrix. The only way I have thought of is to use model.getSolVal() to get the solution values of the modified variables, and to compute the dot product with the rows of the modified constraint matrix manually.
The following snippet extracts the nonzero coefficients of a constraint in the model:
constr = model.getConss()[0]
coeff_dict = model.getValsLinear(constr)
It runs fine before presolving, but after presolving (or just optimizing) I get the following warning:
Warning: 'coefficients not available for constraints of type ', 'logicor'.
My current solution is to disable presolving completely, in which case the variables aren't modified. Can I avoid that?
I am assuming you want to do something with the row activities and not just check whether your solution is feasible?
getValsLinear is only usable for linear constraints. During presolving, SCIP upgrades linear constraints to more specialized constraint types (in your case, logicor).
There exists a function in SCIP called SCIPgetConsVals that does what you want getValsLinear to do (it gets you the values for all constraints that have a linear representation). However, that function is not wrapped in PySCIPOpt yet.
You can easily wrap that function yourself (and even head over to https://github.com/scipopt/PySCIPOpt and create a pull request).
The other option is to load a settings file that forbids linear constraints from being upgraded:
constraints/linear/upgrade/logicor = FALSE
constraints/linear/upgrade/indicator = FALSE
constraints/linear/upgrade/knapsack = FALSE
constraints/linear/upgrade/setppc = FALSE
constraints/linear/upgrade/xor = FALSE
constraints/linear/upgrade/varbound = FALSE
would be the settings you need. That way you still have presolving, just without constraint upgrades.
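A hedged sketch of applying those settings from Python instead of a settings file, using PySCIPOpt's setParam (the parameter names are the ones listed above; the .set filename is hypothetical):
from pyscipopt import Model

model = Model()
# Disable each linear-constraint upgrade so getValsLinear keeps working.
for ctype in ("logicor", "indicator", "knapsack", "setppc", "xor", "varbound"):
    model.setParam("constraints/linear/upgrade/" + ctype, False)
# Or save the settings listed above to a file and load it in one call:
# model.readParams("no_upgrades.set")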

GAMS translated nonlinear objective function looks different than defined objective

I am studying the batchdes.lst file for the MINLP model batchdes in the GAMS model library. The equation defining the objective function is
obj.. cost =g= sum(j, alpha(j)*(exp(n(j) + beta(j)*v(j))));
However, in the list of equations in the .lst file it is presented as
---- obj =G= objective function definition
obj.. - (25141.1498186984)*v(mixer) - (64131.2769053431)*v(reactor) - (49066.7923833869)*v(centrifuge) - (41901.9163644973)*n(mixer) - (106885.461508905)*n(reactor) - (81777.9873056449)*n(centrifuge) + cost =G= 0 ; (LHS = -230565.365179047, INFES = 230565.365179047 ****)
What kind of operation has been applied here? How is the exp() translated? Is this a feature of GAMS or of the selected solver?
I implemented the same model in Pyomo and solved it with the same solver as in GAMS; however, the objective does not look the same as in the .lst file.
Thanks!
What you see here are the partial derivatives with respect to each variable, evaluated at the variables' current level values. This comes from the GAMS documentation:
Nonlinear equations are treated differently. If the coefficient of a variable in the equation listing is enclosed in parentheses, then the corresponding constraint is nonlinear, and the value of the coefficient depends on the activity levels of one or more of the variables. The listing is not algebraic, but shows the partial derivative of each variable evaluated at their current level values.
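Concretely, for the objective term $\alpha_j e^{n_j + \beta_j v_j}$ the listed coefficients are
$$\frac{\partial}{\partial v_j}\,\alpha_j e^{n_j+\beta_j v_j} = \alpha_j \beta_j e^{n_j+\beta_j v_j}, \qquad \frac{\partial}{\partial n_j}\,\alpha_j e^{n_j+\beta_j v_j} = \alpha_j e^{n_j+\beta_j v_j},$$
evaluated at the current levels of n(j) and v(j). They appear negated in the listing because GAMS moves the sum to the left-hand side of the =G= inequality, and the (LHS = ...) value is that left-hand side evaluated at the current point.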

Gurobi shadow price of a variable without generating separate constraint

Gurobi 9.0.0 // C++
I am trying to get the shadow price of variables without explicitly generating a constraint for them.
I am generating variables the following way:
GRBModel* milp_model;
milp_model->addVar(lb, ub, 0.0, type, name);
Now I would like to get the shadow price (dual) for these variables.
I found this article, which says that for "a linear program with lower and upper bounds on a variable, i.e., l ≤ x ≤ u" [...] "Gurobi gives access to the reduced cost of x, which corresponds to sl+su".
To get the shadow price of a constraint, one would use the Pi constraint attribute, as shown in the following answer (Python, but the same idea).
What would be the GRB function that returns the previously mentioned reduced cost of x / shadow price of a variable?
I tried gurobi_var.get(GRB_DoubleAttr_Pi), which works for gurobi_constr.get(GRB_DoubleAttr_Pi),
but for a variable it returns: Not right attribute. Error code = 10003
Can anyone help me with this?
I suppose you are referring to the reduced costs of the variables. You can get them via the variable attribute RC, as explained here. Then you need to figure out whether these dual values correspond to the upper or the lower bound, as discussed here.
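A hedged sketch in gurobipy (in C++ the same attribute is read with var.get(GRB_DoubleAttr_RC)); the tiny bound-constrained LP here is made up for illustration:
import gurobipy as gp
from gurobipy import GRB

m = gp.Model()
x = m.addVar(lb=0.0, ub=4.0, name="x")
m.setObjective(x, GRB.MAXIMIZE)
m.optimize()

# RC is a variable attribute; Pi is constraint-only, which is why querying
# Pi on a variable raises error code 10003.
print(x.RC)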

Error while passing test data point to scikit-learn <endpoint>.predict() function created inside AWS SageMaker

I created a scikit-learn model endpoint in AWS SageMaker. I want to make predictions on my test set using this endpoint. My endpoint creation code looks like:
predictor = sklearn.deploy(1, 'ml.m4.xlarge')
from sagemaker.predictor import csv_serializer
predictor.content_type = 'text/csv'
predictor.serializer = csv_serializer
predictor.deserializer = None
When I pass my test data point as a list, predictor.predict(l) where l is the list, it throws an error:
ValueError: Expected 2D array, got 1D array instead:
When I pass it as a numpy array of 2 dimensions (I checked the dimensions using .ndim), it still throws the same error. When I pass the data as a string of comma-separated values without any spaces, it still throws the same error.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
This line is displayed every time the ValueError is thrown, but even after reshaping, the same error persists.
So I have tried the formats '2,3,4,5', ['2,3,4,5'], np.array([[2,3,4,5]]), and [2,3,4,5], but none of these work.
Can someone please tell me the right format for the input to the predict function of the scikit-learn endpoint?
Are you still having problems with your input dimensions? Users are able to implement their own input_fn to help with debugging as well. I suggest you print the deserialized input when it is passed to the inference call to see where you might be having trouble with the dimension size.
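A minimal sketch of such a debugging input_fn for the SageMaker scikit-learn container, assuming CSV input (the parsing details are illustrative, not the asker's code):
# in the inference entry-point script
import numpy as np
from io import StringIO

def input_fn(request_body, request_content_type):
    # Parse CSV into an array and force a 2D shape (n_samples, n_features),
    # which is what scikit-learn's predict() expects.
    if request_content_type == 'text/csv':
        data = np.atleast_2d(np.genfromtxt(StringIO(request_body), delimiter=','))
        print('Deserialized input shape:', data.shape)  # visible in the endpoint logs
        return data
    raise ValueError('Unsupported content type: ' + str(request_content_type))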

Accessing constraints/objective

I'm using a toolbox that internally calls Pyomo to solve an optimization problem. I am interested in accessing the Pyomo model it constructs so I can modify the constraints/objective. So my question is: suppose I get the following output:
Problem:
- Name: unknown
  Lower bound: 6250.0
  Upper bound: 6250.0
  Number of objectives: 1
  Number of constraints: 11
  Number of variables: 8
  Number of nonzeros: 17
  Sense: minimize
Solver:
- Status: ok
  Termination condition: optimal
  Statistics:
    Branch and bound:
      Number of bounded subproblems: 0
      Number of created subproblems: 0
  Error rc: 0
  Time: 0.015599727630615234
Solution:
- number of solutions: 0
  number of solutions displayed: 0
So the problem solves fine and I get a solution; the modeling was done internally via another tool. Is it possible to access the constraints/objective so I can freely modify the optimization model in Pyomo syntax?
I tried to call the model and got something like this:
<pyomo.core.base.PyomoModel.ConcreteModel at xxxxxxx>
It sounds like network.model is the Pyomo model and yes, if you have access to it, you can modify it. Try printing the model or individual model components using the pprint() method:
network.model.pprint()
network.model.con1.pprint()
or you can loop over specific types of components in the model:
for c in network.model.component_objects(Constraint):
    print(c)  # prints the name of every constraint on the model
The easiest way to modify the objective would be to find the existing one, deactivate it, and add a new one:
network.model.obj.deactivate()
network.model.new_obj = Objective(expr=network.model.x**2)
There are ways to modify constraints/objectives in place but they are not well-documented. I'd recommend using Python's dir() function and playing around with some of the component attributes and methods to get a feel for how they work.
dir(network.model.obj)
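For example, a hedged sketch of in-place modification (the component names obj and con1 and the expressions are hypothetical): scalar Objective and Constraint components expose a set_value() method that replaces their expression.
# Replace the objective expression in place instead of deactivating it.
network.model.obj.set_value(network.model.x**2 + 2*network.model.x)
# For a constraint, pass a complete relational expression.
network.model.con1.set_value(network.model.x <= 10)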