Gurobi shadow price of a variable without generating a separate constraint - C++

Gurobi 9.0.0 // C++
I am trying to get the shadow price of variables without explicitly generating a constraint for them.
I am generating variables the following way:
GRBModel* milp_model;
milp_model->addVar(lb, up, 0.0, type, name);
Now I would like to get the shadow price (dual) for these variables.
I found this article, which says that for "a linear program with lower and upper bounds on a variable, i.e., l ≤ x ≤ u" [...] "Gurobi gives access to the reduced cost of x, which corresponds to s_l + s_u".
To get the shadow price of a constraint, one would use the Pi constraint attribute, as in the following answer (Python, but the same idea).
What would be the GRB function that returns the previously mentioned reduced cost of x / shadow price of a variable?
I tried gurobi_var.get(GRB_DoubleAttr_Pi), which is what works for constraints (gurobi_constr.get(GRB_DoubleAttr_Pi)), but for a variable it returns: Not right attribute. Error code = 10003
Can anyone help me with this?

I suppose you are referring to the reduced costs of the variables. You can get them via the variable attribute RC, as explained here. You then need to figure out whether these dual values correspond to the upper or the lower bound, as discussed here.
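A minimal C++ sketch (some_var and some_constr are placeholder names; note that Pi and RC are only defined for continuous models, so for a MIP you would have to query them on a continuous version of the model):

milp_model->optimize();

// Shadow price (dual value) of a linear constraint:
double pi = some_constr.get(GRB_DoubleAttr_Pi);

// Reduced cost of a variable, i.e., the combined dual of its bound
// constraints (the s_l + s_u mentioned above):
double rc = some_var.get(GRB_DoubleAttr_RC);

// For a minimization LP, rc > 0 indicates the lower bound is the
// binding one, rc < 0 the upper bound.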

Related

How to get constraint matrix after presolving in PySCIPOpt

I have a pyscipopt.Model variable model, encoding an integer program with linear constraints.
I want to evaluate some solution on all rows of the constraint matrix. The only way I can think of is to use model.getSolVal() to get the solution values of the modified variables and to compute the dot product with the rows of the modified constraint matrix manually.
The following snippet extracts the nonzero coefficients of a constraint in the model:
constr = model.getConss()[0]
coeff_dict = model.getValsLinear(constr)
It runs fine before presolving, but after presolving (or just optimizing) I get the following error
Warning: 'coefficients not available for constraints of type ', 'logicor'.
My current solution is to disable presolving completely, in which case the variables aren't modified. Can I avoid that?
I am assuming you want to do something with the row activities and not just check whether your solution is feasible?
getValsLinear is only usable for linear constraints. In presolving, SCIP upgrades linear constraints to more specialized constraint types (in your case, logicor).
There exists a function in SCIP called SCIPgetConsVals that does what you want getValsLinear to do (it gets you the values for all constraints that have a linear representation). However, that function is not wrapped in PySCIPOpt yet.
You can easily wrap that function yourself (and even head over to https://github.com/scipopt/PySCIPOpt and create a pull request).
The other option is to read a settings file that forbids the linear constraints from being upgraded.
constraints/linear/upgrade/logicor = FALSE
constraints/linear/upgrade/indicator = FALSE
constraints/linear/upgrade/knapsack = FALSE
constraints/linear/upgrade/setppc = FALSE
constraints/linear/upgrade/xor = FALSE
constraints/linear/upgrade/varbound = FALSE
would be the settings you need. That way you still have presolving, just without constraint upgrades.
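The same settings can also be applied directly from PySCIPOpt, for example (a sketch; the parameter names are exactly the ones listed above):

from pyscipopt import Model

model = Model()
# ... build the model ...
# forbid presolving from upgrading linear constraints:
for ctype in ["logicor", "indicator", "knapsack", "setppc", "xor", "varbound"]:
    model.setParam("constraints/linear/upgrade/" + ctype, False)
model.optimize()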

LP Duals and Reduced Costs with CPLEX

I am working on a Column Generation algorithm using CPLEX to solve the Reduced Master Problem.
After adding the new variables to the RMP, I set their upper bounds to 0, solve the RMP again and retrieve their reduced costs (to check if the value I calculated matches the one provided by CPLEX).
In the first iterations, the reduced costs match.
However, after some iterations, I start getting different reduced costs.
When I run the CPLEX Interactive Optimizer, read the LP model (or MPS) and compare the duals of the constraints, I get some different values.
Does it make any sense?
I've tried using different methods for solving my LP. Also tried changing tolerances.
Problem stats
Objective sense : Minimize
Variables : 453308 [Fix: 8, Box: 453300]
Objective nonzeros : 6545
Linear constraints : 578166 [Less: 70814, Greater: 503886, Equal: 3466]
Nonzeros : 2710194
RHS nonzeros : 7986
Variables : Min LB: 0.0000000 Max UB: 74868.86
Objective nonzeros : Min : 0.01000000 Max : 10000.00
Linear constraints :
Nonzeros : Min : 0.004000000 Max : 396.8800
RHS nonzeros : Min : 0.01250000 Max : 74868.86
Displaying the solution quality, I get this info:
Max. unscaled (scaled) bound infeas. = 8.52651e-014 (3.33067e-015)
Max. unscaled (scaled) reduced-cost infeas. = 2.24935e-010 (5.62339e-011)
Max. unscaled (scaled) Ax-b resid. = 5.90461e-011 (3.69038e-012)
Max. unscaled (scaled) c-B'pi resid. = 2.6489e-011 (7.27596e-012)
Max. unscaled (scaled) |x| = 45433 (2839.56)
Max. unscaled (scaled) |slack| = 4970.49 (80.1926)
Max. unscaled (scaled) |pi| = 295000 (206312)
Max. unscaled (scaled) |red-cost| = 411845 (330962)
Condition number of scaled basis = 1.1e+008
As mentioned in the comment by Erwin, what you are experiencing is probably degeneracy.
Both the primal and dual solutions are often not unique in problems larger than a toy model.
By fixing a set of primal variables at their optimal values, and assuming the solution was otherwise primal-dual optimal and is stored in CPLEX, it should take zero iterations to reoptimize the model after applying the fixes, and hence the same solution should be returned. But if no solution is stored in CPLEX and you reoptimize from scratch, then CPLEX may return a different (but also optimal) primal and/or dual solution.
Do you see iterations in the log?
As a debugging step, try to write out the model before and after fixing, then diff the two files to make sure there's not a modeling/programming mistake on your side.
You are also welcome to contact me at bo.jensen (at) dk (dot) ibm (dot) com and I will try to help you as I don't follow stack overflow closely.
My guess would be that when you are setting up the subproblem you fail to account for the reduced cost of variables out of basis at their upper bound. Those reduced costs are essentially the dual values of the upper bound constraint and hence must be taken into account when setting up the subproblem.
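In symbols (standard LP duality, for a minimization problem), the reduced cost of a variable x_j is

d_j = c_j - π^T A_j

where A_j is the column of x_j and π holds the duals of the linear constraints. A variable that is nonbasic at its upper bound has d_j ≤ 0, and -d_j is exactly the dual value of the bound constraint x_j ≤ u_j; that is the term the pricing step has to account for.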
This sort of accidental omission typically happens when the generated variables are created with an upper bound.
If this really is your problem, then your easiest solution may be simply not specifying upper bounds for the new variables, which you can do if the upper bound is implied (e.g., from the new variable being part of a clique constraint).

Using the function "mod" in "if" and "select case" statements

I wrote a little code in Fortran, but the code doesn't behave as I expected and I can't figure out where the problem is.
I will not put the code here because it has 1200 lines, but here is its philosophy:
I create a 3D grid represented by a four-dimensional array (I store a two-element vector at each point of the grid, giving the nature of the site and what occupies it). This grid represents what we call a crystal (where atoms can be found periodically).
Once this grid is constructed, the code scans each point of the grid and looks at the neighboring sites to count the different types of atoms and the vacancies.
For this last point, I use a triply nested loop to explore the different sites, and I check each neighboring site using either if or select case statements. As I want my grid to be periodic, I use the function mod in the arguments of the if or select case.
The problem is that sometimes it finds a different element in a neighboring site than the element actually stored at that site. As an example:
In the two output files where all the coordinates are written with the element type, I have grid(0,0,1) = -1 (which corresponds to an empty site).
But while the code is looking at the neighboring sites of grid(0,0,1), it says that there is actually an element indexed 2 in grid(0,0,1).
I looked carefully at the block in the triply nested loop, but it seems fine.
I would like to know if anyone has already met this kind of problem, or knows whether there are problems with using mod in an if or select case argument.
If some of you want to look closer, I can send you the code, with some explanations.
Arrays are usually dimensioned as:
REAL(KIND=8), DIMENSION(0:N) :: A
or
REAL(KIND=8), DIMENSION(N) :: A
In the latter example, the indices are assumed to start at 1.
You could also go (-N:N) or (10:191).
If you use the compiler switch '-check bounds' or '-check all', you will see whether you are going outside the array bounds. This is not an uncommon thing to get hosed up, but the compiler will abort quickly when an index is outside its dimension.
Once it works, remove the -check bounds and/or -check all.
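One pitfall worth checking whenever mod drives periodic indexing (not necessarily the cause here): Fortran's mod keeps the sign of its first argument, so negative neighbor offsets do not wrap, while modulo does what periodic boundaries need. A minimal sketch, with an illustrative grid extent n:

program mod_demo
  implicit none
  integer, parameter :: n = 10   ! illustrative grid extent, sites 0..n-1
  ! mod keeps the sign of the first argument, so a -1 neighbor offset
  ! falls outside a 0..n-1 periodic grid:
  print *, mod(-1, n)      ! prints -1
  ! modulo takes the sign of the second argument and wraps correctly:
  print *, modulo(-1, n)   ! prints 9
end program mod_demo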
Thanks for your consideration francescalus and haraldkl.
It was not related to the dimensions of the arrays, Holmz, but thank you for trying to help.
It seems I finally succeeded in fixing it. I will post another answer if I fully understand why it was not working properly.
Apparently, it was related to the combination of a different argument order between a procedure call and the subroutine header, plus a declaration in the subroutine with intent(inout).
It was as if the intent(inout) was masking the problem, which is a bit strange to me.
Some explanations about the code:
As I said, the code creates a 3D grid where each intersection of the grid corresponds to a crystallographic site. I assign a value to each site: -1 for an empty site, 1 for a crystal atom (0 if there is a vacancy instead of a crystal atom), and 2, 3, 4, 5 for different impurities. Actually, the empty sites and the sites which receive crystal atoms are not of the same type; that's why an empty site and a vacancy are distinguished. The impurities can only occupy the empty sites and are forbidden to occupy a crystal site.
The aim of the code is to explore the configurational space of the system, in other words all the possible distributions we can obtain with the different elements. To do so, I start from an initial configuration, randomly choose two sites (respecting the occupation rules), and virtually swap them. I calculate the energy of the old and new configurations; if the new one has a lower energy I keep it, and if not, I keep the old one. The energy calculation is based on knowing the environment of each vacancy and impurity, so we need to know their neighbors. And I repeat the whole procedure again and again to converge to the most stable (so the most probable) configuration.
The next step is to include the temperature effect and to add the second type of empty site.
Have a nice day,
M.

AIMMS artificial variable constraint

I have difficulty formulating the constraints correctly. Dumbed down version of the problem:
There are 12 time units and 3 products. The demand d_{i,t} for product i at time t is known in advance, and the resources r_{i,t} (8 in all; product i uses the non-i resources) necessary for product i at time t are known as well. We need to minimize the holding costs h_i by deciding how much of product i to produce at time t, which is called x_{i,t}. The starting inventory for every product is 6. To help, I introduce the stock level s_{i,t}. This equates to the following formulation:
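In outline, the inventory part reads (a sketch based on the description above; the resource constraints coupling x_{i,t} to r_{i,t} are omitted):

min sum_{i,t} h_i * s_{i,t}
s.t. s_{i,1} = 6 + x_{i,1} - d_{i,1}              for all i
     s_{i,t} = s_{i,t-1} + x_{i,t} - d_{i,t}      for all i, t > 1
     x_{i,t} >= 0, s_{i,t} >= 0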
I got this working using the Excel solver, but I need to do this in AIMMS. The stock variable s is giving me issues: I can't get it to work using an if statement to condition on t = 1, nor would I know how to split it into two constraints, given that the first period of the second constraint would need to reference the first constraint.
You can specify an index domain in your constraint attributes as follows:
(t,i) | t > 1
An if t > 1 statement should work if the set of time instances is a subset of the integers. If not, you should use ord(t) > 1, i.e.
if ord(t) > 1
then
Your_Constraint
endif
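Putting both pieces together, a sketch of what the two stock constraints could look like in AIMMS (the identifier names InitialStock and StockBalance are illustrative, using the notation of the question):

Constraint InitialStock {
    IndexDomain: (i,t) | ord(t) = 1;
    Definition: s(i,t) = 6 + x(i,t) - d(i,t);
}
Constraint StockBalance {
    IndexDomain: (i,t) | ord(t) > 1;
    ! t - 1 is the (non-cyclic) lag operator on the ordered set of time units
    Definition: s(i,t) = s(i, t - 1) + x(i,t) - d(i,t);
}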

Stata seems to be ignoring my starting values in maximum likelihood estimation

I am trying to estimate a maximum likelihood model, and it is running into convergence problems in Stata. The actual model is quite complicated, but it converges without trouble in R when supplied with appropriate starting values. However, I cannot seem to get Stata to accept the starting values I provide.
I have included a simple example below estimating the mean of a Poisson distribution. This is not the actual model I am trying to estimate, but it demonstrates my problem. I set the trace option, which allows you to see the parameters as Stata searches the likelihood surface.
Although I use init to set a starting value of 0.5, the first iteration still shows that Stata is trying a coefficient of 4.
Why is this? How can I force the estimation procedure to use my starting values?
Thanks!
clear
set obs 10000  // sample size assumed for illustration; the original omits this setup
generate y = rpoisson(4)
capture program drop mypoisson
program define mypoisson
args lnf mu
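// log likelihood of one Poisson observation: y*ln(mu) - mu - ln(y!)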
quietly replace `lnf' = $ML_y1*ln(`mu') - `mu' - lnfactorial($ML_y1)
end
ml model lf mypoisson (mean:y=)
ml init 0.5, copy
ml maximize, iterate(2) trace
Output:
Iteration 0:
Parameter vector:
mean:
_cons
r1 4
Added: Stata doesn't ignore the initial value. If you look at the output of the ml maximize command, the first line in the listing will be titled
initial: log likelihood =
Following the equal sign is the value of the likelihood for the parameter value set in the init statement.
I don't know how the search(off) or search(norescale) solutions affect the subsequent likelihood calculations, so these solutions might still be worthwhile.
Original "solutions":
To force a start at your initial value, add the search(off) option to ml maximize:
ml maximize, iterate(2) trace search(off)
You can also force the use of the initial value with search(norescale). See Jeff Pitblado's post at http://www.stata.com/statalist/archive/2006-07/msg00499.html.