Creating several variables/constraints in one shot in SCIP - linear-programming

I have a C++ application that can use CPLEX, GUROBI or COIN interchangeably (via polymorphism and virtual methods) to solve an LP, and now I'm adding support for SCIP.
In the first three solvers I can add several constraints/variables at once, using the calls:
Constraints
CPLEX = CPXXaddrows
GUROBI = GRBXaddconstrs
COIN = OsiSolverInterface::addRows
Variables
CPLEX = CPXXaddcols
GUROBI = GRBXaddvars
COIN = OsiSolverInterface::addCols
But for SCIP, I've only figured out how to do it one by one:
Constraints
SCIPcreateConsBasicLinear -> SCIPaddCoefLinear -> SCIPaddCons -> SCIPreleaseCons
Variables
SCIPcreateVarBasic -> SCIPaddVar -> SCIPreleaseVar
My question is: Is there a way in SCIP to create several variables/constraints in one instruction? I ask because, at least for the other solvers, doing it that way is a lot faster than doing it one by one.
Thanks a lot in advance.

In SCIP you have to create variables and constraints one by one; there is no batch equivalent of calls like CPXXaddrows or CPXXaddcols.
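If the goal is to keep all four backends behind one batched interface, a thin wrapper that loops over SCIP's single-item calls does the job. A minimal sketch of the pattern in Python (the add_one_var callback stands in for the SCIPcreateVarBasic -> SCIPaddVar -> SCIPreleaseVar sequence; all names here are illustrative, not part of the SCIP API):

```python
def add_vars_batch(add_one_var, lbs, ubs, objs):
    """Emulate a CPXXaddcols-style batch call on top of a per-variable API."""
    added = []
    for lb, ub, obj in zip(lbs, ubs, objs):
        # In a SCIP backend this callback would wrap
        # SCIPcreateVarBasic -> SCIPaddVar -> SCIPreleaseVar.
        added.append(add_one_var(lb, ub, obj))
    return added
```

The same pattern works for constraints, wrapping SCIPcreateConsBasicLinear -> SCIPaddCons -> SCIPreleaseCons.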

Related

Pyomo - How to consider all scenarios in the objective function within the ReferenceModel?

I am facing a problem in defining the objective function shown in Figure: Objective Function.
Here, p_n = 1/(number of scenarios) and W_n is the final wealth in scenario n. The scenarios are equiprobable, so p_n is known in advance.
The problem is: how can I define the sum of the products p_n * W_n over EVERY scenario, if the deterministic ReferenceModel considers only one scenario? In other words, given that the set of scenarios is added only after the objective function has been defined in the deterministic model, how can I take all the scenarios into account?
Moreover, I have to count the number of times in which the Final Wealth is lower than the Target Wealth, in a given pre-specified stage. How can I set the counter, if the Deterministic ReferenceModel considers just one scenario?
I am a new Pyomo user, and a quantitative finance student without much knowledge of Operations Research, so I apologize in advance if the questions are obvious or silly.
Thank you!

How to use CPLEX Multi-Objective (ObjectiveSense) parameters

I am working on a pure LP problem and using Multi-Objective to solve it. When I use an objective with weights:
solver1.minimize(A * 1000
                 + B * 10000
                 + C * 100
                 + D * 1)
the solver is able to get the optimal result successfully.
But when I use set_multi_objective with ObjectiveSense:
solver1.set_multi_objective(ObjectiveSense.Minimize, [A, B, C, D],
                            priorities=[1, 2, 0, 1], weights=[1000, 1, 1, 1])
the solver reports infeasible after the first hierarchy of the solve and gives: "Non-optimal status: infeasible. CPLEX Error 1300: Failure to solve multi-objective subproblem."
I am trying to figure out how to use parameters reltol and abstol to get a feasible solution. Any ideas or examples?
Also, I am trying to understand why the multi-objective solve goes infeasible even though we have a solution with the weighted objective. Any input on this will help.
Thanks!
This page
https://www.ibm.com/docs/en/icos/12.10.0?topic=optimization-solving-multiple-objective-problems
explains in detail how tolerances are used to solve multi-objective problems.
To summarize, two different algorithms are used for LP and for MIP.
The LP algorithm uses LP bases, for which the absolute tolerance is interpreted as a limit on reduced costs, and the relative tolerance is not used.
The MIP algorithm uses an iterative solve approach, for which tolerances are interpreted as tolerances on each intermediate sub-objective.
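The iterative idea can be illustrated with a toy lexicographic minimizer over a finite set of candidate points: each stage fixes its objective to (near-)optimal, and an absolute tolerance controls how much slack the next stage inherits. This is only a sketch of the concept, not CPLEX's actual algorithm:

```python
def lexicographic_min(points, objectives, abstol=0.0):
    """Minimize objectives in priority order over a finite candidate set.

    After each stage, keep only the candidates within `abstol` of that
    stage's optimum; a larger abstol leaves more room for later stages.
    """
    feasible = list(points)
    for obj in objectives:
        best = min(obj(p) for p in feasible)
        feasible = [p for p in feasible if obj(p) <= best + abstol]
    return feasible
```

With abstol = 0 a lower-priority objective can only break ties; with a positive abstol it may trade a little of a higher-priority objective for a better value of its own.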
We have seen cases where the LP algorithm may fail when the model has difficult numerical properties. To check for this, run the following line on your model:
print(mdl.cplex_matrix_stats)
The range of coefficients should not exceed 1e+6.
As a workaround, you can try the Model.solve_with_goals method in Docplex, which runs the iterative multi objective algorithm for any kind of model.

If condition in Cplex objective function

I am new to CPLEX. I'm solving an integer programming problem, but I have a problem with the objective function.
The problem is that I have a project with a due date D, and if the project is tardy, then I have a tardiness penalty b, so the term looks like b*(cn-D), where cn is the real completion time of the project, and it is a decision variable.
It must look like this:
if (cn - D) < 0 then the penalty term b*(cn - D) should be 0
I tried to use "if-then" constraint, but seems that it isn't working with decision variable.
I looked at questions similar to this one, but could not find a solution. Please help me define the correct objective function.
The standard way to model this is:
min sum(i, penalty(i)*Tardy(i))
Tardy(i) >= CompletionTime(i) - DueDate(i)
Tardy(i) >= 0
Tardy(i) is a non-negative variable and can never become negative. The other quantities are:
penalty(i): a constant indicating the cost of job i being tardy by one unit of time.
CompletionTime(i): a variable that holds the completion time of job i.
DueDate(i): a constant with the due date of job i.
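At the optimum, the two constraints drive Tardy(i) down to max(0, CompletionTime(i) - DueDate(i)), so only jobs that finish after their due date are paid for. A quick numeric check of the objective value in plain Python:

```python
def tardiness_cost(completion, due, penalty):
    # Tardy(i) = max(0, CompletionTime(i) - DueDate(i)) at the optimum:
    # early or on-time jobs contribute nothing to the objective.
    return sum(p * max(0.0, c - d)
               for c, d, p in zip(completion, due, penalty))
```

For example, with completion times (5, 3), due dates (4, 6) and penalties (2, 1), only the first job is tardy (by one unit), contributing 2 * 1 = 2 to the objective.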
The above measures the sum. Sometimes we also want to measure the count: the number of jobs that are tardy. This is to prevent many jobs being tardy. In the most general case one would have both the sum and the count in the objective with different weights or penalties.
There is a virtually unlimited number of papers showing MIP formulations on scheduling models that involve tardiness. Instead of reinventing the wheel, it may be useful to consult some of them and see what others have done to formulate this.

Steps for creating an optimizer on TensorFlow

I'm trying to implement a new optimizer that consists in large part of the Gradient Descent method (meaning I want to perform a few Gradient Descent steps, then do different operations on the output, and then repeat). Unfortunately, I found two pieces of information:
1. You can't perform a given number of steps with the optimizers. Am I wrong about that? It would seem a logical option to add.
2. Given that 1 is true, you need to code the optimizer as a C++ kernel, thus losing the powerful possibilities of TensorFlow (like computing gradients).
If both of them are true, then 2 makes no sense to me, and I'm trying to figure out what the correct way is to build a new optimizer (the algorithm and everything else are crystal clear).
Thanks a lot
I am not 100% sure about that, but I think you are right. However, I don't see the benefit of adding such an option to TensorFlow. The GD-based optimizers I know usually work like this:
for i in range(num_of_epochs):
    g = gradient_of_loss()
    some_storage = f(previous_storage, func(g))
    params = func2(previous_params, some_storage)
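That storage/update pattern, instantiated for classic momentum SGD in self-contained Python (the velocity v plays the role of some_storage; the constants are arbitrary):

```python
def momentum_sgd(grad, x0, lr=0.1, beta=0.9, epochs=300):
    v, x = 0.0, x0
    for _ in range(epochs):
        g = grad(x)
        v = beta * v + g   # some_storage = f(previous_storage, func(g))
        x = x - lr * v     # params = func2(previous_params, some_storage)
    return x
```

Minimizing f(x) = (x - 3)^2 by passing grad = lambda x: 2 * (x - 3) converges to x close to 3.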
If you need to perform a couple of optimization steps, you can simply do it in a loop:
train_op = optimizer.minimize(loss)
for i in range(10):
    sess.run(train_op)
I don't think a parameter like multitrain_op = optimizer.minimize(loss, steps) was needed for implementing the current optimizers, and the end user can easily simulate it with a loop, so that is probably why it was not added.
Let's take a look at a TF implementation of an example optimizer, Adam: python code, c++ code.
The "gradient handling" part is processed entirely by inheriting optimizer.Optimizer in python code. The python code only define types of storage to hold the moving window averages, square of gradients, etc, and executes c++ code passing to it the already calculated gradient.
The c++ code has 4 lines, updating the stored averages and parameters.
So, to your question "how to build an optimizer":
1. Define what you need to store between gradient computations.
2. Inherit from optimizer.Optimizer.
3. Implement updating the variables in C++.

LPSolve - specify constant coefficients

I'm using the LPSolve IDE to solve an LP problem. I have to test the model against about 10 or 20 different sets of parameters and compare the results.
Is there any way for me to keep the general model, but to specify the constants as I wish? For example, if I have the following constraint:
A >= [c]*B
I want to test how the model behaves when [c] = 10, [c] = 20, and so on. For now, I'm simply preparing different .lp files via search & replace, but:
a) it doesn't seem very efficient;
b) at some point I need a constraint of the form A >= B/[c] (i.e. A >= (1/[c])*B). It seems, however, that LPSolve doesn't recognize the division operator. Is specifying the value of 1/[c] directly each time the only option?
It is not completely clear which format you use with lp_solve. With the CPLEX LP format, for example, there is no better way: you cannot use division for a coefficient (or even multiplication, for that matter), and there is no way to 'include' another file or introduce a symbolic name for a parameter. It is a very simple language, not suitable for any complex task.
There are several solutions to your problem; which one to pick depends on whether you want something fast to implement, or something 'clean', reusable and with a short runtime (this is of course a compromise).
You can generate your .lp files from another language, e.g. Python or bash. This is a 'quick and dirty' solution: slow at runtime, but probably the fastest to implement.
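A sketch of that approach: compute 1/[c] in Python and write out the model text, so the generated file never needs a division operator. The tiny model below is made up for illustration, and the syntax assumes lp_solve's native LP format:

```python
def make_lp_model(c):
    """Emit an lp_solve LP-format model with the coefficient 1/c precomputed."""
    coef = 1.0 / c  # division happens here, in Python, not in the .lp file
    return "\n".join([
        "/* generated for c = %g */" % c,
        "max: A;",
        "A >= %g B;" % coef,  # i.e. A >= B/c
        "A <= 10;",
        "B >= 1;",
    ])
```

Looping over a list of [c] values and writing one file per value then replaces the search & replace step entirely.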
Like every LP solver I know of, lp_solve comes with several modelling interfaces: you can, for example, use the GNU MathProg format instead of your current one. It supports multiplication, division, conditionals, etc. (everything you are looking for; see section 3.1, 'numeric expressions').
Finally, you can use the lp_solve library directly from another programming language (e.g. C), which is the most flexible option, but it may require a bit more work.
See the lp_solve documentation for more details on the supported input formats and the API reference.