I am trying to understand how Gurobi works and have the following question.
Suppose I start with an ILP model 'm' and obtain a solution 'S' using m.optimize(). Now I add another constraint to the model and re-optimize. Does Gurobi solve the whole problem from scratch, or does it use the found solution 'S' as the starting point and proceed from there?
Thank you.
Gurobi, as well as every good solver out there, will try to use an available solution as the starting point to the modified problem, if appropriate. What you are asking is referred to as a warm start.
Specifically, this paragraph from Gurobi's documentation here is relevant to your question:
For linear models, the previously computed solution can be used as an efficient warm start for the modified model. The Gurobi solver retains the previous solution, so the next optimize call automatically starts from the previous solution.
So, yes, it will use the previous solution S and will proceed to re-optimize from there, taking the constraint you added into account.
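As a minimal illustration of that workflow (a hypothetical toy model, not yours, using the gurobipy API that appears in your question):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("ilp")
x = m.addVar(vtype=GRB.INTEGER, name="x")
y = m.addVar(vtype=GRB.INTEGER, name="y")
m.addConstr(2 * x + y <= 10, name="c0")
m.setObjective(x + y, GRB.MAXIMIZE)
m.optimize()  # first solve: produces solution S

m.addConstr(x - y <= 3, name="c1")  # modify the model
m.optimize()  # re-optimize: per the docs quoted above, Gurobi retains S
              # and starts from it rather than solving from scratch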
I am using or-tools with SCIP to solve a mixed-integer linear program. In other solvers, I know there are options to stop the solver once a certain objective value has been reached, for example the BestObjStop option in Gurobi. Is there a similar option in SCIP as well? If so, is this option accessible via or-tools in C++?
Currently, there is no function implemented that directly does this in SCIP. There is a workaround that should work though.
You can set the objective limit (function SCIPsetObjlimit) to only accept solutions that are better than the objective stop you want. Then you set SCIP to stop as soon as it has found a solution (parameter limits/bestsol set to 1).
I am not an or-tools user, so I am not sure how this is best achieved in or-tools. This recent thread has an answer that tells you how to set specific SCIP parameters from or-tools. Looking at the code on GitHub, it at least seems to me that you can set the objective limit when you call the Solve method with the right GScipParameters. Maybe an or-tools expert can improve this answer.
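To make the parameter idea concrete, here is a hedged sketch using or-tools' Python wrapper (the question asks about C++, but the mechanism of passing solver-specific parameters is the same; treat the exact parameter handling as an assumption to verify against your or-tools version):

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")
x = solver.IntVar(0, 10, "x")
solver.Maximize(x)

# Ask SCIP to stop at the first solution it accepts; the objective limit
# itself may still need to be set through SCIP's own API (SCIPsetObjlimit),
# as described above.
solver.SetSolverSpecificParametersAsString("limits/bestsol = 1")

status = solver.Solve()
if status in (pywraplp.Solver.OPTIMAL, pywraplp.Solver.FEASIBLE):
    print(x.solution_value())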
I have code using the lpsolve package which, in a fairly standard way, calculates a minimal price for given constraints. That works fine.
My next task is to see how the price changes if I exclude or change one constraint. A slightly higher price could be acceptable if, for example, one constraint can be excluded. Is there any way to calculate this sensitivity towards constraints? I understand that the brute-force way is to solve the problem again with one constraint excluded, but that is too computationally expensive. Is there an algorithm for this?
I have seen that one can calculate the sensitivity towards the coefficients in a constraint. But maybe there is a simple, elegant way for this. Thanks!
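For context, the standard LP notion behind this kind of sensitivity is the dual value (shadow price) of a constraint, which solvers report after a single solve. Here is a minimal sketch with gurobipy (used only for continuity with the first thread; lp_solve exposes similar information through its sensitivity-analysis functions):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("price")
x = m.addVar(name="x")
y = m.addVar(name="y")
m.setObjective(3 * x + 2 * y, GRB.MINIMIZE)
c = m.addConstr(x + y >= 10, name="demand")
m.optimize()

# Dual value (shadow price) of c: the local rate at which the optimal
# price changes per unit of relaxation of this constraint (LP only).
print(c.Pi)  # prints 2.0 for this toy model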
I'm using a function (NEQNF, manual page here) which I call using
call neqnf(SYSTEM_OF_EQUATIONS, x, xguess=x_GUESS, itmax = 10000)
where SYSTEM_OF_EQUATIONS is the subroutine that contains equations
f(1)=...x(2)...x(1)...
f(2)=...x(1)...x(4)...
f(3)=...x(3)...x(4)...
f(4)=...x(1)...x(5)...
f(5)=...x(1)...x(5)...
The function comes from the IMSL libraries for Fortran and lets me solve a non-linear system of five equations in five unknowns. Because there exists more than one solution (more than one set of five numbers, real or complex, that solves my system), how can I choose which set to "use" as the solution?
I link an online solver with a piece of my system already entered (only two unknowns in two equations; the other variables are constant in this example), which easily shows that there exists more than one solution:
example
To conclude my issue: I have to choose the set of values that lets the other variables be positive, so an easy check is the way to choose the solution.
I don't think the question has anything to do with programming, but I will show how I understand the problem.
You supply an initial guess, and the method then converges to some solution via a modified Newton method.
You can choose the root by the placement of the initial guess. However, the convergence pattern can be very unpredictable (even fractal - https://en.wikipedia.org/wiki/Newton_fractal ), and it may be very difficult to reach a particular root by choosing the initial guess.
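As a tiny illustration in Python (scipy's fsolve wraps a MINPACK hybrid method in the same spirit as IMSL's NEQNF; the equation is a made-up example), the root you converge to follows the initial guess:

from scipy.optimize import fsolve

def f(x):
    # x**2 = 4 has two real roots, +2 and -2
    return [x[0] ** 2 - 4.0]

print(fsolve(f, [1.0]))   # converges to [ 2.]
print(fsolve(f, [-1.0]))  # converges to [-2.]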
My professor gave me a binary linear programming problem, but it is slightly different from the optimization problems I am used to solving (i.e., it is probably not about maximizing or minimizing an objective function).
The problem is as follows,
Given a matrix M, for entries m_ij != 0, there are corresponding x_ijk variables.
Entries m_ij = 0 can be ignored.
x_ijk is either 0 or 1, and I want to check whether 5 x_ijk variables for each m_ij (that is, x_ij1, x_ij2, x_ij3, x_ij4, and x_ij5, where one of them is 1 and the others are 0) are enough to satisfy some conditions (a set of inequalities).
More simply, this is to check whether the set of constraints involving 5 x_ijk variables for each m_ij is valid (i.e., feasible).
I have solved some optimization problems, but I have never solved a problem without an objective function.
What should I set as my objective function here?
0? nothing?
I might be using lp_solve or CPLEX.
Thank you in advance for your advice!
That is correct, you can set an arbitrary constant value as an objective function.
Most of the solvers I have tried allow an empty objective function. Simply leave it out of your model.
Depending on the solver and the API you are using, it can happen that you have to set the coefficients of all variables in the objective to zero.
Don't worry, it will work.
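A minimal sketch of the idea, shown with gurobipy for concreteness (you mention lp_solve or CPLEX; both accept a constant or all-zero objective in the same spirit, though the exact calls differ):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("feasibility")
x = m.addVars(5, vtype=GRB.BINARY, name="x")  # stand-ins for x_ij1..x_ij5
m.addConstr(x.sum() == 1, name="pick_one")    # exactly one of the five is 1
m.setObjective(0)                             # constant objective: a pure feasibility check
m.optimize()

if m.Status == GRB.OPTIMAL:
    print("feasible:", [int(x[k].X) for k in range(5)])
elif m.Status == GRB.INFEASIBLE:
    print("infeasible")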
In response to your comment: yes, constraint programming tools can provide better performance on feasibility problems than LP solvers (such as CPLEX). I played with the IBM ILOG CPLEX CP Optimizer a few months ago; it is free for academic users. Both the LP solver and the CP solver failed on my problems, so don't expect a miracle from constraint programming.
Keep in mind that the time needed to solve a constraint program grows exponentially with the size of the problem in the worst case. Sooner or later, your problems will most likely become unsolvable with either tool.
Just for your information: in the end, the constraint programming solver will call the LP solver (for example CPLEX).
My advice is: try the tool you already have / use the problem formulation that is more natural to you. Check whether the tool can solve your problem. Switch tool only if the tool fails and you cannot improve your model.
I have a set of problems (sets of equations and inequalities) for which I know that all variables have to be integers and that there are finitely many solutions. I know that if I take an arbitrary objective function and set an LP or MIP solver on the problem, it finds a solution; however, I want all solutions to the problem, and of course as efficiently as possible. I don't really care about optimizing anything, but apparently most of the software that deals with such problems does. Is there any solver that can do that? If so, which one is the best or simplest, or which one would you recommend? Ideally one that can be used as a C/C++ library.
There is a nice blog post by Paul Rubin on how to find the K best solutions, which can be easily generalized to get all the solutions. As Ali suggested, one of the approaches is to use a solution pool. Two other approaches are:
Use an incumbent callback to track and reject solutions.
Use an incumbent callback with solution injection.
See the blog post for details.
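A hedged sketch of the reject-and-continue idea for 0/1 variables (gurobipy is used for continuity with the first thread; the callback variants in the blog post avoid restarting the search from scratch):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("enumerate")
m.Params.OutputFlag = 0
x = m.addVars(3, vtype=GRB.BINARY, name="x")
m.addConstr(x[0] + x[1] + x[2] >= 1)  # toy constraint set
m.setObjective(0)                     # no real objective: we want every solution

solutions = []
while True:
    m.optimize()
    if m.Status != GRB.OPTIMAL:
        break  # the model has become infeasible: everything is enumerated
    sol = [int(round(x[k].X)) for k in range(3)]
    solutions.append(sol)
    # "No-good" cut: at least one variable must differ from this solution
    m.addConstr(
        gp.quicksum(x[k] if sol[k] == 0 else 1 - x[k] for k in range(3)) >= 1
    )

print(solutions)  # the 7 feasible 0/1 vectors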
IBM ILOG CPLEX has a solution pool feature and it's free for academic purposes.
I guess you can probably get all solutions if you set the maximum pool size sufficiently large, but I don't know for sure; I have never tried.
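For reference, a small sketch of the pool idea, written with gurobipy since that API already appears above (CPLEX's solution pool is the feature named in this answer; the parameter names below are Gurobi's analogues):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("pool")
x = m.addVars(3, vtype=GRB.BINARY, name="x")
m.addConstr(x.sum() >= 1)
m.setObjective(0)

m.Params.PoolSearchMode = 2    # systematically search for additional solutions
m.Params.PoolSolutions = 100   # keep up to this many solutions in the pool
m.optimize()

for i in range(m.SolCount):
    m.Params.SolutionNumber = i
    print([int(round(x[k].Xn)) for k in range(3)])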