glpsol tool split MIP and LP solutions - linear-programming

I am using glpsol to solve a rather big integer optimisation problem. The simplex algorithm runs for about 30 minutes on it, and after that glpsol tries to find an integer solution using the MIP solver.
Question: can I split this into two steps using only the glpsol command-line tool, or should I use the GLPK API?
I have tried the "--read" and "--nomip" options; according to the documentation, the read option is
-r filename, --read filename
to read the solution from the provided filename rather than to find it with the solver
in this format:
glpsol --cpxlp WhiskasModel.lp --write WhiskasSolution.mip --nomip
and after that
glpsol --cpxlp WhiskasModel.lp --read WhiskasSolution.mip
but I receive an error:
Reading MIP solution from `WhiskasModel.mip'...
WhiskasModel.mip:33702: non-integer column value
Unable to read problem solution
and that is of course expected, because WhiskasModel.mip contains an LP solution with non-integer values.
I find the glpsol tool rather powerful and I want to play with some MIP options, but waiting 30 minutes at each step is rather tedious. Can I tell it, "use this LP solution and start the MIP from there"?

One thing to try: write the LP basis to a plain-text file, and then, when restarting, use that LP solution as the initial basis.
Try
-w WhiskasBasis.txt
and when restarting to continue on as an IP, ask it to use that basis by adding the --ini option:
--ini WhiskasBasis.txt
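Concretely, the two runs would look something like this (option names as listed by glpsol --help; I have not timed this on your model):
glpsol --cpxlp WhiskasModel.lp --nomip -w WhiskasBasis.txt
glpsol --cpxlp WhiskasModel.lp --ini WhiskasBasis.txt -o WhiskasSolution.txt
Note that --ini also disables the LP presolver, since the saved basis has to be loaded into the solver directly.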
Other suggestions:
I wouldn't use the command-line tool if you are going to be doing this often. The GLPK API (for your language of choice), together with an IDE, will give you much more flexibility and control; this link mentions several bindings, and there is a sketch below.
If you post your MIP model formulation with the objective and the constraints (maybe as a separate question), you might get suggestions to speed things up. Sometimes there are relaxations and sub-problems that can help tremendously.
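If you do go the API route, the split you want is literally two calls: glp_simplex for the LP relaxation, then glp_intopt, which, when its presolver is disabled (the default), starts branch-and-cut from the basis that glp_simplex left behind. A rough Python sketch via the swiglpk wrapper (untested; swiglpk mirrors the C API names one-to-one):
import swiglpk as glp

P = glp.glp_create_prob()
glp.glp_read_lp(P, None, "WhiskasModel.lp")  # CPLEX LP format

glp.glp_simplex(P, None)       # step 1: solve the LP relaxation

parm = glp.glp_iocp()
glp.glp_init_iocp(parm)        # MIP presolver off by default,
glp.glp_intopt(P, parm)        # so step 2 reuses the LP basis

print(glp.glp_mip_obj_val(P))  # MIP objective value
glp.glp_delete_prob(P)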
Hope that helps.

Related

infeasible row in Cplex C++

I have a small question: I am solving a MIP model, coded in C++ and solved with the CPLEX solver. I remember that when I tested the model on relatively small instances, it gave me "infeasibility row …". Now I am testing the same model on a large instance; I get the infeasibility, but it does not tell me which row causes it. How can I find which parameter or constraint causes the infeasibility? When the larger instance is tested, presolve is performed; might that cause the infeasibility? I googled the conflict refiner but could not find a small, clear example explaining how to invoke it. I will be very happy if you have any suggestions or ideas.
Thank you
Another way to find where the infeasibility comes from is to export your model as an LP file (or similar), then try to solve it with the standalone cplex. It helps if you name your variables and constraints sensibly. Then you have all the interactive tools in cplex to help you find where the issues are; for example, the conflict-refiner session sketched below.
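A session along these lines runs the conflict refiner on the exported file (model.lp is a placeholder name; check the exact commands against your CPLEX version's interactive help):
read model.lp
optimize
conflict
display conflict all
The conflict command refines the infeasible model down to a small conflicting subset of constraints and bounds, and display conflict all prints them.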
In C++ you should have a look at FeasOpt.
In the documentation see
CPLEX > User's Manual for CPLEX > Infeasibility and unboundedness
If you model in OPL, you could call the relaxation from the Concert C++ APIs.

How to obtain iterations and runtime of the Feasibility Pump from the Coin-OR Framework

I have programmed an algorithm that finds feasible points for mixed-integer convex optimization problems. Now I wish to compare it with the Feasibility Pump for mixed-integer nonlinear programs on a testbed from the MINLPlib library.
I have access to the BONMIN solver from the COIN-OR project, where the Feasibility Pump is also implemented, via Pyomo. Here is a list of possible options for this solver.
My questions are:
Are the following options correct for testing the (plain-vanilla) Feasibility Pump?
opt = SolverFactory('bonmin')
opt.options['bonmin.algorithm'] = 'b-ifp' # Iterated Feasibility Pump as solver
opt.options['bonmin.pump_for_minlp'] = 'yes' # Is this needed?
opt.options['bonmin.solution_limit'] = '1' #For terminating after 1st feasible point
If not, any hint how to do it correctly is appreciated.
How do I access the number of iterations (i.e. pumping cycles) of the Feasibility Pump? I can see iteration information in the printed output, but it would be very helpful if it were stored in some variable.
Pyomo calls Bonmin through the ASL (AMPL Solver Library) interface, so whatever options work for AMPL should be the same ones that are appropriate here.
As for the iteration information, there are various ways of capturing the printed output and parsing it to retrieve what you need. The most straightforward is probably to pipe the output to a file and read it in a small post-processing script or function, for example:
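A rough sketch of that post-processing (assumes a Pyomo model object built elsewhere; the logfile keyword is standard Pyomo, but the regex is a placeholder you must adapt to the actual wording of the iteration lines in your Bonmin log):
import re
from pyomo.environ import SolverFactory

opt = SolverFactory('bonmin')
opt.options['bonmin.algorithm'] = 'B-iFP'   # iterated Feasibility Pump
results = opt.solve(model, logfile='bonmin.log', tee=True)

with open('bonmin.log') as f:
    log = f.read()
# Placeholder pattern: replace it with whatever the pump's iteration
# lines actually look like in your log file.
cycles = re.findall(r'FP iteration\s+(\d+)', log)
print('pump cycles:', len(cycles))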

Is tf.py_func allowed at online prediction time?

Is tf.py_func allowed at online prediction time?
If yes, are there any examples of how to use it?
Does the answer change if I need to install additional pip packages?
My use case: I work with text and need to do word stemming (using the Porter stemmer). I know how to do it in Python, but TensorFlow doesn't have ops for that. I would like to use the same text processing at training and prediction time, and would therefore like to encode it all into a TensorFlow graph.
https://www.tensorflow.org/api_docs/python/tf/py_func comes with known limitations, and I would like to know whether it will work during training and online prediction before I invest more time into it.
Thanks
Unfortunately, no. py_func cannot be restored from a saved model. However, since your use case involves pre-processing, you can just invoke the py_func explicitly in all three (train, eval, serving) input functions. This won't work if the py_func is in the middle of your graph, but for stemming it should work just fine.
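For example, a TF 1.x sketch of that pre-processing step (the NLTK PorterStemmer is an assumption here; any Python stemmer works the same way):
import numpy as np
import tensorflow as tf
from nltk.stem import PorterStemmer  # assumed installed

_stemmer = PorterStemmer()

def _stem_tokens(tokens):
    # tokens arrives as a numpy array of bytes; stem each token in Python
    return np.array([_stemmer.stem(t.decode('utf-8')).encode('utf-8')
                     for t in tokens])

def stem_op(tokens):
    # Call this from the train, eval, AND serving input functions so that
    # all three pipelines apply identical text processing.
    return tf.py_func(_stem_tokens, [tokens], tf.string)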

specifying tolerance for GLPK solver in PuLP Python

I am running the PuLP programming library in Python 2.7.8 on 32-bit Windows, using GLPK as my solver for a mixed-integer linear programming problem. The solver converges to within approximately 1% of the optimum quickly, but the time to compute the exact optimal solution is high. Is there a way to specify a percentage tolerance for the GLPK solver through PuLP? I searched https://pythonhosted.org/PuLP/solvers.html but it doesn't give an answer for GLPK.
If you run "glpsol" on the command line with "--help", you will see the option "--mipgap tol", where tol is the relative MIP gap tolerance.
So, in PuLP, have you tried this:
model.solve(GLPK(options=['--mipgap', '0.01']))
(from this discussion a while ago; and notice how you can use this same method to pass in whatever further arguments you please).
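For context, here is a minimal end-to-end sketch (the model itself is a made-up toy; only the options list matters):
from pulp import LpProblem, LpVariable, LpMaximize, LpInteger, GLPK, value

prob = LpProblem("demo", LpMaximize)
x = LpVariable("x", 0, 10, LpInteger)
y = LpVariable("y", 0, 10, LpInteger)
prob += 3 * x + 2 * y                            # objective
prob += 2 * x + y <= 11                          # a constraint
prob.solve(GLPK(options=['--mipgap', '0.01']))   # accept anything within 1%
print(value(x), value(y))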
Furthermore, I went into the source code ("solvers.py") and took a look at how GLPK expects its "options" arguments, and indeed it expects them as above (look at around line 345 in the file, reproduced below):
proc = ["glpsol", "--cpxlp", tmpLp, "-o", tmpSol]
if not self.mip: proc.append('--nomip')
proc.extend(self.options)
So you see that "proc" (the command later run via Python's "subprocess") is extended with whatever you specify via "options" (stored in the variable self.options). So the approach above (passing '--mipgap' and its value in a list) is (still) correct.
Finally, I have not tried that myself, but I hope this helps.

Cross Validation in libsvm

I'm using libsvm library in my project and have recently discovered that it provides out-of-the-box cross validation.
I'm checking the documentation, and it clearly says that I have to call svm-train with the -v switch to use the CV feature. When I call it with the -v switch, however, I do not get a model file, which svm-predict needs.
Implementing a Support Vector Machine from scratch is beyond the scope of my project, so I'd rather fix this one if it is broken, or ask the community for support.
Can anybody help with that?
Here's the link to the library, implemented in C and C++, and here is the paper that describes how to use it.
That is because libsvm uses CV only for parameter selection.
From libsvm FAQ:
Q: After doing cross validation, why there is no model file outputted ?
Cross validation is used for selecting good parameters. After finding them, you want to re-train the whole data without the -v option.
If you are going to use CV to estimate the quality of a classifier on your data, you should implement external cross-validation yourself: split the data, train on one part, and test on the other.
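For the parameter-selection workflow the FAQ describes, the calls look like this (file names and parameter values are placeholders):
svm-train -v 5 -c 1 -g 0.5 train.scale            # 5-fold CV: prints accuracy, writes no model
svm-train -c 1 -g 0.5 train.scale train.model     # retrain on all data, writes the model file
svm-predict test.scale train.model predictions.txt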
It's been a while since I used libsvm, so I don't think I have the answer you're looking for, but if you run the cross-validation and are satisfied with the results, running libsvm with the same parameters without the -v switch will train a model on the full data with those settings.