Specifying tolerance for the GLPK solver in PuLP (Python 2.7)

I am running the PuLP modelling library in Python 2.7.8 on 32-bit Windows, using GLPK as the solver for a mixed-integer linear programming problem. The solver converges to within about 1% of the optimum quickly; however, computing the exact optimal solution takes a long time. Is there a way to specify a percentage tolerance for the GLPK solver through PuLP? I searched https://pythonhosted.org/PuLP/solvers.html, but it doesn't give an answer for the GLPK solver.

If you run "glpsol" on the command line with "--help", you will see the option "--mipgap tol", where tol is the relative MIP gap tolerance.
So, in PuLP, have you tried this:
model.solve(GLPK(options=['--mipgap', '0.01']))
(from this discussion a while ago). Notice that you can use this same method to pass in any other glpsol arguments you please.
Furthermore, I went into the source code ("solvers.py") and took a look at how GLPK expects its "options" arguments, and indeed it expects them as above (see line 345 or so of the file, reproduced below):
proc = ["glpsol", "--cpxlp", tmpLp, "-o", tmpSol]
if not self.mip: proc.append('--nomip')
proc.extend(self.options)
So you see that "proc" (the command later run via Python's "subprocess") is extended with whatever you specify via "options" (stored in the variable self.options). So the approach above (passing '--mipgap' and its value in a list) is still correct.
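For completeness, here is a minimal self-contained sketch; the toy model and the "--tmlim" (time limit in seconds) flag are illustrative additions, but both flags are standard glpsol options:
from pulp import LpProblem, LpMinimize, LpVariable, GLPK

# A toy MIP: minimise an integer x subject to x >= 1.5.
model = LpProblem("example", LpMinimize)
x = LpVariable("x", lowBound=0, cat="Integer")
model += x            # objective
model += x >= 1.5     # constraint

# Each flag and its value are separate list elements, exactly as they
# would appear on the glpsol command line.
model.solve(GLPK(options=['--mipgap', '0.01', '--tmlim', '60']))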
Finally, I have not tried this myself, but I hope it helps.

Related

Changing IPOPT options with pyomo doesn't work

I am using the IPOPT solver to solve KKT conditions (a bunch of equality constraints and complementarity conditions).
I assign the solver for the complementarity problem with the line below:
solver = po.SolverFactory('mpec_nlp')
Then, according to the IPOPT documentation, I change the maximum number of iterations:
solver.options['max_iter'] = 1000
But the solver doesn't listen to me and still stops at its default maximum of 3000 iterations.
Do you have any suggestions on how to make it work?
Consider creating an ipopt.opt file with a plain-text editor such as Notepad. Write max_iter 500 into the file and place it in your working directory (the directory your code is running from); IPOPT reads this file automatically when it starts.
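If you prefer not to create the file by hand, here is a minimal sketch of writing it from Python; IPOPT looks for "ipopt.opt" in the current working directory at startup:
# Write IPOPT's option file so the solver picks it up automatically
# on its next run from this directory.
with open("ipopt.opt", "w") as f:
    f.write("max_iter 500\n")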

Initializing IPOPT when using pyomo parmest

I am learning to use pyomo parmest. I am trying to recreate the following parameter-estimation example. The code that I created is in the following Jupyter notebook. IPOPT stops with the message "maximum iterations exceeded" when using collocation, but solves the problem with finite-difference discretization. Since collocation is said to be typically more robust, I would like to know what I might be doing wrong with the collocation discretization.
I had originally used ncp = 4 collocation points in the discretization. When I changed to ncp = 2, IPOPT ran without issues. The updated IPython notebook is in this location.
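For reference, a hedged sketch of where ncp enters with pyomo.dae (the toy model, nfe = 20, and the Radau scheme are illustrative assumptions, not the notebook's actual settings):
from pyomo.environ import ConcreteModel, TransformationFactory
from pyomo.dae import ContinuousSet

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 1))

# ncp sets the number of collocation points per finite element;
# dropping it from 4 to 2 was the fix described above.
TransformationFactory('dae.collocation').apply_to(
    m, nfe=20, ncp=2, scheme='LAGRANGE-RADAU')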

How to run half precision inference on a TensorRT model, written with TensorRT C++ API?

I'm trying to run half-precision inference with a model written natively in the TensorRT C++ API (not parsed from another framework such as Caffe or TensorFlow).
To the best of my knowledge there is no public working example of this; the closest thing I found is the sampleMLP sample code released with TensorRT 4.0.0.3, yet its release notes say that fp16 is not supported.
My toy example code can be found in this repo. It contains the API-implemented architecture and inference routine, plus the Python script I use to convert my dictionary of trained weights to the wtd TensorRT format.
My toy architecture consists of a single convolution. The goal is to obtain similar results in fp32 and fp16, apart from some reasonable loss of precision. The code seems to work with fp32, whereas with fp16 inference I obtain values of totally different orders of magnitude (~1e40), so it looks like I'm doing something wrong during the conversions.
I'd appreciate any help in understanding the problem.
Thanks, f
After quickly reading through your code, I can see you did more than is necessary to get a half-precision optimized network. You shouldn't manually convert the loaded weights from float32 to float16 yourself. Instead, create your network as you normally would, and call nvinfer1::IBuilder::setFp16Mode(true) on your nvinfer1::IBuilder object to let TensorRT do the conversions for you where suitable.
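For reference, the same idea in the TensorRT Python bindings looks roughly like this (a hedged sketch against the TensorRT 4/5-era Python API; the attribute and method names below may differ in newer releases):
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network()
# ... define the layers with ordinary float32 weights as usual ...

# Ask TensorRT to run layers in fp16 where suitable, instead of
# converting the weights by hand.
builder.fp16_mode = True
engine = builder.build_cuda_engine(network)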

Is tf.py_func allowed at online prediction time?

Is tf.py_func allowed at online prediction time?
If yes, are there any examples of how to use it?
Does the answer change if I need to install additional pip packages?
My use case: I work with text and need to do word stemming (using the Porter stemmer). I know how to do it in Python, but TensorFlow doesn't have ops for that. I would like to use the same text processing at training and prediction time, and would therefore like to encode it all into a TensorFlow graph.
https://www.tensorflow.org/api_docs/python/tf/py_func comes with known limitations, and I would like to know whether it will work during training and online prediction before I invest more time into it.
Thanks
Unfortunately, no. A py_func cannot be restored from a saved model. However, since your use case involves pre-processing, you can just invoke the py_func explicitly in all three (train, eval, serving) input functions. This won't work if the py_func is in the middle of your graph, but for stemming it should work just fine.
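A hedged sketch of what that can look like (TF 1.x API; the NLTK Porter stemmer and the toy word list are illustrative assumptions):
import numpy as np
import tensorflow as tf
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_batch(words):
    # words arrives as a numpy array of byte strings
    return np.array([stemmer.stem(w.decode('utf-8')).encode('utf-8')
                     for w in words])

def train_input_fn():
    words = tf.constant([b'running', b'jumps', b'easily'])
    # The py_func lives in the input pipeline, not in the exported
    # graph; repeat the same call in the eval and serving input
    # functions so all three see identically processed text.
    stemmed = tf.py_func(stem_batch, [words], tf.string)
    stemmed.set_shape(words.get_shape())  # py_func drops shape info
    return stemmed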

glpsol tool split MIP and LP solutions

I am using glpsol to solve a rather big integer optimisation problem. The simplex algorithm runs on it for about 30 minutes, and after that glpsol tries to find an integer solution using the MIP solver.
Question: can I split this into two steps using only the glpsol command-line tool, or do I have to use the GLPK API?
I have tried the "--nomip" and "--read" options; the documentation describes "--read" as
-r filename, --read filename
to read the solution from the provided filename rather than to find it with the solver
in this format:
glpsol --cpxlp WhiskasModel.lp --write WhiskasSolution.mip --nomip
and after that
glpsol --cpxlp WhiskasModel.lp --read WhiskasSolution.mip
but I receive an error:
Reading MIP solution from `WhiskasModel.mip'...
WhiskasModel.mip:33702: non-integer column value
Unable to read problem solution
and that is of course true, because WhiskasModel.mip contains an LP solution with non-integer values.
I find the glpsol toolkit rather powerful and want to play with some of the MIP options, but waiting 30 minutes at each step is rather boring. Can I tell it, "use this LP solution and start the MIP from it"?
One thing to try: write the LP basis to a plain-text file, and then, when re-starting, use that LP basis as the starting point.
Try
-w WhiskasBasis.txt
and when restarting to continue on as an IP, ask it to use that basis by adding the --ini option:
--ini WhiskasBasis.txt
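Putting the two runs together (a sketch reusing the file names from this answer and the question's model file; "--ini" reads a basis previously saved with "-w"):
glpsol --cpxlp WhiskasModel.lp --nomip -w WhiskasBasis.txt
glpsol --cpxlp WhiskasModel.lp --ini WhiskasBasis.txt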
Other suggestions:
I wouldn't use the command-line tool if you are going to be doing this often. The GLPK API (for your language of choice), used from an IDE, will give you much more flexibility and control. This link mentions several.
If you post your MIP model formulation with the objective and the constraints (maybe as a different question), you might get suggestions to speed things up. Sometimes there are relaxations and sub-problems that can help tremendously.
Hope that helps.