Best online solution for solving linear programming problems

What is the best online solution to solve a linear programming problem?
I have heard about several, such as Gurobi.
One thing I especially want is the ability to get an approximate solution when computing the exact solution takes too long.

The most comprehensive online optimization system is NEOS. It takes models in a variety of input formats and has a wide range of solvers.
Many solvers have settings to allow them to terminate early, even before optimality is reached, if you want an approximate and quick solution. But often your best bet in that case is to use a heuristic algorithm designed specifically for your problem.

Related

Job scheduling to minimise loss

I have a job scheduling problem. For each job we are given a start time, the time needed to complete the order, and a deadline, and it is guaranteed that start time + time to complete <= deadline.
I am also given the loss that will occur if I am not able to complete a job before its deadline, and I have to design an algorithm that minimizes the total loss.
I have tried adapting the standard dynamic programming algorithm for maximizing profit in job scheduling, but with no success.
What algorithm can I use to solve this problem?
Dynamic programming isn't the right approach for what you're aiming to optimize. You can find an optimized schedule by using a greedy approach.
Here's a thorough guide with sample code in your desired language (C++). The guide assumes each job takes only 1 unit of time, which you can easily adapt by using time_to_complete instead.
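In case the link goes stale, here is a minimal sketch (not the guide's code) of that greedy idea in C++, assuming unit-time jobs as the guide does: sort the jobs by loss in descending order and put each one into the latest free slot at or before its deadline; a job that finds no slot is late and contributes its loss.

    #include <algorithm>
    #include <vector>
    #include <cstdio>

    // Hypothetical job record for illustration.
    struct Job {
        int deadline;   // latest unit-time slot (1-based) in which the job is still on time
        int loss;       // penalty paid if the job misses its deadline
    };

    // Greedy "job sequencing with deadlines": sort by loss descending and put each
    // job into the latest free slot at or before its deadline.  Jobs that find no
    // slot are late and contribute their loss.  Assumes every job takes 1 time unit.
    int greedyMinLoss(std::vector<Job> jobs) {
        std::sort(jobs.begin(), jobs.end(),
                  [](const Job& a, const Job& b) { return a.loss > b.loss; });

        int maxDeadline = 0;
        for (const Job& j : jobs) maxDeadline = std::max(maxDeadline, j.deadline);

        std::vector<bool> slotUsed(maxDeadline + 1, false);  // slots 1..maxDeadline
        int totalLoss = 0;

        for (const Job& j : jobs) {
            int slot = j.deadline;
            while (slot >= 1 && slotUsed[slot]) --slot;   // latest free slot
            if (slot >= 1) slotUsed[slot] = true;          // scheduled on time
            else totalLoss += j.loss;                      // missed: pay the loss
        }
        return totalLoss;
    }

    int main() {
        std::vector<Job> jobs = {{2, 40}, {1, 35}, {2, 30}, {1, 25}, {3, 20}};
        std::printf("total loss: %d\n", greedyMinLoss(jobs));
        return 0;
    }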
Your problem is similar to the knapsack one. Using a greedy approach is convenient if you aren't actually looking for the best possible solution, but just a "good enough" one.
The big pro of the greedy approach is that its cost is much lower than that of other, more thorough approaches, but if you need the best solution to your problem, I would say that backtracking is the way to go.
Since the deadline can be violated, the problem looks like a total weighted tardiness scheduling problem. There are many flavors of it, but most problems under this umbrella are computationally hard, so dynamic programming (DP) would not be my first choice. In my experience, DP also poses difficulties during modeling and implementation. The same goes for mathematical programming used "as-is". Some approaches that can be implemented more quickly are:
constraint programming: a very small learning curve, and there are many libraries out there, including very good open-source ones (most have a C++ API). Bonus: constraint programming can prove optimality.
ad hoc heuristics: (1) start with a constructive algorithm (like the greedy approach suggested by Ling Zhong and Flavio Giobergia), then (2) use some local search approach to improve it (a minimal sketch of this step follows the list), and finally (3) embed the approach into a metaheuristic scheme. This way you can build on top of the previous step and learn a lot about the problem. Note: in general, heuristics cannot prove optimality.
special mention: LocalSolver, a hybrid between the two approaches above: it lets you model the problem using a formalism similar to constraint programming and then solves it using heuristics. It is very easy to learn, it usually lets you get started quickly and, in my tests, it provides remarkably good results.
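As an illustration of step (2) of the ad hoc heuristic above, here is a minimal C++ sketch of a pairwise-swap local search. The Job fields and the sample data are made up for illustration, and the per-job start times from the original question are ignored for simplicity: the jobs are simply processed back-to-back in the current order.

    #include <algorithm>
    #include <numeric>
    #include <utility>
    #include <vector>
    #include <cstdio>

    // Hypothetical job record for illustration.
    struct Job {
        int duration;   // time_to_complete
        int deadline;
        int loss;       // paid if the job finishes after its deadline
    };

    // Total loss when the jobs are processed back-to-back in the given order.
    int totalLoss(const std::vector<Job>& jobs, const std::vector<int>& order) {
        int t = 0, loss = 0;
        for (int idx : order) {
            t += jobs[idx].duration;
            if (t > jobs[idx].deadline) loss += jobs[idx].loss;
        }
        return loss;
    }

    // Step (2): improve an initial order by swapping pairs of jobs while it helps.
    std::vector<int> localSearch(const std::vector<Job>& jobs, std::vector<int> order) {
        bool improved = true;
        while (improved) {
            improved = false;
            for (size_t i = 0; i < order.size(); ++i) {
                for (size_t j = i + 1; j < order.size(); ++j) {
                    int before = totalLoss(jobs, order);
                    std::swap(order[i], order[j]);
                    if (totalLoss(jobs, order) < before) improved = true;
                    else std::swap(order[i], order[j]);   // undo a non-improving swap
                }
            }
        }
        return order;
    }

    int main() {
        std::vector<Job> jobs = {{3, 4, 50}, {2, 2, 40}, {1, 6, 30}, {4, 7, 60}};
        std::vector<int> order(jobs.size());
        std::iota(order.begin(), order.end(), 0);   // step (1): any constructive order would do
        std::printf("initial loss: %d\n", totalLoss(jobs, order));
        order = localSearch(jobs, order);
        std::printf("loss after local search: %d\n", totalLoss(jobs, order));
        return 0;
    }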

Solver software for finding ALL solutions to a pure integer MIP

I have a set of problems (sets of equations and inequalities) for which I know that all variables have to be integers and which have finitely many solutions. I know that if I pick any random objective function and set an LP or MIP solver on it, it finds a solution; however, I want all solutions to the problem, and of course as efficiently as possible. I don't really care about optimizing anything, but apparently most of the software that deals with this does. Is there any solver that can do that? If so, which one is the best or simplest, or which one would you recommend? Ideally one that can be used as a C/C++ library.
There is a nice blog post by Paul Rubin on how to find the K best solutions, which can easily be generalized to get all of them. As Ali suggested, one approach is to use a solution pool. Two other approaches are:
Use an incumbent callback to track and reject solutions.
Use an incumbent callback with solution injection.
See the blog post for details.
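For completeness, here is a sketch of perhaps the most low-tech alternative, which works with almost any MIP library when all variables are binary: re-solve in a loop and, after each solution, add an exclusion ("no-good") cut that forbids exactly that assignment. The sketch below uses the lp_solve C API, since a C/C++ library was requested; the model file name is hypothetical.

    // A sketch, not production code: enumerate all 0-1 feasible points of an
    // lp_solve model by repeatedly adding exclusion (no-good) cuts.
    // Assumes every variable in "model.lp" (hypothetical file name) is binary.
    #include <cstdio>
    #include <vector>
    #include "lp_lib.h"

    int main() {
        lprec *lp = read_LP(const_cast<char *>("model.lp"), IMPORTANT,
                            const_cast<char *>("enumerate"));
        if (!lp) return 1;

        const int n = get_Ncolumns(lp);
        std::vector<REAL> x(n), cut(n + 1);          // lp_solve rows are 1-based

        while (solve(lp) == OPTIMAL) {
            get_variables(lp, x.data());

            // Print the solution just found.
            for (int j = 0; j < n; ++j) std::printf("%g ", x[j]);
            std::printf("\n");

            // No-good cut:  sum_{x_j = 0} x_j + sum_{x_j = 1} (1 - x_j) >= 1,
            // i.e.          sum_{x_j = 0} x_j - sum_{x_j = 1} x_j >= 1 - (#ones).
            int ones = 0;
            for (int j = 0; j < n; ++j) {
                if (x[j] > 0.5) { cut[j + 1] = -1.0; ++ones; }
                else            { cut[j + 1] =  1.0; }
            }
            add_constraint(lp, cut.data(), GE, 1.0 - ones);   // excludes only this point
        }
        delete_lp(lp);
        return 0;
    }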
IBM ILOG CPLEX has a solution pool feature and it's free for academic purposes.
I guess you can probably get all solutions if you set the maximum pool size sufficiently large. I don't know for sure; I have never tried it.

Is there an Integer Linear Programming software that returns also non-optimal solutions?

I have an integer linear optimisation problem and I'm interested in good feasible solutions. As far as I know, the GNU Linear Programming Kit, for example, only returns the optimal solution (given that it exists).
This takes endless time and is not exactly what I'm looking for: I would be happy with any good solution, not only the optimal one.
So an LP solver that, for example, stops after some time and returns the best solution it has found so far would do the job.
Is there any such software? It would be great if that software was open source or at least free as in beer.
Alternatively: Is there any other way that usually speeds up Integer LP problems?
Is this the right place to ask?
Many solvers provide a time limit parameter; if you set it, they will stop once the limit is reached and, if an integer feasible solution has been found by then, return the best feasible solution found up to that point.
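For example, GLPK (which the question mentions) accepts both a time limit and a relative MIP gap through its glp_iocp control structure. A minimal sketch, assuming a model in CPLEX LP format under a hypothetical file name:

    #include <cstdio>
    #include <glpk.h>

    int main() {
        glp_prob *mip = glp_create_prob();
        if (glp_read_lp(mip, nullptr, "model.lp") != 0) return 1;   // hypothetical file

        glp_iocp parm;
        glp_init_iocp(&parm);
        parm.presolve = GLP_ON;     // let glp_intopt handle the LP relaxation itself
        parm.tm_lim   = 60000;      // stop after 60 seconds (value is in milliseconds)
        parm.mip_gap  = 0.01;       // or stop once within 1% of the optimum

        int ret    = glp_intopt(mip, &parm);   // may return GLP_ETMLIM on timeout
        int status = glp_mip_status(mip);      // GLP_OPT, GLP_FEAS, ...

        if (status == GLP_OPT || status == GLP_FEAS)   // best solution found so far
            std::printf("objective = %g (ret = %d)\n", glp_mip_obj_val(mip), ret);
        else
            std::printf("no integer feasible solution found within the limit\n");

        glp_delete_prob(mip);
        return 0;
    }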
As you may know, integer programming is NP-hard, and there is a real art to finding optimal solutions as well as good feasible solutions quickly. To compare the different solvers, see Prof. Hans Mittelmann's Benchmarks for Optimization Software. The MILP benchmarks, particularly MIPLIB2010 and the Feasibility Benchmark, should be most relevant.
In addition to selecting a good solver, there are many things that can be done to improve solve times, including tuning the solver's parameters and reformulating the model. Many people in research and industry, including myself, spend their careers working on improving the solve times of MIP models, both in general and for specific models.
If you are an academic user, note that the top commercial systems like CPLEX and Gurobi are free for academic use. See the respective websites for details.
Finally, you may want to look at OR-Exchange, a sister site to Stack Overflow that focuses on the field of operations research.
(Disclaimer: I currently work for Gurobi Optimization and formerly worked for ILOG, which provided CPLEX).
If you would like to get a feasible integer solution fast and you don't need the optimal solution, you can try the following:
Increase the relative or absolute gap. Solvers usually use small gaps, say a relative gap of 0.0001%, which means the solver keeps searching for MIP solutions until the best one found is no farther than 0.0001% from the optimal solution. Increase this gap to, e.g., 1%: you still get a good solution, but the solver will not spend a long time proving optimality.
Try different values for the solver parameters concerning MIP heuristics.
CPLEX and Gurobi have parameters to control MIP emphasis, i.e., whether the solver puts more effort into finding feasible solutions or into proving optimality. Set the emphasis to feasible MIP solutions.
Most solvers, like CPLEX, Gurobi, MOPS or GLPK, have settings for the gap and the heuristics. MIP emphasis can be set, as far as I know, only in CPLEX and Gurobi.
A usual approach for solving ILPs is branch-and-bound. It solves many sub-LPs (LP relaxations without the integrality constraints), and the final optimum is the best integer solution found among these subproblems. Once at least one integer solution has been found, you can stop at any time and keep the best-so-far solution.
One package that can do this is the free lp_solve. Look at set_timeout for setting a time limit; for an ILP, solve can then return SUBOPTIMAL together with the best-so-far solution.
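A minimal sketch of that lp_solve usage (the model file name is hypothetical):

    #include <cstdio>
    #include <vector>
    #include "lp_lib.h"

    int main() {
        lprec *lp = read_LP(const_cast<char *>("model.lp"), IMPORTANT,
                            const_cast<char *>("model"));
        if (!lp) return 1;

        set_timeout(lp, 60);                 // give up after 60 seconds

        int ret = solve(lp);
        if (ret == OPTIMAL || ret == SUBOPTIMAL) {
            std::vector<REAL> x(get_Ncolumns(lp));
            get_variables(lp, x.data());     // x now holds the best solution found so far
            std::printf("%s objective = %g\n",
                        ret == OPTIMAL ? "optimal" : "best-so-far",
                        get_objective(lp));
        }
        delete_lp(lp);
        return 0;
    }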
As far as I know, CPLEX can. It can return the solution pool, which contains the primal feasible solutions found during the search, and if you set the search focus on feasibility rather than optimality, more feasible solutions can be generated. At the end you can simply export the pool. You can also use the pool to do a hot start, so it's pretty much up to you. CPLEX is free now, at least in my country, as you can sign up as a researcher.
Have you considered Microsoft Solver Foundation? The only restriction is the technology stack: as you might guess, you would be using Microsoft technologies (C#, VB.NET, etc.). Here is an example of how to use it with Excel: http://channel9.msdn.com/posts/Modeling-with-Solver-Foundation-30
Regarding your question, it is possible to get solutions that are not fully optimized if you set an efficiency/gap tolerance (for example 85%, or 0.85); in the output you can then see the solutions found under that restriction.

How to choose an integer linear programming solver?

I am a newbie at integer linear programming.
I plan to use an integer linear programming solver to solve my combinatorial optimization problem.
I am more familiar with C++/object-oriented programming in an IDE.
Right now I am using NetBeans with Cygwin to write my applications most of the time.
May I ask if there is an easy-to-use ILP solver for me?
Or does it depend on the problem I want to solve? I am trying to do some resource-mapping optimization. Please let me know if any further information is required.
Thank you very much, Cassie.
If what you want is linear mixed-integer programming, then I would point to COIN-OR (and specifically to its CBC module). It's free software (as in speech).
You can either use it through a modeling language or use C++ directly.
Use C++ if your data requires lots of preprocessing, or if you want to put your hands into the solver (choosing pivot points, column generation, adding cuts and so on...).
Use the integrated language if you want to use the solver as a black box (you're just interested in the result and the problem is easy or classic enough to be solved without tweaking).
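If you go the black-box route, a CBC solve from C++ looks roughly like the sketch below. The MPS file name is hypothetical, and the exact headers and link flags depend on how you built COIN-OR.

    #include <cstdio>
    #include "OsiClpSolverInterface.hpp"
    #include "CbcModel.hpp"

    int main() {
        OsiClpSolverInterface solver;
        if (solver.readMps("model.mps") != 0) return 1;   // hypothetical file name

        CbcModel model(solver);          // CBC takes the LP relaxation + integrality info
        model.setMaximumSeconds(60.0);   // optional: cap the search time
        model.branchAndBound();

        if (const double *sol = model.bestSolution()) {
            std::printf("objective = %g%s\n", model.getObjValue(),
                        model.isProvenOptimal() ? " (proven optimal)" : " (best found)");
            for (int j = 0; j < model.getNumCols(); ++j)
                std::printf("x[%d] = %g\n", j, sol[j]);
        } else {
            std::printf("no feasible solution found\n");
        }
        return 0;
    }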
But in the tags you mention genetic algorithms and graph algorithms. Maybe you should start by defining your problem better...
For graphs I like Boost::Graph a lot.
I have used lp_solve ( http://lpsolve.sourceforge.net/5.5/ ) on a couple of occasions with success. It is mature, feature-rich and extremely well documented, with lots of good advice if your linear programming skills are rusty. Integer linear programming is not just an add-on but is strongly emphasized in this package.
Just noticed that you say you are a 'newbie' at this. Well, then I strongly recommend this package since the documentation is full of examples and gentle tutorials. Other packages I have tried tend to assume a lot of the user.
For large problems, you might look at AMPL, which is an optimization interpreter with many backend solvers available. It runs as a separate process; C++ would be used to write out the input data.
Then you could try various state-of-the-art solvers.
Look into GLPK. Comes with a few examples, and works with a subset of AMPL, although IMHO works best when you stick to C/C++ for model setup. Copes with pretty big models too.
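To give a flavour of what sticking to C/C++ for model setup looks like with GLPK, here is a tiny sketch; the model (maximize 3x + 2y subject to 2x + y <= 10 and x + 3y <= 15 with x, y nonnegative integers) is made up purely for illustration.

    #include <cstdio>
    #include <glpk.h>

    int main() {
        glp_prob *mip = glp_create_prob();
        glp_set_obj_dir(mip, GLP_MAX);

        glp_add_cols(mip, 2);
        for (int j = 1; j <= 2; ++j) {
            glp_set_col_kind(mip, j, GLP_IV);             // integer variable
            glp_set_col_bnds(mip, j, GLP_LO, 0.0, 0.0);   // x, y >= 0
        }
        glp_set_obj_coef(mip, 1, 3.0);
        glp_set_obj_coef(mip, 2, 2.0);

        glp_add_rows(mip, 2);
        glp_set_row_bnds(mip, 1, GLP_UP, 0.0, 10.0);      // 2x +  y <= 10
        glp_set_row_bnds(mip, 2, GLP_UP, 0.0, 15.0);      //  x + 3y <= 15

        int    ind[3];
        double val[3];
        ind[1] = 1; val[1] = 2.0; ind[2] = 2; val[2] = 1.0;
        glp_set_mat_row(mip, 1, 2, ind, val);
        ind[1] = 1; val[1] = 1.0; ind[2] = 2; val[2] = 3.0;
        glp_set_mat_row(mip, 2, 2, ind, val);

        glp_iocp parm;
        glp_init_iocp(&parm);
        parm.presolve = GLP_ON;
        glp_intopt(mip, &parm);

        std::printf("obj = %g, x = %g, y = %g\n",
                    glp_mip_obj_val(mip), glp_mip_col_val(mip, 1), glp_mip_col_val(mip, 2));
        glp_delete_prob(mip);
        return 0;
    }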
Linear Programming from Wikipedia covers a few different algorithms that you could do some digging into to see which may work best for you. Does that help or were you wanting something more specific?

Least Squares Regression in C/C++

How would one go about implementing least squares regression for factor analysis in C/C++?
The gold standard for this is LAPACK. You want, in particular, xGELS.
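For instance, here is a small sketch of calling dGELS through the LAPACKE C interface to fit a straight line; the data points are made up for illustration.

    // Linear least squares via LAPACK's dGELS (LAPACKE interface):
    // fit y ~ c0 + c1 * x to five points.  Link with -llapacke -llapack -lblas.
    #include <cstdio>
    #include <lapacke.h>

    int main() {
        const int m = 5, n = 2, nrhs = 1;
        double x[m] = {0, 1, 2, 3, 4};
        double y[m] = {1.1, 1.9, 3.2, 3.9, 5.1};

        // Design matrix A (row-major, m x n): each row is [1, x_i].
        double A[m * n];
        double b[m];                      // needs max(m, n) entries; here m >= n
        for (int i = 0; i < m; ++i) {
            A[i * n + 0] = 1.0;
            A[i * n + 1] = x[i];
            b[i] = y[i];
        }

        // Solves min ||A c - b||_2; on exit the first n entries of b hold c.
        lapack_int info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs, A, n, b, nrhs);
        if (info != 0) return 1;

        std::printf("intercept = %g, slope = %g\n", b[0], b[1]);
        return 0;
    }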
When I've had to deal with large datasets and large parameter sets for non-linear parameter fitting I used a combination of RANSAC and Levenberg-Marquardt. I'm talking thousands of parameters with tens of thousands of data-points.
RANSAC is a robust algorithm for minimizing noise due to outliers by fitting on a reduced data set. It's not strictly least squares, but can be applied to many fitting methods.
Levenberg-Marquardt is an efficient way to solve non-linear least-squares numerically.
The convergence rate in most cases is between that of steepest-descent and Newton's method, without requiring the calculation of second derivatives. I've found it to be faster than Conjugate gradient in the cases I've examined.
The way I did this was to set up RANSAC as an outer loop around the LM method. This is very robust but slow. If you don't need the additional robustness you can just use LM.
Get ROOT and use TGraph::Fit() (or TGraphErrors::Fit())?
It's a big, heavy piece of software to install just for the fitter, though. It works for me because I already have it installed.
Or use GSL.
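If GSL appeals, ordinary linear least squares is essentially one call to gsl_multifit_linear. A minimal sketch, again on made-up line-fit data (link with -lgsl -lgslcblas):

    #include <cstdio>
    #include <gsl/gsl_multifit.h>

    int main() {
        const size_t n = 5, p = 2;
        const double xs[n] = {0, 1, 2, 3, 4};
        const double ys[n] = {1.1, 1.9, 3.2, 3.9, 5.1};

        gsl_matrix *X   = gsl_matrix_alloc(n, p);
        gsl_vector *y   = gsl_vector_alloc(n);
        gsl_vector *c   = gsl_vector_alloc(p);
        gsl_matrix *cov = gsl_matrix_alloc(p, p);

        for (size_t i = 0; i < n; ++i) {
            gsl_matrix_set(X, i, 0, 1.0);     // intercept column
            gsl_matrix_set(X, i, 1, xs[i]);
            gsl_vector_set(y, i, ys[i]);
        }

        double chisq;
        gsl_multifit_linear_workspace *work = gsl_multifit_linear_alloc(n, p);
        gsl_multifit_linear(X, y, c, cov, &chisq, work);   // ordinary least squares

        std::printf("intercept = %g, slope = %g, chisq = %g\n",
                    gsl_vector_get(c, 0), gsl_vector_get(c, 1), chisq);

        gsl_multifit_linear_free(work);
        gsl_matrix_free(X); gsl_matrix_free(cov);
        gsl_vector_free(y); gsl_vector_free(c);
        return 0;
    }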
If you want to implement an optimization algorithm yourself, Levenberg-Marquardt seems quite difficult to implement. If really fast convergence is not needed, take a look at the Nelder-Mead simplex optimization algorithm. It can be implemented from scratch in a few hours.
http://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
Have a look at
http://www.alglib.net/optimization/
They have C++ implementations for L-BFGS and Levenberg-Marquardt.
You only need to work out the first derivative of your objective function to use these two algorithms.
I've used TNT/JAMA for linear least-squares estimation. It's not very sophisticated but is fairly quick + easy.
Let's talk first about factor analysis, since most of the discussion above is about regression. Most of my experience is with software like SAS, Minitab, or SPSS that solves the factor analysis equations, so I have limited experience in solving them directly. That said, the most common implementations do not use linear regression to solve the equations. According to this, the most common methods used are principal component analysis and principal factor analysis. In a text on Applied Multivariate Analysis (Dallas Johnson), no fewer than seven methods are documented, each with its own pros and cons. I would strongly recommend finding an implementation that gives you factor scores rather than programming a solution from scratch.
The reason why there are different methods is that you can choose exactly what you're trying to minimize. There is a pretty comprehensive discussion of the breadth of methods here.