I have a linear program with no objective function, so I just want to test its feasibility. I am using the GLPK API's simplex solver to do that. When I run the simplex method with the default setting (meth = GLP_PRIMAL), the solver fails to converge within 100000 iterations (the limit I have set). However, when I use GLP_DUALP, after a few iterations I get the message "Warning: dual degeneracy; switching to primal simplex" and it goes on to converge in a reasonable number of iterations.
So my question is: if it ultimately uses the primal simplex in both cases, why does it not converge in the first case? What might be going on?
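For reference, the relevant calls look roughly like this (a minimal sketch; building the rows and columns of the problem is omitted, and the helper function name is mine):

    #include <glpk.h>

    // Returns 1 if the LP in "lp" is feasible, 0 otherwise.
    int check_feasibility(glp_prob *lp)
    {
        glp_smcp parm;
        glp_init_smcp(&parm);
        parm.it_lim = 100000;         // the iteration limit mentioned above

        parm.meth = GLP_PRIMAL;       // default: primal simplex (stalls for me)
        // parm.meth = GLP_DUALP;     // dual simplex, falling back to primal

        int rc = glp_simplex(lp, &parm);
        int status = glp_get_status(lp);   // GLP_OPT, GLP_FEAS, GLP_NOFEAS, ...
        return rc == 0 && (status == GLP_OPT || status == GLP_FEAS);
    }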
Thanks in advance.
It's hard to say exactly what is happening without detailed information about the problem, but essentially the primal simplex is, in the dual-degeneracy case, "warm-started" in some sense.
When using the dual algorithm, the solver works on the dual problem: it starts from a solution that already satisfies the optimality conditions and then searches for one that is also feasible. The primal simplex works the other way around: it starts from a feasible solution and then searches for an optimal one.
When strong duality holds, both optimal values coincide. In your problem you get a "dual degeneracy" warning, which means that in the dual problem there is an equation that turns out to be 0, so the variable in that equation has no influence on the objective function (no matter whether it is 100 or just 1). This is plausible, since you don't have an objective function at all. GLPK then switches to the primal simplex because the dual problem has alternative optimal solutions. With the information already derived, the primal simplex can be faster; I don't know exactly what GLPK does internally, but normally the feasible solutions of the dual problem can be used as a bound on the primal problem.
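To see why "no objective" and "dual degeneracy" go together, here is a minimal illustration in my own notation (assuming a maximization form): a pure feasibility problem is effectively

    \max\; 0^\top x \quad \text{s.t.}\quad Ax \le b,\ x \ge 0,

so every feasible x has the same objective value 0 and every reduced cost is zero. "Every feasible point is optimal" is exactly the alternative-optima situation that the dual-degeneracy warning refers to, which is why it shows up almost immediately for such a problem.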
What stalls your primal approach is perhaps the same issue: the problem is degenerate, and the simplex algorithm gets stuck on variables that have no influence on the objective, and therefore it struggles to make progress on them.
What's the difference between the "Optimal" solution status and the "OptimalTol" solution status of CPLEX?
1. When I solve an integer programming model with the CPLEX solver, the result status of some instances shows as "Optimal"; however, the result status of other instances shows as "OptimalTol". I want to know the difference between the "Optimal" and the "OptimalTol" solution of CPLEX.
2. My integer programming model minimizes the objective. When I solve it with the CPLEX solver, the result status is "OptimalTol" and the objective value of the model is, for example, 1000. If I add cplex.setParam(IloCplex::EpGap, 0.0) to the solver and then solve the model again, will the value of the objective function become larger or smaller?
This is relevant for any MIP solver.
A solver can stop for different reasons:
time limit or some other limit (iteration, node limit, etc.)
the gap tolerance has been met
the solution is proven optimal
The default gap tolerance is not zero, but rather something like 1e-4 (for the relative gap) and 1e-6 (for the absolute gap). That means that Cplex can stop while not being 100% sure that there are no better solutions; however, the gap tolerance bounds how much better a solution could still be. E.g. if the relative gap tolerance is 1% and the absolute gap tolerance is 0, then the best possible solution cannot be farther away than 1%. If you set both to zero, Cplex will need more time but will always deliver OPTIMAL (unless you hit a limit). If you allow a small gap, Cplex will most likely return OPTIMAL_TOL (we stopped because we met the gap tolerance), but in some cases it can still be sure there is no better solution, in which case it will return OPTIMAL. For large, practical models we are often happy with a solution that is within, say, 5% of the best possible.
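To make the parameters and statuses concrete, here is a hedged Concert (C++) sketch; the wrapper function is mine, and newer CPLEX versions also expose the same tolerances as IloCplex::Param::MIP::Tolerances::MIPGap and AbsMIPGap:

    #include <ilcplex/ilocplex.h>
    #include <iostream>

    // Tighten both MIP gap tolerances and report which status CPLEX returns.
    void solve_to_proven_optimality(IloCplex &cplex)
    {
        cplex.setParam(IloCplex::EpGap, 0.0);    // relative MIP gap (default ~1e-4)
        cplex.setParam(IloCplex::EpAGap, 0.0);   // absolute MIP gap (default ~1e-6)

        if (cplex.solve()) {
            IloCplex::CplexStatus st = cplex.getCplexStatus();
            if (st == IloCplex::Optimal)
                std::cout << "proven optimal: " << cplex.getObjValue() << "\n";
            else if (st == IloCplex::OptimalTol)
                std::cout << "optimal within gap tolerance: "
                          << cplex.getObjValue() << "\n";
        }
    }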
I'm given a linear program P in a standard form.
I need to prove that if both the primal slack form of P and primal slack form of the dual problem are feasible, then the optimal solution for P is 0.
I've tried to work with the weak duality theorem, but can't get the math together.
Any help would be appreciated.
According to the duality theorem, if a primal problem admits an optimal solution (x*1, …, x*n), then the dual problem admits an optimal solution (y*1, …, y*m) with the same objective value. Moreover, the objective values of all feasible solutions of the primal program are bounded from above by those of the dual program, and conversely the values of feasible solutions of the dual are bounded from below by those of the primal. So if a primal feasible solution and a dual feasible solution attain the same value, that value is the optimal value of the linear program.
Put simply, the optimal value is bounded from below by the values of feasible solutions of the primal program and from above by the values of feasible solutions of the dual program.
In this case it is given that both the primal and the dual basic slack forms of the program are feasible, meaning that the basic solution of each slack form is a feasible solution. The basic solution of a slack form has objective value 0 (remember that we set all non-basic variables to zero when computing the basic solution). Thus 0 is a value attained by a feasible solution of both the primal and the dual program, and so by duality 0 is the optimal value of the linear program.
We can also argue by contradiction. Take some nonzero value k attained by a feasible solution of the linear program's primal form and some nonzero value j attained by a feasible solution of its dual form. The optimal value of the linear program is attained when j = k. Let us show that this cannot happen for any value other than 0.
Any value k of a feasible solution of the primal program is bounded from above by the values of all feasible solutions of the dual program. Since one such value is 0 (the basic slack form of the dual program is feasible), k must be non-positive.
Any value j of a feasible solution of the dual program is bounded from below by the values of all feasible solutions of the primal program. Since one such value is 0 (the basic slack form of the primal program is feasible), j must be non-negative.
Having shown that any feasible value j of the dual is non-negative and any feasible value k of the primal is non-positive, we see that j = k for a nonzero value is a contradiction.
The only value for which j = k can hold is 0, and thus 0 is the optimal value.
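Written out compactly (a sketch, assuming the usual symmetric primal/dual pair with a maximization primal), the whole argument is:

    \text{Primal: } \max\{\, c^\top x : Ax \le b,\ x \ge 0 \,\}, \qquad
    \text{Dual: } \min\{\, b^\top y : A^\top y \ge c,\ y \ge 0 \,\}.

    x = 0 \text{ primal feasible} \;\Rightarrow\; z^\ast \ge c^\top 0 = 0,
    y = 0 \text{ dual feasible, weak duality} \;\Rightarrow\; c^\top x \le b^\top 0 = 0 \text{ for every feasible } x \;\Rightarrow\; z^\ast \le 0,

    \text{hence } z^\ast = 0.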
I have a linear problem and want to find all solutions that meet all constraints.
For example, my variables are [0.323, 0.123, 1.32, 6.3, ...].
Is it possible to get, for example, the top 100 solutions sorted by a fitness (maximization/minimization) function?
In a continuous LP, enumerating different solutions is a difficult concept. E.g. consider max x, s.t. x <= 1. Obviously x = 1 and x = 0.99999 are solutions, and so are the infinitely many solutions in between. We could enumerate "corner solutions" (or basic solutions); see here for an example. Such a scheme could be adapted to find the first 100 different corner points sorted by the objective. For models with discrete variables, many constraint programming solvers will give you the possibility of finding many solutions.
If you can define a fitness function as you suggested, then you might first want to solve the LP that maximizes this function. Afterwards you can add an objective cutoff that forces your second solution to be slightly worse than the first: introduce a cut that is your objective function with a right-hand side of the optimal value minus epsilon (see the sketch below).
Of course, this will not give you all (basic) solutions, but you might discover which variables are always at the same value or how much variance there is between the different solutions.
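As an illustration of the cutoff idea, here is a rough sketch using the GLPK C API (the function name enumerate_solutions and the choice of eps are mine, and it assumes a maximization objective):

    #include <glpk.h>
    #include <cstdio>
    #include <vector>

    // Repeatedly re-solve the LP, each time adding the cut  c^T x <= z - eps
    // so that the next solution must be at least eps worse than the previous one.
    void enumerate_solutions(glp_prob *lp, int n_solutions, double eps)
    {
        int n = glp_get_num_cols(lp);
        double shift = glp_get_obj_coef(lp, 0);       // constant term of the objective
        for (int s = 0; s < n_solutions; ++s) {
            if (glp_simplex(lp, NULL) != 0) break;    // solver failure
            if (glp_get_status(lp) != GLP_OPT) break; // e.g. cut made the LP infeasible
            double z = glp_get_obj_val(lp);
            std::printf("solution %d: objective = %g\n", s + 1, z);

            // Build the row  sum_j c_j * x_j <= z - shift - eps.
            std::vector<int> idx(1, 0);               // GLPK arrays are 1-based
            std::vector<double> val(1, 0.0);
            for (int j = 1; j <= n; ++j) {
                double cj = glp_get_obj_coef(lp, j);
                if (cj != 0.0) { idx.push_back(j); val.push_back(cj); }
            }
            int row = glp_add_rows(lp, 1);
            glp_set_row_bnds(lp, row, GLP_UP, 0.0, z - shift - eps);
            glp_set_mat_row(lp, row, (int)idx.size() - 1, idx.data(), val.data());
        }
    }

Each pass returns one optimal basic solution and then excludes everything within eps of it, so consecutive solutions come out sorted by the objective; it does not, however, enumerate alternative optima that share the same objective value.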
How fast is the simplex method compared with brute force or any other algorithm for solving a TSP (traveling salesman problem)?
You can't model a TSP with a "pure" LP (continuous variables only). You can use an integer programming formulation, which will apply the simplex method at each node of a search tree (branch-and-bound or branch-and-cut). That works for small problems, but it is slow because the problem is hard: with one binary variable per edge, for instance, you need a lot of constraints to model the fact that the tour is a single cycle (see the formulation sketched below).
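For concreteness, the classic Dantzig–Fulkerson–Johnson style formulation of the symmetric TSP looks roughly like this (a sketch in my own notation); the exponential family of subtour-elimination constraints is what makes the model heavy:

    \min \sum_{(i,j) \in E} c_{ij}\, x_{ij}
    \text{s.t.}\quad \sum_{j:\,(i,j) \in E} x_{ij} = 2 \qquad \forall i \in V \quad \text{(two tour edges at every city)}
    \sum_{i \in S,\ j \notin S} x_{ij} \ge 2 \qquad \forall S \subsetneq V,\ 2 \le |S| \le |V| - 2 \quad \text{(subtour elimination)}
    x_{ij} \in \{0, 1\}

In practice, branch-and-cut codes do not write all the subtour constraints down up front; they add violated ones lazily as cuts.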
Brute force is intractable (the number of tours grows factorially with the number of cities); do not even try it unless you have a very small problem. Use the MIP formulation even for small problems.
For big problems, you should use some kind of heuristic (simulated annealing is known to give good results on this one), or a "smarter" formulation of your problem (column generation, for instance) if you want an exact solution.
I am looking for an iterative linear system solver to calculate a continuously changing field. For the simulation to work properly, I need to re-calculate the field (maybe several times) for every time step. Fortunately, I have a good initial guess for each time step, so it would be better if I could feed it into an iterative solver. Also, the coefficient matrix is very dense.
The problem is that the iterative solvers I checked online, like Gmm++, IML++, ITL, DUNE/ISTL and so on, are either for sparse systems or don't provide interfaces for supplying initial guesses (I might be wrong, since I didn't have time to go through all the documentation).
So I have two questions:
1. Is there any such C++ solver available online?
2. Since the coefficient matrix can be as large as thousands by thousands, could a direct solver be quicker than an iterative solver with a really good initial guess?
Thanks a lot!
If you check the header for the Conjugate Gradient routine in IML++ (http://math.nist.gov/iml++/cg.h.txt), you'll see that you can very easily provide an initial guess for the solution: it is passed in the very variable in which you'd expect to get the solution back.
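For example, a rough C++ sketch of a warm-started call (the wrapper function is my own; the matrix, vector and preconditioner types are whatever templated types you already use with IML++, and CG assumes the matrix is symmetric positive definite; for a general matrix, pick another IML++ template such as GMRES or BiCGSTAB):

    #include "cg.h"   // IML++ conjugate gradient template from the link above

    // On entry x holds the initial guess (e.g. the field from the previous
    // time step); on return it holds the refined solution.
    template <class Matrix, class Vector, class Precond>
    int warm_started_cg(const Matrix &A, Vector &x, const Vector &b,
                        const Precond &M)
    {
        int max_iter = 1000;    // in: iteration limit, out: iterations actually used
        double tol   = 1e-8;    // in: target residual tolerance, out: achieved residual
        return CG(A, x, b, M, max_iter, tol);   // returns 0 on convergence
    }

With a good guess from the previous time step, CG typically needs far fewer iterations than a cold start, which is exactly the behaviour you want for a field that changes only slightly between time steps.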