How to convert this primal linear program to its dual and solve the dual problem?

How do I convert the following primal problem to its dual and finally solve the dual?
Maximize Z = x1 + 2x2 + x3
subject to:
x1 + x2 − x3 ≤ 2
x1 − x2 + x3 = 1
2x1 + x2 + x3 ≥ 2
x1 ≥ 0, x2 ≤ 0, x3 unrestricted in sign.
I have used the primal-dual correspondence table to derive the dual directly from the primal problem. In my problem, the first dual constraint comes out as greater-than-or-equal, the second as less-than-or-equal, and the third as an equality, with y1 ≥ 0, y2 unrestricted and y3 ≤ 0. Now I am stuck converting this to standard form: I introduced y2 = y4 − y5, y3 = −y6, and a surplus variable for constraint 1 and a slack variable for constraint 2, so I have a total of 8 variables y1, y2, …, y8. But if I start the first simplex tableau of the dual, I have 3 constraints but only 2 basic variables (y7 and y8), which can never be the case. I am sure I am doing something wrong, but what is it? Please help me out! How do I proceed after deriving the dual in order to solve it?
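For reference, written out, the dual I derived from the correspondence table is (this matches the constraint directions described above):
Minimize W = 2y1 + y2 + 2y3
subject to:
y1 + y2 + 2y3 ≥ 1
y1 − y2 + y3 ≤ 2
−y1 + y2 + y3 = 1
y1 ≥ 0, y2 unrestricted, y3 ≤ 0.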

Related

DOcplex: solving an LP problem partially at a time

I am working on a linear programming problem with 800K constraints. The full problem takes 20 minutes to solve, but if I solve it for half the horizon it takes just 1 minute. Is there a way in DOcplex to solve for a partial horizon and then use that solution to solve the other half of the problem, without using a for-loop?
Three suggestions:
load your problem as LP or SAV into the CPLEX interactive optimizer and run "display problem stats". This might show (or rule out) precision issues (an ill-conditioned problem); it will also output the number of nonzeros.
set the datacheck parameter to 2; this might detect numerical issues in the data.
have you tried different LP algorithms? Using the lpmethod parameter you could try the primal, dual or barrier algorithm to see whether one runs faster on your problem.
Reference:
https://www.ibm.com/support/knowledgecenter/SSSA5P_12.10.0/ilog.odms.cplex.help/CPLEX/Parameters/topics/LPMETHOD.html
In DOcplex:
model.parameters.read.datacheck = 2  # the datacheck parameter lives in the "read" group
model.parameters.lpmethod = 4        # 1 = primal, 2 = dual, 4 = barrier
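To verify which algorithm actually ran, you can echo the engine log while solving and inspect the solve details afterwards (a small sketch, assuming a DOcplex model object named model):
sol = model.solve(log_output=True)   # the engine log shows the chosen algorithm
print(model.solve_details.status, model.solve_details.time)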
From your answers, I can think of the following:
if you are in pure LP (is this true?), I see no point in rounding numbers (but yes, that would help in a MIP; try rounding coefficients whose fractional part is, say, less than 1e-7: 4.0000001 -> 4).
a condition number of 1e+14 denotes a serious modeling issue: a common source is mixing different objectives with widely scaled coefficients. Have you tried multi-objective to avoid that?
Another source is big-M formulations, to which you should prefer indicator constraints. If you are in neither of these two cases, then try to renormalize the data to stay within a smaller condition range...
Finally, you might try setting the Markowitz tolerance to 0.99, to add extra caution in simplex factorizations, but behavior may vary from one dataset to another... (a small sketch of the last two suggestions follows below)
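A minimal sketch of those last two suggestions, again assuming a DOcplex model object named model (the helper name snap is made up here; the parameter path mirrors the CPLEX parameter hierarchy):
def snap(value, tol=1e-7):
    # Round coefficients whose fractional part is below tol: 4.0000001 -> 4.0
    nearest = round(value)
    return float(nearest) if abs(value - nearest) < tol else value

model.parameters.simplex.tolerances.markowitz = 0.99  # extra caution in factorizations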

Why does CPLEX set unrelated variables to 1?

I have been working on a combinatorial optimization problem which can be modeled as an integer linear program. I implemented it as a C++ project in Visual Studio 2017 with CPLEX 12.7.1. Since there are exponentially many constraints, I implemented lazy constraints via IloCplex::LazyConstraintCallbackI. In my mind, the following procedure is how an optimal solution is produced: every time an integer solution is identified, LazyConstraintCallbackI checks it and adds some violated constraints to the model, until an optimal integer solution is obtained.
However, the objective values given by my implementation for different inputs are not always correct. After almost a year of intermittent debugging and testing, I finally figured out the reason, which is very problem-specific but can (hopefully) be explained by the following tiny example: an integer linear program involving four boolean variables x1, x2, x3 and x4
minimize x1
subject to:
x1 ≥ x2
x1 ≥ x3
x1 ≥ x4
x2 + x3 ≥ 1
x1, x2, x3 and x4 ∈ {0, 1}
The result given by CPLEX is:
Solution status = Optimal
Objective value = 1
x1 = 1
x2 = 1
x3 = 1
x4 = 1
Undoubtedly, the objective value is correct. The weird thing is that CPLEX sets x4 = 1. Whether x4 equals 1 or 0 has no impact on the objective value here, but when the lazy constraint callback is used this can lead to a problem, because incorrect constraints may be added while the integer program is solved by iteratively adding violated constraints. I'd like to know:
Why does CPLEX set the "unrelated" variable x4 to 1, instead of 0?
What should I do to tell CPLEX that I want to leave such "unrelated" variables at 0?
Since there is nothing that forces x4=0 in an optimal solution, there is no guarantee that CPLEX will set x4=0. Why would it? Why would 0 be preferred over 1? The model has two optimal solutions, one with x4=0 and one with x4=1; both have objective value 1. CPLEX is completely free to choose either of them.
Like sascha said in his comments, the only way to force x4=0 in an optimal solution is to build this preference into the model, for example by giving x4 a small positive objective coefficient (a sketch of this follows below).
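A minimal sketch of that fix, written in DOcplex rather than the question's C++ API just for brevity (the model name and the 1e-6 coefficient are illustrative):
from docplex.mp.model import Model

mdl = Model(name="tiny")
x1, x2, x3, x4 = (mdl.binary_var(name="x%d" % i) for i in range(1, 5))
mdl.add_constraint(x1 >= x2)
mdl.add_constraint(x1 >= x3)
mdl.add_constraint(x1 >= x4)
mdl.add_constraint(x2 + x3 >= 1)
# The tiny 1e-6 term breaks the tie between the two optimal solutions
# without changing the true optimal objective value of 1.
mdl.minimize(x1 + 1e-6 * x4)
sol = mdl.solve()
print(sol)  # expect x1 = 1 and x4 = 0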
However, it seems strange that your lazy constraint callback would generate invalid constraints from integer feasible solutions. This looks like a bug: Either the callback makes invalid assumptions about the model (namely that x4=0 in any optimal solution) or there is a bug somewhere in the logic of the callback. Note that in the model at hand it would actually be fine for the callback to cut off the solution with x4=1 since that still leaves the equivalent optimal solution with x4=0 and CPLEX would eventually find that.

How to formulate constraints in linear programming so that a set of consecutive variables is forced to be equal?

Let's say we are optimizing over 2 variables, each a vector of length 6. That is, Y=[y0,y1,...,y5] and X=[x0,x1,...,x5]. How do I formulate a constraint in linear programming that forces the following: x0=x1=x2=x3 and x4=x5? Or is it better to penalize the differences (e.g. |x0-x1|) in the objective function? If so, how?
x0=x1 can be expressed as x0-x1 <= 0 and x0-x1 >= 0. The other equalities work the same way.
Edit: as pointed out in the comments, directly stating x0-x1 = 0 is the better way (see the sketch below).
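A minimal sketch with scipy.optimize.linprog (the objective and bounds are placeholders): each pairwise equality becomes a row of A_eq with a +1 and a -1 entry. If you would rather penalize |x0-x1| than force equality, the standard linearization is an auxiliary variable d with d >= x0-x1 and d >= x1-x0, added to the objective.
import numpy as np
from scipy.optimize import linprog

n = 6
c = np.arange(1, n + 1, dtype=float)      # placeholder objective coefficients

# One A_eq row per pairwise equality x_i - x_j = 0;
# the chain below encodes x0=x1=x2=x3 and x4=x5.
pairs = [(0, 1), (1, 2), (2, 3), (4, 5)]
A_eq = np.zeros((len(pairs), n))
for row, (i, j) in enumerate(pairs):
    A_eq[row, i] = 1.0
    A_eq[row, j] = -1.0
b_eq = np.zeros(len(pairs))

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
print(res.x)  # the first four entries coincide, as do the last two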

Computing the Jacobian matrix in C++ (symbolic math)

Introduction
Let's assume that I need the Jacobian matrix for the following set of ODEs:
dxdt[ 0 ] = -90.0 * x[0] - 50.0 * x[1];
dxdt[ 1 ] = x[0] + 3*x[1];
dxdt[ 2 ] = x[1] + 50*x[2];
In Matlab/Octave this would be pretty easy:
syms x0 x1 x2;
f = [-90.0*x0-50.0*x1, x0+3*x1, x1+50*x2]
v=[x0, x1, x2]
fp = jacobian(f,v)
This results in the following output matrix:
[-90 -50  0]
[  1   3  0]
[  0   1 50]
What I need
Now I want to reproduce the same results in C++. I can't compute the Jacobian beforehand and hard-code it, as it will depend, for example, on user inputs and time. So my question is: how do I do this? For mathematical operations I usually use the Boost library, but in this case I can't find any solution. There's only a short note about this for implicit systems, and the following code doesn't work:
sys.second( x , jacobi , t )
It also requires the time (t), so it probably doesn't generate an analytic form of the Jacobian. Do I misunderstand the documentation? Or should I use another function? I would prefer to stay within Boost, as I need the Jacobian as a ublas::matrix and I want to avoid conversions.
EDIT:
More specifically, I will use the Jacobian inside the rosenbrock4 ODE solver (example here, lines 47-52). I need automatic generation of this structure, as the ODE set may change later and I want to avoid manually rewriting the Jacobian every time. Also, some variables inside the ODE definitions are not constant in time.
I know this is long after the fact, but I have recently been wanting to do the same thing and have come across many automatic differentiation (AD) libraries that do this pretty well. I have mostly been using Eigen's AD because I am already using Eigen everywhere. Here's an example of how you can use Eigen's AD to get the Jacobian as you asked.
There's also a long list of C++ AD libraries on autodiff.org.
Hope this helps someone!
The Jacobian is based on derivatives of the function. If the function f is only known at run time (and there are no constraints such as linearity), you have to automate the differentiation. If you want this to happen exactly (as opposed to a numerical estimation), you need to use symbolic computation. Look for example here and here for libraries supporting this.
Note that the Jacobian usually depends on the state and the time, so it’s impossible to represent it as a constant matrix (such as in your example), unless your problem is so boring that you can solve it analytically anyway.
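To illustrate what symbolic differentiation yields for the system above, here is a sketch in Python with SymPy (chosen only for brevity; SymPy's ccode can then emit C expressions to paste into C++ code):
from sympy import symbols, Matrix, ccode

x0, x1, x2 = symbols("x0 x1 x2")
f = Matrix([-90.0*x0 - 50.0*x1, x0 + 3*x1, x1 + 50*x2])
J = f.jacobian(Matrix([x0, x1, x2]))
print(J)               # the constant matrix from the Matlab/Octave example
print(ccode(J[2, 2]))  # C source text for a single entry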

Find all alternative basic solutions using existing linear-programming tool

I have to find all basic solutions of some tiny linear-programming problems.
Here's an example (in lp_solve format):
max: x1 + x2;
x1 + x2 <= 1;
x1 <= 0.8;
x2 <= 0.8;
Both optimal basic solutions:
x1 = 0.2, x2 = 0.8
x1 = 0.8, x2 = 0.2
Of course there is a way of finding alternative solutions, but I would really prefer to use existing libraries instead of crafting my own simplex code.
I'm using Python as my programming language, and I'm hoping there's some method in lp_solve or GLPK's C API that can do this.
Thanks.
There is no routine to do that with GLPK, and IMHO it is very unlikely that any real-world solver implements something like that, since it is not very useful in practice and it is certainly not a simple problem.
What is indeed easy is to find one other basic solution once you have reached optimality with the simplex algorithm; that does not mean it is easy to list them all.
Consider an LP whose domain has dimension n; the set S of optimal solutions is a convex polyhedron whose dimension m can be anything from 0 to n-1.
You want a method to list all the basic solutions of the problem, that is, all the vertices of S: as soon as m is greater than 2, you will need to carefully avoid cycling when you move from one basic solution to another.
However, there is (luckily!) no need to write your own simplex code: you can access the internals of the current basis with the GLPK library, and probably with lp_solve too.
Edit: two possible solutions
The better way would be to use another library such as PPL for this.
Assume that you have a problem of the form:
min cx; subject to: Ax <= b
First solve your problem with GLPK; this will give you the optimal value V of the problem. From this point, you can use PPL to get the description of the polyhedron of optimal solutions:
cx = V and Ax <= b
as the convex hull of its extreme points, which correspond to the BFSs you are looking for.
You can (probably) use the GLPK simplex routines. Once you have an optimal BFS, you can get the reduced costs of all non-basic columns using the routine glp_get_col_dual (the basis status of a variable can be obtained with glp_get_col_stat), so you can find a non-basic variable with zero reduced cost. Then, I think you can use the function glp_set_col_stat to change the basis status of this column in order to have it enter the basis.
(You can then iterate this process, as long as you avoid cycling.)
Note that I did not try either of those solutions myself; I think that the first one is by far the best, although it will require that you learn the PPL API. If you wish to go for the second one, I strongly suggest that you send an e-mail to the glpk maintainer (or look at the source code), because I am really not sure it will work as is.
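For problems as tiny as the example above, you can also brute-force the vertices of the optimal face yourself: solve the LP once to get V, then intersect every pair of constraints taken at equality and keep the feasible points with cx = V. A minimal sketch with NumPy/SciPy on the lp_solve example (the 1e-9 tolerances are illustrative):
import itertools
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])                 # max x1 + x2 as min -(x1 + x2)
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.8, 0.8])
V = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).fun

rows = np.vstack([A, -np.eye(2)])          # include x >= 0 as -x <= 0
rhs = np.concatenate([b, np.zeros(2)])
vertices = set()
for i, j in itertools.combinations(range(len(rows)), 2):
    try:
        x = np.linalg.solve(rows[[i, j]], rhs[[i, j]])
    except np.linalg.LinAlgError:
        continue                           # the two constraints are parallel
    # Keep x if it is feasible and lies on the optimal face c.x = V
    if np.all(rows @ x <= rhs + 1e-9) and abs(c @ x - V) <= 1e-9:
        vertices.add(tuple(float(v) for v in np.round(x, 9)))
print(vertices)  # {(0.2, 0.8), (0.8, 0.2)}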