I am looking for a simple way to obtain a lot of "good" solutions to an LP problem (not a MIP) with CPLEX, and not only (one of) the optimal basic solution(s). By "good" solutions I mean that the corresponding objective values are not far from the true optimal value. Such a pool of solutions could help the decision-maker...
More precisely, given a polyhedron Ax <= b with x >= 0 and an objective function z = cx that I want to maximize, after running the LP I obtain the optimal value z*. Then I want to enumerate all the extreme points of the polyhedron given by the set of constraints
Ax <= b
cx >= z* - epsilon
x >= 0
where epsilon is a given tolerance.
I know that CPLEX offers a way to generate a solution pool (see here), but it will not work here because this method is for MIPs: it enumerates all the solutions of an IP (or one solution for every given set of fixed integer variables if the problem is a MIP).
An interesting, efficient way is to visit the solutions adjacent to the optimal basic solution, i.e. all the adjacent extreme points: assuming the polyhedron is not degenerate, for each pair of a basic variable x_B and a non-basic variable x_N, I compute the basic solution obtained when x_B leaves the basis and x_N enters it. Then I discard the solutions with cx < z* - epsilon, and for the others I repeat the procedure, as in the sketch below. [I know that I could improve this algorithm, but this is the general idea.]
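Here is a rough sketch of this breadth-first enumeration over bases in C++ with Eigen (purely illustrative and untested); it assumes the problem has been put in standard form max cx s.t. Ax = b, x >= 0 (slacks already added) and that the polyhedron is non-degenerate:

// Rough sketch: BFS over bases, keeping those within epsilon of z*.
// Assumes standard form (slacks added) and non-degeneracy; untested.
#include <Eigen/Dense>
#include <algorithm>
#include <queue>
#include <set>
#include <vector>

using Basis = std::vector<int>;  // sorted column indices, size = #rows

// Compute the basic solution for basis B; return false if the basis
// matrix is singular or the solution is infeasible (some x_B < 0).
static bool basicSolution(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                          const Basis& B, Eigen::VectorXd& xB) {
    Eigen::MatrixXd AB(A.rows(), (Eigen::Index)B.size());
    for (std::size_t k = 0; k < B.size(); ++k) AB.col(k) = A.col(B[k]);
    Eigen::FullPivLU<Eigen::MatrixXd> lu(AB);
    if (!lu.isInvertible()) return false;
    xB = lu.solve(b);
    return (xB.array() >= -1e-9).all();
}

void enumerateNearOptimal(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                          const Eigen::VectorXd& c, const Basis& optBasis,
                          double zStar, double epsilon) {
    std::set<Basis> seen{optBasis};
    std::queue<Basis> todo;
    todo.push(optBasis);
    while (!todo.empty()) {
        Basis B = todo.front(); todo.pop();
        Eigen::VectorXd xB;
        if (!basicSolution(A, b, B, xB)) continue;
        double z = 0.0;
        for (std::size_t k = 0; k < B.size(); ++k) z += c[B[k]] * xB[k];
        if (z < zStar - epsilon) continue;  // discard and do not expand
        // ... report (B, xB) as a near-optimal extreme point here ...
        for (std::size_t k = 0; k < B.size(); ++k)     // leaving variable
            for (int j = 0; j < (int)A.cols(); ++j) {  // entering variable
                if (std::binary_search(B.begin(), B.end(), j)) continue;
                Basis N = B;
                N[k] = j;
                std::sort(N.begin(), N.end());
                if (seen.insert(N).second) todo.push(N);
            }
    }
}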
The routine CPXpivot of the Callable Library could help with this pivoting operation, but I did not find an equivalent in the C++ API (Concert Technology). Does anyone know whether such an equivalent exists, or can you suggest another way to tackle my original problem?
Thanks a lot :) !
Rémi L.
There is one interesting way to make this suitable for use with the CPLEX solution pool. Use binary variables to encode the current basis, e.g. basis[k] = 0 meaning variable (or row) k is nonbasic and basis[k] = 1 meaning it is basic. Of course we have sum(k, basis[k]) = m (the number of rows). Finally we have x[k] <= basis[k] * upperbound[k] (i.e. if nonbasic then zero, assuming non-negative variables). When we add this to the LP model we end up with a MIP and can enumerate (all or some, optimal or near-optimal) bases using the CPLEX solution pool. See here and here.
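A minimal sketch of this encoding with the Concert C++ API might look as follows (illustrative only: it encodes just the structural variables, row/slack statuses would be handled the same way, and n, m, ub and the elided constraints and objective are placeholders):

// Minimal sketch (Concert C++): encode which structural variables may
// be nonzero and enumerate near-optimal "bases" via the solution pool.
#include <ilcplex/ilocplex.h>
#include <vector>

void enumerateBases(IloEnv env, int n, int m, const std::vector<double>& ub) {
    IloModel model(env);
    IloNumVarArray x(env, n, 0.0, IloInfinity);
    IloBoolVarArray basis(env, n);
    // ... add the original constraints Ax <= b and the objective here ...
    IloExpr card(env);
    for (int k = 0; k < n; ++k) {
        model.add(x[k] <= basis[k] * ub[k]);  // nonbasic => x[k] == 0
        card += basis[k];
    }
    model.add(card == m);                     // m basic variables (m rows)
    card.end();
    IloCplex cplex(model);
    cplex.setParam(IloCplex::Param::MIP::Pool::Intensity, 4);    // exhaustive
    cplex.setParam(IloCplex::Param::MIP::Limits::Populate, 100); // pool size
    cplex.populate();                         // fill the solution pool
    for (int s = 0; s < cplex.getSolnPoolNsolns(); ++s) {
        IloNumArray vals(env);
        cplex.getValues(vals, x, s);          // values of the s-th solution
        // ... inspect/report solution s ...
        vals.end();
    }
}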
Related
What is the difference between Constraint Programming (CP) and Linear Programming (LP) or Mixed Integer Programming (MIP)? I know what LP and MIP are, but I don't understand the difference from CP. Or is CP just the same as MIP and LP? I am a bit confused about this...
This may be a little exhaustive, but I will try to provide enough information to cover a good scope of this topic.
I'll start with an example, so the corresponding information will make more sense.
**Example**: Say we need to sequence a set of tasks on a machine. Each task i has a specific fixed processing time p_i. Each task can be started after its release date r_i, and must be completed before its deadline d_i. Tasks cannot overlap in time. Time is represented as a discrete set of time points, say {1, 2, …, H} (H stands for horizon).
MIP Model:
Variables: Binary variable x_ij represents whether task i starts at time point j
Constraints:
Each task starts at exactly one time point
* ∑_j x_ij = 1 for all tasks i
Respect release date and deadline
* x_ij = 0 for all tasks i and all time points j with (j < r_i) or (j > d_i - p_i)
Tasks cannot overlap
Variant 1:
* ∑_i x_ij ≤ 1 for all time points j; we also need to take processing times into account, and this becomes messy
Variant 2:
* introduce a binary variable b_ik representing whether task i comes before task k; it must be linked to the x_ij, and this becomes messy
MIP models thus consist of a linear/quadratic objective function, linear/quadratic constraints, and binary/integer variables; a code sketch of the example model follows.
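For concreteness, here is a minimal sketch of this time-indexed model with the CPLEX Concert C++ API, using variant 1 for the no-overlap constraint (n, H, p, r, d are illustrative placeholders for the task data):

// Minimal sketch (CPLEX Concert C++) of the time-indexed MIP model.
// Time points run 1..H and are stored 0-based in the arrays.
#include <ilcplex/ilocplex.h>
#include <algorithm>
#include <vector>

void buildMip(IloEnv env, int n, int H, const std::vector<int>& p,
              const std::vector<int>& r, const std::vector<int>& d) {
    IloModel model(env);
    std::vector<IloBoolVarArray> x;  // x[i][j-1] == 1 iff task i starts at j
    for (int i = 0; i < n; ++i) x.emplace_back(env, H);
    for (int i = 0; i < n; ++i) {
        IloExpr one(env);
        for (int j = 1; j <= H; ++j) {
            one += x[i][j - 1];
            if (j < r[i] || j > d[i] - p[i])
                model.add(x[i][j - 1] == 0);  // forbidden start times
        }
        model.add(one == 1);  // each task starts at exactly one time point
        one.end();
    }
    for (int t = 1; t <= H; ++t) {            // variant 1, made precise:
        IloExpr running(env);                 // tasks running at time t are
        for (int i = 0; i < n; ++i)           // those started in (t-p_i, t]
            for (int j = std::max(1, t - p[i] + 1); j <= t; ++j)
                running += x[i][j - 1];
        model.add(running <= 1);              // at most one runs at a time
        running.end();
    }
}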
CP model:
Variables:
Let start_i represent the starting time of task i; it takes a value from the domain {1, 2, …, H}. This immediately ensures that each task starts at exactly one time point.
Constraints:
Respect release date and deadline
r_i ≤ start_i ≤ d_i - p_i
Tasks cannot overlap:
for all tasks i and j: (start_i + p_i ≤ start_j) OR (start_j + p_j ≤ start_i)
and that is it!
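For comparison, a minimal sketch of this CP model with the CP Optimizer C++ API, using interval variables instead of explicit start_i integers (p, r, d are the same illustrative task data as above):

// Minimal sketch (CP Optimizer C++) of the CP model for the example.
#include <ilcp/cp.h>
#include <vector>

void buildCp(IloEnv env, int n, const std::vector<int>& p,
             const std::vector<int>& r, const std::vector<int>& d) {
    IloModel model(env);
    IloIntervalVarArray tasks(env, n);
    for (int i = 0; i < n; ++i) {
        tasks[i] = IloIntervalVar(env, p[i]); // fixed processing time p_i
        tasks[i].setStartMin(r[i]);           // release date r_i
        tasks[i].setEndMax(d[i]);             // deadline d_i
    }
    // one global constraint replaces the messy MIP variants
    model.add(IloNoOverlap(env, IloIntervalSequenceVar(env, tasks)));
    IloCP cp(model);
    if (cp.solve())
        for (int i = 0; i < n; ++i)
            cp.out() << "task " << i << " starts at "
                     << cp.getStart(tasks[i]) << std::endl;
}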
You could probably say that the structure of CP models and MIP models is the same: decision variables, an objective function, and a set of constraints. Both MIP and CP problems are non-convex and make use of systematic and exhaustive search algorithms.
However, we see the major difference in modelling capacity. With CP we have n variables and one constraint. In MIP we have n·H variables and n+H constraints (writing H for the number of time points). This way of mapping global constraints to MIP constraints using binary variables is quite generic.
CP and MIP solve problems in different ways. Both use a divide-and-conquer approach, where the problem to be solved is recursively split into subproblems by fixing values of one variable at a time. The main difference lies in what happens at each node of the resulting search tree. In MIP, one usually solves a linear relaxation of the problem and uses the result to guide the search; this is branch-and-bound search. In CP, logical inferences based on the combinatorial nature of each global constraint are performed; this is implicit enumeration search.
Optimization differences:
A constraint programming engine makes decisions on variables and values and, after each decision, performs a set of logical inferences to reduce the available options for the remaining variables' domains. In contrast, a mathematical programming engine, in the context of discrete optimization, uses a combination of relaxations (strengthened by cutting planes) and branch-and-bound.
A constraint programming engine proves optimality by showing that no better solution than the current one can be found, while a mathematical programming engine uses a lower-bound proof provided by cuts and linear relaxation.
A constraint programming engine doesn't make assumptions about the mathematical properties of the solution space (convexity, linearity, etc.), while a mathematical programming engine requires that the model fall into a well-defined mathematical category (for instance, Mixed Integer Quadratic Programming (MIQP)).
In deciding how you should define your problem, as MIP or CP, the Google Optimization Tools guide suggests:
If all the constraints for the problem must hold for a solution to be feasible (constraints connected by "and" statements), then MIP is generally faster.
If many of the constraints have the property that just one of them needs to hold for a solution to be feasible (constraints connected by "or" statements), then CP is generally faster.
My 2 cents:
There is no single answer as to which approach you should use to formulate your model and solve the problem. CP will probably work better when the number of variables grows large and the constraints are difficult to formulate with linear (in)equalities. If the MIP relaxation is tight, MIP can give better results; if your lower bound doesn't move enough while traversing the MIP tree, you might want to consider CP instead. CP works well when the problem can be represented by global constraints.
Some more reading on MIP and CP:
Mixed-Integer Programming problems have some of the decision variables constrained to integer values (-n … 0 … n) at the optimal solution. This makes it easier to define the problems in terms of a mathematical program. Mathematical programming focuses on special classes of problems and is useful for solving relaxations or subproblems (vertical structure).
Example of a mathematical model:
Objective: minimize c^T x
Constraints: Ax = b (linear constraints)
l ≤ x ≤ u (bound constraints)
some or all x_j must take integer values (integrality constraints)
Or the model could be defined by quadratic functions or constraints (MIQP/MIQCP problems):
Objective: minimize x^T Q x + q^T x
Constraints: Ax = b (linear constraints)
l ≤ x ≤ u (bound constraints)
x^T Q_i x + q_i^T x ≤ b_i (quadratic constraints)
some or all x_j must take integer values (integrality constraints)
The most common algorithm used to solve MIP problems is branch and bound.
CP:
CP stems from problems in AI, Operations Research, and Computer Science; thus it is closely affiliated with computer programming.
* Problems in this area assign symbolic values to variables that need to satisfy certain constraints.
* These symbolic values have a finite domain and can be labelled with integers.
* The CP modelling language is more flexible and closer to natural language.
Quoting one of the IBM docs, Constraint Programming is a technology where:
business problems are modeled using a richer modeling language than what is traditionally found in mathematical optimization
problems are solved with a combination of tree search, artificial intelligence and graph theory techniques
The most common global constraint is the "alldifferent" constraint, which ensures that the decision variables assume some permutation (non-repeating ordering) of integer values. For example, if the problem has 5 decision variables with the domain values 1, 2, 3, 4, 5, they can be ordered in any non-repetitive way.
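A minimal sketch with the CP Optimizer C++ API (illustrative only):

// Minimal sketch (CP Optimizer C++): five variables forced to take a
// permutation of the values 1..5 via alldifferent.
#include <ilcp/cp.h>

int main() {
    IloEnv env;
    IloModel model(env);
    IloIntVarArray v(env, 5, 1, 5);   // five variables with domain {1..5}
    model.add(IloAllDiff(env, v));    // pairwise distinct values
    IloCP cp(model);
    if (cp.solve())
        for (int i = 0; i < 5; ++i)
            cp.out() << cp.getValue(v[i]) << " ";
    env.end();
    return 0;
}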
The answer to this question depends on whether you see MIP and CP as algorithms, as problems, or as scientific fields of study.
E.g., each MIP problem is clearly a CP problem, as the definition of a MIP problem is to find a(n optimal) solution to a set of linear constraints, while the definition of a CP problem is to find a(n optimal) solution to a set of (non-specified) constraints. On the other hand, many important CP problems can straightforwardly be converted to sets of linear constraints, so seeing CP problems through a MIP perspective makes sense as well.
Algorithmically, CP algorithms historically tend to involve more search branching and complex constraint propagation, while MIP algorithms rely heavily on solving the LP relaxation to a problem. There exist hybrid algorithms though (e.g., SCIP, which literally means "Solving Constraint Integer Programs"), and state-of-the-art solvers often borrow techniques from the other side (e.g., no-good learning and backjumps originated in CP, but are now present in MIP solvers as well).
From a scientific field of study point of view, the difference is purely historical: MIP is part of Operations Research, originating at the end of WWII out of a need to optimize large-scale "operations", while CP grew out of logic programming in the field of Artificial Intelligence to model and solve problems declaratively. But there is a good case to be made that both these fields study the same problem. Note that there even is a big shared conference: CPAIOR.
So all in all, I would say MIP and CP are the same in most respects, except for the main techniques used in typical algorithms for each.
LP and MIP are solved using mathematical programming, while there are specific methods to solve constraint programming problems. The following reference is helpful in understanding the differences:
http://ibmdecisionoptimization.github.io/docplex-doc/mp_vs_cp.html
I have to find all basic solutions of some tiny linear-programming problems.
Here's an example (in lp_solve format):
max: x1 + x2;
x1 + x2 <= 1;
x1 <= 0.8;
x2 <= 0.8;
Both optimal basic solutions:
x1 = 0.2, x2 = 0.8
x1 = 0.8, x2 = 0.2
Of course there is a way of finding alternative solutions, but I really prefer using existing libraries instead of crafting my own simplex code.
I'm using Python as my programming language, and I'm hoping there's some method in lp_solve or GLPK's C API that can do this.
Thanks.
There is no routine to do that with glpk, and IMHO it is very unlikely that any real-world solver implements something like that, since it is not very useful in practice and it is certainly not a simple problem.
What is indeed easy is to find one other basic solution once you have reached optimality with the simplex algorithm; that does not mean it is easy to list them all.
Consider an LP whose domain has dimension n; the set S of optimal solutions is a convex polyhedron whose dimension m can be anything from 0 to n-1.
You want a method to list all the basic solutions of the problem, that is all the vertices of S: as soon as m is greater than 2, you will need to carefully avoid cycling when you move from one basic solution to another.
However, there is (luckily!) no need to write your own simplex code: you can access the internals of the current basis with the glpk library, and probably with lpsolve too.
Edit: two possible solutions
The better way would be to use another library such as PPL for this.
Assume that you have a problem of the form:
min cx; subject to: Ax <= b
First solve your problem with glpk; this will give you the optimal value V of the problem. From this point, you can use PPL to get a description of the polyhedron of optimal solutions:
cx = V and Ax <= b
as the convex hull of its extreme points, which correspond to the BFSs you are looking for.
You can (probably) use the glpk simplex routines. Once you have an optimal BFS, you can get the reduced cost associated with each non-basic column using the routine glp_get_col_dual (the basis status of the column can be obtained with glp_get_col_stat), so you can find a non-basic variable with a null reduced cost. Then, I think you can use the function glp_set_col_stat to change the basis status of this column in order to have it enter the basis.
(And then, you can iterate this process as long as you avoid cycling.)
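A rough, untested sketch of that reduced-cost scan with the GLPK C API (the model file name is a placeholder):

// Untested sketch: after an optimal simplex solve, scan non-basic
// structural variables for a zero reduced cost; each such column
// signals an adjacent optimal basic solution. GLPK columns are 1-based.
#include <glpk.h>
#include <cmath>
#include <cstdio>

int main() {
    glp_prob* lp = glp_create_prob();
    glp_read_lp(lp, nullptr, "model.lp");  // placeholder input file
    glp_simplex(lp, nullptr);              // solve to optimality

    int n = glp_get_num_cols(lp);
    for (int j = 1; j <= n; ++j) {
        int stat = glp_get_col_stat(lp, j);
        double dj = glp_get_col_dual(lp, j);  // reduced cost of column j
        if (stat != GLP_BS && std::fabs(dj) < 1e-9)
            std::printf("column %d: non-basic, zero reduced cost -> "
                        "an alternative optimal basis exists\n", j);
    }
    glp_delete_prob(lp);
    return 0;
}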
Note that I did not try either of those solutions myself; I think the first one is by far the best, although it will require learning the PPL API. If you go for the second one, I strongly suggest you e-mail the glpk maintainer (or look at the source code), because I am really not sure it will work as is.
Here's the problem:
I am currently trying to create a control system which is required to find a solution to a series of complex linear equations without a unique solution.
My problem arises because there will only ever be six equations, while there may be upwards of 20 unknowns (usually far more than six). Of course, this will not yield a unique solution through standard Gaussian elimination or by reducing the matrix to reduced row echelon form.
However, I think that I may be able to optimize things further and get a more accurate solution, because I know that each of the unknowns cannot have a value smaller than zero or greater than one, but is free to take any value in between.
Of course, I am trying to create code that would find a correct solution, but in the case that there are multiple combinations that yield satisfactory results, I would want to minimize the sum of (value of unknown × efficiency constant) over all unknowns, i.e. ∑_i x_i·e_i for i = 0 to n, although finding an accurate solution is of greater priority.
Performance is also important, due to the fact that this algorithm may need to be run several times per second.
So, does anyone have any ideas to help me on implementing this?
Edit: you might just want to stick to linear programming with equality and inequality constraints, but here's an interesting exact solution that does not incorporate the constraint that your unknowns lie between 0 and 1.
Here's a powerpoint discussing your problem: http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf
I'll translate your problem into math to make things a bit easier to figure out:
You have a 6x20 matrix A and a vector x with 20 elements. You want to minimize (x^T)e subject to Ax = y. According to the slides, if you were instead minimizing the norm of x, the answer would be x = A^T (A A^T)^(-1) y. I'll take another look at this as soon as I get the chance and see what the solution is to minimizing (x^T)e (i.e. your specific problem).
Edit: I looked in the powerpoint some more and near the end there's a slide entitled "General norm minimization with equality constraints". I am going to switch the notation to match the slide's:
Your problem is that you want to minimize ||Ax - b||, where b = 0, A is your e vector (as a row of weights), and x holds the 20 unknowns. This is subject to Cx = d. Apparently the answer is:
x = (A^T A)^-1 (A^T b - C^T (C (A^T A)^-1 C^T)^-1 (C (A^T A)^-1 A^T b - d))
It's not pretty, but it's not as bad as you might think. There really aren't that many calculations. For example, (A^T A)^-1 only needs to be calculated once, and then you can reuse the answer. And your matrices aren't that big.
Note that I didn't incorporate the constraint that the elements of x are within [0,1].
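For reference, here is a sketch of that formula using Eigen (illustrative only; it assumes A^T A is invertible, which fails if A is literally a single row, so you may need to augment or regularize A; all names are placeholders):

// Sketch (Eigen): the closed-form solution quoted above, computed via
// factorizations instead of explicit inverses.
#include <Eigen/Dense>

Eigen::VectorXd minNormWithEquality(const Eigen::MatrixXd& A,  // weight rows
                                    const Eigen::VectorXd& b,  // target (0 here)
                                    const Eigen::MatrixXd& C,  // 6x20 constraints
                                    const Eigen::VectorXd& d)  // right-hand side
{
    // G represents (A^T A)^-1, applied through solves
    Eigen::MatrixXd AtA = A.transpose() * A;
    Eigen::LDLT<Eigen::MatrixXd> G(AtA);
    Eigen::VectorXd Atb  = A.transpose() * b;
    Eigen::MatrixXd GCt  = G.solve(C.transpose());  // (A^T A)^-1 C^T
    Eigen::VectorXd GAtb = G.solve(Atb);            // (A^T A)^-1 A^T b
    Eigen::MatrixXd S    = C * GCt;                 // C (A^T A)^-1 C^T
    Eigen::VectorXd lambda = S.ldlt().solve(C * GAtb - d);
    return GAtb - GCt * lambda;  // x from the quoted formula
}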
It looks like the solution for what I am doing is with Linear Programming. It is starting to come back to me, but if I have other problems I will post them in their own dedicated questions instead of turning this into an encyclopedia.
My program tries to solve a system of linear equations. In order to do that, it assembles matrix coeff_matrix and vector value_vector, and uses Eigen to solve them like:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the last three equations correctly but also gives values for a and b.
What I would like to achieve is that only the equations which have only one solution would be solved, and the remaining ones (the first equation here) would be retained in the system.
In other words, I'm looking for a method to find out which equations can be solved in a given system of equations at the time, and which cannot because there will be more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want: after the SVD, "the equations which have only one solution will be solved", and the solution is the pseudoinverse applied to the right-hand side. It will also give you the null space (where infinite solutions come from) and the left null space (where inconsistency, i.e. no solution, comes from).
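A minimal sketch of that SVD analysis with Eigen (the threshold and names are illustrative):

// Sketch (Eigen): pseudoinverse solution plus a null-space basis; an
// unknown is uniquely determined exactly when its row of the null-space
// basis is (numerically) zero.
#include <Eigen/Dense>
#include <iostream>

void analyzeSystem(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(
        A, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::VectorXd x = svd.solve(b);  // minimum-norm least-squares solution
    Eigen::Index rank = svd.rank();
    // columns of V beyond the rank span the null space of A
    Eigen::MatrixXd nullBasis = svd.matrixV().rightCols(A.cols() - rank);
    for (Eigen::Index i = 0; i < A.cols(); ++i) {
        bool unique = nullBasis.row(i).norm() < 1e-9;  // illustrative tol
        std::cout << "x[" << i << "] = " << x[i]
                  << (unique ? " (determined)" : " (free)") << '\n';
    }
}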
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();
AFAICS, the null_vector rows corresponding to uniquely determined unknowns are 0s, while the ones corresponding to undetermined unknowns are 1s. I can reproduce this throughout all my examples with the default threshold Eigen has.
However, I'm not sure if I'm doing something correct or just noticed a random pattern.
What you need is to calculate the determinant of your system. If the determinant is 0, then the system has no unique solution (there are either none or infinitely many). If the determinant is very small, a solution exists, but I wouldn't trust the solution found by a computer (it will lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with rows of 0s if there are infinitely many solutions.
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution, or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with non-null determinant. Solve for this matrix and check the solution. If the solution doesn't fit, you have no solution. If the solution fits, the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimension of the matrix, remove rows and columns containing only 0s.
As for Gaussian elimination, it should work directly with non-square matrices. However, this time you should check that the number of non-empty rows (i.e. rows with some non-zero values) is equal to the number of variables. If it's less, you have infinitely many solutions; if it's more, you don't have any.
I'm having trouble with my math:
Assume we have a function F(x, y) = P. My question is: what would be the most efficient way of counting the (x, y) pairs that satisfy this equation? I don't need the coordinates themselves, just how many there are. P is in the range [0; 10^14]. x and y are integers. Is this solved by brute force, or are there advanced tricks (math, or programming in C/C++) to solve it fast enough?
To be more concrete, the function is: x*y - ((x+y)/2) + 1.
x*y - ((x+y)/2) + 1 == P is equivalent to (2x-1)(2y-1) == (4P-3).
So, you're basically looking for the number of factorizations of 4P-3. How to factor a number in C or C++ is probably a different question, but each factorization yields a solution to the original equation. [Edit: in fact two solutions, since if A*B == C then of course (-A)*(-B) == C also].
As far as the programming languages C and C++ are concerned, just make sure you use a type that's big enough to contain 4 * 10^14. int won't do, so try long long.
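A minimal sketch of the resulting divisor count in C++ (illustrative; it assumes P >= 1 so that N = 4P - 3 is positive; note N is odd, so (A+1)/2 is always an integer):

// Count integer solutions of x*y - (x+y)/2 + 1 == P by counting the
// divisors of N = 4P - 3 with trial division: O(sqrt N), about 2*10^7
// steps for P near 10^14.
#include <cstdint>
#include <cstdio>

std::int64_t countSolutions(std::int64_t P) {
    std::int64_t N = 4 * P - 3;               // fits in 64 bits for P <= 10^14
    std::int64_t divisors = 0;
    for (std::int64_t a = 1; a * a <= N; ++a)
        if (N % a == 0)
            divisors += (a * a == N) ? 1 : 2; // divisor pair (a, N/a)
    return 2 * divisors;  // each A*B == N also gives (-A)*(-B) == N
}

int main() {
    // e.g. P == 7 gives N == 25 (divisors 1, 5, 25), hence 6 solutions
    std::printf("%lld\n", (long long)countSolutions(7));
    return 0;
}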
You have a two-parameter function and want to solve it for a given constant.
This is a pretty big field in mathematics, and there are probably dozens of algorithms for solving your equation. One key idea many of them use is that if you find a point where F < P and another where F > P, then somewhere between these two points F must equal P.
One of the most basic algorithms for finding a root (or zero; you can of course convert your problem by taking F' = F - P) is Newton's method. I suggest you start with that and read your way up to more advanced algorithms. This is a fairly large field of study, so happy reading!
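For reference, a minimal sketch of a Newton iteration on F' = F - P in one variable (f and dfdx are illustrative stand-ins for your function and its derivative):

// Newton's method for f(x) == P: repeatedly step by (f(x)-P)/f'(x).
#include <cmath>

double newtonRoot(double (*f)(double), double (*dfdx)(double),
                  double P, double x0, int maxIter = 50) {
    double x = x0;
    for (int i = 0; i < maxIter; ++i) {
        double step = (f(x) - P) / dfdx(x);  // one Newton update
        x -= step;
        if (std::fabs(step) < 1e-12) break;  // converged
    }
    return x;
}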
Wikipedia has a list of root-finding algorithms that you can use as a starting place.