When defining a mixed integer linear programming problem using pulp, one may define an SOS like so:
x1 = LpVariable('x1', cat = LpInteger)
x2 = LpVariable('x2', cat = LpInteger)
prob.sos1['sos'] = x1 + 2*x2
(An "SOS", or specially ordered set, is a special constraint specifying that at most one variable in the set may be nonzero.)
We see that this allows specifying weights for the sos variables (1,2 in this case). Presumably they define the precedence of each variable, i.e. which variables to allow to be nonzero first when branching.
But how exactly are the weights defined?
The underlying solver is coin-or-cbc, and I couldn't find anything about how they use SOS weights.
Weights can be used for branching, although not all solvers use them that way. I believe CBC does, but you'll probably need to check the source code to confirm.
Weights in SOS2 are often needed to specify the ordering (SOS2 has the concept of neighbors). SOS1 does not have this issue.
Finally, if you have good bounds, binary variables are often better than SOS1 variables. Solvers do better bounding and generate better cuts when using binary variables. My rule is: if you can formulate a SOS1 structure with binary variables using good big-M values, use binary variables. If you cannot find good big-M values, consider SOS1.
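The binary-plus-big-M emulation described above can be sketched without a solver. The helper below is a made-up name for illustration; it checks a candidate point against the constraints x_k ≤ M_k·y_k and Σ y_k ≤ 1 that a MIP formulation of SOS1 would carry.

```python
# Solver-free sketch of emulating SOS1 with binaries and big-M bounds.
# For each variable x_k we add a binary indicator y_k and the linking
# constraint x_k <= M_k * y_k; requiring sum(y) <= 1 then allows at most
# one x_k to be nonzero. The bounds M_k are assumed valid upper bounds.

def sos1_via_bigm_feasible(xs, ys, bounds):
    """Check a candidate point against the big-M emulation of SOS1."""
    if any(y not in (0, 1) for y in ys):
        return False
    if sum(ys) > 1:                      # at most one indicator active
        return False
    # x_k <= M_k * y_k forces x_k = 0 whenever y_k = 0 (x_k >= 0 assumed)
    return all(0 <= x <= M * y for x, y, M in zip(xs, ys, bounds))
```

The tighter the bounds M_k, the stronger the LP relaxation, which is why the rule above asks for "good" big-M values.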
Consider two variables sex and age, where sex is treated as a categorical variable and age is treated as a continuous variable. I want to specify the full factorial using the non-interacted age term as baseline (the default for sex##c.age is to omit one of the sex categories).
The best I could come up with, so far, is to write out the factorial manually (and omitting `age' from the regression):
reg y sex#c.age i.sex
This is mathematically equivalent to
reg y sex##c.age
but allows me to directly infer the regression coefficients (and standard errors!) on the interaction term sex*age for both genders.
Is there a way to stick to the more economical "##" notation but make the continuous variable the omitted category?
(I know the manual approach has little notational overhead in the example given here, but the overhead becomes huge when working with triple-interaction terms.)
Not a complete answer, but a workaround: One can use `lincom' to obtain linear combinations of coefficients and standard errors. E.g., one could obtain the combined effects for both sexes as follows:
reg y sex##c.age
forval i = 1/2 {
lincom age + `i'.sex#c.age
}
What is the difference between Constraint Programming (CP) and Linear Programming (LP) or Mixed Integer Programming (MIP)? I know what LP and MIP are, but I don't understand how they differ from CP. Or is CP just the same as MIP and LP? I am a bit confused about this.
This may be a little exhaustive, but I will try to cover a good scope of this topic.
I'll start with an example, and the corresponding information will then make more sense.
**Example**: Say we need to sequence a set of tasks on a machine. Each task i has a specific fixed processing time p_i. Each task can be started after its release date r_i, and must be completed before its deadline d_i. Tasks cannot overlap in time. Time is represented as a discrete set of time points, say {1, 2, …, H} (H stands for horizon).
MIP Model:
Variables: Binary variable x_ij represents whether task i starts at time period j
Constraints:
Each task starts on exactly one time point
∑_j x_ij = 1 for all tasks i
Respect release date and deadline
x_ij = 0 for all tasks i and all time points j with (j < r_i) or (j > d_i − p_i)
Tasks cannot overlap
Variant 1:
∑_i x_ij ≤ 1 for all time points j. We also need to take processing times into account; this becomes messy.
Variant 2:
introduce binary variables b_ik representing whether task i comes before task k; these must be linked to the x_ij, and this becomes messy
MIP models thus consist of a linear/quadratic objective function, linear/quadratic constraints, and binary/integer variables.
CP model:
Variables:
Let start_i represent the starting time of task i; it takes a value from the domain {1, 2, …, H}. This immediately ensures that each task starts at exactly one time point.
Constraints:
Respect release date and deadline
r_i ≤ start_i ≤ d_i − p_i
Tasks cannot overlap:
for all tasks i and j with i ≠ j: (start_i + p_i ≤ start_j) OR (start_j + p_j ≤ start_i)
and that is it!
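As a sanity check, the CP model above can be brute-forced in a few lines. The helper name is made up, and the enumeration replaces what a real CP solver would do by propagation, but the constraints are exactly the ones listed.

```python
from itertools import product

# Brute-force sketch of the CP model above: each task i has a start_i
# drawn from {1,...,H}; we keep the assignments satisfying the
# release/deadline bounds and the pairwise no-overlap disjunction.
# (A real CP solver would prune these domains by propagation instead.)

def cp_schedules(p, r, d, H):
    """Yield feasible (start_1, ..., start_n) tuples for tasks with
    processing times p, release dates r and deadlines d."""
    n = len(p)
    # r_i <= start_i <= d_i - p_i, within the horizon {1,...,H}
    domains = [range(max(r[i], 1), min(d[i] - p[i], H - p[i] + 1) + 1)
               for i in range(n)]
    for starts in product(*domains):
        # no overlap: start_i + p_i <= start_j OR start_j + p_j <= start_i
        ok = all(starts[i] + p[i] <= starts[j] or starts[j] + p[j] <= starts[i]
                 for i in range(n) for j in range(i + 1, n))
        if ok:
            yield starts
```

For two tasks with p = [2, 2], r = [1, 1], d = [5, 5] and H = 5, this yields exactly the two orderings (1, 3) and (3, 1).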
You could probably say that the structure of CP models and MIP models is the same: using decision variables, an objective function and a set of constraints. Both MIP and CP problems are non-convex and make use of some systematic and exhaustive search algorithms.
However, we see the major difference in modeling capacity. With CP we have n variables and one constraint. In MIP we have nm variables and n+m constraints. This way of mapping global constraints to MIP constraints using binary variables is quite generic.
CP and MIP solve problems in different ways. Both use a divide-and-conquer approach, where the problem to be solved is recursively split into subproblems by fixing values of one variable at a time. The main difference lies in what happens at each node of the resulting problem tree. In MIP one usually solves a linear relaxation of the problem and uses the result to guide search. This is a branch and bound search. In CP, logical inferences based on the combinatorial nature of each global constraint are performed. This is an implicit enumeration search.
Optimization differences:
A constraint programming engine makes decisions on variables and values and, after each decision, performs a set of logical inferences to reduce the available options for the remaining variables' domains. In contrast, a mathematical programming engine, in the context of discrete optimization, uses a combination of relaxations (strengthened by cutting planes) and branch and bound.
A constraint programming engine proves optimality by showing that no better solution than the current one can be found, while a mathematical programming engine uses a lower-bound proof provided by cuts and the linear relaxation.
A constraint programming engine makes no assumptions about the mathematical properties of the solution space (convexity, linearity, etc.), while a mathematical programming engine requires that the model fall into a well-defined mathematical category (for instance, Mixed Integer Quadratic Programming, MIQP).
In deciding whether you should define your problem as MIP or CP, the Google Optimization Tools guide suggests:
If all the constraints for the problem must hold for a solution to be feasible (constraints connected by "and" statements), then MIP is generally faster.
If many of the constraints have the property that just one of them needs to hold for a solution to be feasible (constraints connected by "or" statements), then CP is generally faster.
My 2 cents:
There is no one specific answer as to which approach you should use to formulate your model and solve the problem. CP will probably work better when the number of variables grows large and the constraints are difficult to formulate as linear (in)equalities. If the MIP relaxation is tight, MIP can give better results; if your lower bound doesn't move enough while traversing your MIP tree, you might want to consider higher-degree MIP formulations or CP. CP works well when the problem can be represented by global constraints.
Some more reading on MIP and CP:
Mixed-Integer Programming problems have some of the decision variables constrained to integer values (−n … 0 … n) at the optimal solution. This makes it easier to define the problems in terms of a mathematical program. MP focuses on a special class of problems and is useful for solving relaxations or subproblems (vertical structure).
Example of a mathematical model:
Objective: minimize c^T x
Constraints: A x = b (linear constraints)
l ≤ x ≤ u (bound constraints)
some or all xj must take integer values (integrality constraints)
Or the model could be defined with quadratic functions or constraints (MIQP/MIQCP problems):
Objective: minimize x^T Q x + q^T x
Constraints: A x = b (linear constraints)
l ≤ x ≤ u (bound constraints)
x^T Q_i x + q_i^T x ≤ b_i (quadratic constraints)
some or all x must take integer values (integrality constraints)
The most common algorithm used to solve MIP problems is the branch and bound approach.
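As a minimal sketch of branch and bound, here is a 0/1 knapsack solver that uses the fractional (LP) relaxation as its bound at each node. The function name and instance are illustrative, not from any particular solver.

```python
# Branch and bound on a 0/1 knapsack: at each node we compute an upper
# bound from the LP relaxation (greedy fractional knapsack) and prune
# branches whose bound cannot beat the incumbent solution.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # consider items in decreasing value density (makes the bound greedy)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(k, cap):
        """LP-relaxation bound over the remaining items order[k:]."""
        bound = 0.0
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                bound += values[i]
            else:
                bound += values[i] * cap / weights[i]  # fractional item
                break
        return bound

    best = 0

    def branch(k, cap, value):
        nonlocal best
        if k == n or cap == 0:
            best = max(best, value)
            return
        if value + lp_bound(k, cap) <= best:   # prune: bound can't improve
            return
        i = order[k]
        if weights[i] <= cap:                  # branch 1: take item i
            branch(k + 1, cap - weights[i], value + values[i])
        branch(k + 1, cap, value)              # branch 2: skip item i

    branch(0, capacity, 0)
    return best
```

On the classic instance values = [60, 100, 120], weights = [10, 20, 30], capacity = 50, this returns 220.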
CP:
CP stems from problems in AI, Operations Research and Computer Science, and is thus closely affiliated with computer programming. Problems in this area assign symbolic values to variables that need to satisfy certain constraints. These symbolic values have a finite domain and can be labelled with integers. The CP modelling language is more flexible and closer to natural language.
Quoting one of the IBM docs, Constraint Programming is a technology where:
business problems are modeled using a richer modeling language than what is traditionally found in mathematical optimization
problems are solved with a combination of tree search, artificial intelligence and graph theory techniques
The most common global constraint is "alldifferent", which ensures that the decision variables assume some permutation (non-repeating ordering) of integer values. For example, if five decision variables each have the domain {1, 2, 3, 4, 5}, alldifferent forces them to take those values in some non-repetitive order.
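A minimal sketch of what alldifferent does, together with the simplest propagation rule (a variable whose domain shrinks to one value has that value removed from every other domain). Both function names are made up for illustration.

```python
# alldifferent as a satisfaction check plus singleton propagation.

def alldifferent(assignment):
    """True if all assigned values are pairwise distinct."""
    return len(set(assignment)) == len(assignment)

def propagate_singletons(domains):
    """Repeatedly remove fixed (singleton) values from the other domains,
    the most basic inference a CP solver draws from alldifferent."""
    domains = [set(d) for d in domains]
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(domains):
            if len(d) == 1:
                v = next(iter(d))
                for j, other in enumerate(domains):
                    if j != i and v in other:
                        other.discard(v)
                        changed = True
    return domains
```

Starting from domains {1}, {1,2}, {1,2,3}, propagation alone fixes all three variables to 1, 2 and 3; production solvers use much stronger filtering (matching-based algorithms), but the principle is this domain reduction.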
The answer to this question depends on whether you see MIP and CP as algorithms, as problems, or as scientific fields of study.
E.g., each MIP problem is clearly a CP problem, as the definition of a MIP problem is to find a(n optimal) solution to a set of linear constraints, while the definition of a CP problem is to find a(n optimal) solution to a set of (non-specified) constraints. On the other hand, many important CP problems can straightforwardly be converted to sets of linear constraints, so seeing CP problems through a MIP perspective makes sense as well.
Algorithmically, CP algorithms historically tend to involve more search branching and complex constraint propagation, while MIP algorithms rely heavily on solving the LP relaxation to a problem. There exist hybrid algorithms though (e.g., SCIP, which literally means "Solving Constraint Integer Programs"), and state-of-the-art solvers often borrow techniques from the other side (e.g., no-good learning and backjumps originated in CP, but are now present in MIP solvers as well).
From a scientific field of study point of view, the difference is purely historical: MIP is part of Operations Research, originating at the end of WWII out of a need to optimize large-scale "operations", while CP grew out of logic programming in the field of Artificial Intelligence to model and solve problems declaratively. But there is a good case to be made that both these fields study the same problem. Note that there even is a big shared conference: CPAIOR.
So all in all, I would say MIP and CP are the same in most respects, except on the main techniques used in typical algorithms for each.
LP and MIP are solved using mathematical programming, while there are specific methods to solve constraint programming problems. The following reference is helpful in understanding the differences:
http://ibmdecisionoptimization.github.io/docplex-doc/mp_vs_cp.html
Is it possible to constrain parameters in Stata's xttobit to be non-negative? I read a paper where the authors said they did just that, and I am trying to work out how.
I know that you can constrain parameters to be strictly positive by exponentially transforming the variables (e.g. gen x1_e = exp(x1)) and then calling nlcom after estimation (e.g. nlcom exp(_b[x1:_y]), where y is the dependent variable). (That may not be exactly right, but I am pretty sure the general idea is correct. Here is a similar question from Statalist re: nlsur.)
But what would a non-negative constraint look like? I know that one way to proceed is by transforming the variables, for example squaring them. However, I tried this with the author's data and still found negative estimates from xttobit. Sorry if this is a trivial question, but it has me a little confused.
(Note: this was first posted on CV by mistake. Mea culpa.)
Update: It seems I misunderstand what transformation means. Suppose we want to estimate the following random effects model:
y_{it} = a + b*x_{it} + v_i + e_{it}
where v_i is the individual random effect for i and e_{it} is the idiosyncratic error.
From the first answer, would, say, an exponential transformation to constrain all coefficients to be positive look like:
y_{it} = exp(a) + exp(b)*x_{it} + v_i + e_{it}
?
I think your understanding of constraining parameters by transforming the associated variable is incorrect. You don't transform the variable, but rather you fit your model having reexpressed your model in terms of transformed parameters. For more details, see the FAQ at http://www.stata.com/support/faqs/statistics/regression-with-interval-constraints/, and be prepared to work harder on your problem than you might have expected to, since you will need to replace the use of xttobit with mlexp for the transformed parameterization of the tobit log-likelihood function.
With regard to the difference between non-negative and strictly positive constraints: for continuous parameters the two are effectively the same, because (for a reasonable parameterization) a strictly positive parameter can be made arbitrarily close to zero.
I am looking for a simple way to obtain a lot of "good" solutions to an LP problem (not a MIP) with CPLEX, and not only (one of the) optimal basic solution(s). By "good" solutions I mean that the corresponding objective values are not far from the true optimal value. Such a pool of solutions could help the decision-maker...
More precisely, given a polyhedron Ax <= b with x >= 0 and an objective function z = cx that I want to maximize, after solving the LP I obtain the optimal value z*. Then I want to enumerate all the extreme points of the polyhedron given by the set of constraints
Ax <= b
cx >= z* - epsilon
x >= 0
where epsilon is a given tolerance.
I know that CPLEX offers a way to generate a solution pool (see here), but it will not work here because that method is for MIPs: it enumerates all the solutions of an IP (or one solution for every given set of fixed integer variables if the problem is a MIP).
An interesting and efficient way is to visit the solutions adjacent to the optimal basic solution, i.e. all the adjacent extreme points: assuming the polyhedron is not degenerate, for each pair of basic variable x_B and non-basic variable x_N, I compute the basic solution obtained when x_B leaves the basis and x_N enters it. Then I discard the solutions with cx < z* − epsilon, and for the others I repeat the procedure. [I know that I could improve this algorithm, but this is the general idea.]
The routine CPXpivot of the Callable Library could help perform this pivoting operation, but I did not find an equivalent in the C++ API (Concert Technology). Does someone know if such an equivalent exists, or could you propose another way to answer my original problem?
Thanks a lot :) !
Rémi L.
There is one interesting way to make this suitable for use with the Cplex solution pool. Use binary variables to encode the current basis, e.g. basis[k] = 0 meaning nonbasic and basis[k] = 1 indicating variable (or row) k is basic. Of course we have sum(k, basis[k]) = m (number of rows). Finally we have x[k] <= basis[k] * upperbound[k] (i.e. if nonbasic then zero -- assuming positive variables). When we add this to the LP model we end up with a MIP and can enumerate (all or some, optimal or near optimal) bases using the Cplex solution pool. See here and here.
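The epsilon-filter from the question can also be sketched by brute force in two dimensions, intersecting constraint boundaries pairwise instead of pivoting between bases. The helper name is made up; this only works in 2-D, but it shows the cx ≥ z* − epsilon filter on extreme points.

```python
from itertools import combinations

# 2-D sketch of the enumeration idea: intersect every pair of constraint
# boundaries, keep the feasible intersection points (the extreme points),
# and report those whose objective value is within epsilon of the best.

def near_optimal_vertices(A, b, c, eps):
    """Constraints a.x <= b (rows of A and b); maximize c.x; x is 2-D."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    pts = []
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-9:
            continue  # parallel boundaries, no intersection point
        # Cramer's rule for the 2x2 system a1.x = b1, a2.x = b2
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(dot(ai, x) <= bi + 1e-9 for ai, bi in zip(A, b)):
            pts.append(x)
    zstar = max(dot(c, x) for x in pts)
    return [x for x in pts if dot(c, x) >= zstar - eps]
```

On the unit square with objective x + y, a small epsilon keeps only the optimal vertex (1, 1), while epsilon above 1 also admits the adjacent vertices (1, 0) and (0, 1).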
I'm using the LPSolve IDE to solve an LP problem. I have to test the model against about 10 or 20 sets of different parameters and compare them.
Is there any way for me to keep the general model, but to specify the constants as I wish? For example, if I have the following constraint:
A >= [c]*B
I want to test how the model behaves when [c] = 10, [c] = 20, and so on. For now, I'm simply preparing different .lp files via search&replace, but:
a) it doesn't seem too efficient
b) at some point, I need to consider constraints of the form A >= B/[c] (i.e. A >= (1/[c])*B). It seems, however, that LPSolve doesn't recognize the division operator. Is specifying 1/[c] directly each time the only option?
It is not completely clear which format you use with lp_solve. With the CPLEX LP format, for example, there is no better way: you cannot use division for a coefficient (or even multiplication, for that matter), and there is no function to 'include' another file or introduce symbolic names for parameters. It is a very simple language, not suitable for any complex task.
There are several solutions for your problem; it depends on whether you are interested in something fast to implement, or something 'clean', reusable and with a short runtime (of course this is a compromise).
You can generate your lp files from another language, e.g. Python, bash, etc. This is a 'quick and dirty' solution: very slow at runtime, but probably the fastest to implement.
Like every LP solver I know of, lp_solve comes with several modelling interfaces: you can for example use the GNU MathProg format instead of the current one. It recognizes multiplication, division, conditionals, etc. (everything you are looking for; see section 3.1, 'numeric expressions').
Finally, you can use the lp_solve interface directly from another programming language (e.g. C), which is the most flexible option, but it may require a little more work.
See the lp_solve documentation for more details on the supported input formats and the API reference.
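The "generate your lp files from another language" option might look like the sketch below. The template, variable names and model are invented for the example; for a constraint like A >= B/[c], the reciprocal is precomputed on the Python side before it reaches the lp file.

```python
# Quick-and-dirty generation of lp-format model files from Python:
# keep one template with placeholders and write one file per value of [c].
# The constraint A >= [c]*B is written with the coefficient moved to the
# left-hand side, which plain lp format accepts.

LP_TEMPLATE = """\
/* generated model, c = {c} */
max: 3x + 2y;
A: x - {c} y >= 0;
x <= 100;
y <= 100;
"""

def render_model(c):
    """Return the lp-format text for one value of [c].
    For A >= B/[c], precompute the coefficient: render_model(1.0 / c)."""
    return LP_TEMPLATE.format(c=c)

def write_models(c_values, prefix="model"):
    """Write one .lp file per parameter value, e.g. model_10.lp."""
    for c in c_values:
        with open(f"{prefix}_{c}.lp", "w") as f:
            f.write(render_model(c))
```

This replaces the search-and-replace workflow with a single loop, and any arithmetic on the parameter (division included) happens in Python where it is easy.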