Typical optimization problems are of the form:
minimize ax + by
x1 <= x <= x2
y1 <= y <= y2
where we can think of a and b as the costs related to the two variables.
Is it possible to solve an optimization problem where variables have multiple possible interval bounds?
Ex.:
minimize x + y
x1 <= x <= x2 OR x3 <= x <= x4
y1 <= y <= y2 OR y3 <= y <= y4
possibly with different costs in the different ranges? I don't know how to express this latter requirement in a formula.
In the documentation I found only a single lower bound, a single upper bound, and costs related to the variables.
Math tells us this is not convex, so we will never be able to formulate it as a pure LP. We can however introduce binary variables (the problem will become a MIP) and write
c1 ≤ x ≤ c2 OR c3 ≤ x ≤ c4
as
δ1 * c1 ≤ x1 ≤ δ1 * c2
δ2 * c3 ≤ x2 ≤ δ2 * c4
δ1 + δ2 = 1
x1 + x2 = x
δ1, δ2 ∈ {0,1}
I assumed the c's are constants of the problem (if not, i.e. if they are decision variables, the reformulation is similar but slightly more complicated).
Of course if the c's are ordered we can also do:
c1 ≤ x ≤ c4 (these are bounds)
x <= c2 OR x >= c3
I'll leave that as an exercise (hint: again a binary variable is needed)
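For concreteness, here is a minimal sketch of the reformulation in PuLP (the interval constants and the per-range costs a1 and a2 are made up for illustration). Giving x1 and x2 different cost coefficients in the objective is also how you express "different costs in the different ranges" from the question:

from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, value

# hypothetical interval constants and per-range costs
c1, c2, c3, c4 = 1, 3, 6, 9
a1, a2 = 2.0, 0.5  # cost per unit of x in [c1,c2] vs. [c3,c4]

prob = LpProblem("disjoint_bounds", LpMinimize)
d1 = LpVariable("d1", cat=LpBinary)  # delta1 = 1 if x lies in [c1, c2]
d2 = LpVariable("d2", cat=LpBinary)  # delta2 = 1 if x lies in [c3, c4]
x1 = LpVariable("x1", lowBound=0)
x2 = LpVariable("x2", lowBound=0)
x = LpVariable("x")

prob += a1 * x1 + a2 * x2  # objective: per-range costs
prob += d1 * c1 <= x1
prob += x1 <= d1 * c2
prob += d2 * c3 <= x2
prob += x2 <= d2 * c4
prob += d1 + d2 == 1       # exactly one interval is active
prob += x == x1 + x2
prob.solve()
print(value(x), value(d1), value(d2))  # expect 1.0 1.0 0.0 with these numbers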
Can lp_solve return a uniform solution? (Is there a flag or something that will force this kind of behavior?)
Say that I have this:
max: x + y + z + w;
x + y + z + w <= 100;
Results in:
Actual values of the variables:
x 100
y 0
z 0
w 0
However, I would like to have something like:
Actual values of the variables:
x 25
y 25
z 25
w 25
This is an oversimplified example, but the idea is that if the variables have the same coefficient in the objective function, then the result should ideally be more uniform, not everything assigned to one variable and the others getting what is left.
Is this possible to do? (I've tested other libs and some of them seem to do this by default like the solver on Excel or Gekko for Python).
EDIT:
For instance, Gekko already has this behavior without me specifying anything...
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = [m.Var() for i in range(4)]
#upper bounds
x1.upper = 100
x2.upper = 100
x3.upper = 100
x4.upper = 100
# Constraint
m.Equation(x1 + x2 + x3 + x4 <= 100)
# Objective
m.Maximize(x1 + x2 + x3 + x4)
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
>> [24.999999909] [24.999999909] [24.999999909] [24.999999909]
You would need to explicitly model this (as another objective). A solver does nothing automatically: it just finds a solution that obeys the constraints and optimizes the objective function.
Also, note that many linear solvers will produce so-called basic solutions (corner points). So "all variables in the middle" does not come naturally at all.
The example in Gekko ended on [25,25,25,25] because of how the solver took a step towards the solution from an initial guess of [0,0,0,0] (default in Gekko). The problem is under-specified so there are an infinite number of feasible solutions. Changing the guess values gives a different solution.
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = m.Array(m.Var,4,lb=0,ub=100)
x1.value=50 # change initial guess
m.Equation(x1 + x2 + x3 + x4 <= 100)
m.Maximize(x1 + x2 + x3 + x4)
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
Solution with guess values [50,0,0,0]
[3.1593723566] [32.280209196] [32.280209196] [32.280209196]
Here is one method with equality constraints m.Equations([x1==x2,x1==x3,x1==x4]) to modify the problem to guarantee a unique solution that can be used by any linear programming solver.
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = m.Array(m.Var,4,lb=0,ub=100)
x1.value=50 # change initial guess
m.Equation(x1 + x2 + x3 + x4 <= 100)
m.Maximize(x1 + x2 + x3 + x4)
m.Equations([x1==x2,x1==x3,x1==x4])
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
This gives a solution:
[25.000000002] [25.000000002] [25.000000002] [25.000000002]
QP Solution
Switching to a QP solver allows a slight penalty for deviations but doesn't consume a degree of freedom.
from gekko import GEKKO
m = GEKKO()
x1,x2,x3,x4 = m.Array(m.Var,4,lb=0,ub=100)
x1.value=50 # change initial guess
m.Equation(x1 + x2 + x3 + x4 <= 100)
m.Maximize(x1 + x2 + x3 + x4)
penalty = 1e-5
m.Minimize(penalty*(x1-x2)**2)
m.Minimize(penalty*(x1-x3)**2)
m.Minimize(penalty*(x1-x4)**2)
m.solve(disp=False)
print(x1.value, x2.value, x3.value, x4.value)
Solution with QP penalty
[24.999998377] [25.000000544] [25.000000544] [25.000000544]
I have a situation where I want to associate a fixed cost in my optimization function if a summation is even. That is, if
(x1 + x2 + x3)%2 = 0
Is there a way this can be modeled in CPLEX? All the variables are binary, so really, I just want to express x1 + x2 + x3 = 0 OR x1 + x2 + x3 = 2
Yes, you can do this by introducing a new binary variable. (Note that we are modifying the underlying formulation, not tinkering with CPLEX per se for the modulo.)
Your constraints are
x1 + x2 + x3 = 0 OR 2
Let's introduce a new binary variable Y and rewrite the constraint.
Combined Constraint: x1 + x2 + x3 = 0(1-Y) + 2Y
This works because if Y = 0 the constraint forces the sum to 0, and if Y = 1 it forces the sum to 2.
When simplified:
x1+x2+x3-2Y = 0
x_i, Y binary
Addendum
In your specific case, the constraint simplified because one of the rhs terms was 0. More generally, if you had b1 or b2 as the two rhs choices,
the constraint would become
x1 + x2 + x3 = b1(Y) + b2(1-Y).
If you had inequalities in your constraint (<=), you'd use the Big-M trick, and then introduce a new binary variable, thereby making the model choose one of the constraints.
Hope that helps.
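As a concrete sketch, here is the same constraint written in PuLP rather than through the CPLEX APIs (the objective coefficients are made up, just so the solver has something to optimize):

from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, value

prob = LpProblem("even_sum", LpMaximize)
x1 = LpVariable("x1", cat=LpBinary)
x2 = LpVariable("x2", cat=LpBinary)
x3 = LpVariable("x3", cat=LpBinary)
Y = LpVariable("Y", cat=LpBinary)

prob += 2 * x1 + 3 * x2 + x3       # hypothetical objective
prob += x1 + x2 + x3 - 2 * Y == 0  # x1 + x2 + x3 = 0 OR 2
prob.solve()
print(value(x1), value(x2), value(x3), value(Y))  # expect 1.0 1.0 0.0 1.0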
I have an integer-valued bounded variable, call it X. (Somewhere around 0<=X<=100)
I want to have a binary variable, call it Y, such that Y=1 if X >= A and X <= B, otherwise Y=0.
The best I've come up with thus far is the following (where T<x> are introduced binary variables, and M is a large number)
(minimize Y)
(X - A) <= M*Ta
(B - X) <= M*Tb
Y <= Ta
Y <= Tb
Y >= Ta + Tb - 1
(In other words, introducing two binary variables that are true if the variable satisfies the lower and upper bounds of the range, respectively, and setting the result to the binary multiplication of those variables)
This... Works, sort of, but has a couple major flaws. In particular, it's not rigorously defined - Y can be 1 even if X is outside the range.
So: is there a better way to do this? In particular: is there a way to rigorously define it, or if not, a way to at least prevent false positives?
Edit: to clarify: A and B are variables, not parameters.
I think the below works.
(I) A * Y <= X <= B * Y + 100 * (1 - Y)
(II) (X - A) <= M * Ta
(III) (B - X) <= M * Tb
(IV) Y >= Ta + Tb - 1
So X < A makes:
(I) Y = 0
and (II), (III), (IV) do not matter.
X > B makes:
(I) Y = 0
and (II), (III), (IV) do not matter.
A <= X <= B makes:
(I) Y = 1 or Y = 0
(II) Ta = 1
(III) Tb = 1
(IV) Y = 1
Rewriting Ioannis's answer in a linear form by expanding out the multiplication of binary variables with a continuous variable:
Tc <= M*Y
Tc <= A
Tc >= A - M*(1-Y)
Tc >= 0
Tc <= X
Td <= M*Y
Td <= B
Td >= B - M*(1-Y)
Td >= 0
X <= Td + 100*(1-Y)
(X - A + 1) <= M * Ta
(B - X + 1) <= M * Tb
Y >= Ta + Tb - 1
This seems to work, although I have not yet had the chance to expand it out to prove it. Also, some of these constraints may be unnecessary; I have not checked.
The expansion I did was according to the following rule:
If b is a binary variable, and c is a continuous one, and 0 <= c <= M, then y=b*c is equivalent to the following:
y <= M*b
y <= c
y >= c - M*(1 - b)
y >= 0
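As a quick sanity check, the expansion drops straight into a PuLP model (M = 100 and the objective are made up):

from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, value

M = 100  # assumed upper bound on c
prob = LpProblem("product_linearization", LpMaximize)
b = LpVariable("b", cat=LpBinary)
c = LpVariable("c", lowBound=0, upBound=M)
y = LpVariable("y", lowBound=0)  # y stands in for b * c

prob += y - 0.5 * c  # made-up objective: reward y, small penalty on c
prob += y <= M * b
prob += y <= c
prob += y >= c - M * (1 - b)
# y >= 0 is already the variable's lower bound

prob.solve()
print(value(b), value(c), value(y))  # expect 1.0 100.0 100.0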
Let's have two points, (x1, y1) and (x2, y2):
dx = |x1 - x2|
dy = |y1 - y2|
D_manhattan = dx + dy where dx,dy >= 0
I am a bit stuck on how to make x1 - x2 positive for |x1 - x2|. Presumably I introduce a binary variable representing the polarity, but I am not allowed to multiply a polarity switch by x1 - x2, since these are all unknown variables and the product would be quadratic.
If you are minimizing an increasing function of |x| (or maximizing a decreasing function, of course), you can always represent the absolute value of any quantity x in an LP with a variable absx such that:
absx >= x
absx >= -x
It works because the value of absx will 'tend' to its lower bound, so it will reach either x or -x.
On the other hand, if you are minimizing a decreasing function of |x|, your problem is not convex and cannot be modelled as an LP.
For all questions of this kind, it would be much better to add a simplified version of your problem with the objective, as this is often useful for these modelling techniques.
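To make the absx trick concrete, here is a minimal PuLP sketch for the Manhattan-distance question above (the fixed point (3, 4) and the bounds keeping the variable point away from it are made up):

from pulp import LpProblem, LpVariable, LpMinimize, value

prob = LpProblem("manhattan", LpMinimize)
x = LpVariable("x", lowBound=5)
y = LpVariable("y", lowBound=7)
dx = LpVariable("dx")  # ends up equal to |x - 3|
dy = LpVariable("dy")  # ends up equal to |y - 4|

prob += dx + dy  # minimizing pushes dx, dy down onto the absolute values
prob += dx >= x - 3
prob += dx >= 3 - x
prob += dy >= y - 4
prob += dy >= 4 - y
prob.solve()
print(value(x), value(y), value(dx), value(dy))  # expect 5.0 7.0 2.0 3.0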
Edit
What I meant is that there is no general solution to this kind of problem: you cannot in general represent an absolute value in a linear problem, although in practical cases it is often possible.
For example, consider the problem:
max y
y <= |x|
-1 <= x <= 2
0 <= y
it is bounded and has an obvious solution (2, 2), but it cannot be modelled as an LP because the domain is not convex (it looks like the region under an 'M' if you draw it).
Without your model, it is not possible to answer the question I'm afraid.
Edit 2
I am sorry, I did not read the question correctly. If you can use binary variables and if all your distances are bounded by some constant M, you can do something.
We use:
a continuous variable ax to represent the absolute value of the quantity x
a binary variable sx to represent the sign of x (sx = 1 if x >= 0)
Those constraints are slack (always satisfied) when sx = 0, and enforce ax = x when sx = 1, i.e. when x >= 0:
ax <= x + M * (1 - sx)
ax >= x - M * (1 - sx)
Those constraints are slack when sx = 1 (x >= 0), and enforce ax = -x when sx = 0:
ax <= -x + M * sx
ax >= -x - M * sx
This is a variation of the "big M" method that is often used to linearize quadratic terms. Of course you need an upper bound on all the possible values of |x| (which, in your case, will be the maximum possible distance; this will typically exist if your points lie in some bounded area).
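Here is a minimal PuLP sketch of this sign-variable construction, applied to the earlier non-convex example (max y, y <= |x|, -1 <= x <= 2); M = 10 is an assumed bound on |x|:

from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, value

M = 10  # assumed bound on |x|
prob = LpProblem("abs_bigM", LpMaximize)
x = LpVariable("x", lowBound=-1, upBound=2)
ax = LpVariable("ax", lowBound=0)    # represents |x|
sx = LpVariable("sx", cat=LpBinary)  # sx = 1 if x >= 0

prob += ax  # maximize |x|
prob += ax <= x + M * (1 - sx)  # binding when sx = 1: ax = x
prob += ax >= x - M * (1 - sx)
prob += ax <= -x + M * sx       # binding when sx = 0: ax = -x
prob += ax >= -x - M * sx
prob.solve()
print(value(x), value(ax), value(sx))  # expect 2.0 2.0 1.0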
Closest distance between two points (disjoint sets)
In the Bichromatic Closest Pair Problem, we are given a set R of red points and a set B of blue points in the plane. The problem is to return the pair of points, one red and one blue, whose distance is minimum over all red-blue pairs.
My algorithm goes along these lines:
1: If both R and B store only a constant number of points, sort the points by y-coordinate, compute a bichromatic closest pair using a brute-force algorithm, and return.
2: Assume |R| >= |B|, otherwise reverse the roles of R and B.
3: Pick a random element r from R.
4: Find the closest element b in B to r.
5: Compute the left envelope of disks having radius |rb| centered at each of the points in B.
6: Remove all elements of R that are outside the envelope.
I can't come up with a way to implement steps 4, 5, and 6.
Some references mention Graham's algorithm, but I don't think it applies to circles, and I want to know how to implement these steps.
Also, how do I represent the envelope as some kind of region?
It is easy to draw on paper, but implementing is always hard for me. Could someone advise, ideally in C++?
We're still using an L1 metric, right? Let's assume that the red points have x <= 0 and the blue points have x >= 0.
Every red-blue pair has a shortest path with at most three segments: one horizontal from the red point to the y-axis, one vertical on the y-axis, one horizontal from the y-axis to the blue point. It suffices to compute the intersection of the y-axis and the Voronoi diagram for the red points and then locate the projection of each blue point on the y-axis in the diagram (e.g., with binary search :P; it may be faster, however, to sort the blue points by y-coordinate and merge).
|
R****
*
*
******B
|
|y=0
Let's say that a red point R = (x, y) dominates another red point R' = (x', y') iff x' <= x (R is at least as close to the y-axis as R') and |y - y'| <= x - x'.
Lemma 1 If R dominates R', then every point on the y-axis is at least as close to R as R'.
Proof: let P = (x, y'). Essentially by hypothesis, d(R, P) <= d(R', P). For every point Q on the y-axis, d(R', Q) = d(R', P) + d(P, Q) >= d(R, P) + d(P, Q) >= d(R, Q).
|
Q
*
R'**P*****
* |
R |
Lemma 2 Let R1 = (x1, y1), R2 = (x2, y2), R3 = (x3, y3). Assume that y1 <= y2 <= y3. If R3 dominates R1, then R2 dominates R1 or R3 dominates R2. Symmetrically, if R1 dominates R3, then R1 dominates R2 or R2 dominates R3.
Proof: we have x1 <= x3. There are three cases.
Case 1: x2 < x1. In this case, x2 <= x3 and |y3 - y2| <= |y3 - y1| <= x3 - x1 <= x3 - x2, so R3 dominates R2.
Case 2: x1 <= x2 <= x3. In this case, |y2 - y1| + |y3 - y2| = |y3 - y1| <= x3 - x1 = (x2 - x1) + (x3 - x2), so |y2 - y1| <= x2 - x1 and R2 dominates R1 or |y3 - y2| <= x3 - x2 and R3 dominates R2.
Case 3: x3 < x2. In this case, x1 <= x2 and |y2 - y1| <= |y3 - y1| <= x3 - x1 <= x2 - x1, so R2 dominates R1.
Lemma 3 Suppose that R dominates R' and R' dominates R. Then R = R'.
Lemma 4 Suppose that R = (x, y) is dominated by no other red point. Then (0, y) is closer to R than to every other red point.
Together, the lemmas imply that by repeatedly eliminating all R' with some neighbor R (in sorted order by y-coordinate) that dominates R', we obtain the Voronoi cells in order.
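A minimal Python sketch of that elimination pass (my reading of the lemmas, not code from a reference): after sorting the red points, all of which have x <= 0, by y-coordinate, one stack pass removes every dominated point, because Lemma 2 guarantees that any domination is witnessed through neighbors and dominance is transitive.

def dominates(r, rp):
    # r = (x, y) dominates rp = (xp, yp) iff xp <= x and |y - yp| <= x - xp
    (x, y), (xp, yp) = r, rp
    return xp <= x and abs(y - yp) <= x - xp

def undominated(red_points):
    # red points that no other red point dominates, in increasing y order;
    # these give the Voronoi cells on the y-axis in order
    stack = []
    for r in sorted(red_points, key=lambda p: p[1]):  # sort by y-coordinate
        while stack and dominates(r, stack[-1]):      # r eliminates points below it
            stack.pop()
        if not (stack and dominates(stack[-1], r)):   # a point below eliminates r
            stack.append(r)
    return stack

# (-1, 2) dominates (-4, 3): -4 <= -1 and |2 - 3| <= -1 - (-4)
print(undominated([(-1, 2), (-4, 3), (-2, 7)]))  # [(-1, 2), (-2, 7)]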