Converting conditional constraints to linear constraints for Linear Programming - linear-programming

I have two variables: C, which is binary, and X, which is non-negative.
If C = 0 then X = 0;
if C = 1 then X is free (there is no constraint on X).
How should I express this conditional constraint as a linear constraint for LP?

Note that strictly speaking LP models only contain continuous variables. So we assume this is a MIP model to be solved with a MIP solver.
Here are three ways to attack this, depending on the capabilities of the solver.
(1) If you use a solver that supports indicator constraints, you can simply use:
c=0 ==> x=0
(2) For other solvers you can use:
x <= M*c
where M is a (tight as possible) upper bound on x.
(3) Finally, if your solver supports SOS1 (Special Ordered Sets of type 1) sets, you can use:
d = 1-c
{d,x} ∈ SOS1
d >= 0
(1) and (3) have the advantage that no bound is needed. If you have a good, tight bound on x, (2) is a good option.
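As a quick sanity check, option (2) can be verified by brute-force enumeration in plain Python (the bound M = 10 below is an arbitrary assumption, standing in for a tight upper bound on x):

```python
# Sketch: verify by enumeration that the big-M constraint x <= M*c
# enforces "c = 0  =>  x = 0" while leaving x free up to M when c = 1.
# M = 10 is an assumed upper bound on x.

M = 10

def feasible(c, x):
    """True if (c, x) satisfies x <= M*c together with x >= 0."""
    return 0 <= x <= M * c

# c = 0 forces x = 0:
assert feasible(0, 0)
assert not feasible(0, 1)

# c = 1 leaves x unconstrained within its bound:
assert all(feasible(1, x) for x in range(M + 1))
```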

Related

Either-or constraint with one point in common in linear programming - operations-research

Suppose we have a quantity variable x (which is upper-bounded by n), and an indicator variable y which is equal to:
y = 1 if x >= s, where s is a given number;
y = 0 otherwise, i.e., if x is strictly less than s (x < s).
Searching the internet I found this clear explanation https://youtu.be/iQ3PlKKorXA?t=35 which turned out to be the common pattern for either-or constraints. Following the video, the solution would be:
s - x <= (1 - y)*n
x - s <= y*n
And yet x could be equal to s in both cases.
How can we fix this?
You can consider the following two constraints:
x-s ≤ My - ɛ(1-y)
s-x ≤ M(1-y)
where M is a sufficiently large upper bound and ɛ is a small positive constant.
The first enforces the logical constraint "if x >= s then y = 1" and the second "if x < s then y = 0".
Note that big-M constructions like these can often be replaced by indicator constraints, which are supported by several solvers (e.g., CPLEX) with advantages in terms of a more numerically stable model.
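Here is a small plain-Python enumeration check of this formulation (the values M = 100, ε = 1 and s = 10 are arbitrary assumptions; with integer data, ε = 1 is exact):

```python
# Sketch: enumeration check of the epsilon/big-M pair
#   x - s <= M*y - eps*(1 - y)
#   s - x <= M*(1 - y)
# which should give  y = 1  <=>  x >= s  (up to the eps tolerance).
# M, eps and s are assumed values, not from any specific solver.

M, eps, s = 100, 1, 10  # integer data, so eps = 1 is exact

def feasible(x, y):
    return (x - s <= M * y - eps * (1 - y)) and (s - x <= M * (1 - y))

for x in range(0, 2 * s + 1):
    # exactly one value of y is feasible for each x, and it matches x >= s
    assert feasible(x, 1) == (x >= s)
    assert feasible(x, 0) == (x < s)
```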
With many solvers you can use logical constraints.
For example with CPLEX OPL you can write
int s=3;
dvar int x;
dvar boolean y;
subject to
{
y==(x>=s);
}

MILP big-M: variable must remain within defined bounds or be set to 0

I'm modeling some energy systems via MILP / Pyomo.
In that context I'm modeling a bi-directional elec. converter.
Power/energy can flow in both ways:
However, in one direction the power P can only be within the bounds [LB;UB]; otherwise it must be 0.
I use this formulation to ensure that:
LB - P <= LB * x
P <= UB * (1 - x)
x being a binary variable
and it seems to be working...
In the other direction, the power P can only be within [-UB;-LB], otherwise 0,
but I'm struggling to build that kind of constraint; I just can't get the logic behind it.
Any help and explanation would be appreciated.
Thanks a lot,
Max
I would think you have three states (or feasible regions):
p = 0
p ∈ [L,U]
p ∈ [-U,-L]
I don't think that can be done with one binary variable.
You can do something like:
δ1*L-δ2*U ≤ p ≤ δ1*U-δ2*L
δ1+δ2 ≤ 1
δ1,δ2 ∈ {0,1}
This is essentially:
δ1=δ2=0 => p = 0
δ1=1,δ2=0 => p ∈ [L,U]
δ1=0,δ2=1 => p ∈ [-U,-L]
δ1=δ2=1: not allowed
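A quick plain-Python enumeration check of this formulation (the bounds L = 2, U = 5 are arbitrary assumptions):

```python
# Sketch: enumeration check of the two-binary formulation
#   d1*L - d2*U <= p <= d1*U - d2*L,   d1 + d2 <= 1
# The set of feasible p should be exactly {0} ∪ [L,U] ∪ [-U,-L].

L, U = 2, 5  # assumed bounds

def feasible(p):
    for d1 in (0, 1):
        for d2 in (0, 1):
            if d1 + d2 <= 1 and d1 * L - d2 * U <= p <= d1 * U - d2 * L:
                return True
    return False

for p in range(-2 * U, 2 * U + 1):
    assert feasible(p) == (p == 0 or L <= p <= U or -U <= p <= -L)
```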

What does Big M method do in constraints when converting nonlinear programming into linear programming?

I have a question regarding these constraints in the paper. The paper says it used the big-M method in order to turn a non-linear programming model into an LP. I get that the big number M1 is a huge number, but I don't get what M1 really does in the constraints. Could you give me some insight into the use of big M in these constraints?
Below are the constraints with the big number M1.
The paper says these constraints are
when K[m][i] = p[i]*x[m][i],
maximize sum(m in M, i in I) (K[m][i]-c[i]*x[m][i])
K[m][i]-M[1]*(1-x[m][i]) <= p[i]
K[m][i]+M[1]*(1-x[m][i]) >= p[i]
K[m][i]-M[1]*x[m][i] <= 0
it originally looked like this in non linear programming
maximize sum(m in M, i in I)(p[i]-c[i])*x[m][i]
So, basically, converting nonlinear programming into linear programming led to a little change in some decision variables and 3 additional constraints with big number M.
Here is another constraint that includes big number M.
sum (j in J) b[i][j]*p[j]-p[i]<= M[1]*y[i]
which originally looked like
p[i]<sum (j in J) b[i][j]*p[j], if y[i]==1
Here is the last constraint with big number M
(r[m][j]-p[j])*b[i][j]*x[m][i] >= -y[i]*M[1]
which was
(r[m][j]-p[j])*b[i][j]*x[m][i]*(1-y[i]) >= 0
in nonlinear program.
I really want to know what big M does in the model.
Any insight would be appreciated. Thank you.
As you said, the big-M is used to model the non-linear constraint
K[m][i] = p[i] * x[m][i]
in case x is a binary variable. The assumption is that M is an upper bound on K[m][i] and that K[m][i] is a non-negative variable, i.e. 0 <= K[m][i] <= M. Also p is assumed to be non-negative.
Since x[m][i] is binary, we can have two cases in a feasible solution:
x[m][i] = 0. In that case the product p[i] * x[m][i] is 0 and thus K[m][i] should be zero as well. This is enforced by constraint K[m][i] - M * x[m][i] <= 0 which in this case becomes just K[m][i] <= 0. The two other constraints involving M become redundant in this case. For example, the first constraint reduces to K[m][i] <= p[i] + M which is always true since M is an upper bound on K[m][i] and p is non-negative.
x[m][i] = 1. In that case the product p[i] * x[m][i] is just p[i] and the first two constraints involving M become K[m][i] <= p[i] and K[m][i] >= p[i] (which is equivalent to K[m][i] = p[i]). The last constraint involving M becomes K[m][i] <= M which is redundant since M is an upper bound on K[m][i].
So the role of M here is to "enable/disable" certain constraints depending on the value of x.
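The case analysis above can be checked by brute force in plain Python (M = 10 and p = 7 are arbitrary assumed values with 0 <= p <= M):

```python
# Sketch: enumeration check that the three big-M constraints force
# K == p*x for binary x, assuming 0 <= K <= M and 0 <= p <= M.

M = 10
p = 7  # assumed price value

def feasible(K, x):
    return (K - M * (1 - x) <= p      # K[m][i] - M*(1-x[m][i]) <= p[i]
            and K + M * (1 - x) >= p  # K[m][i] + M*(1-x[m][i]) >= p[i]
            and K - M * x <= 0        # K[m][i] - M*x[m][i] <= 0
            and 0 <= K <= M)

for x in (0, 1):
    feas_K = [K for K in range(M + 1) if feasible(K, x)]
    assert feas_K == [p * x]  # the only feasible K is exactly p*x
```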
To model logical conditions you may either use logical (indicator) constraints or rely on big M:
https://www.ibm.com/support/pages/difference-between-using-indicator-constraints-and-big-m-formulation
I tend to suggest logical constraints as the default choice.
In https://www.linkedin.com/pulse/how-opl-alex-fleischer/
let me share the example
How to multiply a decision variable by a boolean decision variable in CPLEX?
// suppose we want b * x <= 7
dvar int x in 2..10;
dvar boolean b;
dvar int bx;
maximize x;
subject to
{
// Linearization
bx<=7;
2*b<=bx;
bx<=10*b;
bx<=x-2*(1-b);
bx>=x-10*(1-b);
// if we use CP we could write directly
// b*x<=7
// or rely on logical constraints within CPLEX
// (b==1) => (bx==x);
// (b==0) => (bx==0);
}
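For completeness, the linearization part of the OPL model above can be checked by enumeration in plain Python (the bounds 2 and 10 come from the declaration of x; the business rule bx <= 7 is left out since it is separate from the linearization itself):

```python
# Sketch: check that the four bound constraints of the OPL linearization
#   2*b <= bx <= 10*b
#   x - 10*(1-b) <= bx <= x - 2*(1-b)
# force bx == b*x for x in 2..10 and binary b.

LB, UB = 2, 10  # bounds on x from the OPL model

def forces_product(b, x):
    feas = [bx for bx in range(0, UB + 1)
            if LB * b <= bx <= UB * b
            and x - UB * (1 - b) <= bx <= x - LB * (1 - b)]
    return feas == [b * x]  # exactly one feasible bx, equal to b*x

assert all(forces_product(b, x) for b in (0, 1) for x in range(LB, UB + 1))
```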

How to include if-statement with decision variables in cplex constraints properly

This is similar to a problem of moving from a decentralized system to a centralized one. Therefore, I want to identify the optimal locations to use as centralized points and the locations that need to be closed. These are my binary decision variables Xi and Yj.
I have two constraints that include an if-statement with decision variables. I have read that in this case I must use logical constraints, so I did.
forall (i in Drives, j in Locations)(Y[j]==1 && Distance[j][i]<=20) => X[i]==0;
I want this constraint to say that if a location j is chosen (Yj = 1) and the distance between i and j is at most 20, then I want to close location i (Xi = 0).
forall (j in Locations, k in Locations)(Y[j]==1 && Distance2[j][k]<=40) => Y[k]==0;
Similarly, this constraint says that if a location j is chosen (Yj = 1) and the distance between two potential locations is at most 40, then I do not want to choose location k (Yk = 0).
The model gives a result but as I check the numbers, it seems to ignore these 2 constraints. So, something is not working properly in the terms used.
The constraints look mostly correct to me. What looks a bit fishy in the second constraint is that you don't exclude the case j==k. If Y[j]==1 then probably Distance2[j][j]==0 and thus the second constraint implies Y[j]==0. A contradiction!
Are you sure that CPLEX claims your solution optimal? Or are you maybe looking at a relaxed solution (which would then be allowed to violate constraints)?
Assuming Distance is data and not a decision variable, your constraints could be written in a more efficient way. For example the first one:
forall(i in Drives)
forall(j in Locations : Distance[j][i] <= 20)
X[i] <= 1 - Y[j]; // If Y[j]==1 then the right-hand side becomes zero and forces X[i]==0
Similarly, the second constraint could be written as
forall(j in Locations)
forall(k in Locations : k != j && Distance2[j][k] <= 40)
Y[k] <= 1 - Y[j]; // If Y[j]==1 then the right-hand side becomes zero and forces Y[k]==0
Can you try with these more explicit constraints or at least with excluding the case j==k in the second constraint?

Integer Linear Programming formulation for Test Cover?

The Test Cover problem can be defined as follows:
Suppose we have a set of n diseases and a set of m tests we can perform to check for symptoms. We also are given the following:
an n×m matrix A where A[i][j] is a binary value representing the result of running the jth test on a patient with the ith disease (1 indicates a positive result, 0 indicates a negative one);
the cost of running test j, c_j; and that
any patient will have exactly one disease
The task is to find a set of tests that can uniquely identify each of the n diseases at minimal cost.
This problem can be formulated as an Integer Linear Program, where we want to minimize the objective function \sum_{j=1}^{m} c_j x_j, where x_j = 1 if we choose to include test j in our set, and 0 otherwise.
My question is:
What is the set of linear constraints for this problem?
Incidentally, I believe this problem is NP-hard (as is Integer Linear Programming in general).
Well, if I am correct you just need to ensure
\sum_j x_j * A[i][j] >= 1 for all i
Let T be the matrix that results from deleting the jth column of A for all j such that x_j = 0.
Then choosing a set of tests that can uniquely distinguish any two diseases is equivalent to ensuring that every row of T is unique.
Observe that two rows k and l are identical if and only if (T[k][j] XOR T[l][j]) = 0 for all j.
So, the constraints we want are
\sum_{j=1}^{m} x_j(A[k][j] XOR A[l][j]) >= 1
for all 1 <= k <= n and 1 <= l <= n such that k != l.
Note that the constraints above are linear, since we can just pre-compute the coefficient (A[k][j] XOR A[l][j]).
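As a sketch, the pairwise-XOR constraints can be generated and checked in plain Python on a tiny made-up instance (the matrix A below is an arbitrary assumption, not data from the question):

```python
# Sketch: build the pairwise-XOR constraints and check a candidate
# test set on a small assumed instance (3 diseases, 3 tests).

# A[i][j] = result of test j on disease i (assumed data)
A = [[1, 0, 1],
     [1, 1, 0],
     [0, 0, 1]]
n, m = len(A), len(A[0])

def separates(x):
    """True if the chosen tests x satisfy every pairwise constraint
    sum_j x_j * (A[k][j] XOR A[l][j]) >= 1 for all k != l."""
    return all(
        sum(x[j] * (A[k][j] ^ A[l][j]) for j in range(m)) >= 1
        for k in range(n) for l in range(k + 1, n)
    )

assert separates([1, 1, 0])      # tests 0 and 1 distinguish all disease pairs
assert not separates([1, 0, 0])  # test 0 alone cannot separate diseases 0 and 1
```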