Either-or constraint with one point in common in linear programming

Suppose we have a quantity variable x (which is upper-bounded by n) and a logic variable y defined by
y = 1 if x >= s, where s is a generic number
y = 0 otherwise, i.e. if x is strictly less than s (x < s)
Searching the internet I found this clear explanation https://youtu.be/iQ3PlKKorXA?t=35 of the common pattern for either-or constraints. Following the video, the solution would be:
s - x <= (1 - y)*n
x - s <= y*n
And yet x could be equal to s in both cases.
How can we fix this?

You can consider the following two constraints:
x-s ≤ My - ɛ(1-y)
s-x ≤ M(1-y)
where M is a sufficiently large upper bound and ɛ is a small positive constant.
The first enforces the implication "if x >= s then y = 1" and the second "if x < s then y = 0".
Note that such implications can also be stated directly as indicator constraints, which are supported by several solvers (e.g., CPLEX) and tend to give a more numerically stable model.
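A quick brute-force sanity check (a sketch with made-up values s=5, M=100, ε=0.5; not a solver model) confirms that, for integer x, the two constraints leave exactly one feasible value of y:

```python
# Check of the epsilon big-M pair:
#   x - s <= M*y - eps*(1 - y)
#   s - x <= M*(1 - y)
# For each integer x, the only feasible y should be 1 when x >= s and 0 when x < s.
# (Fractional x in the gap (s - eps, s) would be cut off entirely; with integer x
# and eps < 1 this never happens.)
s, M, eps = 5, 100, 0.5

def feasible(x, y):
    return (x - s <= M*y - eps*(1 - y)) and (s - x <= M*(1 - y))

for x in range(0, 11):
    ys = [y for y in (0, 1) if feasible(x, y)]
    assert ys == ([1] if x >= s else [0]), (x, ys)
print("indicator logic verified for x in 0..10")
```

In particular x = s is now feasible only with y = 1, which resolves the overlap in the original formulation.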

With many solvers you can use logical constraints.
For example with CPLEX OPL you can write
int s=3;
dvar int x;
dvar boolean y;
subject to
{
y==(x>=s);
}

Related

How to write constraint with sum of absolutes in Integer Programming?

I found a solution for just one term here.
How can we formulate constraints of the form
|x1- a1| +|x2-a2| + .... + |xn - an| >= K
in Mixed Integer Linear Programming ?
Let's write this as:
sum(i, |x[i]-a[i]|) >= K
This is non-convex and needs special treatment. Sorry: this is not very simple.
One way to model this is:
x[i] - a[i] = zplus[i] - zmin[i]
sum(i, zplus[i] + zmin[i]) >= K
zplus[i] <= δ[i]*M[i]
zmin[i] <= (1-δ[i])*M[i]
zmin[i],zplus[i] >= 0
δ[i] ∈ {0,1}
Here the M[i] are sufficiently large constants. Finding good values needs some thought: basically we want M[i] to be an upper bound on |x[i] - a[i]| over the feasible region.
Alternative formulations are possible using indicator constraints and SOS1 variables. Some modeling tools and solvers have direct support for absolute values.
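To see that the decomposition really recovers the absolute values, here is a small stand-alone check (not a solver run; the vectors x, a, the threshold K and the bounds M[i] = 10 are made up):

```python
# For a sample point x, build the zplus/zmin/delta values the model intends
# and check that the constraints hold and that sum(zplus + zmin) equals
# sum |x[i] - a[i]|.
x = [3.0, -1.0, 4.0]
a = [1.0,  2.0, 4.0]
K = 4.0
M = [10.0] * 3                     # upper bounds on |x[i] - a[i]|

zplus = [max(xi - ai, 0) for xi, ai in zip(x, a)]
zmin  = [max(ai - xi, 0) for xi, ai in zip(x, a)]
delta = [1 if xi - ai > 0 else 0 for xi, ai in zip(x, a)]

for i in range(3):
    assert abs((x[i] - a[i]) - (zplus[i] - zmin[i])) < 1e-9
    assert zplus[i] <= delta[i]*M[i] and zmin[i] <= (1 - delta[i])*M[i]

total = sum(zp + zm for zp, zm in zip(zplus, zmin))
assert total == sum(abs(xi - ai) for xi, ai in zip(x, a))
print("total deviation:", total, ">= K:", total >= K)
```

The delta/M pair is what prevents a solver from making both zplus[i] and zmin[i] large at once, which the >= K direction would otherwise exploit.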

MILP big-M: variable must remain within defined boundaries or be set to 0

I'm modeling some energy systems via MILP / Pyomo.
In that context I'm modeling a bi-directional elec. converter.
Power/energy can flow in both ways:
However, in one direction the power P can only be within the boundaries [LB, UB]; otherwise it must be 0.
I use this formulation to ensure that:
LB - P <= LB * x
P <= UB * (1 - x)
x being a binary variable
and it seems to be working...
In the other direction, the power P can only be within [-UB, -LB], otherwise 0,
but I'm struggling hard to ensure that; I just can't work out the logic to build that kind of constraint...
Any help and explanation would be appreciated.
Thanks a lot,
Max
I would think you have three states (or feasible regions):
p = 0
p ∈ [L,U]
p ∈ [-U,-L]
I don't think that can be done with one binary variable.
You can do something like :
δ1*L-δ2*U ≤ p ≤ δ1*U-δ2*L
δ1+δ2 ≤ 1
δ1,δ2 ∈ {0,1}
This is essentially:
δ1=δ2=0 => p = 0
δ1=1,δ2=0 => p ∈ [L,U]
δ1=0,δ2=1 => p ∈ [-U,-L]
δ1=δ2=1: not allowed
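A quick enumeration (with made-up bounds L = 2, U = 8) confirms that the two binaries carve out exactly the three regions listed above:

```python
# For each (d1, d2) combination, report the interval allowed by
#   d1*L - d2*U <= p <= d1*U - d2*L, subject to d1 + d2 <= 1.
L, U = 2.0, 8.0

def interval(d1, d2):
    if d1 + d2 > 1:
        return None                       # forbidden by d1 + d2 <= 1
    return (d1*L - d2*U, d1*U - d2*L)

assert interval(0, 0) == (0.0, 0.0)       # p = 0
assert interval(1, 0) == (2.0, 8.0)       # p in [L, U]
assert interval(0, 1) == (-8.0, -2.0)     # p in [-U, -L]
assert interval(1, 1) is None             # excluded
print("three feasible regions reproduced")
```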

Converting conditional constraints to linear constraints for Linear Programming

I have two variables: C, which is binary, and X, which is non-negative.
If C = 0 then X = 0;
If C = 1 then X = X (meaning there's no constraint on X).
How should I format this conditional constraint into a linear constraint for LP?
Note that strictly speaking LP models only contain continuous variables. So we assume this is a MIP model to be solved with a MIP solver.
Here are three ways to attack this, depending on the capabilities of the solver.
(1) If you use a solver that supports indicator constraints, you can simply use:
c=0 ==> x=0
(2) For other solvers you can use:
x <= M*c
where M is a (tight as possible) upper bound on x.
(3) Finally, if your solver supports SOS1 (Special Ordered Sets of type 1) sets, you can use:
d = 1-c
{d,x} ∈ SOS1
d >= 0
(1) and (3) have the advantage that no bound is needed. If you have a good, tight bound on x, (2) is a good option.
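For option (2), a tiny check (with a made-up bound M = 50) shows how the single inequality implements the implication:

```python
# With x <= M*c and x >= 0, c = 0 forces x = 0, while c = 1 only caps x at M.
M = 50.0

def feasible(x, c):
    return 0 <= x <= M*c

assert not feasible(10.0, 0)     # c = 0 forbids any positive x
assert feasible(0.0, 0)          # c = 0 still allows x = 0
assert feasible(10.0, 1)         # c = 1 leaves x free up to M
assert not feasible(60.0, 1)     # ... which is why a tight M matters
print("big-M implication confirmed")
```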

What does Big M method do in constraints when converting nonlinear programming into linear programming?

I have a question regarding these constraints in a paper. The paper says it used the big-M method to turn a nonlinear programming model into an LP. I get that the big number M1 is a huge number, but I don't get what M1 really does in the constraints. Could you give me some insight into the use of big M here?
Below are the constraints with the big number M1.
The paper says these constraints are, when K[m][i] = p[i]*x[m][i],
maximize sum(m in M, i in I) (K[m][i] - c[i]*x[m][i])
K[m][i] - M1*(1 - x[m][i]) <= p[i]
K[m][i] + M1*(1 - x[m][i]) >= p[i]
K[m][i] - M1*x[m][i] <= 0
It originally looked like this in the nonlinear program:
maximize sum(m in M, i in I) (p[i] - c[i])*x[m][i]
So, basically, converting the nonlinear program into a linear program led to a small change in some decision variables and 3 additional constraints with the big number M.
Here is another constraint that includes the big number M:
sum(j in J) b[i][j]*p[j] - p[i] <= M1*y[i]
which originally looked like
p[i]<sum (j in J) b[i][j]*p[j], if y[i]==1
Here is the last constraint with the big number M:
(r[m][j] - p[j])*b[i][j]*x[m][i] >= -y[i]*M1
which was
(r[m][j] - p[j])*b[i][j]*x[m][i]*(1 - y[i]) >= 0
in the nonlinear program.
I really want to know what big M does in the model.
It would be really appreciated if anyone gives me some insight.
Thank you.
As you said, the big-M is used to model the non-linear constraint
K[m][i] = p[i] * x[m][i]
in case x is a binary variable. The assumption is that M is an upper bound on K[m][i] and that K[m][i] is a non-negative variable, i.e. 0 <= K[m][i] <= M. Also p is assumed to be non-negative.
Since x[m][i] is binary, we can have two cases in a feasible solution:
x[m][i] = 0. In that case the product p[i] * x[m][i] is 0 and thus K[m][i] should be zero as well. This is enforced by constraint K[m][i] - M * x[m][i] <= 0 which in this case becomes just K[m][i] <= 0. The two other constraints involving M become redundant in this case. For example, the first constraint reduces to K[m][i] <= p[i] + M which is always true since M is an upper bound on K[m][i] and p is non-negative.
x[m][i] = 1. In that case the product p[i] * x[m][i] is just p[i] and the first two constraints involving M become K[m][i] <= p[i] and K[m][i] >= p[i] (which is equivalent to K[m][i] = p[i]). The last constraint involving M becomes K[m][i] <= M which is redundant since M is an upper bound on K[m][i].
So the role of M here is to "enable/disable" certain constraints depending on the value of x.
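This case analysis can be replayed mechanically. Below is a small check (with made-up values p = 7, M = 100, and the assumption 0 <= K <= M from above) that, for each value of the binary x, enumerates which integers K satisfy all three big-M constraints:

```python
# For each binary x, find which values of K survive:
#   K - M*(1 - x) <= p
#   K + M*(1 - x) >= p
#   K - M*x       <= 0
#   0 <= K <= M
p, M = 7.0, 100.0

def feasible(K, x):
    return (K - M*(1 - x) <= p and
            K + M*(1 - x) >= p and
            K - M*x <= 0 and
            0 <= K <= M)

Ks0 = [K for K in range(0, 101) if feasible(float(K), 0)]
Ks1 = [K for K in range(0, 101) if feasible(float(K), 1)]
assert Ks0 == [0]     # x = 0: only K = 0 survives
assert Ks1 == [7]     # x = 1: only K = p survives
print("K is forced to p*x in both cases")
```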
To model logical conditions you may either use logical constraints or rely on big M:
https://www.ibm.com/support/pages/difference-between-using-indicator-constraints-and-big-m-formulation
I tend to suggest logical constraints as the default choice.
Let me share an example from https://www.linkedin.com/pulse/how-opl-alex-fleischer/:
How to multiply a decision variable by a boolean decision variable in CPLEX ?
// suppose we want b * x <= 7
dvar int x in 2..10;
dvar boolean b;
dvar int bx;
maximize x;
subject to
{
// Linearization
bx<=7;
2*b<=bx;
bx<=10*b;
bx<=x-2*(1-b);
bx>=x-10*(1-b);
// if we use CP we could write directly
// b*x<=7
// or rely on logical constraints within CPLEX
// (b==1) => (bx==x);
// (b==0) => (bx==0);
}
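The linearization in this snippet can be verified by brute force outside OPL. The following sketch enumerates, for every (b, x) pair, which values of bx satisfy the five inequalities:

```python
# With x in 2..10 and b binary, the five constraints should force bx = b*x
# and, via bx <= 7, forbid b = 1 whenever x > 7.
def feasible_bx(b, x):
    return [bx for bx in range(0, 11)
            if bx <= 7 and 2*b <= bx and bx <= 10*b
            and bx <= x - 2*(1 - b) and bx >= x - 10*(1 - b)]

for x in range(2, 11):
    assert feasible_bx(0, x) == [0]                       # b = 0 forces bx = 0
    assert feasible_bx(1, x) == ([x] if x <= 7 else [])   # b = 1 forces bx = x
print("linearization reproduces bx = b*x with b*x <= 7")
```

The constants 2 and 10 in the linearization are exactly the bounds of x, which is why the last two inequalities collapse to bx = x when b = 1.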

How can I obtain from n a number which raised to the power of itself equals n?

I have an integer n, and I want to obtain a number which raised to the power of itself equals n. How can I do that?
So we want to solve the equation x^x = n. This is quite a different thing from finding the n-th root of n, which is equivalent to solving y^n = n.
The first thing to do when looking at powers is to take logs. Using natural logs,
x ln x = ln n. This is not a standard function, so some form of convergence routine will be needed: we want to solve f(x) = x ln x - ln n = 0. This function is monotonically increasing, a little faster than x itself, so it should be easy to solve.
We can use Newton's method. First find the derivative:
f'(x) = ln x + 1. Starting with a guess x1, the updated guess is x2 = x1 - f(x1) / f'(x1). A few iterations should converge nicely. In my experiment to find x such that x^x = 21 it took
just under 6 iterations to converge.
In pseudocode:
x[0] = ln(n);
for (i = 0; i < 6; ++i) {
    fx = x[i] * ln(x[i]) - ln(n);
    df = ln(x[i]) + 1;
    x[i+1] = x[i] - fx / df;
}
println(x[6], pow(x[6], x[6]));
Your question states two things.
I want to get the nth root of n
This means finding the solution to x^n = n. For this, std::pow(n, 1./n) would be a good fit. Note that 1/n will likely perform integer division if n is an integer, so you may well end up with std::pow(n, 0), which is 1.
I want to obtain a number which raised to the power of itself equals n
This is something completely different. You are trying to solve x^x=n for x. Taking the specific case of n=2 and asking Wolfram Alpha about it, it returns
x = exp(W(log(2)))
where W would be the Lambert W function. As far as I know this is not part of the C++ standard, so you'll likely have to find a library (in source code or dynamically linked) to compute that function for you. GSL might serve. Generalizing to different values of n should be obvious, though.
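For completeness, here is a stdlib-only sketch (in Python rather than C++): a small Newton iteration for W(z) with z > 0, which then solves x^x = n as x = exp(W(ln n)). This is a quick approximation, not a production-grade Lambert W implementation:

```python
import math

def lambert_w(z, iters=50):
    """Newton iteration for f(w) = w*e^w - z, valid for z > 0."""
    w = math.log(z + 1.0)                 # crude starting guess
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w*ew - z) / (ew*(w + 1.0))  # f(w) / f'(w)
    return w

def xx_root(n):
    """Solve x^x = n for n > 1 via x = exp(W(ln n))."""
    return math.exp(lambert_w(math.log(n)))

x = xx_root(2.0)
assert abs(x**x - 2.0) < 1e-9
print(x)                                  # about 1.5596
```

The identity works because ln x = W(ln n) implies x ln x = e^{W(ln n)} * W(ln n) = ln n, which is exactly the defining equation of W.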
TL;DR: use std::pow.
You want to find the 1/n-th power of n. There's a standard function, std::pow, which computes the y-th power of x. It's always a good idea to use a standard function unless you have a strong reason not to.
So it's better to rephrase this question as "do you have any reason not to use std::pow?", and, since you're asking the community, it looks like you don't.