How do I encode Manhattan distance in Mixed Integer Programming?

Let's say we have two points, (x1, y1) and (x2, y2):
dx = |x1 - x2|
dy = |y1 - y2|
D_manhattan = dx + dy where dx,dy >= 0
I am a bit stuck on how to make x1 - x2 positive for |x1 - x2|. Presumably I introduce a binary variable representing the polarity, but I am not allowed to multiply a polarity switch by x1 - x2, since they are all unknown variables and that would result in a quadratic term.

If you are minimizing an increasing function of |x| (or maximizing a decreasing one, of course),
you can always represent the absolute value of any quantity x in an LP with a variable absx such that:
absx >= x
absx >= -x
It works because the value absx will 'tend' to its lower bound, so it will either reach x or -x.
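For example, applied to the Manhattan distance question above (this is an illustration of the same trick, not part of the original answer), the whole model stays linear as long as the objective pushes dx and dy down:
minimize dx + dy
dx >= x1 - x2
dx >= x2 - x1
dy >= y1 - y2
dy >= y2 - y1
At the optimum, dx and dy settle on |x1 - x2| and |y1 - y2|.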
On the other hand, if you are minimizing a decreasing function of |x|, your problem is not convex and cannot be modelled as an LP.
For this kind of question, it would be much better to add a simplified version of your problem including the objective, as that is often useful for these modelling techniques.
Edit
What I meant is that there is no general solution to this kind of problem: you cannot in general represent an absolute value in a linear problem, although in practical cases it is often possible.
For example, consider the problem:
max y
y <= | x |
-1 <= x <= 2
0 <= y
it is bounded and has an obvious solution (2, 2), but it cannot be modelled as an LP because the domain is not convex (it looks like the region under an 'M' if you draw it).
Without your model, it is not possible to answer the question I'm afraid.
Edit 2
I am sorry, I did not read the question correctly. If you can use binary variables and if all your distances are bounded by some constant M, you can do something.
We use:
a continuous variable ax to represent the absolute value of the quantity x
a binary variable sx to represent the sign of x (sx = 1 if x >= 0)
These constraints are non-binding when sx = 0 (i.e. x < 0), and enforce ax = x otherwise:
ax <= x + M * (1 - sx)
ax >= x - M * (1 - sx)
These constraints are non-binding when sx = 1 (i.e. x >= 0), and enforce ax = -x otherwise:
ax <= -x + M * sx
ax >= -x - M * sx
This is a variation of the "big M" method that is often used to linearize quadratic terms (such as products of a binary and a continuous variable). Of course you need an upper bound M on all the possible values of x (in your case, a bound on the distance itself, which is typically available if your points lie in some bounded area).
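As a quick sanity check (my own sketch, not part of the original answer, and assuming M = 100 as the bound), the following C++ snippet computes, for a few values of x with sx chosen to match the sign of x, the interval of ax values allowed by the four constraints above; the interval collapses to exactly |x|:
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const double M = 100.0;                      // assumed upper bound on |x|
    for (double x = -10.0; x <= 10.0; x += 2.5) {
        int sx = (x >= 0.0) ? 1 : 0;             // sx = 1 if x >= 0
        // lower bound from: ax >= x - M*(1-sx) and ax >= -x - M*sx
        double lo = std::max(x - M * (1 - sx), -x - M * sx);
        // upper bound from: ax <= x + M*(1-sx) and ax <= -x + M*sx
        double hi = std::min(x + M * (1 - sx), -x + M * sx);
        std::printf("x = %6.2f   ax in [%6.2f, %6.2f]   |x| = %6.2f\n",
                    x, lo, hi, std::fabs(x));
    }
}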

Related

Given N lines on a Cartesian plane. How to find the bottommost intersection of lines efficiently?

I have N distinct lines on a Cartesian plane. Since the slope-intercept form of a line is y = mx + c, the slope and y-intercept of each line are given. I have to find the y-coordinate of the bottommost intersection of any two lines.
I have implemented an O(N^2) brute-force solution in C++, but it is too slow for N = 10^5. Here is my code:
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<pair<int, int>> lines(n);
    for (int i = 0; i < n; ++i) {
        int slope, y_intercept;
        cin >> slope >> y_intercept;
        lines[i].first = slope;
        lines[i].second = y_intercept;
    }
    double min_y = 1e9;
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            if (lines[i].first == lines[j].first) // since lines are distinct, two lines with the same slope never intersect
                continue;
            double x = (double) (lines[j].second - lines[i].second) / (lines[i].first - lines[j].first); // x-coordinate of intersection point
            double y = lines[i].first * x + lines[i].second; // y-coordinate of intersection point
            min_y = min(y, min_y);
        }
    }
    cout << min_y << endl;
}
How to solve this efficiently?
In case you are considering solving this by means of Linear Programming (LP), it could be done efficiently, since an optimal solution of an LP always lies at an intersection of the constraint equations (a vertex of the feasible region). I will show you how to model this problem as a maximization LP. Suppose you have N = 2 first degree equations to consider:
y = 2x + 3
y = -4x + 7
then you will set up your simplex tableau like this:
x0 x1 x2 x3 b
-2 1 1 0 3
4 1 0 1 7
where column x0 holds the negated coefficient of "x" from the original first degree functions, column x1 holds the coefficient of "y", which is generally +1, x2 and x3 form the N-by-N identity matrix (they are the slack variables), and b holds the value of the independent term. In this case, the constraints use the <= operator.
Now, the objective function should be:
x0 x1 x2 x3
1 1 0 0
To solve this LP, you may use the "simplex" algorithm which is generally efficient.
Furthermore, the result will be an array representing the assigned values to each variable. In this scenario the solution is:
x0 x1 x2 x3
0.6666666667 4.3333333333 0.0 0.0
The pair (x0, x1) represents the point which you are looking for, where x0 is its x-coordinate and x1 is its y-coordinate. There are other results you could get; for example, there could be no solution. You can find out more in books such as "Linear Programming and Extensions" by George Dantzig.
Keep in mind that the simplex algorithm only works for non-negative values of x0, x1, ..., xn. This means that before applying the simplex, you must make sure the optimum point you are looking for is not outside of the feasible region.
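For illustration (my own sketch, not from the original answer), here is how the tableau above could be assembled in C++ for the two example lines; for N lines you simply get N rows and N slack columns:
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // each pair is (m, c) for a line y = m*x + c
    std::vector<std::pair<double, double>> lines = {{2.0, 3.0}, {-4.0, 7.0}};
    const int n = (int)lines.size();
    // columns: x0, x1, n slack variables, b
    std::vector<std::vector<double>> tableau(n, std::vector<double>(2 + n + 1, 0.0));
    for (int i = 0; i < n; ++i) {
        tableau[i][0] = -lines[i].first;      // negated coefficient of x
        tableau[i][1] = 1.0;                  // coefficient of y
        tableau[i][2 + i] = 1.0;              // slack variable for this constraint
        tableau[i][2 + n] = lines[i].second;  // b, the independent term
    }
    for (const auto& row : tableau) {
        for (double v : row) std::printf("%8.2f", v);
        std::printf("\n");
    }
}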
EDIT 2:
I believe making the problem feasible could be done easily in O(N) by shifting the original functions into a new position, adding a big constant to the independent term of each function. Check my comment below. (EDIT 3: this implies it won't work for every possible scenario, though it's quite easy to implement. If you want an exact answer for any possible scenario, check the following explanation of how to convert the infeasible quadrants into the feasible one and back.)
EDIT 3:
A better method to address this problem, one capable of precisely finding the minimum point even when it lies on the negative side of either x or y, is to convert the other 3 quadrants into quadrant 1.
Consider the following generic first degree function template:
f(x) = mx + k
Consider the following generic cartesian plane point template:
p = (p0, p1)
Converting a function and a point from y-negative quadrants to y-positive:
y_negative_to_y_positive( f(x) ) = -mx - k
y_negative_to_y_positive( p ) = (p0, -p1)
Converting a function and a point from x-negative quadrants to x-positive:
x_negative_to_x_positive( f(x) ) = -mx + k
x_negative_to_x_positive( p ) = (-p0, p1)
Summarizing:
Quadrant     sign of corresponding (x, y)    converting f(x) or p to Q1
Quadrant 1   (+, +)                          f(x)
Quadrant 2   (-, +)                          x_negative_to_x_positive( f(x) )
Quadrant 3   (-, -)                          y_negative_to_y_positive( x_negative_to_x_positive( f(x) ) )
Quadrant 4   (+, -)                          y_negative_to_y_positive( f(x) )
Now convert the functions from quadrants 2, 3 and 4 into quadrant 1. Run the simplex 4 times: once on the original quadrant 1 and 3 more times on the converted quadrants 2, 3 and 4. For the cases originating from a y-negative quadrant, you will need to model your simplex as a minimization instance, with negative slack variables, which turns your constraints into the >= format. I will leave the details of modelling the same problem as a minimization task to you.
Once you have the results for each quadrant, you will have at hand at most 4 points (because you might find out, for example, that there is no point in a specific quadrant). Convert each of them back to its original quadrant, reversing the original conversion in an analogous manner.
Now you may freely compare the 4 points with each other and decide which one is the one you need.
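A small sketch of the conversions described above (my own code, assuming a line is stored as its pair (m, k) and a point as (x, y)):
#include <cstdio>

struct Line  { double m, k; };   // y = m*x + k
struct Point { double x, y; };

Line  yNegToYPos(Line f)  { return { -f.m, -f.k }; }  // mirror across the x-axis
Point yNegToYPos(Point p) { return { p.x, -p.y }; }
Line  xNegToXPos(Line f)  { return { -f.m,  f.k }; }  // mirror across the y-axis
Point xNegToXPos(Point p) { return { -p.x, p.y }; }

int main() {
    Line f = {2.0, 3.0};                   // y = 2x + 3
    Line q3 = yNegToYPos(xNegToXPos(f));   // quadrant-3 conversion from the table
    std::printf("converted line: m = %g, k = %g\n", q3.m, q3.k);
}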
EDIT 1:
Note that N, the number of first degree functions, may be as large as you wish.
Other methods for solving this problem could be better.
EDIT 3: Check out the complexity of simplex. In the average case scenario, it works efficiently.
Cheers!

Converting conditional constraints to linear constraints in Linear Programming

I have two variables: x>= 0 and y binary (either 0 or 1), and I have a constant z >= 0. How can I use linear constraints to describe the following condition:
If x = z then y = 1 else y = 0.
I tried to solve this problem by defining another binary variable i and a large-enough positive constant U and adding constraints
y - U * i = 0;
x - U * (1 - i) = z;
Is this correct?
Really there are two classes of constraints that you are asking about:
If y=1, then x=z. For some large constant M, you could add the following two constraints to achieve this:
x-z <= M*(1-y)
z-x <= M*(1-y)
If y=1 then these constraints are equivalent to x-z <= 0 and z-x <= 0, meaning x=z, and if y=0, then these constraints are x-z <= M and z-x <= M, which will not be binding if we select a sufficiently large value of M.
If y=0 then x != z. Technically you could enforce this constraint by adding another binary variable q that controls whether x > z (q=1) or x < z (q=0). Then you could add the following constraints, where m is some small positive value and M is some large positive value:
x-z >= mq - M(1-q)
x-z <= Mq - m(1-q)
If q=1 then these constraints bound x-z to the range [m, M], and if q=0 then these constraints bound x-z to the range [-M, -m].
In practice when using a solver this usually will not actually ensure x != z, because small bounds violations are typically allowed. Therefore, instead of using these constraints I would suggest not adding any constraints to your model to enforce this rule. Then, if you get a final solution with y=0 and x=z, you could adjust x to take value x+epsilon or x-epsilon for some infinitesimally small value of epsilon.
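As a small illustration (my own sketch, not part of the answer), the first pair of constraints can be checked mechanically for a few sample assignments:
#include <cstdio>

// true if (x, y) satisfies x - z <= M*(1-y) and z - x <= M*(1-y)
bool linked(double x, double z, int y, double M) {
    return (x - z <= M * (1 - y)) && (z - x <= M * (1 - y));
}

int main() {
    const double M = 1000.0, z = 5.0;
    std::printf("%d\n", linked(5.0, z, 1, M));  // y = 1, x = z  -> feasible (1)
    std::printf("%d\n", linked(7.0, z, 1, M));  // y = 1, x != z -> infeasible (0)
    std::printf("%d\n", linked(7.0, z, 0, M));  // y = 0, any x  -> feasible (1)
}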
So I changed the conditional constraint to
if x = z then y = 0 else y = 1
Then the answer becomes
x - z <= M * y;
x - z >= m * y;
where M is a sufficiently large positive number and m is a sufficiently small positive number.

Linear Programming - variable that equals the sign of an expression

I am trying to write a linear program and need a variable z that equals the sign of x-c, where x is another variable, and c is a constant.
I considered z = (x-c)/|x-c|. Unfortunately, if x=c, then this creates division by 0.
I cannot use z=x-c, because I don't want to weight it by the magnitude of the difference between x and c.
Does anyone know of a good way to express z so that it is the sign of x-c?
Thank you for any help and suggestions!
You can't model z = sign(x-c) exactly with a linear program (because the constraints in an LP are restricted to linear combinations of variables).
However, if you are willing to convert your linear program into a mixed integer program, you can model the sign with the following two constraints:
L*b <= x - c <= U*(1-b)
z = 1 - 2*b
Where b is a binary variable, and L and U are lower and upper bounds on the quantity x-c. If b = 0, we have 0 <= x - c <= U and z = 1. If b = 1, we have L <= x - c <= 0 and z = 1 - 2*1 = -1.
You can use a solver like Gurobi to solve mixed integer programs.
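A small numeric illustration (my own sketch; the values of c, L and U are made up) of how b and z behave under these constraints:
#include <cstdio>
#include <initializer_list>

int main() {
    const double c = 3.0, L = -100.0, U = 100.0;   // assumed bounds on x - c
    for (double x : {10.0, 3.0, -4.0}) {
        double d = x - c;
        if (d < L || d > U) continue;              // outside the assumed bounds
        // b = 0 requires 0 <= x - c <= U; b = 1 requires L <= x - c <= 0
        int b = (d >= 0.0) ? 0 : 1;
        int z = 1 - 2 * b;
        std::printf("x = %5.1f   x - c = %6.1f   b = %d   z = %+d\n", x, d, b, z);
    }
}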
For k » 1, tanh(k*x) is a smooth approximation of the sign function.
Also x / sqrt(x^2 + ε) approaches sign(x) when ε → 0.
These two approximations don't have the division-by-0 issue, but now you must tune a parameter.
In some languages (e.g. C++ / C) you can simply write something like this:
double sgn(double x)
{
    return (x > 0.0) - (x < 0.0);
}
Anyway consider that many environments / languages already have a sign function, e.g.
Sign[x] in Mathematica
sign(x) in Matlab
Math.signum(x) in Java
sign(1, x) in Fortran
sign(x) in R
Pay close attention to what happens when x is equal to 0 (e.g. the Fortran function will return 1, with other languages you'll get 0).

Bresenham's algorithm

How to find the decision parameter for drawing different functions like parabola, sine curve, bell curve?
Please explain the approach: why do we sometimes multiply by a constant?
For example:
in the case of an ellipse, p = a^2(d1 - d2) and p = b^2(d1 - d2) for the upper and lower half regions respectively, where a, b are constants
in the case of a line, p = deltax(d1 - d2), where p is the decision parameter, d1, d2 are distances, and deltax is a constant equal to xend - xstart
Why not just take (d1 - d2) as the parameter?
Bresenham's algorithm as stated by the OP is a bit amiss, but I assume the following.
The decision parameter could use d1 - d2 without scaling by some constant, as you suggest, were it not for the initialization of the decision parameter; the initialization is not generally scalable by that constant.
// code from http://en.wikipedia.org/wiki/Bresenham's_line_algorithm
plotLine(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    D = 2*dy - dx               // Not scalable by 2
    plot(x0, y0)
    y = y0
    for x from x0+1 to x1
        if D > 0
            y = y + 1
            plot(x, y)
            D = D + (2*dy - 2*dx)   // Scalable by 2
        else
            plot(x, y)
            D = D + (2*dy)          // Scalable by 2
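For completeness, here is the pseudocode above as a runnable C++ sketch (my translation, valid for 0 <= slope <= 1 and x0 < x1), traced on a concrete line so you can see how D is initialized and updated:
#include <cstdio>

void plotLine(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0;
    int dy = y1 - y0;
    int D = 2 * dy - dx;                 // initialization: not just a scaled d1 - d2
    std::printf("(%d,%d)\n", x0, y0);
    int y = y0;
    for (int x = x0 + 1; x <= x1; ++x) {
        if (D > 0) {
            y = y + 1;
            std::printf("(%d,%d)\n", x, y);
            D = D + (2 * dy - 2 * dx);   // both updates carry the factor of 2
        } else {
            std::printf("(%d,%d)\n", x, y);
            D = D + (2 * dy);
        }
    }
}

int main() {
    plotLine(0, 0, 6, 3);                // plots the pixels of a slope-1/2 line
}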

Probability density function from a paper, implemented using C++, not working as intended

So I'm implementing a heuristic algorithm, and I've come across this function.
I have an array of 1 to n (0 to n-1 in C, whatever). I want to choose a number of elements I'll copy to another array. Given a parameter y (0 < y <= 1), I want a distribution of numbers whose average is y * n. That means that whenever I call this function, it gives me a number between 0 and n, and the average of these numbers is y*n.
According to the author, "l" is a random number: 0 < l < n. My test code currently generates 0 <= l <= n. I had the right code, but I've been messing with this for hours now and am too lazy to change it back.
So I coded the first part of the function, for y <= 0.5.
I set y to 0.2 and n to 100. That means it should return a number between 0 and 99, with average 20.
But the results aren't between 0 and n; they are small floats, and the bigger n is, the smaller the float.
This is the C test code. "x" is the "l" parameter.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 100;
    float y = 0.2;
    float n_copy;
    for (int i = 0; i < 20; i++)
    {
        float x = (float) (rand() / (float) RAND_MAX); // 0 <= x <= 1
        x = x * n;                                     // 0 <= x <= n
        float p1 = (1 - y) / (n * y);
        float p2 = (1 - (x / n));
        float exp = (1 - (2 * y)) / y;
        p2 = pow(p2, exp);
        n_copy = p1 * p2;
        printf("%.5f\n", n_copy);
    }
    return 0;
}
And here are some results (5 decimals truncated):
0.03354
0.00484
0.00003
0.00029
0.00020
0.00028
0.00263
0.01619
0.00032
0.00000
0.03598
0.03975
0.00704
0.00176
0.00001
0.01333
0.03396
0.02795
0.00005
0.00860
The article is:
http://www.scribd.com/doc/3097936/cAS-The-Cunning-Ant-System
pages 6 and 7.
or search "cAS: cunning ant system" on google.
So what am i doing wrong? i don't believe the author is wrong, because there are more than 5 papers describing this same function.
all my internets to whoever helps me. This is important to my work.
Thanks :)
You may misunderstand what is expected of you.
Given a (properly normalized) PDF, and wanting to draw random numbers consistent with it, you form the cumulative distribution function (CDF) by integrating the PDF, then invert the CDF, and use a uniform random variate as the argument of the inverted function.
A little more detail.
f_s(l) is the PDF, and has been normalized on [0,n).
Now you integrate it to form the CDF
g_s(l') = \int_0^{l'} dl f_s(l)
Note that this is a definite integral to an unspecified endpoint which I have called l'. The CDF is accordingly a function of l'. Assuming we have the normalization right, g_s(n) = 1.0. If this is not so, we apply a simple coefficient to fix it.
Next invert the CDF and call the result G^{-1}(x). For this you'll probably want to choose a particular value of gamma.
Then throw a uniform random number on [0,1), and use it as the argument x to G^{-1}. The result will lie in [0,n), and will be distributed according to f_s.
Like Justin said, you can use a computer algebra system for the math.
dmckee is actually correct, but I thought that I would elaborate more and try to explain away some of the confusion here. I could definitely fail. f_s(l), the function you have in your pretty formula above, is the probability distribution function. It tells you, for a given input l between 0 and n, the probability that l is the segment length. The sum (integral) for all values between 0 and n should be equal to 1.
The graph at the top of page 7 confuses this point. It plots l vs. f_s(l), but you have to watch out for the stray factors it puts on the sides. You will notice that the values on the bottom go from 0 to 1, but there is a factor of ×n on the side, which means that the l values actually go from 0 to n. Also, on the y-axis there is a ×1/n, which means these values don't actually go up to about 3; they go up to about 3/n.
So what do you do now? Well, you need to solve for the cumulative distribution function by integrating the probability distribution function over l, which actually turns out to be not too bad (I did it with the Wolfram Mathematica Online Integrator by using x for l and using only the equation for y <= .5). That, however, was an indefinite integral, and you really need to integrate along x from 0 to l. If we set the resulting equation equal to some variable (z for instance), the goal now is to solve for l as a function of z. z here is a random number between 0 and 1. You can try using a symbolic solver for this part if you would like (I would). Then you have not only achieved your goal of being able to pick random ls from this distribution, you have also achieved nirvana.
A little more work done
I'll help a little bit more. I tried doing what I said above for y <= .5, but the symbolic algebra system I was using wasn't able to do the inversion (some other system might be able to). However, I then decided to try using the equation for .5 < y <= 1. This turns out to be much easier. If I change l to x in f_s(l) I get
y / n / (1 - y) * (x / n)^((2 * y - 1) / (1 - y))
Integrating this over x from 0 to l I got (using Mathematica's Online Integrator):
(l / n)^(y / (1 - y))
It doesn't get much nicer than that with this sort of thing. If I set this equal to z and solve for l I get:
l = n * z^(1 / y - 1) for .5 < y <= 1
One quick check is for y = 1. In this case, we get l = n no matter what z is. So far so good. Now, you just generate z (a random number between 0 and 1) and you get an l that is distributed as you desired for .5 < y <= 1. But wait, looking at the graph on page 7 you notice that the probability distribution function is symmetric. That means that we can use the above result to find the value for 0 < y <= .5. We just change l -> n-l and y -> 1-y and get
n - l = n * z^(1 / (1 - y) - 1)
l = n * (1 - z^(1 / (1 - y) - 1)) for 0 < y <= .5
Anyway, that should solve your problem unless I made some error somewhere. Good luck.
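Putting this together (my own sketch, using only the formulas derived above), inverse-transform sampling looks like this; the empirical mean comes out near y * n, as the question requires:
#include <cmath>
#include <cstdio>
#include <cstdlib>

// draw one l from the distribution, given y and n, using the inverted CDF above
double sample_l(double y, double n) {
    double z = (std::rand() + 1.0) / (RAND_MAX + 2.0);       // uniform in (0,1)
    if (y > 0.5)
        return n * std::pow(z, 1.0 / y - 1.0);               // .5 < y <= 1
    return n * (1.0 - std::pow(z, 1.0 / (1.0 - y) - 1.0));   // 0 < y <= .5
}

int main() {
    const double y = 0.2, n = 100.0;
    const int trials = 100000;
    double sum = 0.0;
    for (int i = 0; i < trials; ++i)
        sum += sample_l(y, n);
    std::printf("empirical mean of l: %f (expected about %f)\n", sum / trials, y * n);
}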
Given that, for any values of l, y, n as described, the terms you call p1 and p2 are both in [0,1) and exp is in [1, ∞), making pow(p2, exp) also in [0,1), I don't see how you'd ever get an output in the range [0,n).