Prove that Bellman-Ford maximises the objective function - linear-programming

Prove that Bellman-Ford, when applied to the constraint graph of a linear programming problem with constraints of the form Xj - Xi <= Wij, maximizes the function X1 + ... + Xn subject to the constraints Xj - Xi <= Wij and Xi <= 0.
I am completely stuck here. Please give some hints to guide me through the solution.
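For reference, the construction I am referring to is the standard difference-constraints graph: an auxiliary source vertex 0 with a zero-weight edge to every Xi, one edge i -> j of weight Wij per constraint Xj - Xi <= Wij, and Xi taken as the shortest-path distance from the source. A minimal Python sketch of that setup (the encoding of constraints as (i, j, w) triples is my own assumption):

# Minimal sketch of the difference-constraints construction (assumption:
# constraints are given as triples (i, j, w) meaning x_j - x_i <= w, with i, j in 1..n).
def bellman_ford_solution(n, constraints):
    # Auxiliary source 0 with zero-weight edges to every variable vertex,
    # plus one edge per constraint.
    edges = [(0, i, 0) for i in range(1, n + 1)]
    edges += [(i, j, w) for (i, j, w) in constraints]

    INF = float("inf")
    dist = [INF] * (n + 1)
    dist[0] = 0
    # Standard Bellman-Ford relaxation rounds.
    for _ in range(n):
        for (u, v, w) in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # A further possible improvement means a negative cycle: the system is infeasible.
    for (u, v, w) in edges:
        if dist[u] + w < dist[v]:
            return None
    # x_i = dist(0, i); note x_i <= 0 because of the zero-weight edge 0 -> i.
    return dist[1:]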

Related

Convert a Sage multivariate polynomial into a univariate one

I am trying to implement Schoof's algorithm in Sage.
I have multivariate polynomials f(x, y) with coefficients in
a finite field F_q, for which the variable y only appears
in even powers (for example f(x, y) = x * y^4 + x^3 * y^2 + x).
Furthermore, using an equation y^2 = x^3 + A * x + B,
I want to replace powers of y^2 in the polynomial with
corresponding powers of x^3 + A * x + B,
so that the polynomial depends only on x.
My idea was this:
J = ideal(f, y ** 2 - (x ** 3 + A * x + B))
f = R(J.elimination_ideal(y).gens()[0])
(R is a univariate polynomial ring).
My problem is, sometimes it works, sometimes it does not
(I don't know why).
Is there a better solution or standard solution?
For counting points on elliptic curves,
the following resources might be useful:
SageMath constructions: algebraic geometry: point counting on curves
SageMath thematic tutorial on Schoof-Elkies-Atkin point counting
Regarding polynomials, consider providing
an example that works and one that fails.
That might help analyse the problem and answer the question.
Note that Ask Sage
has more experts ready to answer SageMath questions.
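If the goal is just to rewrite the even powers of y via the curve equation, another option besides the elimination ideal is to take the polynomial remainder of f modulo y^2 - (x^3 + A*x + B) with respect to y. A rough sketch of that idea, using SymPy over the integers as a stand-in for the Sage setting (the polynomial f is the one from the question; A and B are set to arbitrary example values):

# Illustration of the "reduce modulo the curve equation" idea with SymPy
# standing in for Sage; A and B are arbitrary example coefficients.
from sympy import symbols, rem, expand

x, y = symbols("x y")
A, B = 2, 3

f = x * y**4 + x**3 * y**2 + x          # example from the question
curve = y**2 - (x**3 + A * x + B)

# Polynomial remainder of f by the curve relation, with y as the main
# variable: every y^2 gets replaced by x^3 + A*x + B.
g = expand(rem(f, curve, y))
print(g)   # a polynomial in x only, since y appears only in even powers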

Linearization of non-linear program or other solver

I have a linear program that represents a time series of actions (the variables are ordered). The objective is a minimization. For every variable I have the constraint
Xi <= max_value.
I want the constraint to represent the real world more precisely, so I need to change that constraint as follows:
if sum(X1,...,Xi-1) > some_value then Xi <= max_value_1, else Xi <= max_value_2.
Is there a way to make it linear?
If not, is there a ready-made solver (like ortools) for this?
Thanks.
Provided your variables are integral, or can be scaled up to be integral, you can use CP-SAT for this.
This is explained in the documentation.
if sum(X1,...,Xi-1) > some_value then Xi <= max_value_1, else Xi <= max_value_2.
y(i) = sum(X(1),...,X(i-1))
y(i) <= (1-δ(i))*some_value + δ(i)*M
y(i) >= δ(i)*(some_value+0.00001) - (1-δ(i))*M
X(i) <= max_value_1*δ(i) + max_value_2*(1-δ(i))
δ(i) ∈ {0,1}
Here M is a large enough constant, y(i) is an extra continuous variable, and δ(i) is an extra binary variable; the 0.00001 turns the strict inequality y(i) > some_value into a non-strict one.
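With CP-SAT you do not even need to choose M explicitly: the implication can be written with a Boolean variable and OnlyEnforceIf. A rough sketch (the bounds and the values of some_value, max_value_1 and max_value_2 are made up):

# Rough CP-SAT sketch of the if-then constraint; all numeric values are made up.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
n = 5
some_value, max_value_1, max_value_2 = 10, 3, 7

x = [model.NewIntVar(0, 100, f"x{i}") for i in range(n)]

for i in range(1, n):
    prefix = sum(x[:i])                      # sum(X1, ..., X(i-1))
    d = model.NewBoolVar(f"d{i}")            # d = 1  <=>  prefix > some_value
    model.Add(prefix > some_value).OnlyEnforceIf(d)
    model.Add(prefix <= some_value).OnlyEnforceIf(d.Not())
    model.Add(x[i] <= max_value_1).OnlyEnforceIf(d)
    model.Add(x[i] <= max_value_2).OnlyEnforceIf(d.Not())

# ... add the real objective here, then solve:
solver = cp_model.CpSolver()
status = solver.Solve(model)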

Implementation of an objective with unknown number of variables

I'm quite new to CPLEX with C++.
I'm trying to model a piecewise-linear objective function (in LaTeX notation):
F = \sum_{i=1}^{N} f_i = \sum_{i=1}^{N} K_i (x_i - x_i^*),
where K_i = K_i^1 if x_i < x_i^*, and K_i = K_i^2 if x_i > x_i^*,
that is, the sum of deviations with weight coefficients, where the weights depend on the sign of the deviation and on the index i.
The question is: how do I define this kind of objective function when modelling by rows, in terms of the Ilo classes or something else?
There are some examples on the internet where an objective function is written out with 2-3 variables, for example
model.add(IloMaximize(env, x[0] + 2 * x[1] + 3 * x[2]));
but what if I have many more terms? And I don't understand how to take the weight coefficients into account in such a representation.
Thanks.
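One common reformulation, assuming the intended cost is a weighted absolute deviation (weight K_i^1 per unit below x_i^*, K_i^2 per unit above, both nonnegative so the objective stays convex), is to split each deviation x_i - x_i^* into nonnegative parts d_i^+ and d_i^- and build the objective in a loop; in the C++ Concert API you would accumulate the same sum in an IloExpr inside a for loop. A rough sketch using docplex (CPLEX's Python API) with made-up data:

# Rough docplex sketch of the weighted-deviation objective; N, xstar, K1, K2
# and the variable bounds are made-up illustration data.
from docplex.mp.model import Model

N = 4
xstar = [1.0, 2.0, 0.5, 3.0]   # targets x_i^*
K1 = [2.0, 1.0, 4.0, 1.5]      # weight when x_i < x_i^*
K2 = [1.0, 3.0, 2.0, 0.5]      # weight when x_i > x_i^*

m = Model(name="weighted_deviations")
x = m.continuous_var_list(N, lb=0, ub=10, name="x")
dplus = m.continuous_var_list(N, lb=0, name="dplus")    # deviation above the target
dminus = m.continuous_var_list(N, lb=0, name="dminus")  # deviation below the target

for i in range(N):
    # split x_i - x_i^* into its positive and negative parts
    m.add_constraint(x[i] - xstar[i] == dplus[i] - dminus[i])

# The objective is just a sum built in a loop, however many terms there are.
m.minimize(m.sum(K2[i] * dplus[i] + K1[i] * dminus[i] for i in range(N)))
# ... add the real constraints of the model here, then call m.solve()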

How do I encode Manhattan distance in Mixed Integer Programming

Let's have two points, (x1, y1) and (x2, y2):
dx = |x1 - x2|
dy = |y1 - y2|
D_manhattan = dx + dy where dx,dy >= 0
I am a bit stuck on how to make x1 - x2 positive for |x1 - x2|. Presumably I introduce a binary variable representing the polarity, but I am not allowed to multiply a polarity switch by x1 - x2, since they are all unknown variables and that would result in a quadratic term.
If you are minimizing an increasing function of |x| (or maximizing a decreasing function, of course),
you can always have the absolute value of any quantity x in an LP as a variable absx such that:
absx >= x
absx >= -x
It works because the value absx will 'tend' to its lower bound, so it will either reach x or -x.
On the other hand, if you are minimizing a decreasing function of |x|, your problem is not convex and cannot be modelled as an LP.
For all these kinds of questions, it would be much better to add a simplified version of your problem with the objective, as that is often useful for these modelling techniques.
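For the minimization case, the absx trick above could look like this for a Manhattan distance (a rough PuLP sketch; the fixed second point, the box bounds and the extra constraint are made up so the optimum is not trivial):

# Rough PuLP sketch: minimize the Manhattan distance to a fixed point (2, 3).
from pulp import LpProblem, LpVariable, LpMinimize, value

prob = LpProblem("manhattan", LpMinimize)

x1 = LpVariable("x1", lowBound=0, upBound=10)
y1 = LpVariable("y1", lowBound=0, upBound=10)
x2, y2 = 2, 3                      # fixed target point for the example

dx = LpVariable("dx", lowBound=0)  # will equal |x1 - x2| at the optimum
dy = LpVariable("dy", lowBound=0)  # will equal |y1 - y2| at the optimum

prob += dx >= x1 - x2
prob += dx >= -(x1 - x2)
prob += dy >= y1 - y2
prob += dy >= -(y1 - y2)

prob += dx + dy                    # objective: the Manhattan distance
prob += x1 + y1 >= 8               # made-up constraint keeping the optimum away from (2, 3)
prob.solve()
print(value(x1), value(y1), value(dx) + value(dy))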
Edit
What I meant is that there is no general solution to this kind of problem: you cannot in general represent an absolute value in a linear problem, although in practical cases it is often possible.
For example, consider the problem:
max y
y <= | x |
-1 <= x <= 2
0 <= y
it is bounded and has an obvious solution (2, 2), but it cannot be modelled as an LP because the domain is not convex (it looks like the region under an 'M' if you draw it).
Without your model, it is not possible to answer the question I'm afraid.
Edit 2
I am sorry, I did not read the question correctly. If you can use binary variables and if all your distances are bounded by some constant M, you can do something.
We use:
a continuous variable ax to represent the absolute value of the quantity x
a binary variable sx to represent the sign of x (sx = 1 if x >= 0)
These constraints are non-binding when sx = 0 (the x < 0 case), and enforce ax = x when sx = 1:
ax <= x + M * (1 - sx)
ax >= x - M * (1 - sx)
These constraints are non-binding when sx = 1 (the x >= 0 case), and enforce ax = -x when sx = 0:
ax <= -x + M * sx
ax >= -x - M * sx
This is a variation of the "big M" method that is often used to linearize products involving binary variables. Of course you need an upper bound on all possible values of x (which, in your case, is a bound on your distance; this will typically be available if your points lie in some bounded area).
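Put together for a single coordinate, this might look as follows (a rough PuLP sketch reusing the small max-|x| example from the first edit; M = 100 is a made-up bound):

# Rough PuLP sketch of the big-M sign trick: ax is forced to equal |x| even
# though we are maximizing it. M just needs to bound |x|.
from pulp import LpProblem, LpVariable, LpMaximize, value

M = 100
prob = LpProblem("abs_big_m", LpMaximize)

x = LpVariable("x", lowBound=-1, upBound=2)
ax = LpVariable("ax", lowBound=0)        # absolute value of x
sx = LpVariable("sx", cat="Binary")      # sx = 1 if x >= 0

# non-binding when sx = 0, force ax = x when sx = 1
prob += ax <= x + M * (1 - sx)
prob += ax >= x - M * (1 - sx)
# non-binding when sx = 1, force ax = -x when sx = 0
prob += ax <= -x + M * sx
prob += ax >= -x - M * sx
# Because ax >= 0, a choice of sx inconsistent with the sign of x cannot be
# feasible, so sx automatically agrees with the sign of x.

prob += ax                               # maximize |x|: the non-convex example above
prob.solve()
print(value(x), value(ax))               # expect x = 2, ax = 2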

Polynomial fitting using L1-norm

I have n points (x0,y0),(x1,y1)...(xn,yn). n is small (10-20). I want to fit these points with a low order (3-4) polynomial: P(x)=a0+a1*x+a2*x^2+a3*x^3.
I have accomplished this using least squares as the error metric, i.e. minimize f = (p0-y0)^2 + (p1-y1)^2 + ... + (pn-yn)^2, where pi = P(xi). My solution uses singular value decomposition (SVD).
Now I want to use the L1 norm (absolute-value distance) as the error metric, i.e. minimize f = |p0-y0| + |p1-y1| + ... + |pn-yn|.
Are there any libraries (preferably open source) which can do this, and that can be called from C++? Is there any source code available which can be quickly modified to suit my needs?
L_1 regression is actually quite simply formulated as a linear program. You want to
minimize error
subject to x_1^4 * a_4 + x_1^3 * a_3 + x_1^2 * a_2 + x_1 * a_1 + a_0 + e_1 >= y_1
x_1^4 * a_4 + x_1^3 * a_3 + x_1^2 * a_2 + x_1 * a_1 + a_0 - e_1 <= y_1
.
.
.
x_n^4 * a_4 + x_n^3 * a_3 + x_n^2 * a_2 + x_n * a_1 + a_0 + e_n >= y_n
x_n^4 * a_4 + x_n^3 * a_3 + x_n^2 * a_2 + x_n * a_1 + a_0 - e_n <= y_n
error - e_1 - e_2 - ... - e_n = 0.
Your variables are a_0, a_1, a_2, a_3, a_4, error, and all of the e variables. x and y are the data of your problem, so it's no problem that x appears raised to the second, third, and fourth powers.
You can solve linear programming problems with GLPK (GPL) or lp_solve (LGPL) or any number of commercial packages. I like GLPK and I recommend using it if its licence is not a problem.
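For illustration, here is a rough sketch of the same LP in Python with PuLP (made-up data, a cubic as in the question, and one e_i per point instead of the aggregate error variable); building it through GLPK's or lp_solve's C APIs follows the same structure:

# Rough PuLP sketch of the L1 polynomial-fitting LP; the data points are made up.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [1.0, 1.4, 2.1, 2.4, 3.2, 3.9, 5.1]
degree = 3

prob = LpProblem("l1_poly_fit", LpMinimize)
a = [LpVariable(f"a{k}") for k in range(degree + 1)]            # free coefficients
e = [LpVariable(f"e{i}", lowBound=0) for i in range(len(xs))]   # per-point errors

prob += lpSum(e)                                 # minimize the total absolute error
for i, (xi, yi) in enumerate(zip(xs, ys)):
    p_xi = lpSum(a[k] * xi**k for k in range(degree + 1))       # P(x_i)
    prob += p_xi + e[i] >= yi
    prob += p_xi - e[i] <= yi

prob.solve()
print([value(ak) for ak in a])                   # fitted coefficients a_0 .. a_3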
Yes, it should be doable. A standard way of formulating polynomial fitting problems as a multiple linear regression is to define variables x1, x2, etc., where xn is defined as x.^n (element-wise exponentiation in Matlab notation). Then you can concatenate all these vectors, including an intercept, into a design matrix X:
X = [ 1 x1 x2 x3 ]
Then your polynomial fitting problem is a regression problem:
argmin_a ( | y - X * a| )
where the | | notation is your desired cost function (in your case, the L1 norm) and a is a vector of weights (sorry, SO doesn't have good math markup as far as I can tell). Regressions of this sort are known as "robust regressions," and Numerical Recipes has a routine to compute them: http://www.aip.de/groups/soe/local/numres/bookfpdf/f15-7.pdf
Hope this helps!
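For instance, with NumPy the design matrix for a cubic fit is just (made-up data):

# Building the design matrix X = [1, x, x^2, x^3] with NumPy; data is made up.
import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
# The L1 fit is then argmin_a of the L1 norm of (y - X @ a), which can be solved
# with a robust-regression routine or with the LP formulation shown above.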
The problem with the L1 norm is that it's not differentiable, so any minimisers which rely on derivatives may fail. When I've tried to minimise those kinds of functions using e.g. conjugate gradient minimisation, I find that the answer gets stuck at the kink, i.e. x=0 in the function y=|x|.
I often solve these mathematical computing problems from first principles. One idea that might work here is that the target function is piecewise linear in the coefficients of your low-order polynomial. So it might be possible to start from the polynomial that comes out of least squares, and then improve the solution by solving a series of linear problems, each time stepping from your current best solution only to the nearest kink.