I have the following constraints.
I tried to model them in AMPL using the following code:
var y {1..njobs} binary;
subject to overlap
{i in 1..njobs, j in i+1..njobs: i<>j}:
xi[i] + si[i] <= xi[j]+m*y[i];
subject to order
{i in 1..njobs, j in i+1..njobs: i<j}:
y[i] + y[j] = 1;
I'm new to this topic and seem to be missing something in the code above. Any suggestions?
According to the constraints, y has two indices, i and j, but your code only gives it a single index.
Should be something like:
var y {1..njobs,1..njobs} binary;
subject to overlap
{i in 1..njobs, j in i+1..njobs: i<>j}:
xi[i] + si[i] <= xi[j]+m*y[i,j];
subject to order
{i in 1..njobs, j in i+1..njobs: i<j}:
y[i,j] + y[j,i] = 1;
Currently the behaviour for when i = j is undefined. You may want to either add a constraint that defines the behaviour in that case, or exclude it from the index space when you declare y, e.g.:
var y {i in 1..njobs,j in 1..njobs: i <> j} binary;
I am very new to Gurobi. I am trying to solve the following ILP
minimize \sum_i c_i y_i + \sum_i \sum_j D_{ij} x_{ij}
Here D is stored as a 2D numpy array.
My constraints are as follows
x_{ij} <= y_i
y_i + \sum_j x_{ij} = 1
My code so far is as follows,
from gurobipy import *
def gurobi(D, c):
    n = D.shape[0]
    m = Model()
    X = m.addVars(n, n, vtype=GRB.BINARY)
    y = m.addVars(n, vtype=GRB.BINARY)
    m.update()
    for j in range(D.shape[0]):
        for i in range(D.shape[0]):
            m.addConstr(X[i, j] <= y[i])
I am not sure how to implement the second constraint or how to specify the objective function, since the objective terms involve a numpy array. Any help?
Unfortunately I don't have Gurobi because it's really expensive, but according to this tutorial the second constraint should be implemented like this:
for i in range(n):
    m.addConstr(y[i] + quicksum(X[i, j] for j in range(n)) == 1)
while the objective function can be defined as:
m.setObjective(quicksum(c[i]*y[i] for i in range(n)) + quicksum(D[i, j]*X[i, j] for i in range(n) for j in range(n)), GRB.MINIMIZE)
N.B.: I'm assuming D is an n x n matrix.
This is a very simple case. You can write the first constraint this way. It is a good habit to name your constraints.
m.addConstrs((X[i, j] <= y[i] for i in range(D.shape[0]) for j in range(D.shape[0])), name='something')
If you want to add the second constraint, you can write it like this
m.addConstrs((y[i] + X.sum(i, '*') == 1 for i in range(n)), name='something')
You could write the second constraint using quicksum as well, as suggested by digEmAll.
The advantage of using quicksum is that you can add an if condition so that you don't sum over all values of j. Here is how you could do it:
m.addConstrs((y[i] + quicksum(X[i, j] for j in range(n)) == 1 for i in range(n)), name='something')
If you only need to sum over some values of j, you could write:
m.addConstrs((y[i] + quicksum(X[i, j] for j in range(n) if <condition on j>) == 1 for i in range(n)), name='something')
I hope this helps
There is a function in the dtw package:
dtw(x, y=NULL, dist.method="Euclidean", step.pattern=symmetric2, window.type="none", keep.internals=FALSE, distance.only=FALSE, open.end=FALSE, open.begin=FALSE, ... )
In the function, there are three step patterns for calculating the distance:
symmetric1, symmetric2, asymmetric
I am interested in step.pattern = symmetric2.
I have a C++ function that works exactly like symmetric1:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double dtw_rcpp(const NumericVector& x, const NumericVector& y) {
    size_t n = x.size(), m = y.size();
    NumericMatrix res = no_init(n + 1, m + 1);
    std::fill(res.begin(), res.end(), R_PosInf);
    res(0, 0) = 0;
    double cost = 0;
    size_t w = std::abs(static_cast<int>(n - m));
    for (size_t i = 1; i <= n; ++i) {
        for (size_t j = std::max(1, static_cast<int>(i - w)); j <= std::min(m, i + w); ++j) {
            cost = std::abs(x[i - 1] - y[j - 1]);
            res(i, j) = cost + std::min(std::min(res(i - 1, j), res(i, j - 1)), res(i - 1, j - 1));
        }
    }
    return res(n, m);
}
What do I need to change in this C++ function so that it computes the distance with the symmetric2 step pattern?
I do not understand how symmetric2 works.
Here, very little is said about it:
1. Well-known step patterns
These common transition types are used in quite a lot of implementations.
symmetric1 (or White-Neely) is the commonly used quasi-symmetric, no local constraint, non-normalizable. It is biased in favor of oblique steps. symmetric2 is normalizable, symmetric, with no local slope constraints. Since one diagonal step costs as much as the two equivalent steps along the sides, it can be normalized dividing by N+M (query+reference lengths).
I could not understand it from the source code because I am a beginner programmer.
I do not speak English well, so forgive my mistakes.
Thank you.
OP is asking about dynamic time warping alignments in R. Printing the symmetric2 object should clarify the recursion rule:
g[i,j] = min(
g[i-1,j-1] + 2 * d[i ,j ] ,
g[i ,j-1] + d[i ,j ] ,
g[i-1,j ] + d[i ,j ] ,
)
g is the global cost matrix, d the local distance. I can't comment on the rest of your code.
If you only need the distance value under this specific step pattern, and no other features, the code may be much simplified (see e.g. the pseudocode on Wikipedia).
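If it helps, here is a minimal sketch of that recursion in plain C++ (my own illustration, not the dtw package's code), ignoring the |n - m| window used in your Rcpp version; the only real difference from symmetric1 is that the diagonal step pays the local cost twice:
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Sketch: DTW distance under the symmetric2 step pattern.
// g is the global cost matrix, d the local (absolute) distance.
double dtw_symmetric2(const std::vector<double>& x, const std::vector<double>& y) {
    const size_t n = x.size(), m = y.size();
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<std::vector<double>> g(n + 1, std::vector<double>(m + 1, inf));
    g[0][0] = 0.0;
    for (size_t i = 1; i <= n; ++i) {
        for (size_t j = 1; j <= m; ++j) {
            const double d = std::abs(x[i - 1] - y[j - 1]);
            g[i][j] = std::min({ g[i - 1][j] + d,              // vertical step
                                 g[i][j - 1] + d,              // horizontal step
                                 g[i - 1][j - 1] + 2.0 * d }); // diagonal step, weight 2
        }
    }
    return g[n][m];   // divide by (n + m) if you want the normalized distance
}
In your dtw_rcpp function the res(i, j) = ... update should be the only line that has to change: add cost to the vertical and horizontal terms and 2 * cost to the diagonal term, instead of adding a single cost outside the min.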
I have a problem that could be boiled down to finding a way of mapping a triangular matrix to a vector skipping the diagonal.
Basically I need to translate this C++ code using the Gecode libraries
// implied constraints
for (int k=0, i=0; i<n-1; i++)
    for (int j=i+1; j<n; j++, k++)
        rel(*this, d[k], IRT_GQ, (j-i)*(j-i+1)/2);
Into this MiniZinc (functional) code
constraint
    forall ( i in 1..m-1 , j in i+1..m )
        ( differences[?] >= floor(int2float((j-i)*(j-i+1)) / int2float(2)) );
And I need to figure out the index in differences[?].
MiniZinc is a functional/mathematical language with no proper for loops.
So I have to map those indices i and j, which touch all and only the cells of an upper triangular matrix while skipping its diagonal, to a k that numbers those cells from 0 upwards.
If this were a regular triangular matrix (it's not), a solution like this would do:
index = x + (y+1)*y/2
The matrix I'm handling is a square n*n matrix with indexes going from 0 to n-1, but it would be nice to provide a more general solution for an n*m matrix.
Here's the full MiniZinc code:
% modified version of the file found at https://github.com/MiniZinc/minizinc-benchmarks/blob/master/golomb/golomb.mzn
include "alldifferent.mzn";
int: m;
int: n = m*m;
array[1..m] of var 0..n: mark;
array[int] of var 0..n: differences = [mark[j] - mark[i] | i in 1..m, j in i+1..m];
constraint mark[1] = 0;
constraint forall ( i in 1..m-1 ) ( mark[i] < mark[i+1] );
% this version of the constraint works
constraint forall ( i in 1..m-1 , j in i+1..m )
( (mark[j] - mark[i]) >= (floor(int2float(( j-i )*( j-i+1 )) / int2float(2))) );
%this version does not
%constraint forall ( i in 1..m-1, j in i+1..m )
% ( (differences[(i-1) + ((j-2)*(j-1)) div 2]) >= (floor(int2float(( j-i )*( j-i+1 )) / int2float(2))) );
constraint alldifferent(differences);
constraint differences[1] < differences[(m*(m-1)) div 2];
solve :: int_search(mark, input_order, indomain, complete) minimize mark[m];
output ["golomb ", show(mark), "\n"];
Thanks.
Be careful. The formula you found at that link, index = x + (y+1)*y/2, includes the diagonal entries and is for a lower triangular matrix, which I gather is not what you want. The exact formula you are looking for is actually index = x + ((y-1)*y)/2
(see: https://math.stackexchange.com/questions/646117/how-to-find-a-function-mapping-matrix-indices).
Again, watch out: this formula assumes your indices x, y are zero-based. Your MiniZinc code uses indices i, j that start from 1 (1 <= i <= m, 1 <= j <= m). For indices that start from 1, the formula is T(i,j) = i + ((j-2)*(j-1))/2. So your code should look like:
constraint
    forall ( i in 1..m-1 , j in i+1..m )
        ( differences[i + ((j-2)*(j-1)) div 2] >= ...
Note that (j-2)*(j-1) will always be a multiple of 2, so we can just use integer division by 2 (no need to worry about converting to/from floats).
The above assumes you are using a square m*m matrix.
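As a quick sanity check (a throwaway C++ snippet, not something that belongs in the MiniZinc model), walking the above-diagonal cells column by column should make the 1-based formula count 1, 2, ..., m*(m-1)/2 with no gaps:
#include <cstdio>

// Throwaway check of k = i + ((j-2)*(j-1))/2 for a square m*m matrix,
// 1-based indices, cells strictly above the diagonal (i < j).
int main() {
    const int m = 6;   // any small size will do
    int expected = 1;
    for (int j = 2; j <= m; ++j)
        for (int i = 1; i < j; ++i) {
            int k = i + ((j - 2) * (j - 1)) / 2;
            std::printf("i=%d j=%d -> k=%d %s\n", i, j, k,
                        k == expected++ ? "ok" : "MISMATCH");
        }
    return 0;
}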
To generalise to an M*N rectangular matrix, one formula could be:
where 0 <= i < M, 0 <= j < N. [If you again need your indices to start from 1, replace i with i-1 and j with j-1 in the above formula.] This touches all the cells of an upper triangular matrix, as well as the 'extra block on the side' of the square that occurs when N > M. That is, it touches all cells (i,j) such that i < j, for 0 <= i < M, 0 <= j < N.
Full code:
% original: https://github.com/MiniZinc/minizinc-benchmarks/blob/master/golomb/golomb.mzn
include "alldifferent.mzn";
int: m;
int: n = m*m;
array[1..m] of var 0..n: mark;
array[1..(m*(m-1)) div 2] of var 0..n: differences;
constraint mark[1] = 0;
constraint forall ( i in 1..m-1 ) ( mark[i] < mark[i+1] );
constraint alldifferent(differences);
constraint forall (i,j in 1..m where j > i)
(differences[i + ((j-1)*(j-2)) div 2] = mark[j] - mark[i]);
constraint forall (i,j in 1..m where j > i)
(differences[i + ((j-1)*(j-2)) div 2] >= (floor(int2float(( j-i )*( j-i+1 )) / int2float(2))));
constraint differences[1] < differences[(m*(m-1)) div 2];
solve :: int_search(mark, input_order, indomain, complete)
minimize mark[m];
output ["golomb ", show(mark), "\n"];
Lower triangular version (take previous code and swap i and j where necessary):
% original: https://github.com/MiniZinc/minizinc-benchmarks/blob/master/golomb/golomb.mzn
include "alldifferent.mzn";
int: m;
int: n = m*m;
array[1..m] of var 0..n: mark;
array[1..(m*(m-1)) div 2] of var 0..n: differences;
constraint mark[1] = 0;
constraint forall ( i in 1..m-1 ) ( mark[i] < mark[i+1] );
constraint alldifferent(differences);
constraint forall (i,j in 1..m where i > j)
(differences[j + ((i-1)*(i-2)) div 2] = mark[i] - mark[j]);
constraint forall (i,j in 1..m where i > j)
(differences[j + ((i-1)*(i-2)) div 2] >= (floor(int2float(( i-j )*( i-j+1 )) / int2float(2))));
constraint differences[1] < differences[(m*(m-1)) div 2];
solve :: int_search(mark, input_order, indomain, complete)
minimize mark[m];
output ["golomb ", show(mark), "\n"];
I'm reading the following code (taken from here)
void linear_interpolation_CPU(float2* result, float2* data,
                              float* x_out, int M, int N) {
    float a;
    for (int j = 0; j < N; j++) {
        int k = floorf(x_out[j]);
        a = x_out[j] - floorf(x_out[j]);
        result[j].x = a*data[k+1].x + (-data[k].x*a + data[k].x);
        result[j].y = a*data[k+1].y + (-data[k].y*a + data[k].y);
    }
}
but I don't get it.
Why isn't result[j] calculated using the usual linear interpolation formula, y = y0 + (y1 - y0)*(x - x0)/(x1 - x0)?
It is calculated that way.
Look at the first two lines:
int k = floorf(x_out[j]);
a = x_out[j] - floorf(x_out[j]);
The first line defines x0 using the floor function. This is because the article assumes a lattice spacing of one for the sample points, as per the line:
the samples are obtained on the 0,1,...,M lattice
Now we could rewrite the second line for clarity as:
a = x_out[j] - k;
The second line is therefore x-x0.
Now, let us examine the equation:
result[j].y = a*data[k+1].y + (-data[k].y*a + data[k].y);
Rewriting this in terms of y, x, and x0 gives:
y = (x-x0)*data[k+1].y + (-data[k].y*(x-x0) + data[k].y);
Let's rename data[k+1].y as y1 and data[k].y as y0:
y = (x-x0)*y1 + (-y0*(x-x0) + y0);
Let's rearrange this by pulling out x-x0:
y = (x-x0)*(y1-y0) + y0;
And rearrange again:
y = y0 + (y1-y0)*(x-x0);
Again, the lattice spacing is important:
the samples are obtained on the 0,1,...,M lattice
Thus, x1-x0 is always 1. If we put it back in, we get
y = y0 + (y1-y0)*(x-x0)/(x1-x0);
Which is just the equation you were looking for.
Granted, it's ridiculous that the code is not written so as to make that apparent.
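For what it's worth, here is a sketch of the same loop rewritten so the standard form is visible (same behaviour under the unit-lattice assumption; the float2 definition below just stands in for the CUDA vector type used by the original):
#include <cmath>

// Stand-in for the CUDA float2 type, so the sketch is self-contained.
struct float2 { float x, y; };

// Same computation as linear_interpolation_CPU above, written as
// y = y0 + (y1 - y0) * (x - x0); x1 - x0 == 1 on the 0,1,...,M lattice,
// so no division is needed.
void linear_interpolation_CPU_clear(float2* result, const float2* data,
                                    const float* x_out, int M, int N) {
    (void)M;  // unused here, kept to match the original signature
    for (int j = 0; j < N; j++) {
        int k = static_cast<int>(std::floor(x_out[j]));  // x0
        float a = x_out[j] - static_cast<float>(k);      // x - x0
        result[j].x = data[k].x + a * (data[k + 1].x - data[k].x);
        result[j].y = data[k].y + a * (data[k + 1].y - data[k].y);
    }
}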
This is what I have so far, but I do not think it is right.
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        matrix[i][j] += matrix[i][j] * matrix[i][j];
    }
}
Suggestion: if it's not homework, don't write your own linear algebra routines; use any of the many peer-reviewed libraries that are out there.
Now, about your code: if you want a term-by-term product, then you're doing it wrong, because what you're doing is assigning to each value its square plus the original value (n*n + n, or (1+n)*n, whichever you like best).
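If the element-wise square is actually what you want, the fix is just to assign instead of accumulate; a minimal sketch keeping your 5x5 loop:
// Element-wise square, safe to do in place because each cell depends only on itself.
for (int i = 0; i < 5; i++)
    for (int j = 0; j < 5; j++)
        matrix[i][j] = matrix[i][j] * matrix[i][j];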
But if you want an authentic matrix multiplication in the algebraic sense, remember that you have to take the scalar product of the first matrix's rows with the second matrix's columns... something like:
for i in rows:
    for j in cols:
        result(i, j) = m(i, :) · m(:, j)
where the scalar product "·" is
v·w = sum(v(i)*w(i)) for all i in the range of the indices.
Of course, with this method you cannot do the product in place, because you'll need the values that you're overwriting in the next steps.
Also, to expand a little on Tyler McHenry's comment: as a consequence of having to multiply rows by columns, the "inner dimensions" (I'm not sure if that's the correct terminology) of the matrices must match (if A is m x n and B is n x o, then A*B is m x o), so in your case a matrix can be squared only if it is square (he he he).
And if you just want to play a little bit with matrices, then you can try Octave, for example; squaring a matrix is as easy as M*M or M**2.
I don't think you can multiply a matrix by itself in-place.
for (i = 0; i < 5; i++) {
    for (j = 0; j < 5; j++) {
        product[i][j] = 0;
        for (k = 0; k < 5; k++) {
            product[i][j] += matrix[i][k] * matrix[k][j];
        }
    }
}
Even if you use a less naïve matrix multiplication (i.e. something other than this O(n^3) algorithm), you still need extra storage.
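If the goal is to end up with the square back in matrix, one common pattern is to compute into a temporary and copy back; a sketch with the sizes hard-coded to 5 and an int element type assumed (adjust to match your declaration):
// Sketch: "square in place" by computing into a temporary, because the
// triple loop must not overwrite entries of matrix while it is still reading them.
void square_5x5(int matrix[5][5]) {
    int product[5][5] = {};   // zero-initialized temporary for the result
    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 5; j++)
            for (int k = 0; k < 5; k++)
                product[i][j] += matrix[i][k] * matrix[k][j];
    // copy back so the caller ends up with matrix = matrix * matrix
    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 5; j++)
            matrix[i][j] = product[i][j];
}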
That's not any matrix multiplication definition I've ever seen. The standard definition is
for (i = 1 to m)
    for (j = 1 to n)
        result(i, j) = 0
        for (k = 1 to s)
            result(i, j) += a(i, k) * b(k, j)
to give the algorithm in a sort of pseudocode. In this case, a is an m x s matrix, b is s x n, the result is m x n, and subscripts begin at 1.
Note that multiplying a matrix in place is going to get the wrong answer, since you're going to be overwriting values before using them.
It's been too long since I've done matrix math (and I only ever did a little of it), but the += operator takes the value of matrix[i][j] and adds to it the value of matrix[i][j] * matrix[i][j], which I don't think is what you want to do.
Well, it looks like what it's doing is squaring each entry, then adding that square back to the entry. Is that what you want it to do? If not, then change it.