Gurobi constraints and objective function - linear-programming

I am very new to Gurobi. I am trying to solve the following ILP:
minimize \sum_i c_i y_i + \sum_i \sum_j D_{ij} x_{ij}
Here D is stored as a 2D numpy array.
My constraints are as follows:
x_{ij} <= y_i   for all i, j
y_i + \sum_j x_{ij} = 1   for all i
My code so far is as follows:
from gurobipy import *

def gurobi(D, c):
    n = D.shape[0]
    m = Model()
    X = m.addVars(n, n, vtype=GRB.BINARY)
    y = m.addVars(n, vtype=GRB.BINARY)
    m.update()
    for j in range(n):
        for i in range(n):
            m.addConstr(X[i, j] <= y[i])
I am not sure how to implement the second constraint or how to specify the objective function, since the objective includes terms from a numpy array. Any help?

Unfortunately I don't have Gurobi because it's really expensive...
but, according to this tutorial, the second constraint should be implemented like this:
for i in range(n):
    m.addConstr(y[i] + quicksum(X[i, j] for j in range(n)) == 1)
while the objective function can be defined as:
m.setObjective(quicksum(c[i]*y[i] for i in range(n)) + quicksum(D[i, j]*X[i, j] for i in range(n) for j in range(n)), GRB.MINIMIZE)
N.B.: I'm assuming D is an n x n matrix.

This is a very simple case. You can write the first constraint this way. It is a good habit to name your constraints.
m.addConstrs((X[i, j] <= y[i] for i in range(D.shape[0]) for j in range(D.shape[0])), name='something')
If you want to add the second constraint, you can write it like this:
m.addConstrs((y[i] + X.sum(i, '*') == 1 for i in range(n)), name='something')
You could write the second equation as well using quicksum, as suggested by digEmAll.
The advantage of using quicksum is that you can add an if condition so that you don't sum over all values of j. Here is how you could do it:
m.addConstrs((y[i] + quicksum(X[i, j] for j in range(n)) == 1 for i in range(n)), name='something')
If you only need to sum over some values of j, you could write:
m.addConstrs((y[i] + quicksum(X[i, j] for j in range(n) if <condition on j>) == 1 for i in range(n)), name='something')
I hope this helps
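Putting the pieces together, a complete version of the gurobi function might look like this (a minimal sketch following the question's variable names; I'm assuming D is an n x n numpy array, c has length n, and the constraint names are just placeholders):
from gurobipy import Model, GRB, quicksum

def gurobi(D, c):
    n = D.shape[0]
    m = Model()
    X = m.addVars(n, n, vtype=GRB.BINARY, name='x')
    y = m.addVars(n, vtype=GRB.BINARY, name='y')
    # x_{ij} <= y_i for all i, j
    m.addConstrs((X[i, j] <= y[i] for i in range(n) for j in range(n)), name='c1')
    # y_i + sum_j x_{ij} = 1 for all i
    m.addConstrs((y[i] + quicksum(X[i, j] for j in range(n)) == 1 for i in range(n)), name='c2')
    # minimize sum_i c_i y_i + sum_i sum_j D_{ij} x_{ij}
    m.setObjective(quicksum(c[i] * y[i] for i in range(n))
                   + quicksum(D[i, j] * X[i, j] for i in range(n) for j in range(n)),
                   GRB.MINIMIZE)
    m.optimize()
    return m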

Related

Modeling a binary constraint in AMPL - CPLEX

I have the following constraints, which I tried to model in AMPL using the following code:
var y {1..njobs} binary;
subject to overlap {i in 1..njobs, j in i+1..njobs: i <> j}:
    xi[i] + si[i] <= xi[j] + m*y[i];
subject to order {i in 1..njobs, j in i+1..njobs: i < j}:
    y[i] + y[j] = 1;
I'm new to this topic and seem to be missing something in the code above. Any suggestions?
According to the constraints, y has two indices, i and j, but your code only gives it a single index.
Should be something like:
var y {1..njobs, 1..njobs} binary;
subject to overlap {i in 1..njobs, j in i+1..njobs: i <> j}:
    xi[i] + si[i] <= xi[j] + m*y[i,j];
subject to order {i in 1..njobs, j in i+1..njobs: i < j}:
    y[i,j] + y[j,i] = 1;
Currently the behaviour for when i = j is undefined. You may want to either add a constraint that defines the behaviour in that case, or exclude it from the index space when you declare y, e.g.:
var y {i in 1..njobs,j in 1..njobs: i <> j} binary;
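As an aside, since the main question above uses gurobipy: the same pair of constraints can be sketched in Python as well (purely illustrative, with made-up data sizes and variable names of my own; not part of the original AMPL answer):
from gurobipy import Model, GRB

njobs = 4
big_m = 1000                      # big-M constant, stands in for m above
s = [3, 2, 5, 1]                  # made-up processing times si
mod = Model()
x = mod.addVars(njobs, name='start')                       # start times xi
y = mod.addVars(njobs, njobs, vtype=GRB.BINARY, name='y')  # ordering variables
# job i finishes before job j starts unless y[i,j] switches the constraint off
mod.addConstrs((x[i] + s[i] <= x[j] + big_m * y[i, j]
                for i in range(njobs) for j in range(njobs) if i != j), name='overlap')
# exactly one ordering per pair of jobs
mod.addConstrs((y[i, j] + y[j, i] == 1
                for i in range(njobs) for j in range(njobs) if i < j), name='order')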

Matrix multiplication with Python

I have a numerical analysis assignment and I need to find some coefficients by multiplying matrices. We were given an example in Mathcad, but now we have to do it in another programming language, so I chose Python.
The problem is that I get different results when multiplying matrices in the two environments. Here's the function in Python:
from numpy import *

def matrica(C, n):
    N = len(C) - 1
    m = N - n
    A = [[0] * (N + 1) for i in range(N + 1)]
    A[0][0] = 1
    for i in range(0, n + 1):
        A[i][i] = 1
    for j in range(1, m + 1):
        for i in range(0, N + 1):
            if i + j <= N:
                A[i+j][n+j] = A[i+j][n+j] - C[i]/2
            A[int(abs(i - j))][n+j] = A[int(abs(i - j))][n+j] - C[i]/2
    M = matrix(A)
    x = matrix([[x] for x in C])
    return [float(y) for y in M.I * x]
As you can see, I am using the numpy library. This function is consistent with its Mathcad analog up until the return statement, i.e. the part where the matrices are multiplied. One more observation: this function returns the correct matrix when N = 1.
I'm not sure I understand exactly what your code does. Could you explain a little more, e.g. what math you're actually computing? But if you want a plain regular product, and you're using a numpy.matrix, why don't you use the already-written matrix product?
a = numpy.matrix(...)
b = numpy.matrix(...)
p = a * b #matrix product
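For example (a tiny illustration with made-up 2x2 data; with plain ndarrays the @ operator or numpy.dot gives the same result):
import numpy as np

a = np.matrix([[1.0, 2.0], [3.0, 4.0]])
b = np.matrix([[5.0], [6.0]])

p = a * b                        # matrix product, a 2x1 result
p2 = np.array(a) @ np.array(b)   # same product with plain arrays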

Time Complexity on triple Nested For loops where indexes are dependent on each other

I have this C++-like pseudocode here:
for (i = 1; i ≤ (n - 2); i++)
    for (j = i + 1; j ≤ (n - 1); j++)
        for (k = j + 1; k ≤ n; k++)
            Print "Hello World";
I am fairly certain the time complexity of this particular block of code is O(n^3), because it is a triple nested for loop whose bounds go up to n-2, n-1 and n, so I generalized it as (n-2) * (n-1) * n.
But I have been trying to work out the actual time complexity function. This is as far as I got, and I could not proceed any further:
\sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1} \sum_{k=j+1}^{n} 1
I understand that the innermost loop performs n - (j+1) steps, the middle loop performs (n-1) - (i+1) steps, and the outer loop performs (n-2) - i steps. I just need some pointers on how to simplify the summations to arrive at a time complexity function.
Thank you!
If interested, the loops iterate through every combination of n things taken 3 at a time, starting with (1,2,3), (1,2,4), ..., and ending with (n-2, n-1, n), which is n! / (3! (n-3)!) = n(n-1)(n-2)/6 = (n^3 - 3n^2 + 2n)/6, which leads to O(n^3).
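To spell the simplification out, the triple sum can be collapsed step by step (a standard calculation, added here for completeness):
\sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1} \sum_{k=j+1}^{n} 1
  = \sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1} (n - j)
  = \sum_{i=1}^{n-2} \binom{n-i}{2}
  = \binom{n}{3}
  = \frac{n(n-1)(n-2)}{6}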
Don't run a loop from 1 up to (and including) a value. Your code does the same number of iterations as:
for (i = 0; i < (n - 2); i++)
    for (j = i; j < (n - 1); j++)
        for (k = j; k < n; k++)
            Print "Hello World";
So the inner loop runs n - j times, the middle loop repeats that n - 1 - i times, and the outer loop repeats all of it n - 2 times, which gives roughly (n - j) * (n - 1 - i) * (n - 2) iterations. Because i runs from 0 to n - 1, summing over it contributes a factor of O(n) (since sum(0..n) = 0 + 1 + ... + n = 0.5 * n^2 = O(n^2), i.e. O(n) per iteration on average), and the same holds for j. So you get O(n) * O(n) * O(n) = O(n^3).
For details on why you can replace i with O(n), see "Nested loops" here.
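If you want to double-check the closed form, a tiny brute-force count (illustrative only) agrees with n(n-1)(n-2)/6:
def count_iterations(n):
    # literal translation of the three nested loops
    count = 0
    for i in range(1, n - 1):              # i = 1 .. n-2
        for j in range(i + 1, n):          # j = i+1 .. n-1
            for k in range(j + 1, n + 1):  # k = j+1 .. n
                count += 1
    return count

for n in range(3, 10):
    assert count_iterations(n) == n * (n - 1) * (n - 2) // 6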

The formula of computing the Mel-filterbank coefficient

I am working with MFCCs in a project about speech recognition. According to the document on this website http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/, the formula for computing the Mel filterbank is as follows:
H(k, m) = 0                                 if k < f[m-1]
        = (k - f[m-1]) / (f[m] - f[m-1])    if f[m-1] <= k <= f[m]
        = (f[m+1] - k) / (f[m+1] - f[m])    if f[m] <= k <= f[m+1]
        = 0                                 if k > f[m+1]
I think something is wrong here. What is "k"? This website isn't the only one; I have searched many documents and the same formula keeps appearing. Besides, if m == 1, f[0] isn't computed, so the condition (k < f[m-1]) is wrong, isn't it? Can anybody help me?
You're defining a function H which takes formal arguments k and m. That's how k is defined. f[0] is perfectly well defined.
Basically, the formula describes this form ___/\___ with the peak at k=f[m].
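For what it's worth, the piecewise definition translates directly into code. Here is a minimal sketch (the function and argument names are mine, not from the linked article), where k is the FFT bin index and f holds the filterbank bin positions f[0] .. f[M+1]:
def mel_filter(k, m, f):
    # triangular filter H(k, m): ramps up from f[m-1] to f[m], down from f[m] to f[m+1]
    if k < f[m - 1] or k > f[m + 1]:
        return 0.0
    if k <= f[m]:
        return (k - f[m - 1]) / (f[m] - f[m - 1])
    return (f[m + 1] - k) / (f[m + 1] - f[m])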

How do you multiply a matrix by itself?

This is what I have so far, but I do not think it is right.
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        matrix[i][j] += matrix[i][j] * matrix[i][j];
    }
}
Suggestion: if it's not homework, don't write your own linear algebra routines; use any of the many peer-reviewed libraries that are out there.
Now, about your code: if you want to do a term-by-term product, then you're doing it wrong. What you're doing is assigning to each value its square plus the original value (n*n + n, or (1+n)*n, whichever you like best).
But if you want to do an authentic matrix multiplication in the algebraic sense, remember that you have to take the scalar product of the first matrix's rows with the second matrix's columns (or the other way around, I'm not quite sure now)... something like:
for i in rows:
    for j in cols:
        result(i, j) = m(i, :) · m(:, j)
where the scalar product "·" is
v · w = sum(v(i) * w(i)) for all i in the range of the indices.
Of course, with this method you cannot do the product in place, because you'll need the values that you're overwriting in the next steps.
Also, to explain Tyler McHenry's comment a little further: as a consequence of having to multiply rows by columns, the "inner dimensions" (I'm not sure if that's the correct terminology) of the matrices must match (if A is m x n and B is n x o, then A*B is m x o), so in your case a matrix can be squared only if it's square (he he he).
And if you just want to play a little bit with matrices, then you can try Octave, for example; squaring a matrix is as easy as M*M or M**2.
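In the same spirit, if Python/numpy is an option, the difference between the two products is one line each (a small illustration with made-up data, not part of the original answer):
import numpy as np

M = np.arange(25, dtype=float).reshape(5, 5)

elementwise = M * M     # squares every entry individually
square = M @ M          # true matrix product M·M, same as np.linalg.matrix_power(M, 2)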
I don't think you can multiply a matrix by itself in-place.
for (i = 0; i < 5; i++) {
    for (j = 0; j < 5; j++) {
        product[i][j] = 0;
        for (k = 0; k < 5; k++) {
            product[i][j] += matrix[i][k] * matrix[k][j];
        }
    }
}
Even if you use a less naïve matrix multiplication (i.e. something other than this O(n^3) algorithm), you still need extra storage.
That's not any matrix multiplication definition I've ever seen. The standard definition is
for (i = 1 to m)
    for (j = 1 to n)
        result(i, j) = 0
        for (k = 1 to s)
            result(i, j) += a(i, k) * b(k, j)
to give the algorithm in a sort of pseudocode. In this case, a is an m x s matrix, b is an s x n matrix, the result is m x n, and subscripts begin with 1.
Note that multiplying a matrix in place is going to get the wrong answer, since you're going to be overwriting values before using them.
It's been too long since I've done matrix math (and I only ever did a little of it), but the += operator takes the value of matrix[i][j] and adds to it the value of matrix[i][j] * matrix[i][j], which I don't think is what you want to do.
Well, it looks like what it's doing is squaring each row/column entry and then adding that to the entry. Is that what you want it to do? If not, change it.