Eigen SparseMatrix - set row values - c++

I am writing a simulation with Eigen and now I need to set a list of rows of my ColumnMajor SparseMatrix like this:
In row n:
for column elements m:
if m == n set value to one
else set value to zero
There is always an element with column index = row index present in the sparse matrix. I tried to use the InnerIterator, but it did not work well since I have a ColumnMajor matrix. The prune method that was suggested in https://stackoverflow.com/a/21006998/3787689 worked, but I just need to set the non-diagonal elements to zero temporarily, and prune seems to actually delete them, which slows a different part of the program down.
How should I proceed in this case?
Thanks in advance!
EDIT: I forgot to make clear: the sparse matrix is already filled with values.

Use triplets for efficient insertion:
const int N = 5;
const int M = 10;
Eigen::SparseMatrix<double> myMatrix(N,M); // N by M matrix with no coefficient, hence this is the null matrix
std::vector<Eigen::Triplet<double>> triplets;
for (int i = 0; i < N; ++i) {
    triplets.push_back({i, i, 1.});
}
myMatrix.setFromTriplets(triplets.begin(), triplets.end());

I solved it like this: since I want to stick to a ColumnMajor matrix, I make a local RowMajor copy and use the InnerIterator to assign the values to the specific rows. After that I overwrite my matrix with the result.
Eigen::SparseMatrix<float, Eigen::RowMajor> rowMatrix;
rowMatrix = colMatrix;
for (uint i = 0; i < rowTable.size(); i++) {
    int rowIndex = rowTable(i);
    for (Eigen::SparseMatrix<float, Eigen::RowMajor>::InnerIterator it(rowMatrix, rowIndex); it; ++it) {
        if (it.row() == it.col())
            it.valueRef() = 1.0f;
        else
            it.valueRef() = 0.0f;
    }
}
colMatrix = rowMatrix;

For beginners, the simplest way to set a row/column/block to zero is to multiply it by 0.0.
So to patch an entire row in the way you desire it is enough to do:
A.row(n) *= 0; //Set entire row to 0
A.coeffRef(n,n) = 1; //Set diagonal to 1
This way you don't need to change your code depending on RowMajor/ColMajor order; Eigen will do all the work quickly.
Also, if you are really interested in freeing memory after setting the rows to 0, just add A.prune(0,0) after you have finished editing all the rows in your matrix.
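A minimal sketch of that approach applied to the original problem, reusing the colMatrix and rowTable names from the asker's own snippet above (those names, and rowTable holding the row indices to patch, are assumptions, not part of this answer):
for (uint i = 0; i < rowTable.size(); i++) {
    int n = rowTable(i);
    colMatrix.row(n) *= 0.0f;        // zero the whole row, entries stay allocated
    colMatrix.coeffRef(n, n) = 1.0f; // the diagonal entry is known to exist already
}
// colMatrix.prune(0.0f, 0.0f);     // optional, only if you do want to free the zeros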

Related

Permute Columns of Matrix in Eigen

I read this answer Randomly permute rows/columns of a matrix with eigen
But they initialize the permutation matrix as the identity matrix and do a random shuffle. I'm wondering how I can initialize the matrix to a specific permutation.
For example, if I have a vector of integers where each (index, value) pair means I want to move column "index" to column "value" how can I do this?
Eigen::MatrixXi M = Eigen::MatrixXi::Random(3,3);
std::vector<int> my_perm = {1,2,0};
some_function to return Matrix [M.col(1), M.col(2), M.col(0)]
EDIT: dtell kindly answered my original question below.
ADDITIONAL INFO:
For anyone else looking at this -- if you want to permute a matrix with a vector of quantities unknown at compile time, you can do the following:
Eigen::VectorXi indices(A.cols());
for (long i = 0; i < indices.size(); ++i) {
    indices[i] = vector_of_indices[i];
}
Eigen::PermutationMatrix<Eigen::Dynamic, Eigen::Dynamic> perm;
perm.indices() = indices;
Eigen::MatrixXd A_permute = A * perm; // permute the columns
If I understand you correctly, the answer to your question is this slight modification of the answer you have linked
Matrix3i A = Matrix3i::Random();
PermutationMatrix<3, 3> perm;
// Your permutation
perm.indices() = { 1, 2, 0 };
// Permute rows
A = perm * A;
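For the column permutation the question actually asks about, the same permutation can be applied on the right instead (this is the pattern already used in the ADDITIONAL INFO block above; shown here only as a usage note):
// Permute columns instead of rows by applying the permutation on the right.
// If the resulting order turns out to be the inverse of what you want, use perm.inverse().
A = A * perm;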

How could I subtract a 1xN eigen matrix from a MxN matrix, like numpy does?

I cannot subtract a 1xN matrix from an MxN matrix the way I would in numpy.
I create the equivalent of np.arange(9).reshape(3,3) with Eigen like this:
int buf[9];
for (int i{0}; i < 9; ++i) {
    buf[i] = i;
}
m = Map<MatrixXi>(buf, 3,3);
Then I compute mean along row direction:
m2 = m.rowwise().mean();
I would like to broadcast m2 to 3x3 matrix, and subtract it from m, how could I do this?
There is no numpy-like broadcasting available in Eigen; what you can do is reuse the rowwise/colwise pattern that you already used:
m.colwise() -= m2;
(See Eigen tutorial on this)
N.B.: m2 needs to be a vector, not a matrix. Also the more fixed the dimensions, the better the compiler can generate efficient code.
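A minimal, self-contained sketch of the above (Eigen::VectorXi is used for m2 here because, as noted, it has to be a vector; the integer types simply follow the question's MatrixXi):
#include <iostream>
#include <Eigen/Dense>

int main() {
    int buf[9];
    for (int i = 0; i < 9; ++i) buf[i] = i;

    Eigen::MatrixXi m = Eigen::Map<Eigen::MatrixXi>(buf, 3, 3);
    Eigen::VectorXi m2 = m.rowwise().mean(); // a vector, not a matrix
    m.colwise() -= m2;                       // subtract the row means from every column
    std::cout << m << std::endl;
}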
You need to use appropriate types for your values; MatrixXi lacks the vector operations (such as broadcasting). You also seem to have the bad habit of declaring your variables well before you initialise them. Don't.
This should work:
#include <array>   // for std::array
#include <numeric> // for std::iota

std::array<int, 9> buf;
std::iota(buf.begin(), buf.end(), 0);
auto m = Map<Matrix3i>(buf.data());
auto v = m.rowwise().mean();
auto result = m.colwise() - v;
While the .colwise() method already suggested should be preferred in this case, it is actually also possible to broadcast a vector to multiple columns using the replicate method.
m -= m2.replicate<1,3>();
// or
m -= m2.rowwise().replicate<3>();
If 3 is not known at compile time, you can write
m -= m2.rowwise().replicate(m.cols());

Eigen: random binary vector with t 1s

I want to compute K*es where K is an Eigen matrix (dimension pxp) and es is a px1 random binary vector with exactly t ones.
For example if p=5 and t=2 a possible es is [1,0,1,0,0]' or [0,0,1,1,0]' and so on...
How do I easily generate es with Eigen?
I came up with an even better solution, which is a combination of std::vector, Eigen::Map and std::shuffle.
std::vector<int> esv(p, 0);
std::fill_n(esv.begin(), t, 1);                          // first t entries set to 1
Eigen::Map<Eigen::VectorXi> es(esv.data(), esv.size());  // a view, no copy
std::random_device rd;
std::mt19937 g(rd());
std::shuffle(std::begin(esv), std::end(esv), g);         // es sees the shuffled data
This solution is memory efficient (since Eigen::Map doesn't copy esv) and has the big advantage that if we want to permute es several times (like in this case), then we just need to repeat std::shuffle(std::begin(esv), std::end(esv), g);
Maybe I'm wrong, but this solution seems more elegant and efficient than the previous ones.
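A possible usage sketch for the product asked about (this assumes K is an Eigen::MatrixXd of size pxp, which is not stated in the answer itself):
Eigen::VectorXd Kes = K * es.cast<double>(); // cast the int vector before multiplying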
So you're using Eigen. I'm not sure what matrix type you're using, but I'll go off the class Eigen::MatrixXd.
What you need to do is:
Create a 1xp matrix that's all 0
Choose random spots between 0 and p-1 to flip from 0 to 1, and make sure each spot is unique.
The following code should do the trick, although you could implement it other ways.
//Your p and t
int p = 5;
int t = 2;
//1xp matrix
MatrixXd es(1, p);
//Initialize the whole 1xp matrix to 0
for (int i = 0; i < p; ++i)
    es(0, i) = 0;
//Get a random position in the 1xp matrix, from 0 to p-1
for (int i = 0; i < t; ++i)
{
    int randPos = rand() % p;
    //If the position was already a 1 and not a 0, get a different random position
    while (es(0, randPos) == 1)
        randPos = rand() % p;
    //Change the random position from a 0 to a 1
    es(0, randPos) = 1;
}
When t is close to p, Ryan's method needs to generate many more than t random numbers. To avoid this performance degradation, you could solve your original problem
find t different numbers from [0, p) that are uniformly distributed
by the following steps
generate t uniformly distributed random numbers idx[t] from [0, p-t+1)
sort these numbers idx[t]
idx[i]+i, i=0,...,t-1 are the result
The code:
VectorXi idx(t);
VectorXd es(p);
es.setConstant(0);
for (int i = 0; i < t; ++i) {
    idx(i) = int(double(rand()) / RAND_MAX * (p - t + 1));
}
std::sort(idx.data(), idx.data() + idx.size());
for (int i = 0; i < t; ++i) {
    es(idx(i) + i) = 1.0;
}

How can I most efficiently map a kernel range for a hermitian (symmetric) matrix in OpenCL?

I'm working on an OpenCL project to generate very large hermitian (symmetric) matrices, and I am trying to determine the best way to generate the work IDs.
A hermitian matrix is symmetric along the diagonal, so that M(i,j) = M*(j,i).
In the brute force way, the for loop looks like:
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        complex<float> result = doSomeCalculation();
        M(i,j) = result;
    }
}
However, taking advantage of the hermitian property, the loop can be made twice as efficient by only calculating the upper triangular part of the matrix and duplicating the result into the lower triangular part:
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        complex<float> result = doSomeCalculation();
        M(i,j) = result;
        M(j,i) = conj(result);
    }
}
In both loops, doSomeCalculation() is an expensive operation, and each entry in the matrix is completely independent of every other entry (i.e. the problem is embarrassingly parallel).
My question is this:
How can I implement the second loop with doSomeCalculation as an OpenCL kernel so that the thread IDs are most efficiently used (i.e. so that the thread calculates both M(i,j) and M(j,i) without having to call doSomeCalculation() twice)?
You need to use a linear index. For example, you can index every element of the upper triangle of your matrix in this way:
0    1     2    ...   N-1
*    N    N+1   ...  2N-2
*    *   2N-1   ...  3N-4
...
*    *     *    ...  N(N+1)/2 - 1
That is, the index k is given by:
k = i*N - i*(i+1)/2 + j
Where N is the size of the matrix and (i,j) are respectively the 0-based indices of the row and the column.
This relationship can be inverted; see the answer to this question, which I report here for completeness:
i = floor( ( 2*N + 1 - sqrt( (2*N+1)*(2*N+1) - 8*k ) ) / 2 );
j = k - N*i + i*(i+1)/2;
So you need to enqueue a 1D kernel with N(N+1)/2 work items, and you can decide by yourself the size of the workgroup (usually 64 items per work group is a good choice).
Then in the OpenCL code you can retrieve the index k by using:
int k = get_group_id(0)*64 + get_local_id(0);
And then use the two relationships above to obtain the indices (i, j) of the matrix element you need to compute.
Moreover, notice that you can also save space by representing your hermitian matrix as a linear vector with N(N+1)/2 elements.
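A small sketch of the index arithmetic above in plain C++, useful for checking the mapping on the host before porting the same expressions into the kernel (the function names are just for illustration):
#include <cmath>

// Forward mapping: (i, j) with j >= i  ->  linear index k in [0, N*(N+1)/2).
int upperIndex(int N, int i, int j) {
    return i * N - i * (i + 1) / 2 + j;
}

// Inverse mapping: linear index k  ->  (i, j) in the upper triangle.
void upperCoords(int N, int k, int& i, int& j) {
    i = (int)std::floor((2 * N + 1 - std::sqrt(double(2 * N + 1) * (2 * N + 1) - 8.0 * k)) / 2.0);
    j = k - N * i + i * (i + 1) / 2;
}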
If your matrices are really big, then you can dice up your NxN matrix into (N/k)x(N/k) tiles, each of size kxk. Since you only need half of the data, you create a 1D NDRange of size roughly local_group_size * (N/k)*(N/k)/2.
Every tile of the matrix is processed by one work-group (the work-group size is your choice). The idea is that you create an array on the host side which contains the position of every work-group in the matrix. The kernel stub should look like this:
void __kernel myKernel(
    __global int* coords,
    ....)
{
    int2 WorkGroupPositionInMatrix = vload2(get_group_id(0), coords);
    ...
    DoCalculation();
    ...
    WriteResultTwice();
    ...
    return;
}
What you need to handle by hand is those work-groups which are placed on the matrix diagonal. If the matrix size is big, the overhead of the work-groups placed on the diagonal is negligible.
A right triangle can be cut in half vertically and the smaller portion rotated to fit with the larger portion to form a rectangle of equal area. Therefore it is easy to make your triangular global work area into one that is rectangular, which fits OpenCL.
See my answer here: OpenCL efficient way to group a lower triangular matrix

Initialize an Eigen::MatrixXd from a 2d std::vector

This should hopefully be pretty simple, but I cannot find a way to do it in the Eigen documentation.
Say I have a 2D vector, i.e.
std::vector<std::vector<double> > data
Assume it is filled with a 10 x 4 data set.
How can I use this data to fill an Eigen::MatrixXd mat?
The obvious way is to use a for loop like this:
# Pseudocode
Eigen::MatrixXd mat(10, 4);
for i : 1 -> 10
    mat(i, 0) = data[i][0];
    mat(i, 1) = data[i][1];
    ...
end
But is there a better way that is native to Eigen?
Sure thing. You can't do the entire matrix at once, because vector<vector> stores each single row in contiguous memory, but successive rows may not be contiguous. However, you don't need to assign the elements one at a time either; you can assign a whole row at once:
std::vector<std::vector<double> > data;
MatrixXd mat(10, 4);
for (int i = 0; i < 10; i++)
    mat.row(i) = VectorXd::Map(&data[i][0], data[i].size());
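A self-contained variant of the same idea, with the dimensions taken from the vector itself instead of being hard-coded (the helper name is made up for illustration, and all inner vectors are assumed to have the same, non-zero length):
#include <vector>
#include <Eigen/Dense>

// Sketch: build a MatrixXd from a non-empty vector of equally sized rows.
Eigen::MatrixXd toMatrix(const std::vector<std::vector<double> >& data) {
    Eigen::MatrixXd mat(data.size(), data[0].size());
    for (Eigen::Index i = 0; i < mat.rows(); ++i)
        mat.row(i) = Eigen::VectorXd::Map(data[i].data(), data[i].size());
    return mat;
}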