I'm using Armadillo. The eigs_gen function (for SpMat sparse matrices) has a parameter k for the number of eigenvalues to compute.
I have a 3x3 matrix my_matrix. When I run
arma::cx_fvec values;
arma::cx_fmat vectors;
arma::eigs_gen (values, vectors, my_matrix, 3);
I get the following exception
eigs_gen(): n_eigvals + 1 must be less than the number of rows in the matrix
Asking for all 3 eigenvalues of a 3x3 matrix is perfectly well-defined, so I don't understand this restriction.
On the other hand, the eig_gen function, which computes all eigenvalues, only compiles for the dense matrix Mat type.
How do I find all eigenvalues for a sparse matrix with Armadillo?
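One common workaround, since the sparse solver requires k to be strictly smaller than the matrix dimension, is to convert the (small) sparse matrix to a dense one and use eig_gen. A minimal sketch, assuming the matrix is an arma::sp_fmat and is small enough to densify:

#include <armadillo>

int main()
{
    arma::sp_fmat my_matrix(3, 3);   // assumption: the 3x3 sparse matrix from the question
    // ... fill my_matrix ...

    // densify, then use eig_gen, which computes all eigenvalues
    arma::fmat dense = arma::conv_to<arma::fmat>::from(my_matrix);

    arma::cx_fvec values;
    arma::cx_fmat vectors;
    arma::eig_gen(values, vectors, dense);

    return 0;
}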
I am running repeated matrix diagonalization routines for small complex-valued square matrices (dimension < 10), but I encountered a failure on a small constant-valued matrix. The ComplexEigenSolver does not converge and returns empty objects for the eigenvalues and eigenvectors.
I have checked this by solving a matrix with all entries equal to 1, which works fine, so the problem must be related to the small values in my matrix.
#include <Eigen/Eigenvalues>
#include <complex>

using namespace Eigen;

MatrixXcd matrix(2,2);
matrix(0,0) = std::complex<double>(1.4822e-322, 0);
matrix(0,1) = std::complex<double>(1.4822e-322, 0);
matrix(1,0) = std::complex<double>(1.4822e-322, 0);
matrix(1,1) = std::complex<double>(1.4822e-322, 0);

ComplexEigenSolver<MatrixXcd> ces;
ces.compute(matrix);
ces.eigenvalues();
ces.eigenvectors();
ces.info();
This gives empty eigenvalues and eigenvectors, and ces.info() returns 2 (NoConvergence).
I expect it to simply give eigenvalues with entries 0 and 2.96e-322 (a scaled version of the matrix of ones given here: https://en.wikipedia.org/wiki/Matrix_of_ones)
Are the values too small?
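The entries are far below the smallest normal double (about 2.2e-308), i.e. subnormal. A minimal sketch of one possible workaround, assuming it is acceptable to rescale the matrix before the solve and undo the scaling on the eigenvalues afterwards (eigenvalues scale linearly with the matrix; eigenvectors are unaffected):

#include <Eigen/Eigenvalues>
#include <complex>
#include <iostream>

int main()
{
    Eigen::MatrixXcd matrix(2, 2);
    matrix.setConstant(std::complex<double>(1.4822e-322, 0));

    // assumption: any factor that brings the entries into the normal range works
    const double scale = 1e300;

    Eigen::ComplexEigenSolver<Eigen::MatrixXcd> ces;
    ces.compute(matrix * scale);

    if (ces.info() == Eigen::Success)
        std::cout << ces.eigenvalues() / scale << "\n";   // undo the scaling on the eigenvalues
    return 0;
}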
I am performing a series of matrix multiplications with fairly large matrices. Running through all of these operations takes a long time, and my program needs to do it in a large loop. I was wondering if anyone has ideas to speed this up? I just started using Eigen, so I have very limited knowledge.
I was using ROOT/CERN's built-in TMatrix class, but its performance for matrix operations is very poor. I set up some diagonal matrices using Eigen in the hope that it handles the multiplication in a more optimal way. It may, but I cannot really see a performance difference.
// setup matrices
int size = 8000;
Eigen::MatrixXf a(size*2,size);
// fill matrix a....
Eigen::MatrixXf r(2*size,2*size); // diagonal matrix of row sums of a
// fill matrix r
Eigen::MatrixXf c(size,size); // diagonal matrix of col sums of a
// fill matrix c
// transpose a in place
a.transposeInPlace();
Eigen::MatrixXf c_dia;
c_dia = c.diagonal().asDiagonal();
Eigen::MatrixXf r_dia;
r_dia = r.diagonal().asDiagonal();
// calc car
Eigen::MatrixXf car;
car = c_dia*a*r_dia;
You are doing way too much work here. If you have diagonal matrices, only store the diagonal (and use it directly in products). Once you store a diagonal matrix in a full square matrix, the structural information is lost to Eigen.
Also, you don't need to store the transposed variant of a; just use a.transpose() inside a product (that is only a minor issue here, though).
// setup matrices
int size = 8000;
Eigen::MatrixXf a(size*2,size);
// fill matrix a....
a.setRandom();
Eigen::VectorXf r = a.rowwise().sum(); // diagonal matrix of row sums of a
Eigen::VectorXf c = a.colwise().sum(); // diagonal matrix of col sums of a
Eigen::MatrixXf car = c.asDiagonal() * a.transpose() * r.asDiagonal();
Finally, of course make sure to compile with optimization enabled, and enable vectorization if available (with gcc or clang compile with -O2 -march=native).
Consider the code:
glm::mat4x4 T = glm::mat4x4(1);
glm::vec4 vrpExpanded;
vrpExpanded.x = this->vrp.x;
vrpExpanded.y = this->vrp.y;
vrpExpanded.z = this->vrp.z;
vrpExpanded.w = 1;
this->vieworientationmatrix = T * (-vrpExpanded);
Why does T*(-vrpExpanded) yield a vector? According to my knowledge of linear algebra this should yield a mat4x4.
According to my knowledge of linear algebra this should yield a mat4x4.
Then that's the problem.
According to linear algebra, a matrix can be multiplied by a scalar (which does element-wise multiplication) or by another matrix. But even then, a matrix * matrix multiplication only works if the number of columns in the first matrix equals the number of rows in the second. And the resulting matrix has the number of rows of the first and the number of columns of the second.
So if you have an AxB matrix and you multiply it with a CxD matrix, this only works if B and C are equal. And the result is an AxD matrix.
Multiplying a matrix by a vector means to pretend the vector is a matrix. So if you have a 4x4 matrix and you right-multiply it with a 4-element vector, this will only make sense if you treat that vector as a 4x1 matrix (since you cannot multiply a 4x4 matrix by a 1x4 matrix). And the result of a 4x4 matrix * a 4x1 matrix is... a 4x1 matrix.
AKA: a vector.
GLM is doing exactly what you asked.
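A minimal sketch of that dimension rule in GLM terms (the vector values here are purely illustrative):

#include <glm/glm.hpp>

glm::mat4 T(1.0f);                    // identity, as in the question
glm::vec4 v(1.0f, 2.0f, 3.0f, 1.0f);  // some 4-element vector
glm::vec4 result = T * (-v);          // 4x4 * 4x1 -> 4x1, i.e. a vec4
// glm::mat4 m = T * (-v);            // would not compile: the product is not a matrix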
Diagonalizing a 2x2 Hermitian matrix is simple; it can be done analytically. However, when the eigenvalues and eigenvectors have to be computed more than 10^6 times, it is important to do this as efficiently as possible. In particular, if the off-diagonal elements can vanish, it is not possible to use a single formula for the eigenvectors: an if-statement is necessary, which of course slows down the code. Thus, I thought that using Eigen, where it is stated that the diagonalization of 2x2 and 3x3 matrices is optimized, would still be a good choice. Using the following definitions
#include <complex>
#include <cstdlib>

const std::complex<double> I(0., 1.);

// uniform random number in [-W/2, W/2]
inline double block_distr(double W)
{
    return -W/2. + rand() * W / RAND_MAX;
}
a test-loop would be
...
SelfAdjointEigenSolver<Matrix<std::complex<double>, 2, 2> > ces;
Matrix<std::complex<double>, 2, 2> X;
double a00, a11, re_a01, im_a01;
for (int i = 0; i < iter_MAX; ++i) {
    a00 = block_distr(100.);
    a11 = block_distr(100.);
    re_a01 = block_distr(100.);
    im_a01 = block_distr(100.);
    X(0,0) = a00;
    X(1,0) = re_a01 - I*im_a01;
    // only the lower triangular part is referenced, so X(0,1) need not be set
    X(1,1) = a11;
    ces.compute(X, ComputeEigenvectors);
}
Writing the loop without Eigen, using the formulas for the eigenvalues and eigenvectors of a Hermitian matrix directly, plus an if-statement to check whether the off-diagonal element is zero, is a factor of 5 faster. Am I not using Eigen properly, or is such an overhead normal? Are there other libraries that are optimized for small self-adjoint matrices?
By default, the iterative method is used. To use the analytical version for the 2x2 and 3x3, you have to call the computeDirect function:
ces.computeDirect(X);
but it is unlikely to be faster than your implementation of the analytic formulas.
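For reference, a minimal sketch of the kind of hand-written analytic 2x2 Hermitian diagonalization the question compares against (names and the skipped normalization are only illustrative; the if-branch handles the vanishing off-diagonal element mentioned in the question):

#include <complex>
#include <cmath>

// Eigen-decomposition of the Hermitian matrix [[a00, b], [conj(b), a11]]
// with real diagonal a00, a11 and complex off-diagonal b.
inline void diag2x2(double a00, double a11, std::complex<double> b,
                    double& lambda0, double& lambda1,
                    std::complex<double> v0[2], std::complex<double> v1[2])
{
    const double mean = 0.5 * (a00 + a11);
    const double diff = 0.5 * (a00 - a11);
    const double root = std::sqrt(diff * diff + std::norm(b));   // std::norm(b) = |b|^2
    lambda0 = mean - root;
    lambda1 = mean + root;

    if (std::norm(b) == 0.0) {
        // off-diagonal vanishes: the matrix is already diagonal
        v0[0] = 1.0; v0[1] = 0.0;
        v1[0] = 0.0; v1[1] = 1.0;
    } else {
        // unnormalized eigenvectors (b, lambda - a00)
        v0[0] = b; v0[1] = lambda0 - a00;
        v1[0] = b; v1[1] = lambda1 - a00;
        // normalization omitted for brevity
    }
}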
I'm multiplying two matrices with OpenCV; A is NxM and B is MxP.
According to the documentation:
All the arrays must have the same type and the same size (or ROI
size). For types that have limited range this operation is saturating.
However, by the theory of matrix multiplication:
Assume two matrices are to be multiplied (the generalization to any number is discussed below). If A is an n×m matrix and B is an m×p matrix, then their product AB is an n×p matrix, defined only if the number of columns m in A is equal to the number of rows m in B.
shouldn't this code be working?
- (CvMat *) multMatrix:(CvMat *)AMatrix BMatrix:(CvMat *)BMatrix
{
    CvMat *result = cvCreateMat(AMatrix->rows, BMatrix->cols, kMatrixType);
    cvMul(AMatrix, BMatrix, result, 1.0);
    return result;
}
I get the following exception:
OpenCV Error: Assertion failed (src1.size == dst.size &&
src1.channels() == dst.channels()) in cvMul, file
/Users/Aziz/Documents/Projects/opencv_sources/trunk/modules/core/src/arithm.cpp,
line 2728
kMatrixType is CV_32F, A is 6x234, B is 234x5 and result is 6x5...
Am I doing something wrong? Or is this an OpenCV restriction on matrix multiplication?
You are doing element-wise multiplication with cvMul.
You should look at cvMatMul for doing proper matrix multiplication.
http://opencv.willowgarage.com/wiki/Matrix_operations
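For illustration, a minimal sketch of the method from the question with cvMul replaced by cvMatMul, rewritten here as a plain C function (the header path assumes OpenCV 2.x; cvGEMM is the more general alternative):

#include <opencv2/core/core_c.h>

const int kMatrixType = CV_32F;   // as stated in the question

// A is 6x234, B is 234x5, so the true matrix product is 6x5
CvMat *multMatrix(CvMat *AMatrix, CvMat *BMatrix)
{
    CvMat *result = cvCreateMat(AMatrix->rows, BMatrix->cols, kMatrixType);
    cvMatMul(AMatrix, BMatrix, result);   // matrix multiplication, not element-wise
    return result;
}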