How to get inverse of a complex double matrix using Eigen library? - c++

I wrote a C++ program to find the inverse of a 10 by 10 covariance matrix 'sn' (where sn = x*x^H; H denotes the Hermitian transpose) of data type complex double with the Eigen library. Since sn has rank 2, I tried to make it a full rank matrix by adding a 10 by 10 matrix 'a' whose diagonal elements are 0.01 and off-diagonal elements are 0. But now when I use .inverse(), it gives wrong results when compared with MATLAB.
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
using namespace std;

int main()
{
    MatrixXcd sn(10,10), sn1(10,10), a(10,10);
    a.setZero();           // the off-diagonal elements must be zeroed explicitly
    for (int i = 0; i < 10; i++)
        a(i,i) = 0.01;     // diagonal elements
    sn1 = sn + a;          // sn is known (filled elsewhere)
    cout << "sn1 inverse" << endl << sn1.inverse() << endl;
}
When I tried to find the inverse of a simple 2 by 2 matrix, inverse() worked perfectly. Please help me.
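A quick way to sanity-check the result is to look at the residual of the inverse (a minimal sketch, assuming sn is filled in elsewhere; adding 0.01*I shifts every eigenvalue of the Hermitian matrix sn away from zero, so sn1 is invertible):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    MatrixXcd sn = MatrixXcd::Zero(10, 10);   // placeholder: fill with x * x.adjoint()
    MatrixXcd sn1 = sn + 0.01 * MatrixXcd::Identity(10, 10);
    MatrixXcd inv = sn1.inverse();
    // For an accurate inverse this residual is near machine precision (~1e-15), not O(1).
    std::cout << "residual: "
              << (sn1 * inv - MatrixXcd::Identity(10, 10)).norm() << std::endl;
}

If the residual is tiny but the numbers still differ from MATLAB, the discrepancy is more likely in how sn itself is built (or in uninitialized entries of a, as noted in the code comment above) than in inverse().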

Related

Fast matrix multiplication of XDX^T for D diagonal

Consider fast matrix multiplication of XDX^T for X an n by m matrix, and D an m by m diagonal matrix. Here m>>n (suppose n around 1000, m around 100000). In my application, X is a fixed matrix and values of D can change at every iteration.
What would be a fast way to calculate this? At the moment I am just doing simple multiplication in C++.
EDIT: I should clarify my current procedure; it is not "simple multiplication". In particular, I am multiplying the columns of X by the square roots of the corresponding diagonal entries of D to get A := X*D^{1/2}. Then I directly calculate A*t(A) (the product of an n by m matrix with its transpose).
Thank you.
If you know that D is diagonal, then you can just do simple multiplication. Hopefully, you are not multiplying the zeros.
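In Eigen this can be expressed so that the zeros of D are never touched (a minimal sketch; the function name xdxt is hypothetical, and it assumes the diagonal of D is stored in a vector d with nonnegative entries, matching the square-root approach from the question):

#include <Eigen/Dense>
using namespace Eigen;

// Computes X * D * X^T without ever forming the dense m x m matrix D.
MatrixXd xdxt(const MatrixXd& X, const VectorXd& d)
{
    // A = X * D^{1/2}: scales column j of X by sqrt(d(j)).
    MatrixXd A = X * d.cwiseSqrt().asDiagonal();
    // Symmetric rank update: computes only one triangle of A * A^T,
    // roughly halving the work compared to a general product.
    MatrixXd result = MatrixXd::Zero(X.rows(), X.rows());
    result.selfadjointView<Lower>().rankUpdate(A);   // result += A * A^T
    return result.selfadjointView<Lower>();          // mirror lower half to upper
}

Alternatively, X * d.asDiagonal() * X.transpose() also avoids materializing D, though it does not exploit the symmetry of the result.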

Opencv Multiplication of Large matrices

I have 2 matrices of dimension 1*280000.
I wanted to multiply one matrix with the transpose of the second matrix using OpenCV.
I tried to multiply them using the multiplication operator (*).
But it is giving me the error: 'The total matrix size does not fit to size_t type',
since after the multiplication the size of the matrix will be 280000*280000.
So I am thinking the multiplication should be 32-bit.
Is there any method to do the 32-bit multiplication?
Why do you want to multiply them like that? Since this is an answer, I would like to help you think it through rather than just do it for you:
Supposing that you have the two matrices A and B (A.size() == B.size() == [1 x 280000]),
and you want AB = A.t() * B (AB is the 280000 x 280000 result), then
AB = [ A[0][0]*B
       A[0][1]*B
       ...
       A[0][279999]*B ]
(each row of the result is the row matrix B multiplied by the corresponding element of the column (transposed) matrix A.t()).
Equivalently, each column j of AB is B[0][j] * A.t() (each column is the transposed matrix multiplied by the corresponding element of the other matrix).
Hope that this helps with what you are doing. Using a for loop you can print, store, or process each row of the result as you need it, without ever allocating the whole matrix.
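As a concrete illustration (a minimal sketch with made-up data; the point is that each 1 x N row of the outer product is generated and processed on the fly, so the 280000 x 280000 result is never allocated):

#include <opencv2/core.hpp>
using namespace cv;

int main()
{
    const int N = 280000;
    Mat A(1, N, CV_32F), B(1, N, CV_32F);
    randu(A, Scalar(0), Scalar(1));          // made-up data
    randu(B, Scalar(0), Scalar(1));
    for (int i = 0; i < N; ++i)
    {
        Mat row = A.at<float>(0, i) * B;     // row i of A.t() * B, size 1 x N
        // ... print, store, or accumulate whatever you need from 'row' ...
    }
    return 0;
}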

Multiplying matrices in Eigen c++ gives wrong dimensions

I'm having trouble understanding why I am getting a 10x10 matrix as a result from multiplying a 10x3 matrix with a 3x10 matrix using the Eigen library in c++.
By following the documentation at http://eigen.tuxfamily.org/dox-devel/group__TutorialMatrixArithmetic.html I came up with
const int NUM_OBSERVATIONS = 10;
const int NUM_DIMENSIONS = 3;
MatrixXf localspace(NUM_DIMENSIONS, NUM_OBSERVATIONS);
MatrixXf rotatedlocalspace(NUM_OBSERVATIONS, NUM_DIMENSIONS);
MatrixXf covariance(NUM_DIMENSIONS, NUM_DIMENSIONS);
covariance = (rotatedlocalspace * localspace) / (NUM_OBSERVATIONS - 1);
cout << covariance << endl;
Output gives a 10x10 matrix, when I am trying to obtain a 3x3 covariance matrix for each dimension (these are mean-centered XYZ points). "localspace" and "rotatedlocalspace" are both filled with float values when the covariance is calculated.
How do I get the correct covariance matrix?
Eigen is correct, as it reproduces basic math: if A is a matrix of dimension n x m and B has dimension m x k, then A*B has the dimension n x k.
Applied to your problem, if your matrix rotatedlocalspace is of dimension 10 x 3 and localspace has dimension 3 x 10, then rotatedlocalspace*localspace has dimension
(10 x 3) * (3 x 10) -> 10 x 10.
The scalar division you apply afterwards doesn't change the dimension.
If you expect a different dimension, then try to commute the factors in the matrix product. This way you will obtain a 3x3 matrix.
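For instance, commuting the factors (a minimal sketch with random placeholder data, reusing the names from the question):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    const int NUM_OBSERVATIONS = 10;
    const int NUM_DIMENSIONS = 3;
    // Placeholder data; in the question these hold mean-centered XYZ points.
    MatrixXf localspace        = MatrixXf::Random(NUM_DIMENSIONS, NUM_OBSERVATIONS);
    MatrixXf rotatedlocalspace = MatrixXf::Random(NUM_OBSERVATIONS, NUM_DIMENSIONS);
    // (3 x 10) * (10 x 3) -> 3 x 3
    MatrixXf covariance = (localspace * rotatedlocalspace) / (NUM_OBSERVATIONS - 1);
    std::cout << covariance << std::endl;   // prints a 3x3 matrix
}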

c++ eigenvalue and eigenvector corresponding to the smallest eigenvalue

I am trying to find the eigenvalues and the eigenvector corresponding to the smallest eigenvalue. I have a matrix A (n x 2) and I have computed B = transpose(A) * A. When I use the Eigen function compute() in C++ and print the eigenvalues of matrix B, it shows something like this:
(4.4, 0)
(72.1, 0)
Printing the eigenvectors it gives output:
(-0.97, 0) (0.209, 0)
(-0.209, 0) (-0.97, 0)
I am confused. Eigenvectors can't be zero, I guess. So, for the smallest eigenvalue 4.4, is the corresponding eigenvector (-0.97, -0.209)?
P.S. - when I print
mysolution.eigenvalues()[0]
it prints (4.4, 0). And when I print
mysolution.eigenvectors().col(0)
it prints (-0.97, 0) (0.209, 0). That's why I guess I can assume that for eigenvalue 4.4, the corresponding eigenvector is (-0.97, -0.209).
I guess you are correct.
None of your eigenvalues is null, though. It seems that you are working with complex numbers.
Could it be that you selected a complex floating point matrix to do your computations? Something along the lines of MatrixX2cf or MatrixX2cd.
Every square matrix has a set of eigenvalues. But even if the matrix itself only consists of real numbers, the eigenvalues and -vectors might contain complex numbers (take (0 1;-1 0) for example)
If Eigen knows nothing about your matrix structure (i.e. is it symmetric/self-adjoint? Is it orthonormal/unitary?) but still wants to provide you with exact eigenvalues, the only general type that can hold all possible eigenvalues is a complex number.
Thus, Eigen always returns complex numbers, which are represented as pairs (a, b) for a + bi. Eigen will only return real numbers if it knows the matrix is self-adjoint, i.e. if a SelfAdjointView is used to access the matrix.
If you know for a fact that your matrix only has real eigenvalues, you can just extract the real part with eigenvalue.real(), since Eigen returns std::complex values.
EDIT: I just realized that if your matrix A has no complex entries, B=transposed(A)*A is self-adjoint and thus you could just use a SelfAdjointView of the matrix to compute the real eigenvalues and -vectors.
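Building on that EDIT, a minimal sketch (with made-up data): since B = A.t()*A is symmetric for real A, SelfAdjointEigenSolver returns real eigenvalues sorted in increasing order, so index 0 is the smallest:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    MatrixXd A = MatrixXd::Random(5, 2);   // made-up n x 2 input
    MatrixXd B = A.transpose() * A;        // 2 x 2, symmetric by construction
    SelfAdjointEigenSolver<MatrixXd> es(B);
    // Eigenvalues are real and sorted in increasing order.
    std::cout << "smallest eigenvalue: " << es.eigenvalues()[0] << std::endl;
    std::cout << "its eigenvector:\n" << es.eigenvectors().col(0) << std::endl;
}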

Diagonalization of a 2x2 self-adjoint (Hermitian) matrix

Diagonalizing a 2x2 Hermitian matrix is simple; it can be done analytically. However, when the eigenvalues and eigenvectors have to be calculated more than 10^6 times, it is important to do it as efficiently as possible. Especially if the off-diagonal elements can vanish, it is not possible to use one formula for the eigenvectors: an if-statement is necessary, which of course slows down the code. Thus, I thought using Eigen, where it is stated that the diagonalization of 2x2 and 3x3 matrices is optimized, would still be a good choice:
Using

const std::complex<double> I ( 0., 1. );

inline double block_distr ( double W )
{
    return ( -W/2. + rand() * W/RAND_MAX );   // uniform in [-W/2, W/2]
}
a test-loop would be
...
SelfAdjointEigenSolver<Matrix<std::complex<double>, 2, 2> > ces;
Matrix<std::complex<double>, 2, 2> X;
double a00, a11, re_a01, im_a01;   // declarations added so the fragment compiles
for (int i = 0; i < iter_MAX; ++i) {
    a00    = block_distr(100.);
    a11    = block_distr(100.);
    re_a01 = block_distr(100.);
    im_a01 = block_distr(100.);
    X(0,0) = a00;
    X(1,0) = re_a01 - I*im_a01;
    // only the lower triangular part is referenced! X(0,1)=0.; <--- not necessary
    X(1,1) = a11;
    ces.compute(X, ComputeEigenvectors);
}
Writing the loop without Eigen, directly using the formulas for the eigenvalues and eigenvectors of a Hermitian matrix and an if-statement to check whether the off-diagonal element is zero, is a factor of 5 faster. Am I not using Eigen properly, or is such an overhead normal? Are there other libraries which are optimized for small self-adjoint matrices?
By default, the iterative method is used. To use the analytical version for the 2x2 and 3x3, you have to call the computeDirect function:
ces.computeDirect(X);
but it is unlikely to be faster than your implementation of the analytic formulas.
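As a drop-in change to the test loop from the question, that is (a sketch; one caveat, stated as an assumption: in the Eigen sources I have seen, the closed-form branch of computeDirect is only specialized for real scalar types, so a complex Hermitian 2x2 may silently fall back to the iterative compute(); worth verifying against your Eigen version):

    // replaces ces.compute(X, ComputeEigenvectors) in the loop body
    ces.computeDirect(X, ComputeEigenvectors);   // same options parameter as compute()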