Issue with eigs_sym for obtaining eigenvalues with smallest magnitude - C++

I'm trying to get a limited number of eigenvalues with smallest magnitude of a square symmetric matrix.
To do this, I first use the example from the Armadillo documentation (http://arma.sourceforge.net/docs.html#eigs_sym):
sp_mat A = sprandu<sp_mat>(1000, 1000, 0.1);
sp_mat B = A.t() * A;
vec eigval;
mat eigvec;
eigs_sym(eigval, eigvec, B, 10, "sm"); // add "sm" to get the eigenvalues of smallest magnitude
cout << eigval << endl;
Here I obtain an error saying the decomposition failed [failed to converge].
However, when I call eigs_sym like this:
eigs_sym(eigval, eigvec, B, 10); // obtain the eigenvalues with LARGEST magnitude (default call)
this works well and I get the expected result:
1.1596e+02
1.1680e+02
1.1785e+02
1.1815e+02
1.1927e+02
1.2017e+02
1.2108e+02
1.2256e+02
1.2323e+02
2.5413e+03
I'm on Ubuntu, and here is my .pro file (Qt):
LIBS += -lgsl -lgslcblas -lX11 -lpthread -llapack -lm -fopenmp -larmadillo
Any ideas on how to resolve this issue?
Thank you

I solved this issue by choosing a higher number of eigenvalues to extract.
Apparently, requesting too few eigenvalues can cause the eigensolver not to converge. If you replace
eigs_sym(eigval, eigvec, B, 10,"sm")
by
eigs_sym(eigval, eigvec, B, 100,"sm")
this will work.
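For reference, a minimal sketch of the workaround above (assuming Armadillo is linked as in the .pro file, and that eigs_sym returns eigenvalues in ascending order, as in the output shown in the question): request more eigenvalues than needed, then keep only the first ten.

#include <armadillo>
using namespace arma;

int main()
{
    sp_mat A = sprandu<sp_mat>(1000, 1000, 0.1);
    sp_mat B = A.t() * A;

    vec eigval;
    mat eigvec;

    // Asking ARPACK for more eigenvalues than needed often helps "sm" converge.
    eigs_sym(eigval, eigvec, B, 100, "sm");

    // Keep only the 10 smallest-magnitude eigenvalues and their eigenvectors.
    // B = A.t()*A is positive semi-definite, so the smallest values are also the
    // smallest in magnitude and appear first in the ascending output.
    vec smallest = eigval.head(10);
    mat vectors  = eigvec.cols(0, 9);

    smallest.print("10 smallest-magnitude eigenvalues:");
    return 0;
}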

Related

BLAS function returns zero in Fortran90

I am learning to use BLAS in Fortran90, and wrote a simple program using the subroutine SAXPY and the function SNRM2. The program computes the distance between two points by subtracting one vector from the other, then taking the Euclidean norm of the result.
I am specifying the return value of SNRM2 as external according to the answer to a similar question, "Calling BLAS functions".
My full program:
program test
  implicit none
  real :: dist
  real, dimension(3) :: a, b
  real, external :: SNRM2

  a = (/ 3.0, 0.0, 0.0 /)
  b = (/ 0.0, 4.0, 0.0 /)

  call SAXPY(3, -1.0, a, 1, b, 1)
  print *, 'difference vector: ', b

  dist = 6.66  ! to show that SNRM2 is doing something
  dist = SNRM2(3, b, 1)
  print *, 'length of diff vector: ', dist
end program test
The result of the program is:
difference vector: -3.00000000 4.00000000 0.00000000
length of diff vector: 0.00000000
The difference vector is correct, but the length ought to be 5. So why is SNRM2 returning a value of zero?
I know the variable dist is modified by SNRM2, so I don't suspect my OpenBLAS installation is broken. I'm running macOS 10.13 and installed everything with Homebrew.
I am compiling with gfortran with many flags enabled, and I get no warnings:
gfortran test.f90 -lblas -g -fimplicit-none -fcheck=all -fwhole-file -fcheck=all -fbacktrace -Wall -Wextra -Wline-truncation -Wcharacter-truncation -Wsurprising -Waliasing -Wconversion -Wno-unused-parameter -pedantic -o test
I tried looking at the code for snrm2.f, but I don't see any potential problems.
I also tried declaring my variables with real(4) or real(selected_real_kind(6)) with no change in behavior.
Thanks!
According to this page, there seems to be some issue with single precision routines in the BLAS shipped with Apple's Accelerate Framework.
On my Mac (OSX10.11), gfortran-8.1 (installed via Homebrew) + default BLAS (in the system) gives a wrong result:
$ gfortran-8 test.f90 -lblas
or
$ gfortran-8 test.f90 -L/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current/ -lBLAS
$ ./a.out
difference vector: -3.00000000 4.00000000 0.00000000
length of diff vector: 0.00000000
while explicitly linking with OpenBLAS (installed via Homebrew) gives the correct result:
$ gfortran-8 test.f90 -L/usr/local/Cellar/openblas/0.2.20_2/lib -lblas
$ ./a.out
difference vector: -3.00000000 4.00000000 0.00000000
length of diff vector: 5.00000000
The above page suggests that the problem occurs when linking with the system BLAS in a way that is not compliant with the old g77 style. Indeed, adding the -ff2c option gives the correct result:
$ gfortran-8 -ff2c test.f90 -lblas
$ ./a.out
difference vector: -3.00000000 4.00000000 0.00000000
length of diff vector: 5.00000000
But I guess it may be better to use the latest OpenBLAS (rather than the -ff2c option)...
The following is a separate test in C (to check that the problem is not specific to gfortran).
// test.c
#include <stdio.h>

float snrm2_( int*, float*, int* );

int main()
{
    float b[3] = { -3.0f, 4.0f, 0.0f };
    int n = 3, inc = 1;
    float dist = snrm2_( &n, b, &inc );

    printf( "b = %10.7f %10.7f %10.7f\n", b[0], b[1], b[2] );
    printf( "dist = %10.7f\n", dist );
    return 0;
}
$ gcc-8 test.c -lblas
$ ./a.out
b = -3.0000000 4.0000000 0.0000000
dist = 0.0000000
$ gcc-8 test.c -lblas -L/usr/local/Cellar/openblas/0.2.20_2/lib
$ ./a.out
b = -3.0000000 4.0000000 0.0000000
dist = 5.0000000
As far as I've tried, the double-precision version (DNRM2) works even with the system BLAS, so the problem seems to affect only the single-precision version (as suggested in the above page).
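If linking against the system BLAS cannot be avoided, one possible workaround is only a sketch, and only valid under the assumption made above that the system BLAS follows the old f2c/g77 convention, in which REAL-returning functions actually return a C double: declare the return type as double on the calling side.

// f2c_workaround.cpp -- hedged sketch; assumes the linked BLAS uses the
// f2c/g77 convention, in which single-precision functions return a double.
#include <cstdio>

extern "C" double snrm2_( int*, float*, int* );  // declared as double, not float

int main()
{
    float b[3] = { -3.0f, 4.0f, 0.0f };
    int n = 3, inc = 1;
    float dist = static_cast<float>( snrm2_( &n, b, &inc ) );
    std::printf( "dist = %10.7f\n", dist );
    return 0;
}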

How to get inverse of a complex double matrix using Eigen library?

I wrote a C++ program to find the inverse of a 10 by 10 covariance matrix 'sn' [where sn = x*x^H; H is the Hermitian transpose] of data type complex double with the Eigen library. Since sn has rank 2, I tried to make it a full-rank matrix by adding a 10 by 10 matrix 'a' whose diagonal elements are 0.01 and off-diagonal elements are 0. But now when I use .inverse(), it gives wrong results compared with MATLAB.
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
using namespace std;

int main()
{
    MatrixXcd sn(10,10), sn1(10,10), a(10,10);

    a.setZero();              // off-diagonal elements are 0
    for(int i = 0; i < 10; i++)
        a(i,i) = 0.01;        // diagonal elements are 0.01

    sn1 = sn + a;             // sn is known (filled elsewhere)
    cout << "sn1 inverse" << sn1.inverse() << endl;
}
When I tried to find the inverse of a simple 2 by 2 matrix, inverse() worked perfectly. Please help me.
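For reference, a minimal sketch of the regularization step described above (sn is filled with hypothetical random data here; the point is only the idiomatic way to add a scaled identity in Eigen before inverting):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    // Hypothetical rank-2 data: sn = x * x^H with x of size 10 x 2.
    MatrixXcd x = MatrixXcd::Random(10, 2);
    MatrixXcd sn = x * x.adjoint();

    // Regularize by adding 0.01 on the diagonal, then invert.
    MatrixXcd sn1 = sn + std::complex<double>(0.01) * MatrixXcd::Identity(10, 10);
    std::cout << "sn1 inverse:\n" << sn1.inverse() << std::endl;
    return 0;
}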

Solving a system of linear equations for small matrices via Cramer's rule has a large numerical error

I have observed that when I solve a system of linear equations via Cramer's rule (quotient of two determinants) for matrices of order N < 10, I get quite a large residual error compared to the LAPACK solution.
Here is an example:
float B00[36] __attribute__((aligned(16))) = {127.3611, -46.75962, 62.8739, -9.175959, 27.23792, 1.395347,
-46.75962, 841.5496, 406.2475, -119.3715, -33.60108, 6.269638,
62.8739, 406.2475, 1302.981, -542.8405, 95.03378, 42.77704,
-9.175959, -119.3715, -542.8405, 434.3342, 34.96918, -33.74546,
27.23792, -33.60108, 95.03378, 34.96918, 59.10199, -1.880791,
1.395347, 6.269638, 42.77704, -33.74546, -1.880791, 2.650853};
float c00[6] __attribute__((aligned(16))) = {-0.102149, -5.76615, -17.02828, 12.47396, 1.158018, -0.9571021};
Solving this linear system with LAPACK (from Intel MKL) yields:
x = [-0.000314947
-0.000589154
-0.00587876
0.0184799
0.01738
-0.0170484]
and Cramer's rule (my own implementation) yields:
x = [-0.000314933
-0.000798058
-0.00587888
0.0184808
0.017381
-0.0170508]
Note the difference in x[1].
I can guarantee that my determinant calculation is correct. Has anyone made a similar observation, or can anyone explain this?
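For context, a minimal generic sketch of Cramer's rule as described above (a made-up 3x3 system, not the asker's implementation): each x_i is the determinant of A with column i replaced by the right-hand side, divided by det(A). In single precision the two determinants can each lose several digits to cancellation, which is one plausible source of the discrepancy.

#include <array>
#include <cstdio>

// Determinant of a 3x3 matrix stored row-major.
float det3(const std::array<float, 9>& m)
{
    return m[0] * (m[4] * m[8] - m[5] * m[7])
         - m[1] * (m[3] * m[8] - m[5] * m[6])
         + m[2] * (m[3] * m[7] - m[4] * m[6]);
}

int main()
{
    // Made-up example system A x = c (not the matrix from the question).
    std::array<float, 9> A = { 4.f, 1.f, 2.f,
                               1.f, 3.f, 0.f,
                               2.f, 0.f, 5.f };
    std::array<float, 3> c = { 1.f, 2.f, 3.f };

    const float detA = det3(A);
    float x[3];
    for (int i = 0; i < 3; ++i)
    {
        std::array<float, 9> Ai = A;        // copy A, then replace column i
        for (int r = 0; r < 3; ++r)
            Ai[r * 3 + i] = c[r];           // ... by the right-hand side
        x[i] = det3(Ai) / detA;             // Cramer's rule: quotient of two determinants
    }
    std::printf("x = %g %g %g\n", x[0], x[1], x[2]);
    return 0;
}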

Simulating matlab's mldivide with OpenCV

I asked this question yesterday: Simulating matlab's mrdivide with 2 square matrices
That got mrdivide working. However, now I'm having problems with mldivide, which is currently implemented as follows:
cv::Mat mldivide(const cv::Mat& A, const cv::Mat& B)
{
    //return b * A.inv();
    cv::Mat a;
    cv::Mat b;
    A.convertTo( a, CV_64FC1 );
    B.convertTo( b, CV_64FC1 );

    cv::Mat ret;
    cv::solve( a, b, ret, cv::DECOMP_NORMAL );

    cv::Mat ret2;
    ret.convertTo( ret2, A.type() );
    return ret2;
}
By my understanding, the fact that mrdivide is working should mean that mldivide is working, but I can't get it to give me the same results as MATLAB. Again the results are nothing alike.
It's worth noting that I am trying to do a [19x19] \ [19x200], so not square matrices this time.
As I've previously mentioned in your other question, I am using MATLAB along with mexopencv; that way I can easily compare the output of both MATLAB and OpenCV.
That said, I can't reproduce your problem: I generated random matrices and repeated the comparison N=100 times. I'm running MATLAB R2015a with mexopencv compiled against OpenCV 3.0.0:
N = 100;
r = zeros(N,2);
d = zeros(N,1);
for i=1:N
    % double precision, i.e. CV_64F
    A = randn(19,19);
    B = randn(19,200);
    x1 = A\B;
    x2 = cv.solve(A,B);   % this is a MEX function that calls cv::solve
    r(i,:) = [norm(A*x1-B), norm(A*x2-B)];
    d(i) = norm(x1-x2);
end
All results agreed, and the errors were very small, on the order of 1e-11:
>> mean(r)
ans =
1.0e-12 *
0.2282 0.2698
>> mean(d)
ans =
6.5457e-12
(btw I also tried x2 = cv.solve(A,B, 'IsNormal',true); which sets the cv::DECOMP_NORMAL flag, and the results were not that different either).
This leads me to believe that either your matrices happen to accentuate some edge case in the OpenCV solver, where it failed to give a proper solution, or more likely you have a bug somewhere else in your code.
I'd start by double checking how you load your data, and especially watch out for how the matrices are laid out (obviously MATLAB is column-major, while OpenCV is row-major)...
Also, you never told us anything about your matrices; do they exhibit a certain characteristic, are there any symmetries, are they mostly zeros, what is their rank, etc.?
In OpenCV, the default solver method is LU factorization, and you have to explicitly change it yourself if appropriate. MATLAB, on the other hand, will automatically choose a method that best suits the matrix A, and LU is just one of the possible decompositions.
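To illustrate the point about explicitly choosing the decomposition, here is a hedged C++ sketch (random data of the sizes mentioned in the question; the flag names are the standard OpenCV ones, and which method is appropriate depends on your actual matrices):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Hypothetical data matching the [19x19] \ [19x200] sizes from the question.
    cv::Mat A(19, 19, CV_64FC1), B(19, 200, CV_64FC1);
    cv::randn(A, 0.0, 1.0);
    cv::randn(B, 0.0, 1.0);

    cv::Mat x_lu, x_svd;
    cv::solve(A, B, x_lu,  cv::DECOMP_LU);   // default: LU, needs a square non-singular A
    cv::solve(A, B, x_svd, cv::DECOMP_SVD);  // SVD: slower, but copes with ill-conditioned A

    std::cout << "residual (LU):  " << cv::norm(A * x_lu  - B) << std::endl;
    std::cout << "residual (SVD): " << cv::norm(A * x_svd - B) << std::endl;
    return 0;
}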
EDIT (response to comments)
When using the SVD decomposition in MATLAB, the sign of the left and right singular vectors U and V is arbitrary (this really comes from the DGESVD LAPACK routine). In order to get consistent results, one convention is to require that the first element of each vector have a certain sign, multiplying each vector by +1 or -1 to flip the sign as appropriate. I would also suggest checking out eigenshuffle.
Once more, here is a test I did to confirm that I get similar results for SVD in MATLAB and OpenCV:
N = 100;
r = zeros(N,2);
d = zeros(N,3);
for i=1:N
    % double precision, i.e. CV_64F
    A = rand(19);

    % compute SVD in MATLAB, and apply sign convention
    [U1,S1,V1] = svd(A);
    sn = sign(U1(1,:));
    U1 = bsxfun(@times, sn, U1);
    V1 = bsxfun(@times, sn, V1);
    r(i,1) = norm(U1*S1*V1' - A);

    % compute SVD in OpenCV, and apply sign convention
    [S2,U2,V2] = cv.SVD.Compute(A);
    S2 = diag(S2);
    sn = sign(U2(1,:));
    U2 = bsxfun(@times, sn, U2);
    V2 = bsxfun(@times, sn', V2)';   % Note: V2 was transposed w.r.t V1
    r(i,2) = norm(U2*S2*V2' - A);

    % compare
    d(i,:) = [norm(V1-V2), norm(U1-U2), norm(S1-S2)];
end
Again, all results were very similar and the errors close to machine epsilon and negligible:
>> mean(r)
ans =
1.0e-13 *
0.3381 0.1215
>> mean(d)
ans =
1.0e-13 *
0.3113 0.3009 0.0578
One thing I'm not sure about in OpenCV: MATLAB's svd function returns the singular values sorted in decreasing order (unlike the eig function), with the columns of the singular vectors in corresponding order.
So if the singular values in OpenCV are not guaranteed to be sorted for some reason, you will have to sort them manually if you want to compare the results against MATLAB, as in:
% not needed in MATLAB
[U,S,V] = svd(A);
[S, ord] = sort(diag(S), 'descend');
S = diag(S);
U = U(:,ord);
V = V(:,ord);

C++ eigenvalue and eigenvector corresponding to the smallest eigenvalue

I am trying to find the eigenvalues and the eigenvector corresponding to the smallest eigenvalue. I have a matrix A (n x 2) and I have computed B = transpose(A) * A. When I use the Eigen function compute() and print the eigenvalues of matrix B, it shows something like this:
(4.4, 0)
(72.1, 0)
Printing the eigenvectors it gives output:
(-0.97, 0) (0.209, 0)
(-0.209, 0) (-0.97, 0)
I am confused. Eigenvectors can't be zero I guess. So, for the smallest eigenvalue 4.4, is the corresponding eigenvector (-0.97, -0.209)?
P.S. - when I print
mysolution.eigenvalues()[0]
it prints (4.4, 0). And when I print
mysolution.eigenvectors().col(0)
it prints (-0.97, 0) (0.209, 0). That's why I guess I can assume that for eigenvalue 4.4, the corresponding eigenvector is (-0.97, -0.209).
I guess you are correct.
None of your eigenvalues is null, though. It seems that you are working with complex numbers.
Could it be that you selected a complex floating point matrix to do your computations? Something along the lines of MatrixX2cf or MatrixX2cd.
Every square matrix has a set of eigenvalues. But even if the matrix itself consists only of real numbers, the eigenvalues and eigenvectors might contain complex numbers (take (0 1; -1 0) for example).
If Eigen knows nothing about your matrix structure (i.e. is it symmetric/self-adjoint? Is it orthonormal/unitary?) but still wants to provide you with exact eigenvalues, the only general type that can hold all possible eigenvalues is a complex number.
Thus, Eigen always returns complex numbers which are represented as pairs (a, b) for a + bi. Eigen will only return real numbers if the matrix is self-adjoint, i.e. SelfAdjointView is used to access the matrix.
If you know for a fact that your matrix only has real eigenvalues, you can just extract the real part via eigenvalue.real(), since Eigen returns std::complex values.
EDIT: I just realized that if your matrix A has no complex entries, B = transpose(A)*A is self-adjoint, and thus you could just use a SelfAdjointView of the matrix to compute the real eigenvalues and eigenvectors.
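Building on that edit, here is a minimal sketch using Eigen's SelfAdjointEigenSolver (a hypothetical n x 2 matrix A; the solver returns real eigenvalues sorted in increasing order, so column 0 of the eigenvectors corresponds to the smallest eigenvalue):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    // Hypothetical n x 2 data matrix A.
    MatrixXd A = MatrixXd::Random(10, 2);
    MatrixXd B = A.transpose() * A;   // 2 x 2, real and symmetric (self-adjoint)

    // SelfAdjointEigenSolver returns real eigenvalues in increasing order.
    SelfAdjointEigenSolver<MatrixXd> es(B);
    std::cout << "eigenvalues:\n" << es.eigenvalues() << "\n";
    std::cout << "eigenvector for the smallest eigenvalue:\n"
              << es.eigenvectors().col(0) << std::endl;
    return 0;
}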