Is there a Fortran solver able to solve the following eigenvalue problem? - fortran

A is an N by N matrix. I is the identity matrix of size (N-2) by (N-2). B is another N by N matrix, defined blockwise as
B = [I 0 0;
     0 0 0;
     0 0 0].
x is an array with N elements. How can I solve the generalized eigenvalue problem
A x = c B x, where c is the eigenvalue,
using an eigenvalue solver?

You can have a look at the LAPACK library, which offers routines for generalized eigenvalue problems. Depending on your data type and matrix type you will need different subroutines.
Have a look here in this regard. Also have a look here for the nomenclature used for the matrix types.
Finally, some time ago I wrote this module to give an implementation example of a few LAPACK features, including eigenvalue problems. The one you can find there is for single-precision general matrices (sgeev).

Related

Reliability of the boost::numeric::ublas::matrix LU inversion method

My problem: invert square matrices of doubles of size 30x30 (or around that size).
I started coding an LU decomposition method in C++, then I discovered the boost::numeric::ublas::matrix library. To spare myself rewriting everything, I used some of its functions, namely lu_factorize() followed by lu_substitute(), to retrieve the inverse.
I hand-checked the reliability of my inversion function (which, again, only uses the two aforementioned Boost functions) by comparing results on small square matrices (size 3 or 4), and the results are satisfying so far.
Now, taking a (30,30) matrix A and inverting it to get A^-1, the product A*A^-1 returns a matrix with 1 on the diagonal and very small numbers everywhere else. Here's a snippet:
1 -1.5e-16 -5.1e-20 2.4e-19
0 1 0 -5.4e-20
1.1e-16 1.1e-16 1 -1.4e-19
6.9e-17 0 -3.3e-17 1
I cannot tell if these numbers (off the main diagonal) come from the library approximating 0 or if they are introduced by the LU decomposition...
My question: Have you ever run into this with Boost ublas? Is this library still reliable or is it outdated? Is there any way to access the source code of these algorithms?
Thanks in advance
Using C++11, gcc/8.2.0 and boost/1.76.0.

Finding the orthogonal basis of a symmetric matrix in C++

I want to find an eigendecomposition of a symmetric matrix, which looks for example like this:
0 2 2 0
2 0 0 2
2 0 0 2
0 2 2 0
It has a degenerate eigenspace, in which you obviously have a certain freedom to choose the eigenvectors. Is there a C++ library which I can force to find an orthogonal basis such that H = UDU^{T}?
Currently I'm using Eigen::SelfAdjointEigenSolver. This gives the "wrong" result, as I then have to use H = UDU^{-1}. The matrices will later have dimensions of 10000x10000, which is why I want to avoid the additional matrix inversion.
Does anyone know of such a thing?
OpenCV has support for this, though I don't know whether a 10000x10000 matrix is feasible within suitable time/accuracy. I believe the best fit in OpenCV is the eigen(...) method.
There is also the BLAS linear algebra library, but I am not familiar with it.
Also, there is probably an implementation of an algorithm for this problem in the book Numerical Recipes.

How to implement scalar raised to the power of a matrix in Eigen?

I have the following code in MATLAB that I wish to port to C++, ideally with the Eigen library:
N(:,i)=2.^L(:,i)+1;
where L is a symmetric matrix, e.g. (1,2;2,1), whose diagonal elements are all one.
In Eigen (unsupported) I note there is a function to calculate the exponential of a matrix, but none to raise an arbitrary scalar to a matrix power.
http://eigen.tuxfamily.org/dox-devel/unsupported/group__MatrixFunctions__Module.html#matrixbase_exp
Is there something I am missing?
If you really wanted to raise an arbitrary scalar to a matrix power, you should use the identity a^x = exp(log(a)*x).
However, the Matlab .^ operator computes an element-wise power. If you want the same in Eigen, use the corresponding Array functionality:
N.col(i) = pow(2.0, L.col(i).array()) + 1.0;
Beware that Eigen starts indexing at 0, and Matlab starts at 1, so you may need to replace i by i-1.

How can I get eigenvalues and eigenvectors fast and accurate?

I need to compute the eigenvalues and eigenvectors of a big matrix (about 1000x1000 or even more). Matlab works very fast but it does not guarantee accuracy. I need this to be pretty accurate (an error of about 1e-06 is OK) and within a reasonable time (an hour or two is OK).
My matrix is symmetric and pretty sparse. The exact values are: ones on the main diagonal, on the diagonal below it, and on the diagonal above it. Example (n = 4):
1 1 0 0
1 1 1 0
0 1 1 1
0 0 1 1
How can I do this? C++ is the most convenient to me.
MATLAB does not guarantee accuracy
I find this claim unreasonable. On what grounds do you say that you can find a (significantly) more accurate implementation than MATLAB's highly refined computational algorithms?
AND... using MATLAB's eig, the following is computed in less than half a second:
%// Generate the input matrix
X = ones(1000);
A = triu(X, -1) + tril(X, 1) - X;
%// Compute eigenvalues
v = eig(A);
It's fast alright!
I need this to be pretty accurate (about 1e-06 error is OK)
Remember that solving eigenvalues accurately is related to finding the roots of the characteristic polynomial. This specific 1000x1000 matrix is very ill-conditioned:
>> cond(A)
ans =
1.6551e+003
A general rule of thumb is that for a condition number of 10^k, you may lose up to k digits of accuracy (on top of what is already lost to finite-precision arithmetic).
So in your case, I'd expect the results to be accurate up to an approximate error of 10^-3.
If you're not opposed to using a third party library, I've had great success using the Armadillo linear algebra libraries.
For the example below, arma is the namespace they like to use, vec is a vector, mat is a matrix.
#include <armadillo>

// Returns the eigenvalues of the symmetric matrix M.
arma::vec getEigenValues(const arma::mat& M) {
    return arma::eig_sym(M);
}
You can also serialize the data directly into MATLAB and vice versa.
Your system is tridiagonal and a (symmetric) Toeplitz matrix. I'd guess that Eigen and Matlab's eig have special cases to handle such matrices. There is a closed-form solution for the eigenvalues in this case (reference (PDF)). In Matlab, for your matrix, this is simply:
n = size(A,1);
k = (1:n).';
v = 1-2*cos(pi*k./(n+1));
This can be further optimized by noting that the eigenvalues are centered about 1 and thus only half of them need to be computed:
n = size(A,1);
if mod(n,2) == 0
k = (1:n/2).';
u = 2*cos(pi*k./(n+1));
v = 1+[u;-u];
else
k = (1:(n-1)/2).';
u = 2*cos(pi*k./(n+1));
v = 1+[u;0;-u];
end
I'm not sure how you're going to get faster and more accurate than that with simple code (short of a refinement step using the eigenvectors and optimization). The above should be easy to translate to C++ (or use Matlab's codegen to generate C/C++ code that uses this or eig). However, your matrix is still ill-conditioned. Just remember that estimates of accuracy are worst case.

Compute rank of Matrix

I need to calculate the rank of a 4096x4096 sparse matrix, and I use C/C++ code.
I found some libraries (like Armadillo) that do it, but they're too slow (almost 5 minutes).
I've also tried two open-source alternatives to Matlab (FreeMat and Octave), but both crashed when I ran a test script.
5 minutes isn't much in itself, but I must get the rank of something like a million matrices, so the faster the better.
Someone knows a fast library for rank computation?
The Eigen library supports sparse matrices, try it out.
Computing the algebraic rank is O(n^3), where n is the matrix size, so it's inherently slow. You need e.g. to perform pivoting, which is slow and inaccurate if your matrix is not well conditioned (for n = 4096, a typical matrix is very ill-conditioned).
Now, what is the rank? It is the dimension of the image. It is very difficult to compute when n is large, and it will be spoiled by any small numerical inaccuracy in the input. For n = 4096, unless you happen to have particularly well-conditioned matrices, this will prevent you from doing anything useful with a pivoting algorithm.
The best way is in fact to fix a cutoff epsilon, compute the singular values s_1 > ... > s_n, and take as the rank the lowest integer r such that sum(s_i^2, i > r) < epsilon^2 * sum(s_i^2).
You thus need a sparse SVD routine, e.g. from there.
This may not be faster, but at the very least it will be correct.
You can ask for only as many singular values as you need, to speed things up. This is a tough problem, and with no info on the background and how you got these matrices, there is nothing more we can do.
Try the following code (the documentation is here).
It is an example of calculating the rank of a matrix A with the Eigen library:
#include <Eigen/Dense>
using namespace Eigen;

MatrixXd A(2,2);
A << 1, 0, 1, 0;
FullPivLU<MatrixXd> luA(A);
int rank = luA.rank();  // the second column is zero, so rank == 1