Introduction
I am developing code in Fortran to solve an MHD problem with preconditioning of a linear operator. The sparse matrix to be inverted can be viewed as having the following hierarchical structure. The outermost matrix (say, A_1) is a band matrix of blocks. Each block of A_1 is itself a sparse block banded matrix (say, A_2). Each block of A_2 is again a block banded matrix with the same sparsity structure, A_3. Each block of A_3 is, finally, a dense 5-by-5 matrix, A_4. I find this hierarchical representation very convenient for initializing the elements of the matrix.
Question
I wonder whether there exists a (Fortran) library that can handle such a structure and convert it into one of the standard sparse matrix formats (CSR, CSC, BSR, ...), since Sparse BLAS or MKL PARDISO will be used to invert it. Let me stress that my intention is to use the hierarchical structure only to initialize the elements of the matrix. Of course, the hierarchical structure could be disregarded and the matrix hard-coded in the CSR format, but I find this too time-consuming to implement and test.
Comments
I don't expect a linear solver to use the hierarchical structure, although in S. Pissanetsky, "Sparse Matrix Technology", 1984, Academic Press, page 27 (available online here) such storage schemes are mentioned, namely the "hypermatrix" and "supersparse" storage schemes, which were used in Gaussian elimination. I have not found available implementations of these schemes yet.
The block compressed sparse row (BSR) format (supported by MKL) can handle two levels of this hierarchy, A_3 (sparse) + A_4 (dense), but not more.
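Not an answer on the library side, but as a conceptual illustration (in C++ rather than Fortran, with made-up block counts and bandwidths) of what flattening such a hierarchy into COO triplets amounts to; the triplets can then be sorted into CSR or handed to MKL (e.g. mkl_sparse_d_create_coo followed by mkl_sparse_convert_csr):

#include <cstdio>
#include <cstdlib>
#include <cstdint>
#include <vector>

// Sketch: flatten a 3-level block-banded hierarchy with dense 5x5 blocks at the
// bottom into COO triplets. Block counts and half-bandwidths are placeholders.
struct Triplet { std::int64_t r, c; double v; };

// One hierarchy level: 'nblocks' blocks per block-row/column, half-bandwidth 'hbw'
// (block (i,j) is present when |i-j| <= hbw).
struct Level { int nblocks; int hbw; };

const int   DENSE     = 5;                         // innermost dense block size (A_4)
const Level levels[3] = { {4,1}, {6,2}, {8,1} };   // A_1, A_2, A_3 (placeholders)

// Size, in scalar rows/columns, of one block at hierarchy depth 'depth'.
std::int64_t block_size(int depth)
{
  std::int64_t s = DENSE;
  for (int d = 2; d > depth; --d) s *= levels[d].nblocks;
  return s;
}

// Recursively walk the hierarchy, accumulating the global offset of each block.
void emit(int depth, std::int64_t row0, std::int64_t col0, std::vector<Triplet>& out)
{
  if (depth == 3)   // reached a dense 5x5 block: emit all its entries
  {
    for (int i = 0; i < DENSE; ++i)
      for (int j = 0; j < DENSE; ++j)
        out.push_back({row0 + i, col0 + j, /* value from the physics */ 1.0});
    return;
  }
  const std::int64_t sub = block_size(depth);
  for (int i = 0; i < levels[depth].nblocks; ++i)
    for (int j = 0; j < levels[depth].nblocks; ++j)
      if (std::abs(i - j) <= levels[depth].hbw)    // banded pattern at this level
        emit(depth + 1, row0 + i * sub, col0 + j * sub, out);
}

int main()
{
  std::vector<Triplet> coo;
  emit(0, 0, 0, coo);
  std::printf("global size = %lld, nnz = %zu\n",
              (long long)(levels[0].nblocks * block_size(0)), coo.size());
  return 0;
}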
Related
I am using the Armadillo library to manually port a piece of MATLAB code. The MATLAB code uses the eigs() function to find a small number (~3) of eigenvectors of a relatively large (200x200) covariance matrix R. The code looks like this:
[E,D] = eigs(R,3,"lm");
In Armadillo there are two functions, eigs_sym() and eigs_gen(); however, the former only supports real symmetric matrices and the latter requires ARPACK (I'm building the code for Android). Is there a reason eigs_sym() doesn't support complex matrices? Is there any other way to find the eigenvectors of a complex symmetric matrix?
The eigs_sym() and eigs_gen() functions (where the s in eigs stands for sparse) in Armadillo are intended for large sparse matrices. "Large" in this context means roughly 5000x5000 or bigger.
Your R matrix has a size of 200x200, which is very small by current standards. It would be much faster to simply use the dense eigendecomposition functions eig_sym() or eig_gen() to get all the eigenvalues/eigenvectors, and then extract a subset of them using submatrix operations such as .tail_cols().
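For example, assuming the dense route is acceptable, a minimal sketch emulating eigs(R, 3, "lm") with Armadillo's dense eig_gen() might look like this (the 200x200 R here is just a random placeholder):

#include <armadillo>

int main()
{
  // Placeholder 200x200 complex covariance matrix (random, made symmetric as in the question)
  arma::cx_mat R(200, 200, arma::fill::randu);
  R = 0.5 * (R + R.st());

  // Dense eigendecomposition of a general (non-Hermitian) matrix
  arma::cx_vec eigval;
  arma::cx_mat eigvec;
  arma::eig_gen(eigval, eigvec, R);

  // Emulate eigs(R, 3, "lm"): keep the 3 eigenpairs of largest magnitude
  arma::uvec order = arma::sort_index(arma::abs(eigval), "descend");
  arma::uvec top3  = order.head(3);

  arma::cx_vec D = eigval(top3);        // 3 largest-magnitude eigenvalues
  arma::cx_mat E = eigvec.cols(top3);   // corresponding eigenvectors

  D.print("D:");
  return 0;
}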
Have you tried constructing a 400x400 real symmetric matrix by replacing each complex value a+bi with a 2x2 block [a, b; -b, a] (alternatively using a block variant of this)?
This should construct a real symmetric matrix that in some way corresponds to the complex one.
There will be a slowdown due to the larger size, and all eigenvalues will be duplicated (which may slow down the algorithm), but it seems straightforward to test.
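A minimal sketch of the block variant of this embedding in Armadillo (the random R is just a placeholder; note that the resulting M is exactly symmetric only when R is Hermitian, hence the check before eig_sym()):

#include <armadillo>

// Block variant of the a+bi -> [a, b; -b, a] replacement suggested above:
// for C = A + iB, build the 2N x 2N real matrix [A, B; -B, A].
arma::mat real_embedding(const arma::cx_mat& C)
{
  const arma::mat A = arma::real(C);
  const arma::mat B = arma::imag(C);
  return arma::join_cols(arma::join_rows( A, B),
                         arma::join_rows(-B, A));
}

int main()
{
  arma::cx_mat R(200, 200, arma::fill::randu);   // placeholder for the covariance matrix
  arma::mat M = real_embedding(R);               // 400x400 real matrix

  // M is symmetric exactly when R is Hermitian; verify before calling eig_sym().
  if (arma::approx_equal(M, M.t(), "absdiff", 1e-12))
  {
    arma::vec eigval;
    arma::mat eigvec;
    arma::eig_sym(eigval, eigvec, M);
    eigval.tail(6).print("largest eigenvalues (note the duplication):");
  }
  return 0;
}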
Armadillo (a C++ linear algebra library: http://arma.sourceforge.net/) supports a preliminary version of sparse matrices (stored in the CSC format). As far as I can tell from the documentation and the code, the Armadillo sparse matrix implementation does not differentiate between zero values and "unset" values.
The documentation says that all stored values are non-zero; by deduction, all non-stored values are zero. What I need instead is to distinguish "unset" from zero.
I am currently porting a Julia project to C++/Armadillo, where I need to manipulate sparse matrices containing -1, 0, 1 and "unset" values. In contrast to Armadillo, Julia distinguishes zero from "unset" in its sparse matrix implementation.
My first idea is to use a complex sparse matrix (arma::sp_cx_mat, arma::sp_cx_imat) with a little trick to represent pseudo-zero values (exploiting the imaginary part as a dummy non-zero marker). But this is really inelegant and will surely impact performance.
Do you think there is a way to bypass this Armadillo limitation without writing my own matrix class?
Thank you very much for your answer.
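For what it's worth, a minimal sketch of the complex trick described above (the real part carries the value, a non-zero imaginary part marks the entry as "set"); whether the overhead is acceptable would need to be measured:

#include <armadillo>
#include <complex>
#include <iostream>

int main()
{
  arma::sp_cx_mat S(1000, 1000);

  // Store a value and mark the entry as "set" by giving it imaginary part 1.
  auto set_entry = [&](arma::uword r, arma::uword c, double value)
  {
    S(r, c) = std::complex<double>(value, 1.0);
  };

  set_entry(3, 7,  0.0);   // an explicit, stored zero
  set_entry(5, 2, -1.0);

  // Reading back: unset entries come out as exactly (0, 0).
  std::complex<double> e = S(3, 7);
  bool   is_set = (e.imag() != 0.0);
  double value  = e.real();

  std::cout << "entry (3,7): set=" << is_set << " value=" << value << '\n';
  return 0;
}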
I'm writing a program with Armadillo C++ (version 4.400.1).
I have a matrix that has to be sparse and complex, and I want to calculate its inverse. Since it is sparse, the pseudoinverse would also do, but I can guarantee that the matrix has a fully populated diagonal.
The Armadillo API documentation mentions the method .i() for calculating the inverse of any matrix, but sp_cx_mat objects do not provide such a method, and the inv() and pinv() functions apparently cannot handle the sp_cx_mat type.
sp_cx_mat Y;
/*Fill Y ensuring that the diagonal is full*/
sp_cx_mat Z = Y.i();
or
sp_cx_mat Z = inv(Y);
Neither of them works.
I would like to know how to compute the inverse of matrices of sp_cx_mat type.
Sparse matrix support in Armadillo is not complete, and many of the factorizations/complex operations that are available for dense matrices are not available for sparse matrices. There are a number of reasons for this, the largest being that efficient complex operations such as factorizations for sparse matrices are still very much an open research field. So there is no .i() function available for sp_cx_mat or other sp_mat types. Another reason is lack of time on the part of the sparse matrix developers (...which includes me).
Given that the inverse of a sparse matrix is generally going to be dense, you may simply be better off converting your sp_cx_mat into a cx_mat and then using the same inversion techniques you normally would for dense matrices. Since you are planning to represent the inverse as a dense matrix anyway, it's a fair assumption that you have enough RAM to do that.
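A minimal sketch of that route, assuming the matrix fits in memory as a dense cx_mat (the size and fill pattern below are placeholders):

#include <armadillo>
#include <iostream>

int main()
{
  arma::sp_cx_mat Y(100, 100);
  for (arma::uword i = 0; i < 100; ++i)
    Y(i, i) = std::complex<double>(4.0, 0.0);   // ensure the diagonal is full
  Y(0, 5) = std::complex<double>(0.0,  2.0);    // a few off-diagonal entries
  Y(7, 3) = std::complex<double>(3.0, -1.0);

  // Convert to dense, then invert with the dense routine.
  arma::cx_mat Yd(Y);                  // sp_cx_mat -> cx_mat conversion
  arma::cx_mat Z = arma::inv(Yd);      // or arma::pinv(Yd) if Yd may be singular

  std::cout << "||Y*Z - I|| = "
            << arma::norm(Yd * Z - arma::eye<arma::cx_mat>(100, 100), "fro") << '\n';
  return 0;
}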
I'm working with large sparse matrices that are not exactly very sparse, and I'm always wondering how much sparsity is required for sparse storage of a matrix to be beneficial. We know that the sparse representation of a reasonably dense matrix can be larger than the dense original. So is there a threshold on the density of a matrix below which it is better to store it as sparse? I know that the answer usually depends on the structure of the sparsity, etc., but are there some general guidelines? For example, I have a very large matrix with density around 42%; should I store this matrix as dense or sparse?
The scipy.sparse coo_matrix format stores the matrix as 3 NumPy arrays. row and col hold integer indices, and data has the same data type as the equivalent dense matrix. So it should be straightforward to calculate the memory it will take as a function of the overall shape and sparsity (as well as the data type).
csr_matrix may be more compact. data and indices are the same as with COO, but indptr has one value per row plus one. I was thinking that indptr would be shorter than the others, but I just constructed a small matrix where it was longer: an empty row, for example, still requires an entry in indptr, but none in data or indices. The emphasis with this format is computational efficiency.
csc_matrix is similar, but works by columns. Again, you should be able to do the math to calculate its size.
A brief discussion of the memory advantages from MATLAB (which uses similar storage options):
http://www.mathworks.com/help/matlab/math/computational-advantages.html#brbrfxy
A background paper from the MATLAB designers, "Sparse Matrices in MATLAB: Design and Implementation":
http://www.mathworks.com/help/pdf_doc/otherdocs/simax.pdf
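As a rough illustration of the arithmetic (not tied to any particular library), here is a back-of-the-envelope comparison for a hypothetical 10000x10000 matrix at 42% density, assuming 8-byte values and 4-byte indices (which scipy typically uses at this size) and ignoring container overheads:

#include <cstdio>
#include <cstdint>

int main()
{
  // Hypothetical shape and density; adjust to your actual matrix.
  const std::int64_t rows = 10000, cols = 10000;
  const double density = 0.42;
  const std::int64_t nnz = static_cast<std::int64_t>(rows * cols * density);

  const std::int64_t dense = rows * cols * 8;                 // one double per entry
  const std::int64_t coo   = nnz * (8 + 4 + 4);               // data + row + col
  const std::int64_t csr   = nnz * (8 + 4) + (rows + 1) * 4;  // data + indices + indptr

  std::printf("dense: %lld MB\n", (long long)(dense / (1024 * 1024)));
  std::printf("coo:   %lld MB\n", (long long)(coo   / (1024 * 1024)));
  std::printf("csr:   %lld MB\n", (long long)(csr   / (1024 * 1024)));
  return 0;
}

Under these assumptions CSR costs about 12 bytes per stored value versus 8 bytes per entry for dense storage, so the memory break-even sits near 2/3 density (closer to 1/3 if 8-byte indices are required); at 42% the sparse formats are still somewhat smaller, but whether the operations you need remain efficient at that density is a separate question.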
I want to use the Sparse BLAS in Fortran 95 just for the creation of matrices, and I am using the point-entry construction. I begin creating a matrix with the command
call duscr_begin(n,n,a,istat)
where a is the handle to the n-by-n matrix. After inserting values into it, how can I inspect the final matrix through its handle a? I want to use the matrix in some other operations, so I would like to obtain it in three-vector (coordinate) form (row_index, col_index, value).
Details about this Sparse BLAS interface are given in Chapter 3 of the BLAS Technical Forum standard, which can be found here:
http://www.netlib.org/blas/blast-forum/
Actually, what I asked was 16 days ago, and it is not just about writing a variable to the screen. I was using a library known as the Sparse BLAS for the creation of sparse matrices. Later, by digging into the library, I found the solution to my problem: how to get the three vectors row, col and val from the handle. The commands are something like
call accessdata_dsp(mat,a_handle,ierr)   ! access the internal data structure 'mat' behind handle 'a_handle'
call get_infoa(mat%INFOA,'n',nnz,ierr)   ! query the number of stored (non-zero) entries, nnz
allocate(K0_row(nnz),K0_col(nnz),K0_A(nnz))
K0_row=mat%IA1; K0_col=mat%IA2; K0_A=mat%A   ! copy row indices, column indices and values
Here nnz is the number of non-zero entries in the sparse matrix, while K0_row, K0_col and K0_A are the required three vectors, which can be used in further calculations.