Sparse BLAS in Fortran 95

I want to use the Sparse BLAS in Fortran 95 just for the creation of matrices, and I am using point-entry construction. I create a matrix with the command
call duscr_begin(n,n,a,istat)
Here a is the handle to the n-by-n matrix. After inserting values into it, how can I see the final matrix through its handle a? I want to use the matrix in some further operations, so I would like to get it in three-vector (coordinate) form: (row_index, col_index, value).
Details about this Sparse BLAS are given in Chapter 3 of the BLAS Technical Forum standard and can be seen here:
http://www.netlib.org/blas/blast-forum/

I asked this question 16 days ago, and it is not just about writing a variable to the screen. I was using a library known as the Sparse BLAS for the creation of sparse matrices. Later on, by digging into the library, I found the solution to my problem: given the handle, how to get the three vectors row, col, and val. The commands are something like
call accessdata_dsp(mat, a_handle, ierr)    ! access the internal matrix data behind the handle
call get_infoa(mat%INFOA, 'n', nnz, ierr)   ! query the number of non-zero entries
allocate(K0_row(nnz), K0_col(nnz), K0_A(nnz))
K0_row = mat%IA1                            ! row indices
K0_col = mat%IA2                            ! column indices
K0_A   = mat%A                              ! values
Here nnz is the number of non-zero entries in the sparse matrix, while K0_row, K0_col, and K0_A are the required three vectors, which can be used in further calculations.

Related

Eigen: Modify Rows of Row-Major Sparse Matrix

I am using the Eigen library in C++ to solve sparse linear systems Ax=b, where A is a square sparse matrix and b is a dense vector, with ILU-preconditioned BiCGSTAB. I initialize the matrix A using the setFromTriplets function. The linear system is generated from the discretization of partial differential equations in space and time.
My application changes the matrix slightly at every time step. I want to modify a small number of rows (around 1%) at the beginning of each time step. I am storing the matrix in row-major format so that I can access each row directly. I don't want to re-assemble the entire matrix from triplets, since only around 1% of the rows change. Moreover, the modification is such that the number of non-zeros in each affected row stays exactly the same; I just want to change the column indices and values, so I do not need to allocate extra memory for the row. After going through the Eigen documentation as well as the forum, I found the functions coeffRef and insert, but both of them will allocate extra memory if the element does not exist. I would like to avoid this, since the number of non-zeros is not changing.
Any help is appreciated.
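One possibility (a sketch, not from the original thread; the helper name overwriteRow and its argument layout are mine) is to bypass coeffRef/insert entirely and write through Eigen's raw compressed-storage accessors outerIndexPtr(), innerIndexPtr(), and valuePtr(). As long as the matrix is compressed (makeCompressed() after setFromTriplets) and the row keeps exactly the same number of non-zeros, no allocation happens:

#include <Eigen/Sparse>

// Sketch: overwrite the column indices and values of one row in-place.
// Assumes the matrix is in compressed mode and the row's non-zero count
// is unchanged, so no storage needs to move.
void overwriteRow(Eigen::SparseMatrix<double, Eigen::RowMajor>& A,
                  int row, const int* newCols, const double* newVals)
{
    // outerIndexPtr()[row] .. outerIndexPtr()[row+1] delimit this row's
    // slice of the inner-index and value arrays.
    const int start = A.outerIndexPtr()[row];
    const int end   = A.outerIndexPtr()[row + 1];
    for (int k = start; k < end; ++k) {
        A.innerIndexPtr()[k] = newCols[k - start];  // new column index
        A.valuePtr()[k]      = newVals[k - start];  // new value
    }
}

Note that the new column indices must stay sorted within each row, since Eigen's algorithms assume sorted inner indices.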

Applying 1d-FFT over each row in PETSc-matrix

I have a PETSc matrix and would like to apply a 1d-FFT to each row of that matrix, preferably while keeping the possibility of having the matrix distributed over several nodes. Based on the documentation and examples (such as here: https://www.mcs.anl.gov/petsc/petsc-current/src/mat/tests/ex143.c.html), I have to create an FFT object ("FFT matrix") and then use this object to create/initialize the vectors used for the FFT itself:
MatCreateFFT(PETSC_COMM_WORLD,DIM,dim,MATFFTW,&A);//Create FFT object
MatCreateVecsFFTW(A,&x,&y,&z); //Initialize Vectors
MatMult(A,x,y); //Apply FFT
Nevertheless, as far as I can see, this will only execute a 1d-FFT on a single vector, not a 1d-FFT on each row of my matrix. Of course, I could iterate over the rows of my matrix and copy each one into the vector (and retrieve the result afterwards), but that would slow the process down quite a bit. Or do I have to resort to FFTW without the PETSc interface (as described in the same example as above) to achieve this?
You should be able to call MatDenseGetColumnVec and then feed that to your FFT.
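A sketch of that suggestion, assuming a recent PETSc and assuming the signals are stored as columns of a dense matrix B (for row-wise FFTs one would keep the transpose; the helper name FFTAllColumns is mine). Whether the borrowed column layout matches the FFTW input layout in parallel is something to verify:

#include <petscmat.h>

/* Apply the 1-D FFT object A to every column of a dense matrix B.
   Assumes y was created with MatCreateVecsFFTW(A,&x,&y,&z), so it
   has the layout the FFT expects. */
PetscErrorCode FFTAllColumns(Mat A, Mat B, Vec y)
{
  PetscInt j, ncols;
  PetscCall(MatGetSize(B, NULL, &ncols));
  for (j = 0; j < ncols; ++j) {
    Vec col;
    PetscCall(MatDenseGetColumnVec(B, j, &col)); /* borrow column j, no copy */
    PetscCall(MatMult(A, col, y));               /* 1-D FFT of that column */
    /* ... consume y here, e.g. copy it into a result matrix ... */
    PetscCall(MatDenseRestoreColumnVec(B, j, &col));
  }
  return 0;
}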

Right function for computing a limited number of eigenvectors of a complex symmetric matrix in Armadillo

I am using the Armadillo library to manually port a piece of Matlab code. The Matlab code uses the eigs() function to find a small number (~3) of eigenvectors of a relatively large (200x200) covariance matrix R. The code looks like this:
[E,D] = eigs(R,3,"lm");
In Armadillo there are two functions, eigs_sym() and eigs_gen(); however, the former only supports real symmetric matrices and the latter requires ARPACK (I'm building the code for Android). Is there a reason eigs_sym() doesn't support complex matrices? Is there any other way to find the eigenvectors of a complex symmetric matrix?
The eigs_sym() and eigs_gen() functions in Armadillo (where the s in eigs stands for sparse) are meant for large sparse matrices. "Large" in this context is roughly 5000x5000 or bigger.
Your R matrix has a size of 200x200, which is very small by current standards. It would be much faster to simply use the dense eigendecomposition functions eig_sym() or eig_gen() to get all the eigenvalues/eigenvectors, and then extract a subset of them using submatrix operations such as .tail_cols(). A sketch of this approach follows below.
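For illustration, a minimal sketch of the dense approach (the random R is only a stand-in for the real covariance matrix; sorting by magnitude mirrors Matlab's "lm" option):

#include <armadillo>

int main()
{
    using namespace arma;
    const uword k = 3;                // number of eigenpairs, as in eigs(R, 3, "lm")
    cx_mat R(200, 200, fill::randu);  // stand-in data
    R = 0.5 * (R + R.st());           // make it complex symmetric (plain transpose)

    cx_vec eigval;
    cx_mat eigvec;
    eig_gen(eigval, eigvec, R);       // full dense eigendecomposition

    uvec order = sort_index(abs(eigval), "descend"); // largest magnitude first
    uvec top   = order.head(k);

    cx_mat E = eigvec.cols(top);      // eigenvectors, as E in [E,D] = eigs(R,3,"lm")
    cx_vec D = eigval.elem(top);      // corresponding eigenvalues
    return 0;
}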
Have you tried constructing a 400x400 real symmetric matrix by replacing each complex value a+bi with a 2x2 block [a, b; -b, a] (alternatively using a block variant of this)?
This should construct a real symmetric matrix that in some way corresponds to the complex one.
There will be a slow-down due to the larger size, and all eigenvalues will be duplicated (which may slow down the algorithm), but it seems straightforward to test.
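For illustration, a minimal sketch of that embedding (the helper name embed is hypothetical): each complex entry a+bi of the matrix becomes a 2x2 real block, giving a 2n-by-2n real matrix:

#include <armadillo>

// Map each complex entry a+bi of C to the 2x2 real block [a, b; -b, a].
arma::mat embed(const arma::cx_mat& C)
{
    arma::mat M(2 * C.n_rows, 2 * C.n_cols, arma::fill::zeros);
    for (arma::uword i = 0; i < C.n_rows; ++i)
        for (arma::uword j = 0; j < C.n_cols; ++j) {
            const double a = C(i, j).real();
            const double b = C(i, j).imag();
            M(2*i,     2*j)     =  a;
            M(2*i,     2*j + 1) =  b;
            M(2*i + 1, 2*j)     = -b;
            M(2*i + 1, 2*j + 1) =  a;
        }
    return M;
}

(The block variant mentioned above would instead assemble [A, B; -B, A] from the real and imaginary parts A and B as whole blocks.)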

Replacing a sparse matrix insertion operation in Matlab with C++

I am using an optimization toolbox in Matlab R2016a, but it runs very slowly. I found that the main reason is the sparse matrix indexing operation once the size of the sparse matrix exceeds 100000.
In a function of the toolbox, a sparse matrix Jcon is first allocated:
Jcon = spalloc(nrows,ncols,nnonzeros);
Then some other code calculates things like derivatives. In the end, new entries are inserted into Jcon using the following code:
Jcon(link_row(ii),col0Right) = DLink.x0_right(ii,jj);
ii and jj are loop variables. The right-hand side is usually a column vector with 4 to 30 rows. The size of Jcon may also change during the insertion.
How can I improve the insertion part? Is it possible to use C++ to replace the sparse matrix insertion, and would it be quicker?
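One common fix, sketched below (the helper name buildJcon is mine): collect all entries as (row, col, value) triplets while the derivatives are computed, then assemble the sparse matrix once, instead of writing Jcon(i,j) = v repeatedly into already-compressed storage. In C++ this is exactly what Eigen's setFromTriplets does; the same idea also works inside Matlab itself, by accumulating I, J, V arrays and calling sparse(I,J,V,nrows,ncols) once at the end.

#include <Eigen/Sparse>
#include <vector>

Eigen::SparseMatrix<double> buildJcon(int nrows, int ncols)
{
    std::vector<Eigen::Triplet<double>> entries;
    entries.reserve(100000);          // rough nnz estimate, like spalloc's nnonzeros
    // ... derivative computation; instead of Jcon(link_row(ii), col0Right) = v:
    //     entries.emplace_back(row, col, v);
    entries.emplace_back(0, 0, 1.0);  // illustrative entry so the sketch runs
    Eigen::SparseMatrix<double> Jcon(nrows, ncols);
    Jcon.setFromTriplets(entries.begin(), entries.end()); // one O(nnz log nnz) assembly
    return Jcon;
}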

Calling the ARPACK reverse communicated matrix-vector routine

I'm trying to write a driver in C++ to calculate the eigenvalues of an asymmetric, real-valued sparse matrix using the Fortran routines offered by ARPACK, but I am having a bit of trouble with the reverse communication approach.
Generally, I am trying to solve the standard eigenvalue problem:
A*v = lambda*v
and any interaction with the matrix A is done in ARPACK via a function 'av':
av(n, workd[ipntr[0]], workd[ipntr[1]])
which multiplies the vector held in the array workd beginning at location ipntr[0] and inserts the result into workd beginning at location ipntr[1]. Examples of this approach are given in the manual at http://www.caam.rice.edu/software/ARPACK/ and also in the ARPACK/EXAMPLES/SIMPLE/dnsimp.f code.
What I would like to know is: how do I actually involve the matrix A? If it is not passed to the routine, how can ARPACK find its action on the vector provided?
In the example code dnsimp.f, the matrix A is computed within the function av itself, and is "derived from the standard central difference discretisation of the 2 dimensional convection-diffusion operator". However, I believe this is problem specific? It also doesn't seem very useful to have to code the derivation of the matrix A into the function, and I can't find much information on this in the manual either.
It doesn't seem to be too much of a problem, since av is a user-defined function and I can just change its definition to include the matrix A as a parameter. However, I would like to know how it is done properly, in case of any potential compatibility issues.
Thank you!
You don't have to supply the matrix to ARPACK.
All you have to do is multiply the matrix with the returned vectors (hence, reverse communication) until the desired convergence is reached.
For information on the algorithms, you should take a look at the users' guide, especially the chapter about the underlying algorithms.
Response to comment: The underlying algorithm is a form of Arnoldi iteration. The basic algorithm is shown on Wikipedia and makes clear that ARPACK itself never accesses the matrix A, neither directly nor indirectly.
In particular, the algorithm starts with an arbitrary normalized vector q_1, which is returned to the user. The user multiplies this vector by the matrix A using their favourite routine (usually some efficient sparse matrix-vector multiplication) and returns the result to the Arnoldi iteration, which uses it to compute part of the Hessenberg matrix H (whose eigenvalues typically converge to the extreme eigenvalues of A) and the next vector q_2. This is iterated until your results have converged.
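To make the user's side concrete, here is a sketch of av in C++ (the CSR storage is an assumption; any format with a matrix-vector product works). The matrix A lives entirely in the caller's own data structure; when dnaupd returns with ido = 1, one calls this with x = &workd[ipntr[0]-1] and y = &workd[ipntr[1]-1], since ipntr holds 1-based Fortran indices:

#include <vector>

// Plain CSR storage for A; rowptr has n+1 entries.
struct Csr {
    int n;
    std::vector<int> rowptr, colind;
    std::vector<double> val;
};

// y = A * x; this is the whole "involvement" of the matrix A.
void av(const Csr& A, const double* x, double* y)
{
    for (int i = 0; i < A.n; ++i) {
        double s = 0.0;
        for (int k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k)
            s += A.val[k] * x[A.colind[k]];
        y[i] = s;
    }
}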