Partitioned Matrix-Vector Multiplication - c++

Given a very sparse n×n matrix A with nnz(A) non-zeros and a dense n×n matrix B, I would like to compute the matrix product A×B. Since n is very large, the dense matrix B cannot be held in memory if the product is computed naively. I have the following two options, but I am not sure which one is better. Could you give some suggestions? Thanks.
Option 1. I partition the matrix B into n column vectors [b1, b2, ..., bn]. Then I can hold the matrix A and any single vector bi in memory, and compute A*b1, A*b2, ..., A*bn one after another.
Option 2. I partition the matrices A and B into four n/2 × n/2 blocks each, and then use block matrix-matrix multiplication to compute A*B.
Which of the two options is better? Can I say that Option 1 offers higher performance for parallel computation?

See a discussion of both approaches, though for two dense matrices, in this document from the ScaLAPACK documentation. ScaLAPACK is one of the reference tools for distributed linear algebra.
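For concreteness, here is a minimal sketch of Option 1 using Eigen (illustrative only; loadColumn and storeColumn are hypothetical I/O helpers that stream one column of B from disk and write one column of the result back). Only A and a single dense column are resident in memory at any time.

#include <Eigen/Sparse>
#include <Eigen/Dense>

// Hypothetical helpers: read column j of B from disk, write column j of A*B back out.
Eigen::VectorXd loadColumn(int j, int n);
void storeColumn(int j, const Eigen::VectorXd& cj);

void multiplyByColumns(const Eigen::SparseMatrix<double>& A, int n)
{
    for (int j = 0; j < n; ++j) {
        Eigen::VectorXd bj = loadColumn(j, n);   // b_j, the j-th column of B
        Eigen::VectorXd cj = A * bj;             // c_j = A * b_j
        storeColumn(j, cj);                      // j-th column of the product A*B
    }
}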

Related

Eigen: Modify Rows of Row-Major Sparse Matrix

I am using the Eigen library in C++ to solve sparse linear systems Ax = b, where A is a square sparse matrix and b is a dense vector, with ILU-preconditioned BiCGSTAB. I initialize the matrix A using the setFromTriplets function. The linear system comes from the discretization of partial differential equations in space and time.
My application changes the matrix slightly at every time step: I want to modify a small number of rows (around 1%) at the beginning of each time step. I store the matrix in row-major format so that I can access each row directly. I don't want to re-assemble the entire matrix from triplets, since only about 1% of the rows change. Moreover, the modification keeps the number of non-zeros in each affected row exactly the same; I only want to change the column indices and values, so no extra memory needs to be allocated for the row. After going through the Eigen documentation and the forum, I found the functions coeffRef and insert, but both of them allocate extra memory if an element does not already exist. I would like to avoid this, since the number of non-zeros is not changing.
Any help is appreciated.
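One way to do this without going through coeffRef/insert is to write into the compressed storage directly, since for a compressed row-major matrix the non-zeros of a row are contiguous. A minimal sketch (assuming the matrix is compressed, the number of non-zeros in the row is unchanged, and the new column indices are supplied in ascending order):

#include <Eigen/Sparse>
#include <vector>
#include <cassert>

using SpMatR = Eigen::SparseMatrix<double, Eigen::RowMajor>;

// Overwrite the column indices and values of one row in place.
void overwriteRow(SpMatR& A, int row,
                  const std::vector<int>& newCols,
                  const std::vector<double>& newVals)
{
    assert(A.isCompressed());
    const int start = A.outerIndexPtr()[row];
    const int end   = A.outerIndexPtr()[row + 1];
    assert(end - start == static_cast<int>(newCols.size()));

    for (int k = 0; k < end - start; ++k) {
        A.innerIndexPtr()[start + k] = newCols[k];  // new column index (keep ascending order)
        A.valuePtr()[start + k]      = newVals[k];  // new value
    }
}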

Sparse Matrix Multiplication Speed in Eigen

I am using Sparse Matrices in Eigen and I observe the following behavior:
I have the following Sparse Matrices with Column Major storage
A [1,766,548 x 3,079,008] with 105,808,194 non-zero elements and
B [3,079,008 x 1,766,548] with 9,476,108 non-zero elements
When I compute the product A×B, it takes almost 8 seconds.
When I compute transpose(B) × transpose(A), the computational cost increases a lot: it runs for about 2,500 seconds.
Note that I load the transposed matrices from files; I don't transpose them with Eigen.
I didn't expect the two approaches to have exactly the same computational cost, but I don't really understand such a large difference in execution time, since in both approaches the two matrices have exactly the same number of non-zero elements.
I am using g++ 7.4 and Eigen 3.3.7
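For reference, a scaled-down sketch of the comparison being described (sizes and fill are placeholders, not the real data):

#include <Eigen/Sparse>
#include <chrono>
#include <iostream>

int main()
{
    using SpMat = Eigen::SparseMatrix<double>;      // column-major by default

    SpMat A(1766, 3079), B(3079, 1766);             // scaled-down stand-ins
    // ... fill A and B, e.g. via setFromTriplets ...
    SpMat At = SpMat(A.transpose());                // in the question these are loaded from files
    SpMat Bt = SpMat(B.transpose());

    auto t0 = std::chrono::steady_clock::now();
    SpMat C1 = A * B;
    auto t1 = std::chrono::steady_clock::now();
    SpMat C2 = Bt * At;                             // transpose(B) * transpose(A)
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "A*B:   " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "Bt*At: " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}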

Dimensionality Reduction

I am trying to understand the different methods for dimensionality reduction in data analysis. In particular I am interested in Singular Value Decomposition (SVD) and Principal Component Analysis (PCA).
Can anyone please explain these terms to a layperson? I understand the general premise of dimensionality reduction as bringing data down to a lower dimension, but
a) how do SVD and PCA do this, and
b) how do they differ in their approach
OR maybe you can explain what the results of each technique are telling me, so for
a) SVD - what are singular values
b) PCA - "proportion of variance"
Any example would be brilliant. I am not very good at maths!!
Thanks
You probably already figured this out, but I'll post a short description anyway.
First, let me describe the two techniques speaking generally.
PCA basically takes a dataset and figures out how to "transform" it (i.e. project it into a new space, usually of lower dimension). It essentially gives you a new representation of the same data. This new representation has some useful properties. For instance, each dimension of the new space is associated with the amount of variance it explains, i.e. you can essentially order the variables output by PCA by how important they are in terms of the original representation. Another property is the fact that linear correlation is removed from the PCA representation.
SVD is a way to factorize a matrix. Given a matrix M (e.g. for data, it could be an n by m matrix, for n data points, each of dimension m), you get U, S, V = SVD(M), where M = U S V^T, S is a diagonal matrix, and both U and V are orthogonal matrices (meaning their columns and rows are orthonormal; equivalently, U U^T = I and V V^T = I).
The entries of S are called the singular values of M. You can think of SVD as dimensionality reduction for matrices, since you can cut off the smaller singular values (i.e. set them to zero), which removes the corresponding parts of U and V when you multiply the factors back together, and gives you an approximation to M. In other words, just keep the top k singular values (and the corresponding k columns of U and V), and you have a "dimensionally reduced" version (representation) of the matrix.
Mathematically, this gives you the best rank k approximation to M, essentially like a reduction to k dimensions. (see this answer for more).
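In symbols, with the same M = USV^T notation as above (this is the standard best low-rank approximation statement):

M \approx M_k = U_k S_k V_k^T = \sum_{i=1}^{k} s_i u_i v_i^T,

where s_1 >= s_2 >= ... are the singular values and M_k minimizes \|M - X\|_F over all matrices X with rank(X) <= k.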
So Question 1
I understand the general premise of dimensionality reduction as bringing data to a lower dimension, but
a) how do SVD and PCA do this, and b) how do they differ in their approach?
The answer is that they are the same.
To see this, I suggest reading the following posts on the CV and math stack exchange sites:
What is the intuitive relationship between SVD and PCA?
Relationship between SVD and PCA. How to use SVD to perform PCA?
How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix?
How to use SVD for dimensionality reduction (in R)
Let me summarize the answer: essentially, SVD can be used to compute PCA.
PCA is closely related to the eigenvectors and eigenvalues of the covariance matrix of the data. Essentially, by taking the data matrix, computing its SVD, and then squaring the singular values (and doing a little scaling), you end up getting the eigendecomposition of the covariance matrix of the data.
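A minimal sketch of that relationship with Eigen (purely illustrative: toy random data, thin SVD of the centered data matrix, squared singular values scaled by 1/(n-1) giving the per-component variances):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd X(5, 3);                        // toy data: n = 5 points, m = 3 features
    X.setRandom();

    // Center each column (subtract the per-feature mean).
    Eigen::RowVectorXd mean = X.colwise().mean();
    Eigen::MatrixXd Xc = X.rowwise() - mean;

    // Thin SVD of the centered data: Xc = U * S * V^T.
    Eigen::BDCSVD<Eigen::MatrixXd> svd(Xc, Eigen::ComputeThinU | Eigen::ComputeThinV);

    // Squared singular values / (n - 1) are the eigenvalues of the covariance
    // matrix, i.e. the variance explained by each principal component.
    Eigen::VectorXd var = svd.singularValues().cwiseAbs2() / double(X.rows() - 1);

    // Columns of V are the principal directions; project the data onto them.
    Eigen::MatrixXd scores = Xc * svd.matrixV();

    std::cout << "variance per component:\n" << var << "\n";
}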
Question 2
maybe if you can explain what the results of each technique is telling me, so for a) SVD - what are singular values b) PCA - "proportion of variance"
These eigenvectors (the singular vectors of the SVD, or the principal components of the PCA) form the axes of the new space into which one transforms the data.
The eigenvalues (closely related to the squares of the singular values of the data matrix's SVD) hold the variance explained by each component. Often, people want to retain, say, 95% of the variance of the original data: if they originally had n-dimensional data, they reduce it to d-dimensional data by choosing the d largest eigenvalues such that 95% of the variance is kept. This keeps as much information as possible while discarding as many uninformative dimensions as possible.
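Written out, the "proportion of variance" for the i-th principal component, and the rule for choosing d, are simply:

proportion_i = \lambda_i / (\lambda_1 + \lambda_2 + ... + \lambda_n),

where the \lambda_i are the eigenvalues sorted in decreasing order, and one keeps the smallest d such that (\lambda_1 + ... + \lambda_d) / (\lambda_1 + ... + \lambda_n) >= 0.95.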
In other words, these values (variance explained) essentially tell us the importance of each principal component (PC) in terms of its usefulness in reconstructing the original (high-dimensional) data. Since each PC forms an axis in the new space (constructed via linear combinations of the old axes in the original space), they tell us the relative importance of each of the new dimensions.
As a bonus, note that SVD can also be used to compute eigendecompositions, so it can also be used to compute PCA in a different way, namely by decomposing the covariance matrix directly. See this post for details.
From your question, I can only speak to the topic of principal component analysis, so I'll share a few points about PCA below; I hope they help.
PCA:
1. PCA is a linear-transformation-based dimensionality reduction technique.
2. It is used for tasks such as noise filtering, feature extraction, and data visualization.
3. The goal of PCA is to identify patterns and detect the correlations between variables.
4. If there is a strong correlation, then we can reduce the dimensionality, which is what PCA is intended for.
5. The eigenvectors are the directions that the linear transformation leaves unchanged.
Here is a sample URL for understanding PCA: https://www.solver.com/xlminer/help/principal-components-analysis-example

Armadillo complex sparse matrix inverse

I'm writing a program with Armadillo C++ (4.400.1).
I have a matrix that has to be sparse and complex, and I want to calculate its inverse. Since it is sparse it could be the pseudoinverse, but I can guarantee that the matrix has a fully populated diagonal.
The Armadillo API documentation mentions the method .i() to calculate the inverse of any matrix, but sp_cx_mat objects do not provide such a method, and the inv() and pinv() functions apparently cannot handle the sp_cx_mat type.
sp_cx_mat Y;
/*Fill Y ensuring that the diagonal is full*/
sp_cx_mat Z = Y.i();
or
sp_cx_mat Z = inv(Y);
Neither of them works.
I would like to know how to compute the inverse of matrices of sp_cx_mat type.
Sparse matrix support in Armadillo is not complete, and many of the factorizations/complex operations that are available for dense matrices are not available for sparse matrices. There are a number of reasons for this, the largest being that efficient complex operations such as factorizations for sparse matrices are still very much an open research field. So there is no .i() function available for sp_cx_mat or other sp_mat types. Another reason is lack of time on the part of the sparse matrix developers (...which includes me).
Given that the inverse of a sparse matrix is generally going to be dense, you may simply be better off converting your sp_cx_mat into a cx_mat and then using the same inversion techniques you normally would for dense matrices. Since you are planning to represent the result as a dense matrix anyway, it's a fair assumption that you have enough RAM to do that.
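A minimal sketch of that dense workaround (illustrative; assumes a reasonably recent Armadillo with sparse-to-dense conversion, and that the densified matrix and its inverse fit in RAM):

#include <armadillo>

int main()
{
    arma::sp_cx_mat Y(1000, 1000);
    // ... fill Y, ensuring the diagonal is fully populated ...

    arma::cx_mat Yd(Y);               // convert sparse -> dense
    arma::cx_mat Z = arma::inv(Yd);   // dense inverse (or arma::pinv(Yd))
}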

Determinant Value For Very Large Matrix

I have a very large square matrix of order around 100,000, and I want to know whether its determinant is zero or not.
What is the fastest way to find that out?
I have to implement it in C++.
Assuming you are trying to determine whether the matrix is non-singular, you may want to look here:
https://math.stackexchange.com/questions/595/what-is-the-most-efficient-way-to-determine-if-a-matrix-is-invertible
As mentioned in the comments, it's best to use some sort of BLAS library that will do this for you, such as Boost::uBLAS.
Usually, matrices of that size are extremely sparse. Use row and column reordering algorithms to concentrate the entries near the diagonal, and then use a QR or LU decomposition. The product of the diagonal entries of the second factor is the determinant, up to a sign in the QR case. This may still be too ill-conditioned; the most reliable rank information comes from a singular value decomposition, but the SVD is more expensive.
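For example, with Eigen's SparseLU (a sketch, assuming the matrix is held in sparse form; logAbsDeterminant() avoids overflow from multiplying 100,000 diagonal entries directly):

#include <Eigen/Sparse>
#include <Eigen/SparseLU>
#include <iostream>

int main()
{
    using SpMat = Eigen::SparseMatrix<double>;
    SpMat A(100000, 100000);
    // ... fill A from triplets ...

    Eigen::SparseLU<SpMat, Eigen::COLAMDOrdering<int>> lu;
    lu.analyzePattern(A);   // fill-reducing column reordering
    lu.factorize(A);

    if (lu.info() != Eigen::Success) {
        std::cout << "factorization failed: A is (numerically) singular\n";
    } else {
        std::cout << "log|det(A)| = " << lu.logAbsDeterminant() << "\n";
    }
}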
There is a property that if any two rows are equal, or one row is a constant multiple of another row, then the determinant of the matrix is zero. The same applies to columns.
To my knowledge, your application doesn't need to calculate the determinant; the rank of the matrix is sufficient to check whether the system of equations has a non-trivial solution:
Rank of Matrix