I'm currently working on an image processing project, but I have a conceptual question regarding PCA.
What exactly happens to the matrix of an image after applying PCA to it?
I haven't been able to understand this from reading the literature on the subject.
Given an M x N matrix, is the result an M' x N' matrix where M' < M and N' < N, and is M' x N' proportional to M x N?
I'm no expert in PCA, but I'll try to explain what I understand.
After applying PCA to the matrix of an image, you get the eigenvectors of that matrix, which represent a set of invariant axes of the matrix. These vectors are all orthogonal to each other.
By measuring how dispersed your original data are along these vectors, you can tell how they are distributed. This can be useful, for example, if you wish to perform pattern categorization based on how the data are distributed along these "axes".
Although not strictly accurate, you can imagine that PCA helps you draw "axes" through the blob of data present in your matrix, where the new "origin" of those axes is the center of your data.
The best part is that the data are spread out the most along the first eigenvector, followed by the second eigenvector, and so on.
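As a rough illustration (my own NumPy sketch, not part of the original answer), here is how those ordered, orthogonal axes can be computed for a simple 2-D blob of data:

    import numpy as np

    # Toy "blob" of 2-D points, stretched along one direction.
    rng = np.random.default_rng(0)
    points = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

    # Center the data so the new "origin" of the axes is the center of the blob.
    centered = points - points.mean(axis=0)

    # Eigenvectors of the covariance matrix are the PCA axes; the eigenvalues
    # measure how spread out the data are along each axis.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))

    # eigh returns eigenvalues in ascending order; reverse so the first axis
    # carries the most spread, the second the next most, and so on.
    order = np.argsort(eigvals)[::-1]
    print(eigvals[order])       # variances, largest first
    print(eigvecs[:, order])    # orthogonal axes, one per column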
I hope I did not confuse you.
There are a number of good references about PCA on Quora, in addition to Stack Overflow.
Here are a few examples:
https://www.quora.com/What-is-an-intuitive-explanation-for-PCA
http://www.quora.com/How-to-explain-PCA-in-laymans-terms
Again, I'm no expert and welcome others to correct/educate both rwvaldivia and me.
The concept of PCA is closely related to linear algebra, the domain of mathematics to which matrices belong. A common way to view a matrix is as a set of vectors: an MxN matrix is just M vectors in an N-dimensional space.
Now a general concept in linear algebra is that the choice of basis vectors is pretty arbitrary. If you choose another basis, you convert your matrix by multiplying it with the old basis expressed in the new basis (an NxN dimensional matrix itself).
PCA is a method to find a basis which isn't arbitrary, but specific to your matrix. In particular, it orders basis vectors by the amount to which they're present in your set of vectors. If all your vectors point roughly in the same direction, that direction will be the first basis vector. If they all lie roughly in the same plane, the major basis vectors for that plane will be your first two vectors. But remember: you'll generally get a full MxN basis (unless your matrix is degenerate); it's up to you to decide how many of the Components are Principal.
Now here's the real question: what exactly is "the matrix of the image"? You generally can't treat a 1024 x 768 image as a set of 1024 vectors in 768-dimensional space. Sure, you can perform the PCA operation, and you will get a 1024x768 result matrix, but what does that even mean? Those are the basis vectors of your input matrix, but the output doesn't have a meaningful interpretation as an image, precisely because your input isn't really a set of vectors.
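To make the change-of-basis idea concrete, here is a small sketch (my own illustration; the orthonormal basis Q below is arbitrary, not one produced by PCA):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(6, 3))      # 6 "vectors" (rows) in a 3-dimensional space

    # An orthonormal basis for the same space, e.g. taken from a QR factorization.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

    X_new = X @ Q                    # the same vectors expressed in the new basis
    X_back = X_new @ Q.T             # converting back recovers the original coordinates
    print(np.allclose(X, X_back))    # True

PCA simply chooses such a basis in a data-dependent way, ordering its vectors by how much of the data's spread they capture.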
Related
I am trying to understand the different methods for dimensionality reduction in data analysis. In particular I am interested in Singular Value Decomposition (SVD) and Principal Component Analysis (PCA).
Can anyone please explain these terms to a layperson? I understand the general premise of dimensionality reduction as bringing data to a lower dimension, but
a) how do SVD and PCA do this, and
b) how do they differ in their approach?
Or maybe you could explain what the results of each technique are telling me, so for
a) SVD - what are singular values
b) PCA - "proportion of variance"
Any example would be brilliant. I am not very good at maths!!
Thanks
You probably already figured this out, but I'll post a short description anyway.
First, let me describe the two techniques speaking generally.
PCA basically takes a dataset and figures out how to "transform" it (i.e. project it into a new space, usually of lower dimension). It essentially gives you a new representation of the same data. This new representation has some useful properties. For instance, each dimension of the new space is associated with the amount of variance it explains, i.e. you can essentially order the variables output by PCA by how important they are in terms of the original representation. Another property is the fact that linear correlation is removed from the PCA representation.
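For instance, using scikit-learn (just one convenient implementation, not necessarily what the original poster has in mind), the ordering by explained variance is exposed directly:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)   # make two columns strongly correlated

    pca = PCA()                     # keep all components
    Z = pca.fit_transform(X)        # the same data in the new representation

    # Each new dimension comes with the fraction of total variance it explains,
    # sorted from most to least important.
    print(pca.explained_variance_ratio_)

    # Linear correlation is removed: the new variables are (numerically) uncorrelated.
    print(np.round(np.corrcoef(Z, rowvar=False), 6))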
SVD is a way to factorize a matrix. Given a matrix M (e.g. for data, it could be an n by m matrix, for n datapoints, each of dimension m), you get U, S, V = SVD(M), where M = U S V^T, S is a diagonal matrix, and both U and V are orthogonal matrices (meaning the columns and rows are orthonormal; or equivalently U U^T = I and V V^T = I).
The entries of S are called the singular values of M. You can think of SVD as dimensionality reduction for matrices, since you can cut off the lower singular values (i.e. set them to zero), destroying the "lower parts" of the matrices upon multiplying them, and get an approximation to M. In other words, just keep the top k singular values (and the top k vectors in U and V), and you have a "dimensionally reduced" version (representation) of the matrix.
Mathematically, this gives you the best rank k approximation to M, essentially like a reduction to k dimensions. (see this answer for more).
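A minimal NumPy sketch of that truncation (illustrative only; the sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(8, 5))

    U, s, Vt = np.linalg.svd(M, full_matrices=False)

    k = 2                                       # keep only the top-k singular values
    M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # M_k is the best rank-k approximation to M (in the Frobenius-norm sense).
    print(np.linalg.matrix_rank(M_k))           # 2
    print(np.linalg.norm(M - M_k))              # size of the approximation error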
So Question 1
I understand the general premise of dimensionality reduction as bringing data to a lower dimension - But
a) how do SVD and PCA do this, and b) how do they differ in their approach
The answer is that they are the same.
To see this, I suggest reading the following posts on the CV and math stack exchange sites:
What is the intuitive relationship between SVD and PCA?
Relationship between SVD and PCA. How to use SVD to perform PCA?
How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix?
How to use SVD for dimensionality reduction (in R)
Let me summarize the answer:
essentially, SVD can be used to compute PCA.
PCA is closely related to the eigenvectors and eigenvalues of the covariance matrix of the data. Essentially, by taking the data matrix, computing its SVD, and then squaring the singular values (and doing a little scaling), you end up getting the eigendecomposition of the covariance matrix of the data.
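In NumPy terms, the relationship looks roughly like this (a sketch, assuming the data matrix has one sample per row and gets centered first):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))              # n = 100 datapoints, each of dimension 4

    Xc = X - X.mean(axis=0)                    # center the data

    # "Direct" PCA route: eigenvalues of the covariance matrix, largest first.
    cov_eigvals = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]

    # SVD route: square the singular values and scale by (n - 1).
    s = np.linalg.svd(Xc, compute_uv=False)
    print(np.allclose(cov_eigvals, s**2 / (len(Xc) - 1)))   # True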
Question 2
maybe if you can explain what the results of each technique is telling me, so for a) SVD - what are singular values b) PCA - "proportion of variance"
These eigenvectors (the singular vectors of the SVD, or the principal components of the PCA) form the axes of the new space into which one transforms the data.
The eigenvalues (closely related to the squares of the data matrix's SVD singular values) hold the variance explained by each component. Often, people want to retain, say, 95% of the variance of the original data, so if they originally had n-dimensional data, they reduce it to d-dimensional data that keeps that much of the original variance, by choosing the largest d eigenvalues such that 95% of the variance is kept. This keeps as much information as possible, while retaining as few useless dimensions as possible.
In other words, these values (variance explained) essentially tell us the importance of each principal component (PC), in terms of its usefulness in reconstructing the original (high-dimensional) data. Since each PC forms an axis in the new space (constructed via linear combinations of the old axes in the original space), it tells us the relative importance of each of the new dimensions.
For bonus, note that SVD can also be used to compute eigendecompositions, so it can also be used to compute PCA in a different way, namely by decomposing the covariance matrix directly. See this post for details.
From your question I can only address the topic of Principal Component Analysis, so I'll share a few points about PCA below; I hope they help.
PCA:
1. PCA is a dimensionality reduction technique based on a linear transformation.
2. It is used for operations such as noise filtering, feature extraction and data visualization.
3. The goal of PCA is to identify patterns and detect correlations between variables.
4. If there is strong correlation, then we can reduce the dimensionality, which is what PCA is intended for.
5. Eigenvectors are the directions that the linear transformation leaves unchanged (it only scales them).
Here is a sample URL to help understand PCA: https://www.solver.com/xlminer/help/principal-components-analysis-example
I am wondering if there is a way to define a function which labels the coefficients of a matrix. E.g., I want to have a function which depends on the coefficients of a 2-d difference equation to then build a matrix out of them.
The above might be unclear so let me explain:
I have an eigenvalue equation for the 2-D finite-difference Laplacian (with an extra term). It looks something like this:
D*a[i,j]=exp(b*j*I)*a[i+1,j]+exp(-b*j*I)*a[i-1,j]+a[i,j+1]+a[i,j-1]-4*a[i,j]
where I = sqrt(-1) and b is a constant. Sorry about the formatting above; I don't know how to type LaTeX on here.
So I want to build a matrix of the coefficients of D*a[i,j], i.e. an NxN matrix where i,j = 0,1,...,N-1. For example, to compute the coefficient of a[0,0] I would need to compute D*a[i,j] for all i,j = 0,1,...,N-1, add up the coefficients of the terms containing a[0,0], then do the same for each i and j and form a matrix out of these.
I know there is something called Poly: if you have an expression in terms of x, you can get its coefficients by building a Poly and then peeling them off with coeffs(). But I don't know how to define an expression which spits out a matrix of the form A=[[a[0,0],a[0,1],...],...,[...,a[N-1,N-1]]].
I'm relatively new to Python; I took a course in my second year of university and I'm now doing a PhD, and haven't used Python for about 5 years. Sorry if the above isn't clear; it's hard to explain what I want without knowing all the Python terminology.
Cheers.
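For what it's worth, one common way to realize such an operator as an explicit coefficient matrix is to flatten the unknowns a[i,j] into a single vector, which turns the operator into an N^2 x N^2 matrix. A hedged NumPy sketch (not using sympy's Poly; the values of N and b and the zero boundary condition are assumptions made purely for illustration):

    import numpy as np

    # Assemble the matrix of the operator
    #   D a[i,j] = exp(I*b*j) a[i+1,j] + exp(-I*b*j) a[i-1,j]
    #              + a[i,j+1] + a[i,j-1] - 4 a[i,j]
    # on an N x N grid, flattening (i, j) -> i*N + j.
    N = 4                    # grid size (assumption)
    b = 0.3                  # the constant b (assumption)
    D = np.zeros((N * N, N * N), dtype=complex)

    def idx(i, j):
        return i * N + j

    for i in range(N):
        for j in range(N):
            r = idx(i, j)
            D[r, r] = -4.0
            # Neighbours outside the grid are dropped (zero/Dirichlet boundary, an assumption).
            if i + 1 < N:
                D[r, idx(i + 1, j)] = np.exp(1j * b * j)
            if i - 1 >= 0:
                D[r, idx(i - 1, j)] = np.exp(-1j * b * j)
            if j + 1 < N:
                D[r, idx(i, j + 1)] = 1.0
            if j - 1 >= 0:
                D[r, idx(i, j - 1)] = 1.0

    # Eigenvalues/eigenvectors of the assembled operator:
    eigvals, eigvecs = np.linalg.eig(D)
    print(eigvals[:4])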
I want to do a singular value decomposition for large matrices containing a lot of zeros. In particular I need U and S, obtained from the diagonalization of a symmetric matrix A. This means that A = U * S * transpose(U^*), where S is a diagonal matrix and U contains all eigenvectors as columns.
I searched the web for C++ libraries that combine SVD and sparse matrices, but could only find libraries that compute a few, but not all, eigenvectors. Does anyone know if there is such a library?
Also, after obtaining U and S I need to multiply them by some dense vector.
For this problem, I am using a combination of different techniques:
ARPACK can compute a set of eigenvalues and associated eigenvectors; unfortunately it is fast only for the high frequencies and slow for the low frequencies,
but since the eigenvectors of the inverse of a matrix are the same as the eigenvectors of the matrix itself, one can factor the matrix (using a sparse matrix factorization routine such as SuperLU, or CHOLMOD if the matrix is symmetric). The "communication protocol" with ARPACK only expects you to compute a matrix-vector product, so if you do a linear system solve using the factored matrix instead, this makes ARPACK fast for the low frequencies of the spectrum (do not forget then to replace each returned eigenvalue lambda by 1/lambda!).
This trick can be generalized to explore the entire spectrum (the transform in the previous point is referred to as the "invert" transform). There is also a "shift-invert" transform that lets you explore an arbitrary portion of the spectrum with fast ARPACK convergence: ARPACK then works with 1/(lambda - sigma) instead of lambda, where sigma is a "shift" (the transform is slightly more complicated than the plain "invert" transform; see the references below for a full explanation, and the sketch that follows for the same idea in SciPy).
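If a SciPy prototype is acceptable (an illustration of the technique only, not the C++ setup asked about), the shift-invert transform is exposed through the sigma parameter of eigsh:

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # A symmetric sparse test matrix (1-D Laplacian) standing in for the matrix A.
    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csc')

    # Shift-invert around sigma = 0: internally (A - sigma*I) is factored once and
    # linear systems are solved for ARPACK's matrix-vector products, which makes
    # the low end of the spectrum converge quickly.
    vals, vecs = spla.eigsh(A, k=6, sigma=0.0, which='LM')
    print(vals)      # the six eigenvalues closest to sigma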
ARPACK: http://www.caam.rice.edu/software/ARPACK/
SUPERLU: http://crd-legacy.lbl.gov/~xiaoye/SuperLU/
The numerical algorithm is explained in my article that can be downloaded here:
http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=ManifoldHarmonics#2008
Sourcecode is available there:
https://gforge.inria.fr/frs/download.php/file/27277/manifold_harmonics-a4-src.tar.gz
See also my answer to this question:
https://scicomp.stackexchange.com/questions/20243/sparse-generalized-eigensolver-using-opencl/20255#20255
I'm working with large sparse matrices that are not exactly very sparse, and I'm always wondering how much sparsity is required for storing a matrix in a sparse format to be beneficial. We know that the sparse representation of a reasonably dense matrix can be larger than the original. So is there a threshold for the density of a matrix below which it is better to store it as sparse? I know that the answer usually depends on the structure of the sparsity, etc., but are there at least some guidelines? For example, I have a very large matrix with density around 42%. Should I store this matrix as dense or sparse?
The scipy.sparse.coo_matrix format stores the matrix as 3 np.arrays. row and col are integer indices; data has the same data type as the equivalent dense matrix. So it should be straightforward to calculate the memory it will take as a function of overall shape and sparsity (as well as the data type).
csr_matrix may be more compact. data and indices are the same as with coo, but indptr has a value for each row plus 1. I was thinking that indptr would be shorter than the others, but I just constructed a small matrix where it was longer. An empty row, for example, requires a value in indptr, but none in data or indices. The emphasis with this format is computational efficiency.
csc_matrix is similar, but works with columns. Again, you should be able to do the math to calculate its size.
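As a rough back-of-the-envelope check (index dtypes vary by platform and matrix size, so treat the numbers as indicative only):

    import scipy.sparse as sp

    # A random float64 matrix with roughly 42% nonzeros, as in the question.
    coo = sp.random(2000, 2000, density=0.42, format='coo', random_state=0)
    csr = coo.tocsr()

    dense_bytes = 2000 * 2000 * 8                                  # float64 dense storage
    coo_bytes = coo.data.nbytes + coo.row.nbytes + coo.col.nbytes  # values + 2 index arrays
    csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes

    print("dense:", dense_bytes)
    print("coo:  ", coo_bytes)    # roughly nnz * (8 + 2 * index_size) bytes
    print("csr:  ", csr_bytes)    # roughly nnz * (8 + index_size) + (nrows + 1) * index_size

Whether any storage saving at that density is worth the slower element access and arithmetic is a separate question, which the MATLAB links below discuss.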
Brief discussion of memory advantages from MATLAB (using similar storage options)
http://www.mathworks.com/help/matlab/math/computational-advantages.html#brbrfxy
background paper from MATLAB designers
http://www.mathworks.com/help/pdf_doc/otherdocs/simax.pdf
SPARSE MATRICES IN MATLAB: DESIGN AND IMPLEMENTATION
I'm currently writing a C++ program where I have vectors of independent and dependent data that I would like to fit to a cubic function. However, I'm having trouble generating a polynomial that can fit my data.
Part of the problem is that I can't use various numerical packages, such as GSL (long story); it's possible that it might even be overkill for my case. I don't need a very generalized solution for least squares fitting. I specifically want to fit my data to a cubic function. I do have access to Sony's vector library, which supports 4x4 matrices and can calculate their inverses, among other things.
While prototyping this in Scilab, I used a function like:
function p = polyfit(x, y, n)
    // Least-squares fit of a degree-n polynomial to the points (x, y).
    m = length(x);
    // Column k of aa holds x.^(k-1), so aa is the m x (n+1) Vandermonde-style matrix.
    aa = zeros(m, n+1)
    aa(:,1) = ones(m,1)
    for k = 2:n+1
        aa(:,k) = x.^(k-1)
    end
    // Left division solves the overdetermined system aa*p = y in the least-squares sense.
    p = aa\y
endfunction
Unfortunately, this doesn't map well to my current environment. The above example needs to support a matrix of M x N+1 dimensions. In my case, that's M x 4, where M depends on how much sample data that I have. There's also the problem of left division. I would need a matrix library that supported the inverse of matrices of arbitrary dimensions.
Is there an algorithm for least squares where I can avoid having to calculate aa\y, or at least limit it to a 4x4 matrix? I suppose that I'm trying to rewrite the above algorithm into a simpler case that works for fitting to a cubic polynomial. I'm not looking for a code solution, but if someone can point me in the right direction, I'd appreciate it.
Here is the page I am working from, although that page itself doesn't address your question directly. The summary of my answer would be:
If you can't work with Nx4 matrices directly, then do those matrix computations "manually" until you have the problem down to something that has only 4x4 or smaller matrices. In this answer I'll outline how to do the specific matrix computations you need "manually."
--
Let's suppose you have a bunch of data points (x1,y1)...(xn,yn) and you are looking for the cubic equation y = ax^3 + bx^2 + cx + d that fits those points best.
Then following the link above, you'd write this equation:
A * x = B
where A is the n x 4 matrix whose row for the data point (xi, yi) holds the powers xi^0, xi^1, xi^2, xi^3, x is the 4 x 1 column of unknown cubic coefficients, and B is the n x 1 column of the yi values. I'll write A, x, and B for those matrices. Then following my link above, you'd like to multiply by the transpose of A, which will give you the 4x4 matrix A^T * A that you can invert. In equations, the following is the plan:
A * x = B ........................ [what we started with]
(A^T * A) * x = A^T * B .......... [multiply by A^T]
x = (A^T * A)^-1 * A^T * B ....... [multiply by the inverse of A^T * A]
You said you are happy with inverting 4x4 matrices, so if we can code a way to get at these matrices without actually using matrix objects, we should be okay.
So, here is a method, although it might be a little bit too much like making your own matrix library for your taste. :)
Write an explicit equation for each of the 16 entries of the 4x4 matrix A^T * A. The (i,j)th entry (I'm starting with (0,0)) is given by
x1^i * x1^j + x2^i * x2^j + ... + xN^i * xN^j.
Invert that 4x4 matrix using your matrix library. That is (A^T * A)^-1.
Now all we need is A^T * B, which is a 4x1 matrix. The ith entry of it is given by x1^i * y1 + x2^i * y2 + ... + xN^i * yN.
Multiply our hand-created 4x4 matrix (A^T * A)^-1 by our hand-created 4x1 matrix A^T * B to get the 4x1 matrix of least-squares coefficients for your cubic, as in the sketch below.
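Putting those steps together, here is a hedged sketch (written in Python/NumPy for readability rather than C++; the point is that only 4x4 and 4x1 objects ever appear, so the same loops port directly to a 4x4 matrix library):

    import numpy as np

    def fit_cubic(xs, ys):
        """Least-squares cubic fit using only a 4x4 matrix and 4x1 vectors."""
        ata = np.zeros((4, 4))   # the hand-built A^T * A
        atb = np.zeros(4)        # the hand-built A^T * B
        for i in range(4):
            for j in range(4):
                ata[i, j] = sum(x**i * x**j for x in xs)
            atb[i] = sum(x**i * y for x, y in zip(xs, ys))
        # Invert the 4x4 matrix (or, preferably, solve the 4x4 system directly).
        return np.linalg.inv(ata) @ atb     # [d, c, b, a] for y = a*x^3 + b*x^2 + c*x + d

    # Example: noisy samples of y = 2x^3 - x + 5
    xs = np.linspace(-3.0, 3.0, 50)
    ys = 2 * xs**3 - xs + 5 + 0.1 * np.random.default_rng(0).normal(size=xs.size)
    print(fit_cubic(xs, ys))                # approximately [5, -1, 0, 2]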
Good luck!
Yes, we can limit the problem to computing with "a 4x4 matrix". The least squares fit of a cubic, even for M data points, only requires the solution of four linear equations in four unknowns. Assuming all the x-coordinates are distinct the coefficient matrix is invertible, so in principle the system can be solved by inverting the coefficient matrix. We assume that M is more than 4, as would typically be the case for least squares fits.
Here's a write-up for Maple, Fitting a cubic to data, that hides almost completely the details of what is being solved. The first-order minimum criterion (setting to zero the first derivatives of the sum-of-squares error with respect to the coefficients) gives us the four linear equations, often called the normal equations.
You can "assemble" these four equations in code, then apply your matrix inverse or a more sophisticated solution strategy. Obviously you need to have the data points stored in some form. One possibility is two linear arrays, one for the x-coordinates and one for the y-coordinates, both of length M the number of data points.
NB: I'm going to discuss this matrix assembly in terms of 1-based array subscripts. The polynomial coefficients are actually one application where 0-based array subscripts make things cleaner and simpler, but rewriting it in C or any other language that favors 0-based subscripts is left as an exercise for the reader.
The linear system of normal equations is most easily expressed in matrix form by referring to an Mx4 array A whose entries are powers of x-coordinate data:
A(i,j) = x-coordinate of ith data point raised to power j-1
Let A' denote the transpose of A, so that A'A is a 4x4 matrix.
If we let d be a column of length M containing the y-coordinates of data points (in the given order), then the system of normal equations is just this:
A'A u = A' d
where u = [p0,p1,p2,p3]' is the column of coefficients for the cubic polynomial with least squares fit:
P(x) = p0 + p1*x + p2*x^2 + p3*x^3
Your objections seem to center on a difficulty in storing and/or manipulating the Mx4 array A or its transpose. Therefore my answer will focus on how to assemble matrix A'A and column A'd without explicitly storing all of A at one time. In other words we will be doing the indicated matrix-matrix and matrix-vector multiplications implicitly to get a 4x4 system that you can solve:
C u = f
If you think about the entry C(i,j) being the product of the ith row of A' with the jth column of A, plus the fact that the ith row of A' is really just the transpose of the ith column of A, it should be clear that:
C(i,j) = SUM x^(i+j-2) over all data points
This is certainly one place where the exposition would be simplified by using 0-based subscripts!
It might make sense to accumulate the entries for matrix C, which depend only on the value of i+j, i.e. a so-called Hankel matrix, in a linear array of length 7 such that:
W(k) = SUM x^k over all data points
where k = 0,..,6. The 4x4 matrix C has a "striped" structure that means only these seven values appear. Looping over the list of x-coordinates of data points, you can accumulate the appropriate contributions of each power of each data point in the appropriate entry of W.
A similar strategy can be used to assemble the column f = A' d, namely to loop over the data points and accumulate the following four summations:
f(k) = SUM (x^k)*y over all data points
where k = 0,1,2,3. [Of course in the above sums the values x,y are the coordinates for a common data point.]
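A compact sketch of that assembly (illustrative Python with 0-based subscripts, as the NB above suggests; the same loops translate directly to C++):

    import numpy as np

    def cubic_normal_equations(xs, ys):
        # W[k] = SUM x^k over all data points, k = 0..6 (the seven Hankel entries).
        W = [sum(x**k for x in xs) for k in range(7)]
        # f[k] = SUM (x^k)*y over all data points, k = 0..3.
        f = np.array([sum((x**k) * y for x, y in zip(xs, ys)) for k in range(4)])
        # C(i,j) depends only on i+j:  C[i][j] = W[i+j].
        C = np.array([[W[i + j] for j in range(4)] for i in range(4)])
        return C, f

    # Data sampled exactly from P(x) = 1 + 2x - 0.5x^3, so the fit should recover it.
    xs = np.linspace(0.0, 1.0, 25)
    ys = 1.0 + 2.0 * xs - 0.5 * xs**3

    C, f = cubic_normal_equations(xs, ys)
    u = np.linalg.solve(C, f)     # [p0, p1, p2, p3]
    print(u)                      # approximately [1, 2, 0, -0.5]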
Caveats: This satisfies the goal of working only with a 4x4 matrix. However one typically tries to avoid the explicit formation of the matrix of coefficients for the normal equations because these matrices are often what in numerical analysis is called ill-conditioned. In particular the cases where x-coordinates are closely spaced can cause difficulty when one tries to solve the system by inverting the matrix of coefficients.
A more sophisticated approach to solving these normal equations would be the conjugate gradient method on the normal equations, which can be done with code that computes the matrix-vector products A u and A' v one entry at a time (using what we say above about entries of A).
The accuracy of the conjugate gradient method is often satisfactory because of its natural iterative approach, esp. when one can compute the required dot-products with a little extra precision.
You should never do a full matrix inversion, for stability reasons. Do an LU decomposition and forward/back substitution instead. The other solutions are spot on otherwise.
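In code, that advice looks something like this (a sketch using SciPy just to name the pieces; C and f stand in for the 4x4 system assembled above):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    C = rng.normal(size=(4, 4))     # stands in for the assembled 4x4 normal-equations matrix
    f = rng.normal(size=4)          # stands in for the assembled right-hand side

    # Factor once, then forward/back substitute; no explicit inverse is ever formed.
    lu, piv = lu_factor(C)
    u = lu_solve((lu, piv), f)

    print(np.allclose(C @ u, f))    # True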