Fitting data to a 3rd degree polynomial - C++

I'm currently writing a C++ program where I have vectors of independent and dependent data that I would like to fit to a cubic function. However, I'm having trouble generating a polynomial that can fit my data.
Part of the problem is that I can't use various numerical packages, such as GSL (long story); it's possible that it might even be overkill for my case. I don't need a very generalized solution for least squares fitting. I specifically want to fit my data to a cubic function. I do have access to Sony's vector library, which supports 4x4 matrices and can calculate their inverses, among other things.
While prototyping this in Scilab, I used a function like:
function p = polyfit(x, y, n)
    // Least-squares fit of an n-th degree polynomial to the data (x, y).
    m = length(x);
    aa = zeros(m, n+1);          // design matrix of powers of x
    aa(:,1) = ones(m,1);
    for k = 2:n+1
        aa(:,k) = x.^(k-1);      // column k holds x^(k-1)
    end
    p = aa \ y;                  // least-squares solution via left division
endfunction
Unfortunately, this doesn't map well to my current environment. The above example needs to support a matrix of M x N+1 dimensions. In my case, that's M x 4, where M depends on how much sample data I have. There's also the problem of left division. I would need a matrix library that supported the inverse of matrices of arbitrary dimensions.
Is there an algorithm for least squares where I can avoid having to calculate aa\y, or at least limit it to a 4x4 matrix? I suppose that I'm trying to rewrite the above algorithm into a simpler case that works for fitting to a cubic polynomial. I'm not looking for a code solution, but if someone can point me in the right direction, I'd appreciate it.

Here is the page I am working from, although that page itself doesn't address your question directly. The summary of my answer would be:
If you can't work with Nx4 matrices directly, then do those matrix computations "manually" until you have the problem down to something that involves only 4x4 or smaller matrices. In this answer I'll outline how to do those specific matrix computations "manually."
--
Let's suppose you have a bunch of data points (x1,y1)...(xn,yn) and you are looking for the cubic equation y = ax^3 + bx^2 + cx + d that fits those points best.
Then, following the link above, you'd write down this overdetermined system, one row per data point (using ascending powers of x, so the unknowns come out in the order d, c, b, a):

[ 1  x1  x1^2  x1^3 ]   [ d ]   [ y1 ]
[ 1  x2  x2^2  x2^3 ] * [ c ] = [ y2 ]
[        ...        ]   [ b ]   [ .. ]
[ 1  xn  xn^2  xn^3 ]   [ a ]   [ yn ]

I'll write A, x, and B for those three matrices (the Nx4 matrix of powers, the 4x1 column of coefficients, and the Nx1 column of y-values). Then, following my link above, you'd like to multiply by the transpose of A, which will give you the 4x4 matrix AT*A that you can invert. In equations, the following is the plan:
A * x = B ........................ [what we started with]
(AT * A) * x = AT * B ............ [multiply both sides by AT]
x = (AT * A)^-1 * AT * B ......... [multiply by the inverse of AT * A]
You said you are happy with inverting 4x4 matrices, so if we can code a way to get at these matrices without actually using matrix objects, we should be okay.
So, here is a method, although it might be a little bit too much like making your own matrix library for your taste. :)
Write an explicit expression for each of the 16 entries of the 4x4 matrix AT * A. The (i,j)th entry (I'm starting the indices at (0,0)) is given by
x1^i * x1^j + x2^i * x2^j + ... + xN^i * xN^j.
Invert that 4x4 matrix using your matrix library. The result is (AT * A)^-1.
Now all we need is AT * B, which is a 4x1 matrix. Its ith entry is given by x1^i * y1 + x2^i * y2 + ... + xN^i * yN.
Multiply our hand-assembled 4x4 matrix (AT * A)^-1 by our hand-assembled 4x1 matrix AT * B to get the 4x1 column of least-squares coefficients for your cubic.
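If it helps, here is a minimal C++ sketch of that plan. The function name fitCubic is mine, and the little Gauss-Jordan elimination at the end just stands in for whatever 4x4 inverse/solve routine your vector library provides; it assumes at least four distinct x values. Coefficients come out in ascending-power order, i.e. coeffs[k] multiplies x^k:

#include <array>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Least-squares cubic fit.  The 4x4 system (AT*A)*coeffs = AT*B is assembled
// entry by entry, then solved with a naive Gauss-Jordan elimination that
// merely stands in for a library-provided 4x4 inverse/solve.
std::array<double, 4> fitCubic(const std::vector<double>& x,
                               const std::vector<double>& y)
{
    double M[4][5] = {};  // augmented system [ AT*A | AT*B ]

    for (std::size_t n = 0; n < x.size(); ++n) {
        double xi = 1.0;                        // x[n]^i
        for (int i = 0; i < 4; ++i) {
            double xj = 1.0;                    // x[n]^j
            for (int j = 0; j < 4; ++j) {
                M[i][j] += xi * xj;             // entry (i,j): sum of x^(i+j)
                xj *= x[n];
            }
            M[i][4] += xi * y[n];               // right-hand side: sum of x^i * y
            xi *= x[n];
        }
    }

    // Gauss-Jordan elimination with partial pivoting (assumes AT*A is invertible,
    // which holds when at least four x values are distinct).
    for (int col = 0; col < 4; ++col) {
        int piv = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        for (int c = 0; c < 5; ++c) std::swap(M[col][c], M[piv][c]);
        for (int r = 0; r < 4; ++r) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (int c = col; c < 5; ++c) M[r][c] -= f * M[col][c];
        }
    }

    std::array<double, 4> coeffs;
    for (int i = 0; i < 4; ++i) coeffs[i] = M[i][4] / M[i][i];
    return coeffs;
}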
Good luck!

Yes, we can limit the problem to computing with "a 4x4 matrix". The least squares fit of a cubic, even for M data points, only requires the solution of four linear equations in four unknowns. Assuming all the x-coordinates are distinct the coefficient matrix is invertible, so in principle the system can be solved by inverting the coefficient matrix. We assume that M is more than 4, as would typically be the case for least squares fits.
Here's a write-up for Maple, Fitting a cubic to data, that hides almost completely the details of what is being solved. The first-order minimum criterion (setting to zero the first derivatives of the sum-of-squares error with respect to the coefficients, treated as parameters) gives us the four linear equations, often called the normal equations.
You can "assemble" these four equations in code, then apply your matrix inverse or a more sophisticated solution strategy. Obviously you need to have the data points stored in some form. One possibility is two linear arrays, one for the x-coordinates and one for the y-coordinates, both of length M, the number of data points.
NB: I'm going to discuss this matrix assembly in terms of 1-based array subscripts. The polynomial coefficients are actually one application where 0-based array subscripts make things cleaner and simpler, but rewriting it in C or any other language that favors 0-based subscripts is left as an exercise for the reader.
The linear system of normal equations is most easily expressed in matrix form by referring to an Mx4 array A whose entries are powers of x-coordinate data:
A(i,j) = x-coordinate of ith data point raised to power j-1
Let A' denote the transpose of A, so that A'A is a 4x4 matrix.
If we let d be a column of length M containing the y-coordinates of data points (in the given order), then the system of normal equations is just this:
A'A u = A' d
where u = [p0,p1,p2,p3]' is the column of coefficients for the cubic polynomial with least squares fit:
P(x) = p0 + p1*x + p2*x^2 + p3*x^3
Your objections seem to center on a difficulty in storing and/or manipulating the Mx4 array A or its transpose. Therefore my answer will focus on how to assemble matrix A'A and column A'd without explicitly storing all of A at one time. In other words we will be doing the indicated matrix-matrix and matrix-vector multiplications implicitly to get a 4x4 system that you can solve:
C u = f
If you think about the entry C(i,j) being the product of the ith row of A' with the jth column of A, plus the fact that the ith row of A' is really just the transpose of the ith column of A, it should be clear that:
C(i,j) = SUM x^(i+j-2) over all data points
This is certainly one place where the exposition would be simplified by using 0-based subscripts!
It might make sense to accumulate the entries for matrix C, which depend only on the value of i+j, i.e. a so-called Hankel matrix, in a linear array of length 7 such that:
W(k) = SUM x^k over all data points
where k = 0,..,6. The 4x4 matrix C has a "striped" structure that means only these seven values appear. Looping over the list of x-coordinates of data points, you can accumulate the appropriate contributions of each power of each data point in the appropriate entry of W.
A similar strategy can be used to assemble the column f = A' d, namely to loop over the data points and accumulate the following four summations:
f(k) = SUM (x^k)*y over all data points
where k = 0,1,2,3. [Of course in the above sums the values x,y are the coordinates for a common data point.]
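Here is a minimal C++ sketch of that accumulation. The function name assembleNormalEquations is my own, and plain C arrays are used just to keep it library-free:

#include <cstddef>
#include <vector>

// Accumulate the seven power sums W[k] = SUM x^k (k = 0..6) and the
// right-hand side f[k] = SUM (x^k)*y (k = 0..3) in a single pass, then
// read the Hankel-structured normal matrix off as C[i][j] = W[i+j].
void assembleNormalEquations(const std::vector<double>& x,
                             const std::vector<double>& y,
                             double C[4][4], double f[4])
{
    double W[7] = {};
    for (int k = 0; k < 4; ++k) f[k] = 0.0;

    for (std::size_t n = 0; n < x.size(); ++n) {
        double p = 1.0;                 // running power x[n]^k
        for (int k = 0; k < 7; ++k) {
            W[k] += p;
            if (k < 4) f[k] += p * y[n];
            p *= x[n];
        }
    }

    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            C[i][j] = W[i + j];         // "striped" (Hankel) structure
}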
Caveats: This satisfies the goal of working only with a 4x4 matrix. However one typically tries to avoid the explicit formation of the matrix of coefficients for the normal equations because these matrices are often what in numerical analysis is called ill-conditioned. In particular the cases where x-coordinates are closely spaced can cause difficulty when one tries to solve the system by inverting the matrix of coefficients.
A more sophisticated approach to solving these normal equations would be the conjugate gradient method on the normal equations, which can be done with code that computes the matrix-vector products A u and A' v one entry at a time (using what we say above about entries of A).
The accuracy of the conjugate gradient method is often satisfactory because of its natural iterative approach, esp. when one can compute the required dot-products with a little extra precision.
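For completeness, here is a rough C++ sketch of CGNR applied to this cubic fit, with A and A' applied one entry at a time so the Mx4 matrix is never stored. Treat it as an illustration of the idea rather than production code; the function name fitCubicCGNR is mine.

#include <array>
#include <cstddef>
#include <vector>

// Conjugate gradient on the normal equations (CGNR) for the cubic fit
// P(x) = p0 + p1*x + p2*x^2 + p3*x^3.  The Mx4 matrix A with A(n,j) = x_n^j
// is applied on the fly.  In exact arithmetic at most 4 iterations are
// needed, since A'A is 4x4.
std::array<double, 4> fitCubicCGNR(const std::vector<double>& x,
                                   const std::vector<double>& y)
{
    const std::size_t M = x.size();

    auto applyA = [&](const std::array<double, 4>& c) {        // v = A c
        std::vector<double> v(M);
        for (std::size_t n = 0; n < M; ++n)
            v[n] = c[0] + x[n] * (c[1] + x[n] * (c[2] + x[n] * c[3]));
        return v;
    };
    auto applyAT = [&](const std::vector<double>& v) {         // w = A' v
        std::array<double, 4> w = {0, 0, 0, 0};
        for (std::size_t n = 0; n < M; ++n) {
            double p = 1.0;
            for (int j = 0; j < 4; ++j) { w[j] += p * v[n]; p *= x[n]; }
        }
        return w;
    };
    auto dot4 = [](const std::array<double, 4>& a, const std::array<double, 4>& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
    };

    std::array<double, 4> u = {0, 0, 0, 0};    // current coefficient estimate
    std::vector<double> r = y;                 // residual r = y - A*u (u = 0)
    std::array<double, 4> z = applyAT(r);      // gradient of the least-squares error
    std::array<double, 4> p = z;               // search direction

    for (int it = 0; it < 4 && dot4(z, z) > 1e-28; ++it) {
        std::vector<double> w = applyA(p);
        double ww = 0.0;
        for (std::size_t n = 0; n < M; ++n) ww += w[n] * w[n];
        double alpha = dot4(z, z) / ww;
        for (int j = 0; j < 4; ++j) u[j] += alpha * p[j];
        for (std::size_t n = 0; n < M; ++n) r[n] -= alpha * w[n];
        std::array<double, 4> z_new = applyAT(r);
        double beta = dot4(z_new, z_new) / dot4(z, z);
        for (int j = 0; j < 4; ++j) p[j] = z_new[j] + beta * p[j];
        z = z_new;
    }
    return u;   // {p0, p1, p2, p3}
}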

You should never do a full matrix inversion, for stability reasons. Do an LU decomposition and forward/back substitution instead. The other answers are spot on otherwise.
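For readers who can use a library (the asker can't), a tiny Eigen-based sketch of that advice, solving the 4x4 normal system via LU with partial pivoting rather than forming an explicit inverse:

#include <Eigen/Dense>

// Solve the 4x4 normal equations C*u = f by LU decomposition plus
// forward/back substitution, instead of computing C^-1 explicitly.
Eigen::Vector4d solveNormalEquations(const Eigen::Matrix4d& C,
                                     const Eigen::Vector4d& f)
{
    return C.partialPivLu().solve(f);
}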

Related

Why does the essential matrix have 2 equal singular values and 1 zero singular value?

I was watching a lecture about the essential matrix, and the professor was teaching the eight-point linear algorithm. I understood that we need 8 points to estimate the essential matrix. But in this slide, he said that the estimated matrix doesn't correspond to an essential matrix and that we should project it onto the essential space. He didn't prove this theorem, he just skipped it.
So I have some questions
Why does an essential matrix have two equal singular values and a zero singular value?
Why should we average the two largest singular values obtained from the eight-point algorithm in order to make an essential matrix?
Answering your first question.
The singular values of the essential matrix E are the square roots of the eigenvalues of EE^t. We know that E = TR, where R is a rotation matrix and T = [0,-z,y; z,0,-x; -y,x,0], so we have EE^t = TRR^tT^t = TT^t.
Also, T is a singular skew-symmetric matrix, so we can decompose it as T = P^t[0,k,0; -k,0,0; 0,0,0]P, where P is an orthogonal matrix.
Substituting this into the previous formula gives EE^t = P^t[k^2,0,0; 0,k^2,0; 0,0,0]P, so the eigenvalues of EE^t are k^2, k^2 and 0, and hence the singular values of E are k, k and 0: two equal values and one zero.
I have used Matlab notation to denote matrices and A^t denotes the transpose of the matrix A.
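If you want to see this numerically, here is a small Eigen sketch (the translation and rotation values below are arbitrary). It builds E = T*R from a cross-product matrix T and a rotation R and prints the singular values, which come out as (|t|, |t|, 0):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::Vector3d t(0.3, -1.2, 0.7);
    Eigen::Matrix3d Tx;                 // cross-product (skew-symmetric) matrix of t
    Tx <<     0, -t.z(),  t.y(),
          t.z(),      0, -t.x(),
         -t.y(),  t.x(),      0;

    Eigen::Matrix3d R =
        Eigen::AngleAxisd(0.8, Eigen::Vector3d(1, 2, 3).normalized()).toRotationMatrix();

    Eigen::Matrix3d E = Tx * R;                 // essential matrix
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(E);   // singular values only
    std::cout << "singular values: " << svd.singularValues().transpose()
              << "   (|t| = " << t.norm() << ")\n";
}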

Dimensionality Reduction

I am trying to understand the different methods for dimensionality reduction in data analysis. In particular I am interested in Singular Value Decomposition (SVD) and Principal Component Analysis (PCA).
Can anyone please explain these terms to a layperson? I understand the general premise of dimensionality reduction as bringing data to a lower dimension - but
a) how do SVD and PCA do this, and
b) how do they differ in their approach
Or maybe you could explain what the results of each technique are telling me, e.g. for
a) SVD - what are singular values
b) PCA - "proportion of variance"
Any example would be brilliant. I am not very good at maths!!
Thanks
You probably already figured this out, but I'll post a short description anyway.
First, let me describe the two techniques speaking generally.
PCA basically takes a dataset and figures out how to "transform" it (i.e. project it into a new space, usually of lower dimension). It essentially gives you a new representation of the same data. This new representation has some useful properties. For instance, each dimension of the new space is associated with the amount of variance it explains, i.e. you can essentially order the variables output by PCA by how important they are in terms of the original representation. Another property is the fact that linear correlation is removed from the PCA representation.
SVD is a way to factorize a matrix. Given a matrix M (e.g. for data, it could be an n by m matrix, for n datapoints, each of dimension m), you get U, S, V = SVD(M) where M = U*S*V^T, S is a diagonal matrix, and both U and V are orthogonal matrices (meaning the columns & rows are orthonormal; or equivalently U*U^T = I & V*V^T = I).
The entries of S are called the singular values of M. You can think of SVD as dimensionality reduction for matrices, since you can cut off the lower singular values (i.e. set them to zero), destroying the "lower parts" of the matrices upon multiplying them, and get an approximation to M. In other words, just keep the top k singular values (and the top k vectors in U and V), and you have a "dimensionally reduced" version (representation) of the matrix.
Mathematically, this gives you the best rank k approximation to M, essentially like a reduction to k dimensions. (see this answer for more).
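As a concrete illustration, here is a short Eigen sketch of that truncation (the function name rankKApprox is mine):

#include <Eigen/Dense>

// Rank-k approximation of a matrix M via the SVD: keep the k largest
// singular values (and the matching columns of U and V) and rebuild.
// This is the "dimensionally reduced" version of M described above.
Eigen::MatrixXd rankKApprox(const Eigen::MatrixXd& M, int k)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(M, Eigen::ComputeThinU | Eigen::ComputeThinV);
    return svd.matrixU().leftCols(k)
         * svd.singularValues().head(k).asDiagonal()
         * svd.matrixV().leftCols(k).transpose();
}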
So Question 1
I understand the general premise of dimensionality reduction as bringing data to a lower dimension - but
a) how do SVD and PCA do this, and b) how do they differ in their approach
The answer is that they are the same.
To see this, I suggest reading the following posts on the CV and math stack exchange sites:
What is the intuitive relationship between SVD and PCA?
Relationship between SVD and PCA. How to use SVD to perform PCA?
How to use SVD for dimensionality reduction to reduce the number of columns (features) of the data matrix?
How to use SVD for dimensionality reduction (in R)
Let me summarize the answer:
essentially, SVD can be used to compute PCA.
PCA is closely related to the eigenvectors and eigenvalues of the covariance matrix of the data. Essentially, by taking the data matrix, computing its SVD, and then squaring the singular values (and doing a little scaling), you end up getting the eigendecomposition of the covariance matrix of the data.
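Here is a small Eigen snippet that checks this relationship numerically on arbitrary random data: it compares the eigenvalues of the covariance matrix of the centered data with the squared singular values divided by n-1.

#include <Eigen/Dense>
#include <iostream>

int main()
{
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(5, 3);        // rows = samples
    Eigen::MatrixXd Xc = X.rowwise() - X.colwise().mean();    // center each column

    Eigen::MatrixXd cov = Xc.transpose() * Xc / double(X.rows() - 1);
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> eig(cov);

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(Xc);
    Eigen::VectorXd fromSvd =
        svd.singularValues().cwiseProduct(svd.singularValues()) / double(X.rows() - 1);

    // eigenvalues are reported in ascending order, so reverse them to
    // match the descending order of the singular values
    std::cout << "covariance eigenvalues: " << eig.eigenvalues().reverse().transpose() << "\n"
              << "sigma^2 / (n-1):        " << fromSvd.transpose() << "\n";
}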
Question 2
maybe if you can explain what the results of each technique is telling me, so for a) SVD - what are singular values b) PCA - "proportion of variance"
These eigenvectors (the singular vectors of the SVD, or the principal components of the PCA) form the axes of the new space into which one transforms the data.
The eigenvalues (closely related to the squares of the data matrix's SVD singular values) hold the variance explained by each component. Often, people want to retain, say, 95% of the variance of the original data, so if they originally had n-dimensional data, they reduce it to d-dimensional data, choosing the largest d eigenvalues such that 95% of the variance is kept. This keeps as much information as possible while retaining as few useless dimensions as possible.
In other words, these values (variance explained) essentially tell us the importance of each principal component (PC), in terms of its usefulness in reconstructing the original (high-dimensional) data. Since each PC forms an axis in the new space (constructed via linear combinations of the old axes in the original space), it tells us the relative importance of each of the new dimensions.
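Putting the last few paragraphs together, here is a hedged Eigen sketch of PCA done via the SVD of the centered data, keeping enough components to explain a chosen fraction of the variance (the function name pcaReduce and the 95% default are illustrative choices):

#include <Eigen/Dense>

// PCA via the SVD of the centered data matrix X (rows = samples, cols = features).
// The squared singular values divided by (n-1) are the variances along each
// principal component; keep the smallest d components explaining at least
// `keep` of the total variance and project the data onto those d axes.
Eigen::MatrixXd pcaReduce(const Eigen::MatrixXd& X, double keep = 0.95)
{
    Eigen::MatrixXd centered = X.rowwise() - X.colwise().mean();
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(centered,
                                          Eigen::ComputeThinU | Eigen::ComputeThinV);

    Eigen::VectorXd var =
        svd.singularValues().cwiseProduct(svd.singularValues()) / double(X.rows() - 1);

    double total = var.sum(), running = 0.0;
    int d = 0;
    while (d < var.size() && running / total < keep) running += var[d++];

    // Scores: coordinates of each sample in the d-dimensional PC space.
    return centered * svd.matrixV().leftCols(d);
}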
For bonus, note that SVD can also be used to compute eigendecompositions, so it can also be used to compute PCA in a different way, namely by decomposing the covariance matrix directly. See this post for details.
From your question, the only topic I can speak to is Principal Component Analysis, so I'll share a few points about PCA below; I hope they help.
PCA:
1. PCA is a linear-transformation dimensionality reduction technique.
2. It is used for operations such as noise filtering, feature extraction and data visualization.
3. The goal of PCA is to identify patterns and detect the correlations between variables.
4. If there is a strong correlation, then we can reduce the dimensionality, which is what PCA is intended for.
5. An eigenvector describes a direction that the linear transformation leaves unchanged.
Here is a sample URL for understanding PCA: https://www.solver.com/xlminer/help/principal-components-analysis-example

Finite difference modified Laplacian matrix of coefficients (Python)

I am wondering if there is a way to define a function which labels the coefficients of a matrix. E.g., I want to have a function which depends on the coefficients of a 2-d difference equation to then build a matrix out of them.
The above might be unclear so let me explain:
I have a matrix which expresses the eigenvector of the 2-d finite difference Laplacian (with an extra term). I have something like this
D*a[i,j]=exp(b*j*I)*a[i+1,j]+exp(-b*j*I)*a[i-1,j]+a[i,j+1]+a[i,j-1]-4*a[i,j]
where I = sqrt(-1) and b is a constant. Sorry about the formatting above; I don't know how to type LaTeX on here.
So I want to build a matrix of the coefficients of D*a[i,j], i.e. an NxN matrix where i,j = 0,1,...,N-1. For example, to compute the coefficient of a[0,0] I would need to compute D*a[i,j] for all i,j = 0,1,...,N-1, add up the coefficients of the terms containing a[0,0], then do the same for each i and j and form a matrix out of these.
I know there is something called Poly which, if you have an expression in terms of x, lets you acquire the coefficients by using Poly and then peeling off the coefficients with coeffs(), but I don't know how to define an expression which spits out a matrix of the form A = [[a[0,0], a[0,1], ...], ..., [..., a[N-1,N-1]]].
I'm relatively new to Python; I took a course in my second year of university and I'm now doing a PhD, and haven't used Python for about 5 years. Sorry if the above isn't clear, it's hard to explain what I want without knowing all the Python terminology.
Cheers.

What exactly happens after run Algorithm PCA (Principal Component Analysis)

At the moment I'm working on an image processing project, and I have a conceptual question regarding PCA.
What exactly happens to the matrix of an image after applying PCA to it?
I haven't been able to understand this from reading the literature on the subject.
Given an M x N matrix, is the result a matrix M' x N', where M' < M and N' < N, and M' x N' is proportional to M x N?
I'm no expert in PCA but I'll try to explain what I understand.
After applying PCA to the matrix of an image, you get the eigenvectors of that matrix, which represent a set of invariant axes of the matrix. These vectors are all orthogonal to each other.
By measuring how dispersed the original data in the matrix are along these vectors, you can tell how they are distributed. This can be useful, for example, if you wish to perform pattern categorization based on how the data are distributed along these "axes".
Although not strictly accurate, you can imagine that PCA helps you to draw "axes" through the blob of data present in your matrix, where the new "origin" of the axes is the center of your data.
The best part is that the data are spread out the most along the first eigenvector, followed by the second eigenvector, and so on.
I hope I did not confuse you.
There are a number of good references about PCA on Quora in addition to Stack Overflow.
Here are a few examples:
https://www.quora.com/What-is-an-intuitive-explanation-for-PCA
http://www.quora.com/How-to-explain-PCA-in-laymans-terms
Again, I'm no expert and welcome others to correct/educate both rwvaldivia and me.
The concept of PCA is closely related to linear algebra, which is the domain of mathematics to which matrices belong. A common way to view a matrix is as a set of vectors: an MxN matrix is just M vectors in an N-dimensional space.
Now a general concept in linear algebra is that the choice of basis vectors is pretty arbitrary. If you choose another basis, you convert your matrix by multiplying it with the old basis expressed in the new basis (an NxN dimensional matrix itself).
PCA is a method to find a basis which isn't arbitrary, but specific to your matrix. In particular, it orders the basis vectors by the amount in which they're present in your set of vectors. If all your vectors point roughly in the same direction, that direction will be the first basis vector. If they're all roughly in the same plane, the major basis vectors for that plane will be your first two vectors. But remember: you'll generally get a full MxN basis (unless your matrix is degenerate); it's up to you to decide how many of the Components are Principal.
Now here's the real question: what exactly is "the matrix of the image"? You generally can't treat a 1024 x 768 image as a set of 1024 vectors in 768-dimensional space. Sure, you can perform the PCA operation, and you will get a 1024x768 result matrix, but what does that even mean? They're the basis vectors of your input matrix, but that output doesn't have an image meaning exactly because your input is not a set of vectors.

Efficient solution of linear system Ax= b when only one of the constant term changes

How does one solve a large system of linear equations efficiently when only a few of the constant terms change? For example:
I currently have the system Ax= b. I compute the inverse of A once, store it in a matrix and each time any entry updates in b perform a matrix-vector multiplication A^-1(b) to recompute x.
This is inefficient, as only a couple of entries would have updated in b. Are there more efficient ways of solving this system when A^-1 remains constant but specific known values change in b?
I use uBlas and Eigen, but I'm not aware of solutions that would address this problem of selective recalculation. Thanks for any guidance.
Compute A^-1. If b_i is the ith component of b, then examine d/db_i A^-1 b (the derivative of A^-1 b with respect to the ith component of b) -- it equals a column of A^-1 (in particular, the ith column). And derivatives of linear functions are constant over their domain. So if you have b and b', and they differ only in the ith component, then A^-1 b - A^-1 b' = [d/db_i A^-1 b] * (b-b')_i. For multiple components, just add them up (as A^-1 is linear).
Or, in short, you can calculate A^-1 (b'-b) with some optimizations for input components that are zero (which, if only some components change, will be most of the components). A^-1 b' = A^-1 (b'-b) + A^-1 (b). And if you know that only some components will change, you can take a copy of the appropriate column of A^-1, then multiply it by the change in that component of b.
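A small sketch of that idea using Eigen (the function name updateSolution and the (index, delta) pair format are my own choices); it assumes A^-1 has already been computed and stored, as in the question:

#include <Eigen/Dense>
#include <utility>
#include <vector>

// Given a precomputed inverse Ainv and the current solution x of A x = b,
// update x when only a few entries of b change: each changed entry i
// contributes (b_new_i - b_old_i) times the ith column of A^-1.
void updateSolution(const Eigen::MatrixXd& Ainv,
                    Eigen::VectorXd& x,
                    const std::vector<std::pair<int, double>>& deltas) // (index, b_new - b_old)
{
    for (const auto& d : deltas)
        x += d.second * Ainv.col(d.first);
}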
You can take advantage of the problem's linearity:
x0 = A^-1 * b0
x = A^-1 * b = x0 + A^-1 * db
where db is the difference vector between b and b0; it is mostly filled with zeros, so you can compress it into a sparse vector.
The Eigen lib has a lot of cool functions for sparse matrices (multiplication, inverse, ...).
Firstly, don't perform a matrix inversion, use a solver library instead. Secondly, pass your initial x to the library as a first guess.
The library will perform some kind of decomposition like LU, and use that to calculate x. If you choose an iterative solver, then it is already doing pretty much what you describe to home in on the solution; it will begin with a worse guess and generate a better one, and any good routine will take an initial guess to speed up the process. In many circumstances you have a good idea of the result anyway, so it makes sense to exploit that.
If the new b is near the old b, then the new x should be near the old x, and it will serve as a good initial guess.
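For example, with Eigen's iterative solvers you can pass the previous solution via solveWithGuess; a minimal sketch, assuming a sparse A (BiCGSTAB is chosen arbitrarily here):

#include <Eigen/Sparse>

// Reuse the previous solution as the starting guess for an iterative solve;
// when b changes only slightly, convergence typically takes few iterations.
Eigen::VectorXd resolve(const Eigen::SparseMatrix<double>& A,
                        const Eigen::VectorXd& b_new,
                        const Eigen::VectorXd& x_old)
{
    Eigen::BiCGSTAB<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    return solver.solveWithGuess(b_new, x_old);
}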
First, don't compute the matrix inverse; rather use the LU decomposition, or the QR decomposition (slower than LU but more stable). Such decompositions are cheaper than computing the full inverse and are usually more stable (especially QR).
There are ways to update the QR decomposition if A changes slightly (e.g. by a rank-one matrix), but if b is changed, you have to solve again with the new b -- you cannot escape this, and it is O(n^2).
However, if the right-hand side b only changes by a fixed increment, i.e. b' = b + db with db known in advance, you can solve A dx = db once and for all, and the solution x' of A x' = b' is then x + dx.
If db is not known in advance but is always a linear combination of a few db_i vectors, you may solve A dx_i = db_i for each of them; but if you have many such db_i, you end up with an n^2 process per vector (which in fact amounts to computing the inverse)...
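As a concrete illustration of the "factor once, then solve each new right-hand side in O(n^2)" idea, a small Eigen sketch (the ReusableSolver wrapper is my own naming):

#include <Eigen/Dense>

// Factor A once (LU with partial pivoting); each subsequent right-hand side
// then costs only a forward/back substitution, i.e. O(n^2) instead of O(n^3).
struct ReusableSolver {
    explicit ReusableSolver(const Eigen::MatrixXd& A) : lu(A) {}
    Eigen::VectorXd solve(const Eigen::VectorXd& b) const { return lu.solve(b); }
    Eigen::PartialPivLU<Eigen::MatrixXd> lu;
};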