My program tries to solve a system of linear equations. To do that, it assembles a matrix coeff_matrix and a vector value_vector, and uses Eigen to solve the system like this:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check it via the residual coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the latter three equations correctly but also produces values for a and b.
What I would like to achieve is that only the equations that have exactly one solution would be solved, and the remaining ones (the first equation here) would be retained in the system.
In other words, I'm looking for a method to find out which equations in a given system can be solved at a given time, and which cannot because they would have more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution will be solved", and the solution is the pseudoinverse applied to the right-hand side. It will also give you the null space (where infinite solutions come from) and the left null space (where inconsistency, i.e. no solution, comes from).
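A minimal sketch of that route in Eigen, reusing coeff_matrix and value_vector from the question (everything else here is illustrative, and Eigen's default rank threshold is assumed):
Eigen::JacobiSVD<Eigen::MatrixXd> svd(coeff_matrix,
    Eigen::ComputeThinU | Eigen::ComputeFullV);
// Pseudoinverse (least-squares / minimum-norm) solution:
Eigen::VectorXd sol_vector = svd.solve(value_vector);
// The trailing cols - rank() columns of V span the null space; a variable
// is uniquely determined only if its row in those columns is all zeros.
Eigen::Index rank = svd.rank();
Eigen::MatrixXd null_space =
    svd.matrixV().rightCols(coeff_matrix.cols() - rank);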
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();
AFAICS, the null_vector rows corresponding to uniquely determined variables are 0s, while the ones corresponding to undetermined variables are 1s. I can reproduce this throughout all my examples with Eigen's default threshold.
However, I'm not sure whether I'm doing something correct here or have just noticed a random pattern.
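It's not a random pattern: a variable is uniquely determined exactly when its row in the kernel basis is all zeros. One caveat, though: rowwise().sum() can cancel to zero across several kernel columns, so a row norm is the safer test. A sketch reusing lu from above (the tolerance is a made-up value to tune; needs <iostream>):
Eigen::MatrixXd kernel = lu.kernel();
// Row i of the kernel basis is all zeros  <=>  variable i takes the same
// value in every solution. Norms can't cancel the way signed sums can.
Eigen::VectorXd row_norms = kernel.rowwise().norm();
for (Eigen::Index i = 0; i < row_norms.size(); ++i)
    if (row_norms(i) > 1e-12)  // hypothetical tolerance
        std::cout << "variable " << i << " is not uniquely determined\n";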
What you need is to calculate the determinant of your system. If the determinant is 0, the system is singular and has no unique solution (either none or infinitely many). If the determinant is very small, a solution exists, but I wouldn't trust the one found by a computer (it will lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with rows of 0s if there are infinitely many solutions.
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution, or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with a non-zero determinant. Solve for this matrix and check the solution against the remaining equations. If the solution doesn't fit, you have no solution. If it fits, the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimensions of the matrix, remove rows and columns containing only 0s.
As for Gaussian elimination, it works directly on non-square matrices. However, this time you should check that the number of non-empty rows (i.e. rows with some non-zero values) equals the number of variables. If it's less, you have infinitely many solutions; if it's more, you have none. A rank-based version of this check is sketched below.
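A compact way to carry out that classification without hand-rolling the elimination is to compare ranks (the Rouché-Capelli test). A sketch with Eigen's rank-revealing LU, where A and b are placeholders for your system (needs <Eigen/Dense> and <iostream>):
// Classify A*x = b by comparing rank(A), rank([A | b]) and the unknown count.
Eigen::MatrixXd Ab(A.rows(), A.cols() + 1);
Ab << A, b;  // augmented matrix [A | b]
Eigen::Index rankA  = A.fullPivLu().rank();
Eigen::Index rankAb = Ab.fullPivLu().rank();
if (rankA < rankAb)
    std::cout << "no solution (inconsistent system)\n";
else if (rankA < A.cols())
    std::cout << "infinitely many solutions\n";
else
    std::cout << "exactly one solution\n";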
Related
Hello stackoverflow community,
I'm having trouble understanding a least-squares problem in the C++ Armadillo package.
I have a matrix A with many more rows than columns (5000 by 100, for example), so it is overdetermined.
I want to find the x that minimizes the squared error of A*x = b.
If I use Armadillo's solve function on my data, as in x = solve(A, b), the error (A*x - b)^2 is sometimes way too high.
If, on the other hand, I solve for x with the analytical form x = (A^T * A)^-1 * A^T * b, the results are always right.
The results for x in both cases can differ by 10 orders of magnitude.
I had thought that Armadillo would use this analytical form in the background if the system is overdetermined.
Now I would like to understand why these two methods give such different results.
I wanted to give a short example program, but I can't reproduce this behavior with one.
I thought about posting the matrix here, but at 5000 by 100 it's also very big. I can provide the values for which this happens, though, if needed.
So, as a short background:
The matrix I get from my program is the numerically solved response of a nonlinear oscillator, into which I put information by wiggling one of the system's parameters.
Because the influence of this parameter on the system is small, the values of my different rows are very similar, but never the same; otherwise Armadillo should throw an error.
I still think that this is the problem, but the solve function never threw any error.
Another thing that confuses me is that in a short example program with a random matrix, the analytical form is way slower than the solve function.
But in my program, both are nearly identical in speed.
I guess this has something to do with the numerical convergence of the pseudoinverse and the special case of my matrix, but I don't know enough about how Armadillo works to say.
I hope someone can help me with this problem; thanks a lot in advance.
Thanks for the replies. I think I figured the problem out and wanted to give some feedback for everybody who runs into the same problem.
The Armadillo solve function gives me the x that minimizes (A*x-b)^2.
I looked at the values of x and they are sometimes in the magnitude of 10^13.
This comes from the fact that the rows of my matrix change only slightly, so they are nearly linearly dependent, but not exactly.
Because of that, I was at the limit of the numerical precision of my doubles, and as a result my error sometimes jumped around.
If I instead feed the rearranged analytical form (A^T * A) * x = A^T * b to the solve function, this problem doesn't occur anymore, because the fitted values of x are in the magnitude of 10^4. The least-squares error is a little higher, but that is okay, as I want to avoid overfitting.
I now additionally added Tikhonov regularization by solving (A^T * A + lambda * I) * x = A^T * b with Armadillo's solve function.
Now the weight vectors are on the order of 1, and the error barely changes compared to the form without regularization.
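For reference, that last regularized solve looks something like this in Armadillo, with A and b as the matrix and right-hand side from the post and lambda a placeholder value:
// Tikhonov-regularized normal equations: (A^T A + lambda I) x = A^T b.
double lambda = 1e-3;  // placeholder; tune against your data
arma::mat AtA = A.t() * A;
arma::vec x = arma::solve(AtA + lambda * arma::eye(AtA.n_rows, AtA.n_cols),
                          A.t() * b);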
I have a linear algebraic equation of the form Ax = By, where A is a 6x5 matrix, x a vector of size 5, B a 6x6 matrix and y a vector of size 6. A, B and y are known, and their values arrive in real time from sensors. x is unknown and has to be found. One solution is the least-squares estimate x = [(A^T*A)^-1]*(A^T)*B*y, which is the conventional solution of such linear algebraic equations. I used Eigen's QR decomposition to solve this as below:
matrixA = getMatrixA();
matrixB = getMatrixB();
vectorY = getVectorY();
//LSE Solution
Eigen::ColPivHouseholderQR<Eigen::MatrixXd> dec1(matrixA);
vectorX = dec1.solve(matrixB*vectorY);
Everything is fine until now. But when I check the error e = Ax - By, it is not always zero. The error is not very big, but not ignorable either. Is there any other type of decomposition that is more reliable? I have gone through a reference page but could not understand its meaning or how to implement it. Below are the lines from that reference describing how to solve the problem. Could anybody suggest how to implement this?
The solution of such equations Ax = By is obtained by forming the error vector e = Ax - By and then finding the unknown vector x that minimizes the weighted error e^T*W*e, where W is a weighting matrix. For simplicity, this weighting matrix is chosen to be of the form W = K*S, where S is a constant diagonal scaling matrix and K is a scalar weight. Hence the solution to the equation becomes
x = [(A^T*W*A)^-1]*(A^T)*W*B*y
I did not understand how to form the matrix W.
Your statement "But when I check the error e = Ax - By, it is not always zero" will almost always be true, regardless of your technique or what weighting you choose. When you have an overdetermined system, you are basically trying to fit a straight line to a slew of points. Unless, by chance, all the points can be placed exactly on a single perfectly straight line, there will be some error. So no matter what technique you use to choose the line (weights and so on), you will always have some error if the points are not collinear. The alternative would be to use some kind of spline, or in higher dimensions to allow for warping. In those cases, you can choose to fit all the points exactly to a more complicated shape, and hence end up with zero error.
So the choice of a weight matrix simply changes which straight line you get by giving each point a slightly different weight; it will never completely remove the error. But if there are a few particular points you care about more than the others, you can give the error on those points a higher weight when choosing the least-squares fit.
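If you still want to try the weighted form from the quoted reference, here is a sketch in Eigen reusing matrixA, matrixB and vectorY from the question; the weights on W's diagonal are made up purely to show the mechanics:
// Weighted least squares via the normal equations:
// x = (A^T W A)^-1 A^T W B y, with W diagonal (per-equation weights).
Eigen::VectorXd w(6);
w << 1, 1, 1, 1, 1, 10;  // hypothetical: trust equation 6 the most
Eigen::MatrixXd W = w.asDiagonal();
Eigen::VectorXd rhs = matrixB * vectorY;
Eigen::VectorXd x = (matrixA.transpose() * W * matrixA)
                        .ldlt()
                        .solve(matrixA.transpose() * W * rhs);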
For spline fitting see:
http://en.wikipedia.org/wiki/Spline_interpolation
For the really nicest spline-curve interpolation you can use centripetal Catmull-Rom, which, in addition to finding a curve that fits all the points, prevents the unnecessary loops and self-intersections that can sometimes come up during abrupt changes in the data direction.
Catmull-rom curve with no cusps and no self-intersections
Here's the problem:
I am currently trying to create a control system which is required to find a solution to a series of complex linear equations without a unique solution.
My problem arises because there will only ever be six equations, while there may be upwards of 20 unknowns (usually far more than six). Of course, this will not yield an exact solution through standard Gaussian elimination or by reducing the matrix to reduced row echelon form.
However, I think I may be able to constrain things further and get a more accurate solution, because I know that none of the unknowns can have a value smaller than zero or greater than one, while each is free to take any value in between.
Of course, I am trying to create code that finds a correct solution, but in the case that there are multiple combinations yielding satisfactory results, I would want to minimize the sum of (value of unknown * efficiency constant) over all unknowns, i.e. the sum of x_i*e_i for i = 0 to n; still, finding an accurate solution has the greater priority.
Performance is also important, due to the fact that this algorithm may need to be run several times per second.
So, does anyone have any ideas to help me on implementing this?
Edit: You might just want to stick to linear programming with equality and inequality constraints, but here's an interesting exact solution that does not incorporate the constraint that your unknowns are between 0 and 1.
Here's a powerpoint discussing your problem: http://see.stanford.edu/materials/lsoeldsee263/08-min-norm.pdf
I'll translate your problem into math to make things a bit easier to figure out:
You have a 6x20 matrix A and a vector x with 20 elements. You want to minimize (x^T)e subject to Ax = y. According to the slides, if you were just minimizing the norm of x, the answer would be A^T(AA^T)^(-1)y. I'll take another look at this as soon as I get the chance and see what the solution to minimizing (x^T)e (i.e. your specific problem) is.
Edit: I looked in the powerpoint some more and near the end there's a slide entitled "General norm minimization with equality constraints". I am going to switch the notation to match the slide's:
Your problem is that you want to minimize ||Ax - b||, where b = 0, A is your e vector (as a row), and x holds the 20 unknowns, subject to Cx = d. Apparently the answer is:
x=(A^T A)^-1 (A^T b -C^T(C(A^T A)^-1 C^T)^-1 (C(A^T A)^-1 A^Tb - d))
It's not pretty, but it's not as bad as you might think. There really aren't that many calculations. For example, (A^T A)^-1 only needs to be calculated once, after which you can reuse the answer. And your matrices aren't that big.
Note that I didn't incorporate the constraint that the elements of x are within [0,1].
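For completeness, the plain minimum-norm case from the earlier slides (minimize ||x|| subject to Ax = y, still ignoring the [0,1] bounds) is short enough to sketch in Eigen with placeholder names:
// Minimum-norm solution x = A^T (A A^T)^-1 y for a wide, full-row-rank A.
Eigen::MatrixXd A(6, 20);  // filled elsewhere
Eigen::VectorXd y(6);      // filled elsewhere
Eigen::VectorXd x = A.transpose() * (A * A.transpose()).ldlt().solve(y);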
It looks like the solution for what I am doing is linear programming. It is starting to come back to me, but if I have other problems I will post them in their own dedicated questions instead of turning this one into an encyclopedia.
I have a somewhat complicated algorithm that requires the fitting of a quadric to a set of points. This quadric is given by its parametrization (u, v, f(u,v)), where f(u,v) = au^2+bv^2+cuv+du+ev+f.
The coefficients of the f(u,v) function need to be found, since I have a set of exactly 6 constraints this function should obey. The problem is that this set of constraints, although yielding a problem of the form A*x = b, is not well behaved enough to guarantee a unique solution.
Thus, to cut it short, I'd like to use alglib's facilities to somehow either determine A's pseudoinverse or directly find the best fit for the x vector.
Apart from computing the SVD, is there a more direct algorithm implemented in this library that can solve a system in a least-squares sense (and also apart from the naive inv(transpose(A)*A)*transpose(A)*b formula for general least-squares problems where A is not a square matrix)?
Found the answer through some careful documentation browsing:
rmatrixsolvels( A, noRows, noCols, b, singularValueThreshold, info, solverReport, x)
The documentation states that the singular value threshold is a clamping threshold that sets to 0 any singular value of the SVD's S matrix falling below it. Thus it should be a scalar between 0 and 1.
Hopefully, it will help someone else too.
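In case it saves someone the same browsing, here is a sketch of how the call fits together in the C++ interface; A_data and b_data are hypothetical raw arrays of your coefficients, and the exact parameter types should be checked against your alglib version:
// Least-squares solve via alglib's SVD-based rmatrixsolvels (sketch).
alglib::real_2d_array a;  a.setcontent(noRows, noCols, A_data);  // row-major
alglib::real_1d_array b;  b.setcontent(noRows, b_data);
alglib::real_1d_array x;
alglib::ae_int_t info;
alglib::densesolverlsreport rep;
alglib::rmatrixsolvels(a, noRows, noCols, b,
                       1.0e-10 /* singular value threshold */, info, rep, x);
// info > 0 signals success; x then holds the least-squares solution.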
I'm having difficulty coming up with the method by which a program can find the rank of a matrix. In particular, I don't fully understand how you can make sure the program would catch all cases of linear combinations resulting in dependencies.
The general idea of how to solve this is what I'm interested in. However, if you want to take the answer a step further, I'm specifically looking for the solution for square matrices only. Also, the code would be in C++.
Thanks for your time!
General process:
matrix = 'your matrix you want to find rank of'
m2 = rref(matrix)
rank = number_non_zero_rows(m2)
where rref(matrix) is a function that does your run-of-the-mill Gaussian elimination
number_non_zero_rows(m2) is a function that sums the number of rows with non-zero entries
Your concern about catching all cases of linear combinations resulting in dependencies is taken care of by the rref (Gaussian elimination) step, as sketched below. Incidentally, this works no matter what the dimensions of the matrix are.
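A minimal C++ sketch of that process, with partial pivoting for numerical sanity (the tolerance is a judgment call to tune):
#include <cmath>
#include <vector>

// Rank = number of pivots found by Gaussian elimination with partial pivoting.
int matrixRank(std::vector<std::vector<double>> m, double tol = 1e-10) {
    const int rows = static_cast<int>(m.size());
    const int cols = rows ? static_cast<int>(m[0].size()) : 0;
    int r = 0;  // next pivot row
    for (int c = 0; c < cols && r < rows; ++c) {
        int piv = r;  // pick the largest |entry| in column c at or below row r
        for (int i = r + 1; i < rows; ++i)
            if (std::fabs(m[i][c]) > std::fabs(m[piv][c])) piv = i;
        if (std::fabs(m[piv][c]) < tol) continue;  // numerically zero column
        std::swap(m[r], m[piv]);
        for (int i = r + 1; i < rows; ++i) {  // eliminate below the pivot
            const double f = m[i][c] / m[r][c];
            for (int j = c; j < cols; ++j) m[i][j] -= f * m[r][j];
        }
        ++r;  // one more non-zero row in the echelon form
    }
    return r;
}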