I have a linear algebraic equation of the form Ax = By, where A is a 6x5 matrix, x is a vector of size 5, B is a 6x6 matrix, and y is a vector of size 6. A, B and y are known; their values arrive in real time from sensors. x is unknown and has to be found. One solution is the least squares estimate, x = [(A^T*A)^-1]*(A^T)*B*y, which is the conventional solution of such linear algebraic equations. I used Eigen's QR decomposition to solve this as below:
matrixA = getMatrixA();
matrixB = getMatrixB();
vectorY = getVectorY();
//LSE Solution
Eigen::ColPivHouseholderQR<Eigen::MatrixXd> dec1(matrixA);
vectorX = dec1.solve(matrixB*vectorY);
Everything is fine until now, but when I check the error e = Ax - By, it is not always zero. The error is not very big, but it is not negligible either. Is there another type of decomposition that is more reliable? I have gone through a reference page but could not understand what it means or how to implement it. Below are the lines from that reference on how to solve the problem. Could anybody suggest how to implement this?
The solution of such equations Ax = By is obtained by forming the error vector e = Ax - By and then finding the unknown vector x that minimizes the weighted error e^T*W*e, where W is a weighting matrix. For simplicity, this weighting matrix is chosen to be of the form W = K*S, where S is a constant diagonal scaling matrix and K is a scalar weight. Hence the solution to the equation becomes
x = [(A^T*W*A)^-1]*(A^T)*W*B*y
I did not understand how to form the matrix W.
Your statement "But when I check the error e = Ax-By, it's not zero always" will almost always be true, regardless of your technique or of what weighting you choose. When you have an over-determined system, you are basically trying to fit a straight line to a slew of points. Unless, by chance, all the points happen to lie exactly on a single perfectly straight line, there will be some error. So no matter what technique you use to choose the line (weights and so on), you will always have some error if the points are not collinear. The alternative would be to use some kind of spline, or in higher dimensions to allow for warping. In those cases, you can choose to fit all the points exactly with a more complicated shape, and hence end up with zero error.
So the choice of a weight matrix simply changes which straight line you will use by giving each point a slightly different weight; it will never completely remove the error. But if there are a few particular points that you care about more than the others, you can give the error on those points a higher weight when choosing the least-squares fit.
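If it helps, here is a minimal sketch of how the weighted solution x = [(A^T*W*A)^-1]*(A^T)*W*B*y from the reference could be implemented with Eigen. The function and variable names (solveWeightedLSE, perRowWeights) are my own, not from the original post; the weights are simply one scalar per equation placed on the diagonal of W, and with all weights equal to 1 it reduces to the ordinary least-squares solution.

#include <Eigen/Dense>

// Sketch of the weighted least-squares estimate x = (A^T W A)^-1 A^T W B y.
// perRowWeights holds one weight per equation (per row of A); W = K*S from the
// reference is just this diagonal matrix, with the scalar K folded into it.
Eigen::VectorXd solveWeightedLSE(const Eigen::MatrixXd& A,             // 6x5
                                 const Eigen::MatrixXd& B,             // 6x6
                                 const Eigen::VectorXd& y,             // size 6
                                 const Eigen::VectorXd& perRowWeights) // size 6
{
    Eigen::MatrixXd W = perRowWeights.asDiagonal();

    // Weighted normal equations: (A^T W A) x = A^T W B y.
    Eigen::MatrixXd lhs = A.transpose() * W * A;
    Eigen::VectorXd rhs = A.transpose() * W * (B * y);

    // lhs is symmetric, so an LDL^T factorization is a reasonable choice here.
    return lhs.ldlt().solve(rhs);
}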
For spline fitting see:
http://en.wikipedia.org/wiki/Spline_interpolation
For the nicest spline-curve interpolation you can use centripetal Catmull-Rom, which, in addition to finding a curve that fits all the points, will prevent the unnecessary loops and self-intersections that can sometimes come up during abrupt changes in the data direction.
Catmull-rom curve with no cusps and no self-intersections
Related
I am writing some simple unit tests for a math library.
To decide whether the library generates good results I have to compare them with expected ones. Because of rounding etc., even a good result will differ a bit from the expected one (e.g. 0.701 when 0.700 was expected).
The problem is, I have to decide how similar two vectors are. I want to describe that similarity as an error proportion (for a single number it would be e.g. errorScale(3.0f /* generated */, 1.5f /* expected */) = 3.0f/1.5f = 2.0f == 200%).
Which method should I use to determine the similarity of 2D, 3D and 4D (quaternions) vectors?
There's no universally good measure. In particular, for addition the absolute error is better while for multiplication the relative error is better.
For vectors the "relative error" can also be considered in terms of length and direction. If you think about it, the "acceptable outcomes" form a small area around the exact result. But what's the shape of this area? Is it an axis-aligned square (absolute errors in the x and y directions)? That privileges a specific vector basis. A circle might be a better shape.
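As one possible concrete interpretation (the names compareVectors, lengthRatio and angleRadians are my own, not from the answer), you could report the length error and the direction error separately; this works unchanged for 2D, 3D and 4D vectors. For quaternions, keep in mind that q and -q represent the same rotation, so you may want to take the smaller of the two angular errors.

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

struct VectorError {
    double lengthRatio;   // |generated| / |expected|, e.g. 2.0 means 200%
    double angleRadians;  // angle between the two vectors
};

// Compares a generated vector against the expected one by length and direction.
VectorError compareVectors(const Eigen::VectorXd& generated,
                           const Eigen::VectorXd& expected)
{
    VectorError e;
    e.lengthRatio = generated.norm() / expected.norm();

    double cosAngle = generated.dot(expected) / (generated.norm() * expected.norm());
    cosAngle = std::max(-1.0, std::min(1.0, cosAngle)); // guard acos against rounding
    e.angleRadians = std::acos(cosAngle);
    return e;
}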
I am pretty new to MATLAB and programming in general, so apologies in advance if this is too trivial of a question.
Here is my dilemma: I have a system where two vectors are running away from each other out of the origin, one with a magnitude of 200 and the other with a magnitude of 150; these figures are given.
After the user inputs their magnitudes and angles in a cartesian coordinate system, the angles are converted to radians and the following calculations are performed:
compA = MagA*[cos(AngleA), sin(AngleA)];
compB = MagB*[cos(AngleB), sin(AngleB)];
AngleAwrtB = compA-compB;
Here compA and compB are the x and y components of the two vectors' end points, and AwrtB means "A with respect to B". MagA and MagB are the magnitudes of each vector.
So I now have the angle of vector A with respect to B, now I need to find the magnitude of vector A with respect to vector B, any ideas on how I could do this? I want to use something like the following:
MagAwrtB = MagA-MagB
I am just worried that this is mathematically incorrect, that there is some other trigonometric relation that I am missing.
Any help would be greatly appreciated.
Well, this is pretty much the definition of the dot product, which can be computed in MATLAB with the dot function (or manually with the formula; in 2D it's straightforward).
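In other words, the magnitude of A with respect to B is the length of the relative vector compA - compB, not the difference of the two magnitudes, and that length comes from the dot product of the relative vector with itself. A small sketch of the computation (written in C++ here only for illustration; in MATLAB the dot and norm functions do the same thing, and the names magnitudeAwrtB and rel are mine):

#include <Eigen/Dense>
#include <cmath>

// Length of the relative vector "A with respect to B" via the dot product.
double magnitudeAwrtB(const Eigen::Vector2d& compA, const Eigen::Vector2d& compB)
{
    Eigen::Vector2d rel = compA - compB;   // relative vector, A with respect to B
    return std::sqrt(rel.dot(rel));        // equivalent to rel.norm()
}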
Best
It's been a while since I've handled any math and I'm a bit rusty, so please be nice if I ask a stupid question.
Problem: I have n pairs of lines, which are saved in memory as arrays of 2D points, so there are no explicit functions. I have to check whether the lines in each pair are parallel, which should be a pretty easy task because it's sufficient to check whether their slopes are the same.
To do this in an algorithm, I check the slope of the line between two of its points (which I have), and since I don't need extreme accuracy, I can use the simple formula:
m = (y2-y1)/(x2-x1)
But obviously this leads me to the big problem of x2 = x1. I can't give a default value for this case... how can I work around it?
Another way to compare slopes in 2D is the following:
m1 = (y2-y1)/(x2-x1)
m2 = (y4-y3)/(x4-x3)
Since m1 = m2 when the lines are parallel, cross-multiplying gives
(y2-y1)*(x4-x3) = (y4-y3)*(x2-x1) if the lines are parallel.
This avoids the divide by zero and is more efficient, as it avoids floating-point division.
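A minimal sketch of that check (the function name areParallel and the tolerance eps are my own; with floating-point coordinates an exact equality test rarely holds, so a small tolerance is used, and in practice you may want to scale it by the segment lengths):

#include <cmath>

// Segments (x1,y1)-(x2,y2) and (x3,y3)-(x4,y4) are treated as parallel when the
// cross-multiplied slopes agree within eps. Safe even when x2 == x1 or x4 == x3.
bool areParallel(double x1, double y1, double x2, double y2,
                 double x3, double y3, double x4, double y4,
                 double eps = 1e-9)
{
    double cross = (y2 - y1) * (x4 - x3) - (y4 - y3) * (x2 - x1);
    return std::fabs(cross) <= eps;
}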
I have a somewhat complicated algorithm that requires the fitting of a quadric to a set of points. This quadric is given by its parametrization (u, v, f(u,v)), where f(u,v) = au^2+bv^2+cuv+du+ev+f.
The coefficients of the f(u,v) function need to be found, since I have a set of exactly 6 constraints this function should obey. The problem is that this set of constraints, although yielding a problem of the form A*x = b, is not well behaved enough to guarantee a unique solution.
Thus, to cut it short, I'd like to use alglib's facilities to somehow either determine A's pseudoinverse or directly find the best fit for the x vector.
Apart from computing the SVD, is there a more direct algorithm implemented in this library that can solve a system in a least-squares sense (again, apart from the SVD or from using the naive inv(transpose(A)*A)*transpose(A)*b formula for general least-squares problems where A is not a square matrix)?
Found the answer through some careful documentation browsing:
rmatrixsolvels( A, noRows, noCols, b, singularValueThreshold, info, solverReport, x)
The documentation states that the singular value threshold is a clamping threshold that sets any singular value of the SVD's S matrix to 0 if that value is below it. Thus it should be a scalar between 0 and 1.
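For what it's worth, here is a rough sketch of a call, based only on the signature quoted above; exact header names, array types and the meaning of info can differ between alglib versions, so check the library's documentation rather than taking this verbatim:

#include "solvers.h"   // alglib dense solvers; the header layout varies by version

// Least-squares solve of A*x = b with small singular values clamped to zero.
void solveLeastSquares(const alglib::real_2d_array& A,
                       const alglib::real_1d_array& b,
                       alglib::real_1d_array& x)
{
    alglib::ae_int_t info;
    alglib::densesolverlsreport rep;
    double singularValueThreshold = 1e-10;  // in [0, 1], as described above
    alglib::rmatrixsolvels(A, A.rows(), A.cols(), b, singularValueThreshold,
                           info, rep, x);
    // info and rep report whether and how well the solve succeeded.
}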
Hopefully, it will help someone else too.
My program tries to solve a system of linear equations. In order to do that, it assembles a matrix coeff_matrix and a vector value_vector, and uses Eigen to solve the system like this:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be either over- or under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the last three equations correctly but also gives values for a and b.
What I would like to achieve is that only the equations that have exactly one solution are solved, and the remaining ones (the first equation here) are retained in the system.
In other words, I'm looking for a method to find out which equations in a given system can be solved at this time, and which cannot because they would have more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution" will be solved, and the solution is the pseudoinverse solution. It will also give you the null space (where the infinitely many solutions come from) and the left null space (where inconsistency, i.e. no solution, comes from).
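If you stay within Eigen, a minimal sketch of this idea could look like the following (the function name svdSolve and the threshold value are my own choices): the rank tells you how many directions of the unknown vector are actually pinned down, and solve() returns the minimum-norm least-squares solution, i.e. the pseudoinverse applied to the right-hand side.

#include <Eigen/Dense>
#include <iostream>

void svdSolve(const Eigen::MatrixXd& coeff_matrix, const Eigen::VectorXd& value_vector)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(coeff_matrix,
                                          Eigen::ComputeThinU | Eigen::ComputeThinV);
    svd.setThreshold(1e-10);  // treat tiny singular values as zero

    Eigen::VectorXd sol = svd.solve(value_vector);  // pseudoinverse solution
    std::cout << "rank " << svd.rank() << " out of "
              << coeff_matrix.cols() << " unknowns\n"
              << "solution:\n" << sol << "\n";
}

If the rank is smaller than the number of unknowns, some of them (a and b in the example above) are not uniquely determined.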
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu(); // full-pivoting LU of the coefficient matrix
Eigen::VectorXd sol_vector = lu.solve(value_vector);             // one particular solution, if any exists
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();       // kernel() spans the null space; sum its basis vectors row-wise
AFAICS, the null_vector rows corresponding to uniquely determined unknowns are 0s, while the ones corresponding to indeterminate unknowns are 1s. I can reproduce this throughout all my examples with the default threshold Eigen has.
However, I'm not sure if I'm doing something correct or just noticed a random pattern.
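That pattern is essentially right: an unknown is uniquely determined exactly when the corresponding row of the null-space basis is zero. Summing the rows can in principle cancel to zero by accident, so a slightly more robust variant of the same idea is to test each row's norm (a sketch; the name determinedVariables and the tolerance are mine):

#include <Eigen/Dense>
#include <vector>

// For each unknown, true means the system pins it down uniquely (assuming the
// system is consistent); false means it lies in the null space and is free.
std::vector<bool> determinedVariables(const Eigen::MatrixXd& coeff_matrix,
                                      double tol = 1e-10)
{
    Eigen::FullPivLU<Eigen::MatrixXd> lu(coeff_matrix);
    Eigen::MatrixXd kernel = lu.kernel();  // columns form a basis of the null space

    std::vector<bool> determined(coeff_matrix.cols());
    for (int i = 0; i < kernel.rows(); ++i)
        determined[i] = kernel.row(i).norm() < tol;
    return determined;
}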
What you need is to calculate the determinant of your system. If the determinant is 0, then you have an infinite number of solutions (or none at all). If the determinant is very small, a solution exists, but I wouldn't trust the solution found by a computer (it can lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with rows of 0s if there are an infinite number of solutions.
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution, or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with a non-zero determinant. Solve for this matrix and check the solution. If the solution doesn't fit the remaining equations, it means you have no solution. If the solution fits, it means the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimensions of the matrix, remove rows and columns with only 0s.
As for Gaussian elimination, it should work directly with non-square matrices. However, this time you should check that the number of non-empty rows (i.e. rows with some non-zero values) is equal to the number of variables. If it is less, you have an infinite number of solutions, and if it is more, you don't have any solutions.
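As a concrete version of this rank counting (a sketch with Eigen, since the question already uses it; the function name classifySystem is mine), you can compare the rank of A with the rank of the augmented matrix [A|b] and with the number of unknowns instead of forming determinants explicitly:

#include <Eigen/Dense>
#include <iostream>

void classifySystem(const Eigen::MatrixXd& A, const Eigen::VectorXd& b)
{
    Eigen::MatrixXd augmented(A.rows(), A.cols() + 1);
    augmented << A, b;   // the augmented matrix [A|b]

    Eigen::FullPivLU<Eigen::MatrixXd> luA(A);
    Eigen::FullPivLU<Eigen::MatrixXd> luAb(augmented);

    if (luA.rank() < luAb.rank())
        std::cout << "no solution (the equations are inconsistent)\n";
    else if (luA.rank() < A.cols())
        std::cout << "infinitely many solutions (under-determined)\n";
    else
        std::cout << "exactly one solution\n";
}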