Computing slope in cusps in C++

it's been a while since I've handled some math stuff and I'm a bit rusty, please be nice if I ask a stupid question.
Problem: I have n pairs of lines, which are stored in memory as arrays of 2D points, so there are no explicit functions. I have to check whether the lines in each pair are parallel, and this should be a pretty easy task because it's sufficient to check whether their slopes are the same.
To do this in an algorithm, I check the slope of the line between two points of each polyline (which I have), and since I don't need extreme accuracy, I can use the simple formula:
m = (y2-y1)/(x2-x1)
But obviously this leads me to the big problem of x2 = x1. I can't give a default value for this case... how can I work around it?

Another way to compare slopes in 2D is the following:
m1 = (y2-y1)/(x2-x1)
m2 = (y4-y3)/(x4-x3)
Since parallel lines have m1 = m2, cross-multiplying gives:
(y2-y1)*(x4-x3) = (y4-y3)*(x2-x1) if the lines are parallel.
This avoids division by zero entirely and is more efficient, since it needs no floating-point division.
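A minimal sketch of that check in C++ (the Point struct and the epsilon tolerance are illustrative assumptions, not part of the original question):

#include <cmath>

struct Point { double x, y; };

// True if the line through p1,p2 is parallel to the line through p3,p4.
// Uses the cross-multiplied slope test, so vertical lines need no special case.
bool areParallel(Point p1, Point p2, Point p3, Point p4, double eps = 1e-9)
{
    double lhs = (p2.y - p1.y) * (p4.x - p3.x);
    double rhs = (p4.y - p3.y) * (p2.x - p1.x);
    return std::fabs(lhs - rhs) < eps;   // tolerance instead of exact equality
}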

Related

Least Squares Solution of Overdetermined Linear Algebraic Equation Ax = By

I have a linear algebraic equation of the form Ax = By, where A is a 6x5 matrix, x is a vector of size 5, B is a 6x6 matrix and y is a vector of size 6. A, B and y are known and their values are updated in real time from sensors; x is unknown and has to be found. One solution is the least squares estimate, x = [(A^T*A)^-1]*(A^T)*B*y, which is the conventional solution of such linear algebraic equations. I used Eigen's QR decomposition to solve this as below:
matrixA = getMatrixA();
matrixB = getMatrixB();
vectorY = getVectorY();
//LSE Solution
Eigen::ColPivHouseholderQR<Eigen::MatrixXd> dec1(matrixA);
vectorX = dec1.solve(matrixB * vectorY);
Everything is fine until now. But when I check the error e = Ax-By, it is not always zero. The error is not very big, but it is not negligible either. Is there any other type of decomposition which is more reliable? I have gone through one reference page but could not understand what it means or how to implement it. Below are the lines from the reference describing how to solve the problem. Could anybody suggest how to implement this?
The solution of such equations Ax = By is obtained by forming the error vector e = Ax-By and then finding the unknown vector x that minimizes the weighted error (e^T*W*e), where W is a weighting matrix. For simplicity, this weighting matrix is chosen to be of the form W = K*S, where S is a constant diagonal scaling matrix and K is a scalar weight. Hence the solution to the equation becomes
x = [(A^T*W*A)^-1]*(A^T)*W*B*y
I did not understand how to form the matrix W.
Your statement "But when I check the error e = Ax-By, it's not always zero" will almost always be true, regardless of your technique or what weighting you choose. When you have an overdetermined system, you are basically trying to fit a straight line to a slew of points. Unless, by chance, all the points can be placed exactly on a single perfectly straight line, there will be some error. So no matter what technique you use to choose the line (weights and so on), you will always have some error if the points are not collinear. The alternative would be to use some kind of spline, or in higher dimensions to allow for warping. In those cases, you can choose to fit all the points exactly to a more complicated shape, and hence end up with zero error.
So the choice of a weight matrix simply changes which straight line you get by giving each point a slightly different weight; it will never completely remove the error. But if there are a few particular points that you care more about than the others, you can give the error on those points a higher weight when choosing the least-squares fit.
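As a rough sketch of the weighted formula x = [(A^T*W*A)^-1]*(A^T)*W*B*y with Eigen, using the question's matrixA, matrixB, vectorY and vectorX (the diagonal weights below are placeholder values, not something prescribed by the reference):

// With a diagonal W, weighting is equivalent to scaling each equation (row)
// by sqrt(w_i) and then solving the ordinary least-squares problem.
Eigen::VectorXd weights(6);
weights << 1, 1, 1, 1, 5, 5;   // example: trust the last two equations more
Eigen::VectorXd sqrtW = weights.array().sqrt().matrix();
Eigen::MatrixXd Aw = sqrtW.asDiagonal() * matrixA;              // sqrt(W) * A
Eigen::VectorXd bw = sqrtW.asDiagonal() * (matrixB * vectorY);  // sqrt(W) * B * y
vectorX = Aw.colPivHouseholderQr().solve(bw);                   // minimizes (Ax-By)^T W (Ax-By)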
For spline fitting see:
http://en.wikipedia.org/wiki/Spline_interpolation
For the really nicest spline curve interpolation you can use Centripetal Catmull-Rom, which, in addition to finding a curve that fits all the points, prevents the unnecessary loops and self-intersections that can sometimes come up during abrupt changes in the data direction.
Catmull-rom curve with no cusps and no self-intersections

Alglib: solving A * x = b in a least squares sense

I have a somewhat complicated algorithm that requires the fitting of a quadric to a set of points. This quadric is given by its parametrization (u, v, f(u,v)), where f(u,v) = au^2+bv^2+cuv+du+ev+f.
The coefficients of the f(u,v) function need to be found, since I have a set of exactly 6 constraints this function should obey. The problem is that this set of constraints, although yielding a problem of the form A*x = b, is not well behaved enough to guarantee a unique solution.
Thus, to cut it short, I'd like to use alglib's facilities to somehow either determine A's pseudoinverse or directly find the best fit for the x vector.
Apart from computing the SVD, is there a more direct algorithm implemented in this library that can solve a system in a least squares sense (again, apart from the SVD or from using the naive inv(transpose(A)*A)*transpose(A)*b formula for general least squares problems where A is not a square matrix)?
Found the answer through some careful documentation browsing:
rmatrixsolvels( A, noRows, noCols, b, singularValueThreshold, info, solverReport, x)
The documentation states that the singular value threshold is a clamping threshold that sets to 0 any singular value from the SVD's S matrix that falls below it. Thus it should be a scalar between 0 and 1.
Hopefully, it will help someone else too.
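For reference, a rough usage sketch based on that signature; the header name and the array/report types are assumptions that may differ between ALGLIB versions, so check your local documentation:

#include "solvers.h"   // header name varies across ALGLIB distributions
using namespace alglib;

real_2d_array A = "[[1,0],[0,1],[1,1]]";   // toy data: 3 equations, 2 unknowns
real_1d_array b = "[1,2,3.1]";
real_1d_array x;
ae_int_t info;
densesolverlsreport rep;

// Singular values below the (relative) threshold 1.0e-10 are clamped to 0.
rmatrixsolvels(A, 3, 2, b, 1.0e-10, info, rep, x);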

Weighted linear least square for 2D data point sets

My question is an extension of the discussion How to fit the 2D scatter data with a line with C++. Now I want to extend my question further: when estimating the line that fits 2D scatter data, it would be better if we could treat each scatter point differently. That is to say, if a scatter point is far away from the line, we can give the point a low weight, and vice versa. The question then becomes: given an array of 2D scatter points as well as their weighting factors, how can we estimate the line that best fits them? A good implementation of this method can be found in this article (weighted least squares regression). However, the implementation of the algorithm in that article is too complicated as it involves matrix calculation. I am therefore trying to find a method without matrix calculation. The algorithm is an extension of simple linear regression, and in order to illustrate it, I wrote the following MATLAB code:
function line = weighted_least_squre_for_line(x,y,weighting)
% Weighted simple linear regression: fit y = beta*x + alpha with per-point weights.
part1 = sum(weighting.*x.*y)*sum(weighting(:));
part2 = sum(weighting.*x)*sum(weighting.*y);
part3 = sum(x.^2.*weighting)*sum(weighting(:));
part4 = sum(weighting.*x).^2;
beta = (part1-part2)/(part3-part4);                                 % slope
alpha = (sum(weighting.*y)-beta*sum(weighting.*x))/sum(weighting);  % intercept
% Return the line in implicit form a*x + b*y + c = 0.
a = beta;
c = alpha;
b = -1;
line = [a b c];
end
In the above code, x, y and weighting represent the x-coordinates, y-coordinates and the weighting factors respectively. I have tested the algorithm with several examples but am still not sure whether it is right, as this method gives a different result from polyfit, which relies on matrix calculation. I am posting the implementation here for your advice. Do you think it is a correct implementation? Thanks!
If you think it is a good idea to downweight points that are far from the line, you might be attracted by http://en.wikipedia.org/wiki/Least_absolute_deviations, because one way of calculating this is via http://en.wikipedia.org/wiki/Iteratively_re-weighted_least_squares, which will give less weight to points far from the line.
If you think all your points are "good data", then it would be a mistake to weight them naively according to their distance from your initial fit. However, it's a fairly common practice to discard "outliers": if a few data points are implausibly far from the fit, and you have reason to believe that there's an error mechanism that could generate a small subset of "bad" datapoints, you could simply remove the implausible points from the dataset to get a better fit.
As far as the math is concerned, I would recommend biting the bullet and trying to figure out the matrix math. Perhaps you could find a different article, or a book which has a better presentation. I will not comment on your Matlab code, except to say that it looks like you will have some precision problems when subtracting part4 from part3, and probably part2 from part1 as well.
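One common way to sidestep that cancellation is to subtract the weighted means before forming the sums; a minimal C++ sketch of that centered form (the function name and plain std::vector inputs are just for illustration):

#include <vector>

// Weighted simple linear regression in centered form: y = beta*x + alpha.
// Centering on the weighted means avoids the part3 - part4 style cancellation.
void weightedLineFit(const std::vector<double>& x, const std::vector<double>& y,
                     const std::vector<double>& w, double& beta, double& alpha)
{
    double sw = 0, mx = 0, my = 0;
    for (size_t i = 0; i < x.size(); ++i) { sw += w[i]; mx += w[i] * x[i]; my += w[i] * y[i]; }
    mx /= sw;  my /= sw;                         // weighted means

    double sxy = 0, sxx = 0;
    for (size_t i = 0; i < x.size(); ++i) {
        sxy += w[i] * (x[i] - mx) * (y[i] - my);
        sxx += w[i] * (x[i] - mx) * (x[i] - mx);
    }
    beta  = sxy / sxx;                           // slope
    alpha = my - beta * mx;                      // intercept
}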
Not exactly what you are asking for, but you should look into robust regression. MATLAB has the function robustfit (requires Statistics Toolbox).
There is even an interactive demo you can play with to compare regular linear regression vs. robust regression:
>> robustdemo
This shows that in the presence of outliers, robust regression tends to give better results.

Removing unsolvable equations from an underdetermined system

My program tries to solve a system of linear equations. In order to do that, it assembles matrix coeff_matrix and vector value_vector, and uses Eigen to solve them like:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the latter three equations correctly but also gives (arbitrary) values for a and b.
What I would like to achieve is that only the equations which have only one solution would be solved, and the remaining ones (the first equation here) would be retained in the system.
In other words, I'm looking for a method to find out which equations in a given system can be solved at a given time, and which cannot because they would have more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution will be solved", and the solution is the pseudoinverse solution. It will also give you the null space (where the infinitely many solutions come from) and the left null space (where inconsistency comes from, i.e. no solution).
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();
AFAICS, the null_vector rows corresponding to uniquely determined unknowns are 0 while the ones corresponding to indeterminate unknowns are 1. I can reproduce this throughout all my examples with the default threshold Eigen has.
However, I'm not sure if I'm doing something correct or just noticed a random pattern.
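One caveat with summing the kernel row-wise is that contributions from different kernel columns can cancel; a rough variant of the same idea using row norms instead, reusing the lu object and sol_vector from the snippet above (the tolerance is an arbitrary choice):

Eigen::MatrixXd kernel = lu.kernel();   // one column per free direction; rows match the unknowns

// An unknown is uniquely determined only if it has no component in the null space,
// i.e. its row of the kernel basis is (numerically) zero.
for (int i = 0; i < kernel.rows(); ++i)
{
    bool determined = kernel.row(i).norm() < 1e-12;
    // keep sol_vector(i) if determined; otherwise leave that unknown in the system
}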
What you need is to calculate the determinant of your system. If the determinant is 0, then you have an infinite number of solutions. If the determinant is very small, the solution exists, but I wouldn't trust the solution found by a computer (it will lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with rows of 0s if there are an infinite number of solutions.
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution, or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with a non-zero determinant. Solve for this matrix and check the solution. If the solution doesn't fit, it means you have no solution. If the solution fits, it means the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimension of the matrix, remove rows and columns with only 0s.
As for the Gaussian elimination, it should work directly with non-square matrices. However, this time, you should check that the number of non-empty rows (i.e. rows with some non-zero values) is equal to the number of variables. If it's fewer, you have an infinite number of solutions, and if it's more, you don't have any solution.
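A quick way to make that same distinction with Eigen, as a rough sketch (matrix names follow the question's code; the residual tolerance is an arbitrary choice):

Eigen::FullPivLU<Eigen::MatrixXd> lu(coeff_matrix);
int rank = lu.rank();

if (rank < coeff_matrix.cols())
{
    // Rank-deficient in the unknowns: infinitely many solutions (or none, if inconsistent).
}
else
{
    // Full column rank: at most one solution; check the residual to detect inconsistency.
    Eigen::VectorXd x = lu.solve(value_vector);
    bool consistent = (coeff_matrix * x - value_vector).norm() < 1e-9;
}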

Solving floating-point rounding issues C++

I develop a scientific application (simulation of chromosomes moving in a cell nucleus). The chromosomes are divided in small fragments that rotate around a random axis using 4x4 rotation matrices.
The problem is that the simulation performs hundreds of billions of rotations, therefore the floating-point rounding errors stack up and grow exponentially, so the fragments tend to "float away" and detach from the rest of the chromosome as time passes.
I use double precision with C++. The software runs on the CPU for the moment but will be ported to CUDA, and simulations can last up to one month.
I have no idea how I could renormalize the chromosome, because all fragments are chained together (you can see it as a doubly linked list), but I think that would be the best approach, if possible.
Do you have any suggestions ? I feel a bit lost.
Thank you very much,
H.
EDIT:
Added a simplified sample code.
You can assume all matrix math uses classical implementations.
// Rotate 1000000 times
for (int i = 0; i < 1000000; ++i)
{
// Pick a random section start
int istart = rand() % chromosome->length;
// Pick the end 20 segments further (cyclic)
int iend = (istart + 20) % chromosome->length;
// Build rotation axis
Vector4 axis = chromosome->segments[istart].position - chromosome->segments[iend].position;
axis.normalize();
// Build rotation matrix and translation vector
Matrix4 rotm(axis, rand() / float(RAND_MAX));
Vector4 oldpos = chromosome->segments[istart].position;
// Rotate each segment between istart and iend using rotm
for (int j = (istart + 1) % chromosome->length; j != iend; ++j, j %= chromosome->length)
{
chromosome->segments[j].position -= oldpos;
chromosome->segments[j].position.transform(rotm);
chromosome->segments[j].position += oldpos;
}
}
You need to find some constraint for your system and work to keep that within some reasonable bounds. I've done a bunch of molecular collision simulations and in those systems the total energy is conserved, so every step I double check the total energy of the system and if it varies by some threshold, then I know that my time step was poorly chosen (too big or too small) and I pick a new time step and rerun it. That way I can keep track of what's happening to the system in real time.
For this simulation, I don't know what conserved quantity you have, but if you have one, you can try to keep that constant. Remember, making your time step smaller doesn't always increase the accuracy; you need to balance the step size against the amount of precision you have. I've had numerical simulations run for weeks of CPU time and the conserved quantities were always within 1 part in 10^8, so it is possible, you just need to play around some.
Also, as Tomalak said, maybe try to always reference your system to the start time rather than to the previous step. So rather than always moving your chromosomes keep the chromosomes at their start place and store with them a transformation matrix that gets you to the current location. When you compute your new rotation, just modify the transformation matrix. It may seem silly, but sometimes this works well because the errors average out to 0.
For example, lets say I have a particle that sits at (x,y) and every step I calculate (dx, dy) and move the particle. The step-wise way would do this
t0 (x0,y0)
t1 (x0,y0) + (dx,dy) -> (x1, y1)
t2 (x1,y1) + (dx,dy) -> (x2, y2)
t3 (x2,y2) + (dx,dy) -> (x3, y3)
t4 (x3,y3) + (dx,dy) -> (x4, y4)
...
If you always reference to t0, you could do this
t0 (x0, y0) (0, 0)
t1 (x0, y0) (0, 0) + (dx, dy) -> (x0, y0) (dx1, dy1)
t2 (x0, y0) (dx1, dy1) + (dx, dy) -> (x0, y0) (dx2, dy2)
t3 (x0, y0) (dx2, dy2) + (dx, dy) -> (x0, y0) (dx3, dy3)
So at any time, tn, to get your real position you have to do (x0, y0) + (dxn, dyn)
Now for simple translation like my example, you're probably not going to win very much. But for rotation, this can be a life saver. Just keep a matrix with the Euler angles associated with each chromosome and update that rather than the actual position of the chromosome. At least this way they won't float away.
Write your formulae so that the data for timestep T does not derive solely from the floating-point data in timestep T-1. Try to ensure that the production of floating-point errors is limited to a single timestep.
It's hard to say anything more specific here without a more specific problem to solve.
The problem description is rather vague, so here are some rather vague suggestions.
Option 1:
Find some set of constraints such that (1) they should always hold, (2) if they fail, but only just, it's easy to tweak the system so that they do, (3) if they do all hold then your simulation isn't going badly crazy, and (4) when the system starts to go crazy the constraints start failing but only slightly. For instance, perhaps the distance between adjacent bits of chromosome should be at most d, for some d, and if a few of the distances are just slightly greater than d then you can (e.g.) walk along the chromosome from one end, fixing up any distances that are too big by moving the next fragment towards its predecessor, along with all its successors. Or something.
Then check the constraints often enough to be sure that any violation will still be small when caught; and when you catch a violation, fix things up. (You should probably arrange that when you fix things up, you "more than satisfy" the constraints.)
If it's cheap to check the constraints all the time, then of course you can do that. (Doing so may also enable you to do the fixup more cheaply, e.g. if it means that any violations are always tiny.)
Option 2:
Find a new way of describing the state of the system that makes it impossible for the problem to arise. For instance, maybe (I doubt this) you can just store a rotation matrix for each adjacent pair of fragments, and force it always to be an orthogonal matrix, and then let the positions of the fragments be implicitly determined by those rotation matrices.
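If you do keep explicit rotation matrices around, one standard way to stop them drifting away from orthogonality under repeated multiplication is to re-orthonormalize them periodically; a rough 3x3 Gram-Schmidt sketch (plain arrays here rather than the question's Matrix4, to stay self-contained):

#include <cmath>

// Re-orthonormalize a 3x3 rotation matrix (row-major R[row][col]) with one
// Gram-Schmidt pass, so accumulated rounding doesn't distort it over time.
void reorthonormalize(double R[3][3])
{
    auto dot = [](const double* a, const double* b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    };
    auto normalize = [&](double* v) {
        double n = std::sqrt(dot(v, v));
        v[0] /= n; v[1] /= n; v[2] /= n;
    };

    normalize(R[0]);                                    // first row: just normalize
    double d = dot(R[1], R[0]);                         // second row: remove component along the first
    for (int i = 0; i < 3; ++i) R[1][i] -= d * R[0][i];
    normalize(R[1]);
    R[2][0] = R[0][1]*R[1][2] - R[0][2]*R[1][1];        // third row: cross product of the first two
    R[2][1] = R[0][2]*R[1][0] - R[0][0]*R[1][2];
    R[2][2] = R[0][0]*R[1][1] - R[0][1]*R[1][0];
}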
Option 3:
Instead of thinking of your constraints as constraints, supply some small "restoring forces" so that when something gets out of line it tends to get pulled back towards the way it should be. Take care that when nothing is wrong the restoring forces are zero or at least very negligible, so that they don't perturb your results more badly than the original numeric errors did.
I think it depends on the compiler you are using.
The Visual Studio compiler supports the /fp switch, which controls the behavior of floating-point operations;
you can read more about it in the documentation. Basically, /fp:strict is the strictest mode.
I guess it depends on the required precision, but you could use integer-based fixed-point numbers. With this approach, you use an integer and provide your own offset for the number of decimals.
For example, with a precision of 4 decimal points, you would have
float value -> int value
1.0000 -> 10000
1.0001 -> 10001
0.9999 -> 09999
You have to be careful when you multiply and divide, and careful when you apply your precision offsets. Otherwise you can quickly get overflow errors.
1.0001 * 1.0001 becomes 10001 * 10001 / 10000
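A tiny sketch of that idea with a fixed scale of 10^4 (the FixedPoint name and the 64-bit storage are illustrative choices, not from the answer):

#include <cstdint>

// Fixed-point value with 4 implied decimal places (scale = 10000).
struct FixedPoint
{
    static const std::int64_t scale = 10000;
    std::int64_t raw;                          // 1.0001 is stored as 10001

    static FixedPoint fromDouble(double d)
    {
        return { static_cast<std::int64_t>(d * scale + (d >= 0 ? 0.5 : -0.5)) };
    }
    double toDouble() const { return static_cast<double>(raw) / scale; }

    FixedPoint operator+(FixedPoint o) const { return { raw + o.raw }; }
    // Multiplication must divide the scale back out: 10001 * 10001 / 10000.
    FixedPoint operator*(FixedPoint o) const { return { raw * o.raw / scale }; }
};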
If I read this code correctly, at no time is the distance between any two adjacent chromosome segments supposed to change. In that case, before the main loop compute the distance between each pair of adjacent points, and after the main loop, move each point if necessary to have the proper distance from the previous point.
You may need to enforce this constraint several times during the main loop, depending on circumstances.
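A rough sketch of that fix-up pass, reusing the chromosome/segments layout from the question; restLength is an assumed array holding the original inter-segment distances, and the sketch also assumes your Vector4 supports scalar multiplication:

// Walk along the chain and restore each segment to its original distance
// from its predecessor, keeping the direction the simulation produced.
for (int i = 1; i < chromosome->length; ++i)
{
    Vector4 dir = chromosome->segments[i].position - chromosome->segments[i - 1].position;
    dir.normalize();
    chromosome->segments[i].position =
        chromosome->segments[i - 1].position + dir * restLength[i];
}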
Basically, you need to avoid the accumulation of error from these (inexact) matrix operators and there are two major ways of doing so in most applications.
Instead of writing the position as some initial position operated on many times, you can write out what the operator would be explicitly after N operations. For instance, imagine you had a position x and you were adding a value e (that you couldn't represent exactly). Much better than computing x += e; a large number of times would be to compute x + EN, where EN is some more accurate representation of what happens after applying the operation N times. You should think about whether you have some way of representing the action of many rotations more accurately.
Slightly more artificial is to take your newly found point and project off any discrepancies from the expected radius from your center of rotation. This will guarantee that it doesn't drift off (but won't necessarily guarantee that the rotation angle is accurate.)
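As a sketch of that second idea, projecting a point back onto the expected radius around the rotation centre (plain doubles here rather than the question's Vector4, to keep it self-contained):

#include <cmath>

struct Vec3 { double x, y, z; };

// Snap p back onto the sphere of radius expectedRadius centred at centre,
// preserving its current direction from the centre.
Vec3 projectToRadius(Vec3 p, Vec3 centre, double expectedRadius)
{
    double dx = p.x - centre.x, dy = p.y - centre.y, dz = p.z - centre.z;
    double r  = std::sqrt(dx*dx + dy*dy + dz*dz);   // assumes p is not exactly at the centre
    double s  = expectedRadius / r;
    return { centre.x + dx*s, centre.y + dy*s, centre.z + dz*s };
}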