Relative Angle Vectors - c++

I am pretty new to MATLAB and programming in general, so apologies in advance if this is too trivial a question.
Here is my dilemma: I have a system where two vectors run away from each other out of the origin, one with a magnitude of 200 and another with a magnitude of 150; these figures are given.
After the user inputs their magnitudes and angles in a Cartesian coordinate system, the angles are converted to radians and the following calculations are performed:
compA = MagA*[cos(AngleA), sin(AngleA)];
compB = MagB*[cos(AngleB), sin(AngleB)];
AngleAwrtB = compA-compB;
Where compA and compB are the x and y components of the two vectors' "end points" and AwrtB is "A with respect to B". MagA and MagB are the magnitudes of each vector.
So I now have the angle of vector A with respect to B; now I need to find the magnitude of vector A with respect to vector B. Any ideas on how I could do this? I want to use something like the following:
MagAwrtB = MagA-MagB
I am just worried that this is mathematically incorrect and that there is some other trigonometric relation that I am missing.
Any help would be greatly appreciated.

Well, this is pretty much the definition of the dot product, which can be computed in MATLAB with the dot function (or manually with the formula; in 2D it's straightforward).
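Since the thread is tagged C++, here is a minimal sketch of the 2D formula the answer refers to; the variable names mirror the MATLAB snippet above, and taking the "magnitude of A with respect to B" as the norm of the difference vector compA - compB is my assumption, not something stated in the original answer.
#include <cmath>
#include <iostream>
int main() {
    // Example inputs mirroring the question (angles assumed already in radians).
    const double MagA = 200.0, AngleA = 0.5;
    const double MagB = 150.0, AngleB = 2.0;
    // Cartesian components of the two vectors.
    const double Ax = MagA * std::cos(AngleA), Ay = MagA * std::sin(AngleA);
    const double Bx = MagB * std::cos(AngleB), By = MagB * std::sin(AngleB);
    // The dot product gives the angle between A and B.
    const double dot = Ax * Bx + Ay * By;
    const double angleBetween = std::acos(dot / (MagA * MagB));
    // Magnitude of A relative to B, taken here as |A - B| (an assumption).
    const double relMag = std::hypot(Ax - Bx, Ay - By);
    std::cout << angleBetween << " " << relMag << "\n";
}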
Best

Related

How to update the covariance of a multi camera system when a rigid motion is applied to all of them?

For example, for 6-dof camera states, two cameras have 12 state parameters and a 12×12 covariance matrix (assume a Gaussian distribution). How does this covariance change when a 6-dof rigid motion is applied to the cameras?
What if the 6-dof motion is a Gaussian distribution too?
You can use the "forward propagation" theorem (you can find it in Hartley and Zisserman's multiple view geometry book, chapter 5, page 139).
Basically, if you have a random variable x with mean x_m and covariance C, and a differentiable function f that you apply to x, then the mean of f(x) will be f(x_m) and its covariance C_f will be approximately J*C*J^t, where ^t denotes the transpose and J is the Jacobian matrix of f evaluated at x_m.
Let's now consider the problems of the covariance propagation separately for camera positions and camera orientations.
First, see what happens to the translation parameters of the camera; let's denote them with x_t. In your case, f is a rigid transformation, which means that
f(x_t)=Rx_t+T //R is a rotation and T a translation, x_t is the position of the camera
Now the Jacobian of f with respect to x_t is simply R, so the covariance is given by
C_f=RCR^T
which is an interesting result: it indicates that the change in covariance only depends on the rotation. This makes sense, since intuitively, translating the (positional) data doesn't actually change the axes along which it varies (think about principal component analysis).
Also note that if C is isotropic, i.e. a diagonal matrix lambda*Identity, then C_f=lambda*Identity, which also makes sense, since intuitively we don't expect an isotropic covariance to change with a rotation.
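As a concrete illustration, a minimal Eigen sketch of that propagation might look like this (the function name is mine):
#include <Eigen/Dense>
// Propagate a camera-position covariance through a rigid motion f(x_t) = R*x_t + T.
// Per the forward propagation theorem, only the rotation matters: C_f = R*C*R^T.
Eigen::Matrix3d propagatePositionCov(const Eigen::Matrix3d& R,
                                     const Eigen::Matrix3d& C) {
  return R * C * R.transpose();
}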
Now consider the orientation parameters. Let's use the Lie algebra of the SO(3) group. In that case, the yaw, pitch and roll will be parametrized as v=[alpha_1, alpha_2, alpha_3]^t (they are basically Lie algebra coefficients). In the following, we will use the exponential and logarithm maps between the Lie algebra so(3) and the group SO(3). We can write our function as
f(v)=log(R*exp(v))
In the above, exp(v) is the rotation matrix of your camera, and R is the rotation from your rigid transformation.
Note that the translation doesn't affect the orientation parameters. Computing the Jacobian of f with respect to v is mathematically involved. I suspect that you can do it using the adjoint representation of the Lie algebra, or using the Baker-Campbell-Hausdorff formula; however, you will have to limit the precision. Here, we'll take a shortcut and use the result given in this question.
jacobian_f_with_respect_to_v=R*inverse(R*exp(v))
=R*exp(v)^t*R^t
So, our covariance will be
R*exp(v)^t*R^t * Cov(v) * (R*exp(v)^t*R^t)^t
=R*exp(v)^t*R^t * Cov(v) * R * exp(v) * R^t
Again, we observe the same thing: if Cov(v) is isotropic then so is the covariance of f.
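If it helps, here is a small Eigen sketch of that last propagation; expSO3 is a hypothetical helper built from Eigen's AngleAxis, and the Jacobian expression is the one quoted above, not something re-derived here.
#include <Eigen/Dense>
#include <Eigen/Geometry>
// so(3) exponential map via angle-axis (illustrative helper).
Eigen::Matrix3d expSO3(const Eigen::Vector3d& v) {
  const double theta = v.norm();
  if (theta < 1e-12) return Eigen::Matrix3d::Identity();
  return Eigen::AngleAxisd(theta, v / theta).toRotationMatrix();
}
// Propagate an orientation covariance Cov_v through f(v) = log(R*exp(v)),
// using the Jacobian J = R*exp(v)^t*R^t quoted above: C_f = J*Cov_v*J^t.
Eigen::Matrix3d propagateOrientationCov(const Eigen::Matrix3d& R,
                                        const Eigen::Vector3d& v,
                                        const Eigen::Matrix3d& Cov_v) {
  const Eigen::Matrix3d J = R * expSO3(v).transpose() * R.transpose();
  return J * Cov_v * J.transpose();
}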
Edit: Answers to the questions you asked in the comments
Why did you assume conditional independence between translation/rotation?
Conditional independence between translation/orientation parameters is often assumed in many works (especially in the pose graph literature, e.g. see Hauke Strasdat's thesis), and I've always found that in practice this works a lot better (not a very convincing argument, I know). However, I admit that I didn't put much thought (if any) into this when writing this answer, because my main point was "use the forward propagation theorem". You can apply it jointly to orientation/position, and all that changes is that your Jacobian will look like
J=[J_R J_T] //J_R Jacobian w.r.t. orientation, J_T Jacobian w.r.t. position
and then the "densification" of the covariance matrix will happen as a result of the propagation, like J*C*J^T.
Why did you use SO(3) instead of SE(3)?
You said it yourself: I separated the translation parameters from the orientation. SE(3) is the space of rigid transformations, which includes translations. It wouldn't have made sense for me to use it since I had already taken care of the position parameters.
What about the covariance between two cameras?
I think we can still apply the same theorem. The difference is that now your rigid transformation will be a function M(x_1,x_2) of 12 parameters, and your Jacobian will look like [J_R_1 J_R_2 J_T_1 J_T_2]. These can be tedious to compute, as you know, so if you can, just try numeric or automatic differentiation.
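For the numeric-differentiation route, a central-difference sketch like the following is often enough (the signature is illustrative; f stands in for the 12-parameter transformation M(x_1,x_2) mentioned above):
#include <Eigen/Dense>
#include <functional>
// Central-difference numeric Jacobian of an R^n -> R^m function.
Eigen::MatrixXd numericJacobian(
    const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& f,
    const Eigen::VectorXd& x, double eps = 1e-6) {
  const Eigen::VectorXd f0 = f(x);
  Eigen::MatrixXd J(f0.size(), x.size());
  for (int i = 0; i < x.size(); ++i) {
    Eigen::VectorXd xp = x, xm = x;
    xp[i] += eps;
    xm[i] -= eps;
    J.col(i) = (f(xp) - f(xm)) / (2.0 * eps);
  }
  return J;
}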

solve a system of nonlinear equations with c++

I want to solve a system of equations in c++. Is there any tool/package that provides a solver? My system looks like
(x-a)^2 + (y-b)^2 = d1
(x-c)^2 + (y-d)^2 = d2
In this case I know a, ..., d and d1, d2.
For now I took a special case (a, b, d = 0, and c not 0), but I want a solution for all cases.
Does anybody have an idea?
If you need general support for solving nonlinear equations, Ceres, PETSc and dlib all have nonlinear solvers that you can use from C++ to solve the problems you describe. That said, you are much more likely to find better support for this type of work in MATLAB or even Python's SciPy, particularly if you are not really interested in performance and only need to solve small-scale equations with ease.
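As a rough illustration of the Ceres route, treating the two equations as residuals to be driven to zero (the center/radius numbers and the initial guess are placeholders):
#include <ceres/ceres.h>
// Residual for one circle equation: (x - cx)^2 + (y - cy)^2 - r2 = 0.
struct CircleResidual {
  CircleResidual(double cx, double cy, double r2) : cx_(cx), cy_(cy), r2_(r2) {}
  template <typename T>
  bool operator()(const T* const xy, T* residual) const {
    const T dx = xy[0] - T(cx_);
    const T dy = xy[1] - T(cy_);
    residual[0] = dx * dx + dy * dy - T(r2_);
    return true;
  }
  double cx_, cy_, r2_;
};
int main() {
  double xy[2] = {1.0, 1.0};  // initial guess for (x, y)
  ceres::Problem problem;
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<CircleResidual, 1, 2>(
          new CircleResidual(0.0, 0.0, 25.0)), nullptr, xy);
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<CircleResidual, 1, 2>(
          new CircleResidual(6.0, 0.0, 25.0)), nullptr, xy);
  ceres::Solver::Options options;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  // xy now holds one intersection point (if the circles intersect).
  return 0;
}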
If all you need is to solve the system you posted, there is a simple closed form solution:
Subtract eq2 from eq1 and express x = f(y) [s1]
Substitute x with f(y) in one of the equations and solve for y
Substitute y back in [s1] to find x
I suggest you read 'Numerical Recipes'.
This book has a chapter on solving equations, and its preface gives a very good overview of the whole subject in simple enough terms.
Please note that solving equations numerically has many fine details, and using any package without handling them may lead to a bad solution (or one that is just slow, or not good enough).
In a geometrical sense, the system of equations (SOE) represents two circles: the first is a circle whose center is at (a,b) with radius sqrt(d1), and the second is a circle centered at (c,d) with radius sqrt(d2).
There are three cases to consider.
The first case is when the two circles do not intersect. In this case the equations have no solution.
The second case is when the two circles intersect at two points. In this case the equations have two solutions, i.e. two possible values for (x,y).
The third case is when the two circles intersect at exactly one point. In this case the SOE has exactly one solution, i.e. one pair (x,y).
So how do we check if the SOE has a solution? Well, we check if the two circles intersect.
The two circles intersect iff the distance between their centers is less than or equal to the sum of their radii (and, strictly, at least the absolute difference of the radii, otherwise one circle lies entirely inside the other):
sqrt( (a-c)^2 + (b-d)^2 ) <= sqrt(d1) + sqrt(d2).
If the equality holds, then the two circles intersect in exactly one point and therefore the SOE has exactly one solution.
I can continue explaining, but I will leave you with the equation. Check this out:
https://math.stackexchange.com/questions/256100/how-can-i-find-the-points-at-which-two-circles-intersect#256123
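For completeness, here is a small sketch of the closed-form construction behind that link (all names are mine); it returns zero, one or two intersection points, with d1 and d2 being the squared radii as in the question:
#include <cmath>
#include <utility>
#include <vector>
// Intersection of (x-a)^2 + (y-b)^2 = d1 and (x-c)^2 + (y-d)^2 = d2.
std::vector<std::pair<double, double>> intersectCircles(
    double a, double b, double c, double d, double d1, double d2) {
  const double r0 = std::sqrt(d1), r1 = std::sqrt(d2);
  const double dist = std::hypot(c - a, d - b);
  // No intersection if the circles are too far apart, one lies inside the
  // other, or they are concentric.
  if (dist > r0 + r1 || dist < std::abs(r0 - r1) || dist == 0.0) return {};
  // Distance from (a,b) to the chord joining the intersection points.
  const double l = (r0 * r0 - r1 * r1 + dist * dist) / (2.0 * dist);
  const double h2 = r0 * r0 - l * l;
  const double h = h2 > 0.0 ? std::sqrt(h2) : 0.0;
  // Midpoint of the chord, then offset along the perpendicular direction.
  const double mx = a + l * (c - a) / dist;
  const double my = b + l * (d - b) / dist;
  const double px = -(d - b) / dist, py = (c - a) / dist;
  if (h == 0.0) return {{mx, my}};  // tangent circles: one solution
  return {{mx + h * px, my + h * py}, {mx - h * px, my - h * py}};
}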
Yes, this library supports nonlinear systems and an overloaded ^ operator.
Here is an example: https://github.com/ohhmm/NonLinearSystem

Which method should I use to determine the similarity of 2D, 3D and 4D (quaternions) vectors?

I am writing some simple unit tests for a math library.
To decide if the library generates good results, I have to compare them with expected ones. Because of rounding etc., even a good result will differ a bit from the expected one (e.g. 0.701 when 0.700 was expected).
The problem is, I have to decide how similar two vectors are. I want to describe that similarity as an error proportion (for a number it would be e.g. errorScale(3.0f /* generated */, 1.0f /* expected */) = 3.0f/1.5f = 2.0f == 200%).
Which method should I use to determine the similarity of 2D, 3D and 4D (quaternions) vectors?
There's no universally good measure. In particular, for addition the absolute error is better while for multiplication the relative error is better.
For vectors the "relative error" can also be considered in terms of length and direction. If you think about it, the "acceptable outcomes" form a small area around the exact result. But what's the shape of this area? Is it an axis-aligned square (absolute errors in the x and y directions)? That privileges a specific vector basis. A circle might be a better shape.
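One possible realization of that "length plus direction" idea is sketched below; the split into a relative length error and an angular error (and the 3D type) is an assumption, not a universal recipe:
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
struct VectorError {
  double relative_length_error;  // | |v| - |e| | / |e|
  double angle_error_rad;        // angle between v and e
};
VectorError compareVectors(const Eigen::Vector3d& generated,
                           const Eigen::Vector3d& expected) {
  VectorError err;
  err.relative_length_error =
      std::abs(generated.norm() - expected.norm()) / expected.norm();
  // Clamp the cosine to guard against rounding slightly outside [-1, 1].
  const double cos_angle = generated.normalized().dot(expected.normalized());
  err.angle_error_rad = std::acos(std::max(-1.0, std::min(1.0, cos_angle)));
  return err;
}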

Least Squares Solution of Overdetermined Linear Algebraic Equation Ax = By

I have a linear algebraic equation of the form Ax = By, where A is a 6x5 matrix, x is a vector of size 5, B is a 6x6 matrix and y is a vector of size 6. A, B and y are known variables, and their values are accessed in real time, coming from the sensors. x is unknown and has to be found. One solution is the least squares estimate, that is x = [(A^T*A)^-1]*(A^T)*B*y. This is the conventional solution of linear algebraic equations. I used Eigen's QR decomposition to solve this as below:
matrixA = getMatrixA();
matrixB = getMatrixB();
vectorY = getVectorY();
//LSE Solution
Eigen::ColPivHouseholderQR<Eigen::MatrixXd> dec1(matrixA);
vectorX = dec1.solve(matrixB*vectorY);//
Everything is fine until now. But when I check the error e = Ax-By, it's not always zero. The error is not very big, but it's not negligible either. Is there any other type of decomposition which is more reliable? I have gone through one of the pages but could not understand the meaning or how to implement it. Below are lines from the reference on how to solve the problem. Could anybody suggest how to implement this?
The solution of such equations Ax = By is obtained by forming the error vector e = Ax-By and then finding the unknown vector x that minimizes the weighted error (e^T*W*e), where W is a weighting matrix. For simplicity, this weighting matrix is chosen to be of the form W = K*S, where S is a constant diagonal scaling matrix and K is a scalar weight. Hence the solution to the equation becomes
x = [(A^T*W*A)^-1]*(A^T)*W*B*y
I did not understand how to form the matrix W.
Your statement "But when I check the error e = Ax-By, it's not always zero" will almost always be true, regardless of your technique or what weighting you choose. When you have an overdetermined system, you are basically trying to fit a straight line to a slew of points. Unless, by chance, all the points can be placed exactly on a single perfectly straight line, there will be some error. So no matter what technique you use to choose the line (weights and so on), you will always have some error if the points are not collinear. The alternative would be to use some kind of spline, or in higher dimensions to allow for warping. In those cases, you can choose to fit all the points exactly to a more complicated shape, and hence end up with 0 error.
So the choice of a weight matrix simply changes which straight line you will use, by giving each point a slightly different weight. It will not ever completely remove the error. But if you have a few particular points that you care more about than the others, you can give the error on those points higher weight when choosing the least squares fit.
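For what it's worth, here is a hedged Eigen sketch of the weighted solve described in the question; scaling both sides by sqrt(W) and using QR avoids forming (A^T*W*A)^-1 explicitly, and the per-equation weights w are whatever importance you assign to each row:
#include <Eigen/Dense>
// Weighted least squares for A*x = B*y with diagonal weights w,
// equivalent to x = [(A^T*W*A)^-1]*(A^T)*W*B*y from the question.
Eigen::VectorXd weightedSolve(const Eigen::MatrixXd& A,
                              const Eigen::MatrixXd& B,
                              const Eigen::VectorXd& y,
                              const Eigen::VectorXd& w) {
  const Eigen::VectorXd s = w.cwiseSqrt();
  const Eigen::MatrixXd As = s.asDiagonal() * A;
  const Eigen::VectorXd bs = s.asDiagonal() * (B * y);
  return As.colPivHouseholderQr().solve(bs);
}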
For spline fitting see:
http://en.wikipedia.org/wiki/Spline_interpolation
For the nicest spline curve interpolation you can use centripetal Catmull-Rom, which, in addition to finding a curve to fit all the points, will prevent unnecessary loops and self-intersections that can sometimes come up during abrupt changes in the data direction.
Catmull-rom curve with no cusps and no self-intersections

Removing unsolvable equations from an underdetermined system

My program tries to solve a system of linear equations. In order to do that, it assembles matrix coeff_matrix and vector value_vector, and uses Eigen to solve them like:
Eigen::VectorXd sol_vector = coeff_matrix
.colPivHouseholderQr().solve(value_vector);
The problem is that the system can be both over- and under-determined. In the former case, Eigen gives either a correct or an incorrect solution, and I check the solution using coeff_matrix * sol_vector - value_vector.
However, please consider the following system of equations:
a + b - c = 0
c - d = 0
c = 11
- c + d = 0
In this particular case, Eigen solves the three latter equations correctly but also gives solutions for a and b.
What I would like to achieve is that only the equations which have only one solution would be solved, and the remaining ones (the first equation here) would be retained in the system.
In other words, I'm looking for a method to find out which equations can be solved in a given system of equations at the time, and which cannot because there will be more than one solution.
Could you suggest any good way of achieving that?
Edit: please note that in most cases the matrix won't be square. I've added one more row here just to note that over-determination can happen too.
I think what you want is the singular value decomposition (SVD), which will give you exactly what you want. After the SVD, "the equations which have only one solution will be solved", and the solution is the pseudoinverse. It will also give you the null space (where infinite solutions come from) and the left null space (where inconsistency comes from, i.e. no solution).
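In Eigen terms, a minimal sketch of that suggestion could look like this (variable names follow the question; the solve gives the minimum-norm least-squares solution, i.e. the pseudoinverse applied to value_vector):
#include <Eigen/Dense>
Eigen::VectorXd svdSolve(const Eigen::MatrixXd& coeff_matrix,
                         const Eigen::VectorXd& value_vector) {
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(
      coeff_matrix, Eigen::ComputeThinU | Eigen::ComputeThinV);
  // svd.rank() < coeff_matrix.cols() indicates a non-trivial null space,
  // i.e. some unknowns are not uniquely determined.
  return svd.solve(value_vector);
}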
Based on the SVD comment, I was able to do something like this:
Eigen::FullPivLU<Eigen::MatrixXd> lu = coeff_matrix.fullPivLu();
Eigen::VectorXd sol_vector = lu.solve(value_vector);
Eigen::VectorXd null_vector = lu.kernel().rowwise().sum();
AFAICS, the null_vector rows corresponding to single solutions are 0s while the ones corresponding to non-determinate solutions are 1s. I can reproduce this throughout all my examples with the default threshold Eigen has.
However, I'm not sure if I'm doing something correct or just noticed a random pattern.
What you need is to calculate the determinant of your system. If the determinant is 0, then you have an infinite number of solutions. If the determinant is very small, the solution exists, but I wouldn't trust the solution found by a computer (it will lead to numerical instabilities).
Here is a link to what is the determinant and how to calculate it: http://en.wikipedia.org/wiki/Determinant
Note that Gaussian elimination should also work: http://en.wikipedia.org/wiki/Gaussian_elimination
With this method, you end up with lines of 0s if there are an infinite number of solutions.
Edit
In case the matrix is not square, you first need to extract a square matrix. There are two cases:
You have more variables than equations: then you have either no solution or an infinite number of them.
You have more equations than variables: in this case, find a square sub-matrix with a non-null determinant. Solve for this matrix and check the solution. If the solution doesn't fit, it means you have no solution. If the solution fits, it means the extra equations were linearly dependent on the extracted ones.
In both cases, before checking the dimension of the matrix, remove rows and columns with only 0s.
As for Gaussian elimination, it should work directly with non-square matrices. However, this time you should check that the number of non-empty rows (i.e. rows with some non-zero values) is equal to the number of variables. If it's less, you have an infinite number of solutions, and if it's more, you don't have any solution.
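A sketch of those rank checks with Eigen's FullPivLU (the function and its classification comments are illustrative, not from the original answers):
#include <Eigen/Dense>
void classifySystem(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
  Eigen::MatrixXd Ab(A.rows(), A.cols() + 1);
  Ab << A, b;  // augmented matrix [A | b]
  const Eigen::Index rank_A  = Eigen::FullPivLU<Eigen::MatrixXd>(A).rank();
  const Eigen::Index rank_Ab = Eigen::FullPivLU<Eigen::MatrixXd>(Ab).rank();
  if (rank_Ab > rank_A) {
    // inconsistent: no solution
  } else if (rank_A < A.cols()) {
    // rank-deficient: infinitely many solutions (some variables undetermined)
  } else {
    // unique solution
  }
}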