Multiplying matrices in Eigen c++ gives wrong dimensions - c++

I'm having trouble understanding why I am getting a 10x10 matrix as the result of multiplying a 10x3 matrix with a 3x10 matrix using the Eigen library in C++.
By following the documentation at http://eigen.tuxfamily.org/dox-devel/group__TutorialMatrixArithmetic.html I came up with
const int NUM_OBSERVATIONS = 10;
const int NUM_DIMENSIONS = 3;
MatrixXf localspace(NUM_DIMENSIONS, NUM_OBSERVATIONS);
MatrixXf rotatedlocalspace(NUM_OBSERVATIONS, NUM_DIMENSIONS);
MatrixXf covariance(NUM_DIMENSIONS, NUM_DIMENSIONS);
covariance = (rotatedlocalspace * localspace) / (NUM_OBSERVATIONS - 1);
cout << covariance << endl;
The output is a 10x10 matrix, when I am trying to obtain a 3x3 covariance matrix for each dimension (these are mean-centered XYZ points). "localspace" and "rotatedlocalspace" are both filled with float values when covariance is calculated.
How do I get the correct covariance matrix?

Eigen is correct, as it reproduces basic math: if A is a matrix of dimension n x m and B has dimension m x k, then A*B has the dimension n x k.
Applied to your problem, if your matrix rotatedlocalspace is of dimension 10 x 3 and localspace has dimension 3 x 10, then rotatedlocalspace*localspace has dimension
(10 x 3) * (3 x 10) -> 10 x 10.
The scalar division you apply afterwards doesn't change the dimensions.
If you expect a different dimension, swap the factors in the matrix product. This way you will obtain a 3x3 matrix.
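In code, swapping the factors gives the 3x3 result. A minimal sketch based on the declarations from the question (the random fill is only for illustration):

#include <iostream>
#include <Eigen/Dense>

using Eigen::MatrixXf;

int main() {
    const int NUM_OBSERVATIONS = 10;
    const int NUM_DIMENSIONS = 3;

    MatrixXf localspace = MatrixXf::Random(NUM_DIMENSIONS, NUM_OBSERVATIONS);        // 3 x 10
    MatrixXf rotatedlocalspace = MatrixXf::Random(NUM_OBSERVATIONS, NUM_DIMENSIONS); // 10 x 3

    // (3 x 10) * (10 x 3) -> 3 x 3
    MatrixXf covariance = (localspace * rotatedlocalspace) / (NUM_OBSERVATIONS - 1);
    std::cout << covariance << std::endl;  // prints a 3x3 matrix
    return 0;
}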

Related

C++ Boost: determinant and inversion of complex matrix

Do you know whether Boost has functions that can calculate the determinant and the inverse of a complex matrix? The matrix dimension isn't large (less than 50).
Inversion:
Input: matrix M = A + i*B, with A, B two real matrices of dimension (n x n), n < 50.
Output: matrix N = C + i*D, with C, D two real matrices of dimension (n x n) such that (A + i*B)^T * (C + i*D) = I (I: the identity matrix).
Determinant:
det(A + i*B)
I googled but didn't succeed.
Thank you in advance.
Finally, I know why inversion and determinant of complex matrices aren't implemented: there are closed-form solutions for both in terms of the classical operations on real matrices.
For matrix inversion, there is this closed-form solution: https://fr.mathworks.com/matlabcentral/fileexchange/49373-complex-matrix-inversion-by-real-matrix-inversion
For the matrix determinant, we have:
det(A + iB) = det(A * (I + i*A1*B)) (where A1 is the inverse of A)
= det(A) * det(I + i*A1*B)
= det(A) * det(U1 * (I + i*D) * U2) (where U1*D*U2 is the eigendecomposition of A1*B, U2 is the inverse of U1, and D is the diagonal matrix of its eigenvalues)
= det(A) * det(I + i*D).
It is easy to calculate the determinant of I + i*D since it is a diagonal matrix.
So, det(A + iB) = det(A) * det(I + i*D), with D the diagonal matrix of eigenvalues of A^(-1) * B.
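For illustration, here is a minimal sketch of that closed form for the determinant, det(A + iB) = det(A) * prod_k(1 + i*lambda_k) with lambda_k the eigenvalues of A^(-1)*B. It uses Eigen rather than Boost (only because the rest of this page uses Eigen), assumes A is invertible, and cross-checks against a direct complex-matrix determinant; the matrix contents are made up:

#include <complex>
#include <iostream>
#include <Eigen/Dense>

int main() {
    const int n = 4;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);  // real part (assumed invertible)
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(n, n);  // imaginary part

    // Eigenvalues of A^(-1) * B (complex in general, even though the matrix is real).
    Eigen::EigenSolver<Eigen::MatrixXd> es(A.inverse() * B);
    Eigen::VectorXcd lambda = es.eigenvalues();

    std::complex<double> det = A.determinant();
    for (int k = 0; k < n; ++k)
        det *= 1.0 + std::complex<double>(0.0, 1.0) * lambda(k);

    // Cross-check against the determinant of the complex matrix A + iB computed directly.
    Eigen::MatrixXcd M = A.cast<std::complex<double>>()
                       + std::complex<double>(0.0, 1.0) * B.cast<std::complex<double>>();
    std::cout << "closed form: " << det << "\n"
              << "direct:      " << M.determinant() << "\n";
    return 0;
}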

Fast matrix multiplication of XDX^T for D diagonal

Consider fast matrix multiplication of XDX^T for X an n by m matrix, and D an m by m diagonal matrix. Here m>>n (suppose n around 1000, m around 100000). In my application, X is a fixed matrix and values of D can change at every iteration.
What would be a fast way to calculate this? At the moment I am just doing simple multiplication in C++.
EDIT: I should clarify my current procedure; it is not "simple multiplication". In particular, I am column-wise multiplying X by the square root of the diagonal entries of D to get A := X*D^{1/2}. Then I directly calculate A * A^T (the product of an n by m matrix with its transpose).
Thank you.
If you know that D is diagonal, then you can just do simple multiplication. Hopefully, you are not multiplying the zeros.
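A minimal Eigen sketch of both approaches, scaled down in size; the nonnegative weights are an assumption needed for the square-root trick from the question:

#include <iostream>
#include <Eigen/Dense>

int main() {
    const int n = 4, m = 10;  // tiny stand-ins for n ~ 1000, m ~ 100000
    Eigen::MatrixXd X = Eigen::MatrixXd::Random(n, m);
    Eigen::VectorXd d = Eigen::VectorXd::Random(m).cwiseAbs();  // diagonal of D, assumed nonnegative

    // Option 1: tell Eigen that D is diagonal; no dense m-by-m matrix is ever formed.
    Eigen::MatrixXd P1 = X * d.asDiagonal() * X.transpose();

    // Option 2: the square-root trick, A = X * D^{1/2}, then A * A^T,
    // computing only the lower triangle since the result is symmetric.
    Eigen::MatrixXd A = X * d.cwiseSqrt().asDiagonal();
    Eigen::MatrixXd L = Eigen::MatrixXd::Zero(n, n);
    L.selfadjointView<Eigen::Lower>().rankUpdate(A);          // lower triangle of A * A^T
    Eigen::MatrixXd P2 = L.selfadjointView<Eigen::Lower>();   // expand to the full symmetric matrix

    std::cout << (P1 - P2).norm() << std::endl;  // ~0, both compute X*D*X^T
    return 0;
}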

Eigen library, Jacobi SVD

I'm trying to estimate a 3D rotation matrix between two sets of points, and I want to do that by computing the SVD of the covariance matrix, say C, as follows:
U,S,V = svd(C)
R = V * U^T
C in my case is 3x3. I am using Eigen's JacobiSVD module for this, and I only recently found out that it stores matrices in column-major format. So that has had me confused.
So, when using Eigen, should I do:
V*U.transpose() or V.transpose()*U ?
Additionally, the rotation is only accurate up to changing the sign of the column of U corresponding to the smallest singular value, such that the determinant of R is positive. Let's say the index of the smallest singular value is minIndex.
So when the determinant is negative, because of the column-major confusion, should I do:
U.col(minIndex) *= -1 or U.row(minIndex) *= -1
Thanks!
This has nothing to do with matrices being stored row-major or column-major. svd(C) gives you:
U * S.asDiagonal() * V.transpose() == C
so the closest rotation R to C is:
R = U * V.transpose();
If you want to apply R to a point p (stored as column-vector), then you do:
q = R * p;
Now whether you are interested in R or its inverse R.transpose() == V * U.transpose() is up to you.
The singular values scale the columns of U, so it is a column (not a row) whose sign you should flip to get det(U) = 1. Again, nothing to do with storage layout.
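Put together, a minimal sketch of the rotation estimation with the determinant fix (assuming C is the 3x3 covariance; the random fill here is only a stand-in). Note that JacobiSVD sorts the singular values in decreasing order, so the smallest one corresponds to the last column:

#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::Matrix3f C = Eigen::Matrix3f::Random();  // stand-in for the real covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3f> svd(C, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3f U = svd.matrixU();
    Eigen::Matrix3f V = svd.matrixV();

    Eigen::Matrix3f R = U * V.transpose();
    if (R.determinant() < 0.0f) {
        U.col(2) *= -1.0f;      // flip the sign of the column (not the row) of the smallest singular value
        R = U * V.transpose();
    }
    std::cout << R << "\n";     // orthogonal with det(R) = +1
    return 0;
}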

OpenGL: mat4x4 multiplied with vec4 yields tvec<float>

Consider the code:
glm::mat4x4 T = glm::mat4x4(1);
glm::vec4 vrpExpanded;
vrpExpanded.x = this->vrp.x;
vrpExpanded.y = this->vrp.y;
vrpExpanded.z = this->vrp.z;
vrpExpanded.w = 1;
this->vieworientationmatrix = T * (-vrpExpanded);
Why does T*(-vrpExpanded) yield a vector? According to my knowledge of linear algebra this should yield a mat4x4.
According to my knowledge of linear algebra this should yield a mat4x4.
Then that's the problem.
According to linear algebra, a matrix can be multiplied by a scalar (which does element-wise multiplication) or by another matrix. But even then, a matrix * matrix multiplication only works if the number of columns in the first matrix equals the number of rows in the second. And the resulting matrix has the number of rows of the first and the number of columns of the second.
So if you have an AxB matrix and you multiply it with a CxD matrix, this only works if B and C are equal. And the result is an AxD matrix.
Multiplying a matrix by a vector means to pretend the vector is a matrix. So if you have a 4x4 matrix and you right-multiply it with a 4-element vector, this will only make sense if you treat that vector as a 4x1 matrix (since you cannot multiply a 4x4 matrix by a 1x4 matrix). And the result of a 4x4 matrix * a 4x1 matrix is... a 4x1 matrix.
AKA: a vector.
GLM is doing exactly what you asked.
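A short sketch of the type behaviour, plus what is probably intended if the goal was a translation matrix built from -vrp (the glm::translate call is an assumption about that goal, not something stated in the question):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 T(1.0f);
    glm::vec4 vrpExpanded(1.0f, 2.0f, 3.0f, 1.0f);

    glm::vec4 q = T * (-vrpExpanded);    // mat4 * vec4 -> vec4 (a 4x1 "matrix")
    // glm::mat4 M = T * (-vrpExpanded); // does not compile: the product is a vector, not a mat4

    // If the intent was a view matrix that translates by -vrp (an assumption):
    glm::mat4 view = glm::translate(glm::mat4(1.0f), -glm::vec3(vrpExpanded));

    (void)q; (void)view;
    return 0;
}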

Matrix multiplication very slow in Eigen

I have implemented a Gauss-Newton optimization process which involves calculating the increment by solving a linearized system Hx = b. The H matrix is calculated by H = J.transpose() * W * J and b is calculated from b = J.transpose() * (W * e), where e is the error vector. The Jacobian J here is an n-by-6 matrix where n is in the thousands and stays unchanged across iterations, and W is an n-by-n diagonal weight matrix which will change across iterations (some diagonal elements will be set to zero). However, I encountered a speed issue.
When I do not add the weight matrix W, namely H = J.transpose()*J and b = J.transpose()*e, my Gauss-Newton process runs very fast, about 0.02 sec for 30 iterations. However, when I add the W matrix, which is defined outside the iteration loop, it becomes very slow (0.3~0.7 sec for 30 iterations), and I don't understand whether it is my coding problem or whether it normally takes this long.
Everything here is an Eigen matrix or vector.
I defined my W matrix using the .asDiagonal() function in the Eigen library from a vector of inverse variances, then just used it in the calculation of H and b. Then it gets very slow. I would like some hints about the potential reasons for this huge slowdown.
EDIT:
There are only two matrices. The Jacobian is definitely dense. The weight matrix is generated from a vector by the function vec.asDiagonal(), which comes from the dense library, so I assume it is also dense.
The code is really simple and the only difference that's causing the time change is the addition of the weight matrix. Here is a code snippet:
for (int iter = 0; iter < max_iter; ++iter) {
    // obtain error vector
    error = ...
    // calculate H and b - the fast one (only one of the two versions is compiled at a time)
    Eigen::MatrixXf H = J.transpose() * J;
    Eigen::VectorXf b = J.transpose() * error;
    // calculate H and b - the slow one
    Eigen::MatrixXf H = J.transpose() * weight_ * J;
    Eigen::VectorXf b = J.transpose() * (weight_ * error);
    // obtain delta and update state
    del = H.ldlt().solve(b);
    T <- T(del) // this is pseudo code, meaning update T with del
}
It is in a function in a class, and weight matrix now for debug purposes is defined as a class variable that can be accessed by the function and is defined before the function is called.
I guess that weight_ is declared as a dense MatrixXf? If so, then replace it by w.asDiagonal() everywhere you use weight_, or make the latter an alias to the asDiagonal expression:
auto weight = w.asDiagonal();
This way Eigen will know that weight is a diagonal matrix and the computations will be optimized as expected.
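Inside the loop from the question this would look roughly like the following, assuming w is the Eigen::VectorXf of inverse variances the weight matrix was built from (the name w is hypothetical):

Eigen::MatrixXf H = J.transpose() * w.asDiagonal() * J;
Eigen::VectorXf b = J.transpose() * (w.asDiagonal() * error);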
Because the matrix involved is just a diagonal, you can change the product to use coefficient-wise multiplication, like so:
MatrixXd m;
VectorXd w;
w.setLinSpaced(5, 2, 6);
m.setOnes(5,5);
std::cout << (m.array().rowwise() * w.array().transpose()).matrix() << "\n";
Likewise, the matrix vector product can be written as:
(w.array() * error.array()).matrix()
This avoids the zero elements in the matrix. Without an MCVE for me to base this on, YMMV...