Problem when solving a homogeneous system of equations related to a Hermitian Toeplitz matrix with SymPy

For homework I was asked to define a function that builds a Hermitian Toeplitz matrix from an input vector, which becomes the matrix's first row. I was then asked to write a function that, given the same vector, finds the matrix's eigenvalues and eigenvectors. Instead of using the built-in SymPy methods like diagonalize, eigenvals and eigenvects, I wanted to go through the whole process myself: compute the determinant of A - kI, find its roots, and then solve (A - kI)x = 0 for each eigenvalue k. I can find the eigenvalues for the matrix built from the vector [1, 2-1j, 3, 4+8j, 5-3j], but when it comes to solving the systems of equations, the solutions should depend on a free parameter (the systems are homogeneous and singular), yet solve only returns a unique solution, the zero vector. I don't know why this happens, and it is not because the matrix is 5x5; the same thing happens with 2x2 matrices too. Here is the code I mentioned:
Function that creates the Hermitian Toeplitz matrix:
import sympy as sp

def compute_herm_toeplitz_matrix(a):
    """Build the Hermitian Toeplitz matrix whose first row is the vector a."""
    global l
    l = len(a)
    matriu = sp.zeros(l, l)
    for i in range(l):
        for j in range(l - i):
            matriu[j, j + i] = a[i]               # i-th diagonal above the main one
            matriu[j + i, j] = a[i].conjugate()   # conjugated mirror below it
    return sp.Matrix(matriu)
Function that finds its eigenvalues and eigenvectors:
def eigenvalues_eigenvectors_toeplitz(a):
    """Find the eigenvalues and eigenvectors without SymPy's built-in helpers."""
    k = sp.Symbol(r'\lambda')   # raw string: '\l' is an invalid escape otherwise
    x = [sp.symbols('x%d' % i) for i in range(l)]
    matriu_toeplitz = compute_herm_toeplitz_matrix(a)
    Id = sp.eye(l)
    det_toeplitz = (matriu_toeplitz - Id*k).det()   # characteristic polynomial in k
    eigenvalues = sp.solve(det_toeplitz)
    incognites = sp.Matrix(x)
    eigenvectors = []
    for eigenvalue in eigenvalues:
        equacions = (matriu_toeplitz - Id*eigenvalue) * incognites
        # this is where I only ever get the trivial solution {x0: 0, x1: 0, ...}
        eigenvectors.append(sp.solve([*equacions], x))
    return sp.diag(*eigenvalues), eigenvectors

Related

Eigen C++: triangular matrix from QR decomposition

I use C++14 and Eigen. For an n x n matrix A, how can I extract the Q and R matrices using the QR decomposition in Eigen? I tried to read the documentation, but I got disoriented.
I have obtained only R:
HouseholderQR<MatrixXd> qr(A);  // the constructor already factorizes A,
qr.compute(A);                  // so this second compute(A) is redundant
MatrixXd R = qr.matrixQR().template triangularView<Upper>();  // .template is only needed inside templated code
Anyway, I just want to convert the matrix A into a triangular matrix (in an efficient way, around O(n^3) I think) that has the same determinant as A, so I would accept any other method of doing this in Eigen (or in another linear algebra library; if you know a good one, I am open to suggestions).
You can get Q and R as follows:
Eigen::MatrixXd Q = qr.householderQ();
Eigen::MatrixXd QR = qr.matrixQR();
The R matrix is in the upper triangular portion of matrix QR. If you want to isolate the upper triangular portion, you can do this:
Eigen::MatrixXd R = QR.triangularView<Eigen::Upper>();  // Upper, not UnitUpper, which would replace the diagonal with ones
You can then compute the determinant of R as R.diagonal().prod(), which is equal in magnitude to A.determinant(), since Q is orthogonal and det(Q) = ±1.
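For reference, here is a self-contained sketch putting the above together (a minimal example on an illustrative random matrix, not part of the original answer):

#include <Eigen/Dense>
#include <iostream>

int main() {
    const int n = 4;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);

    Eigen::HouseholderQR<Eigen::MatrixXd> qr(A);  // A = Q * R
    Eigen::MatrixXd Q = qr.householderQ();
    Eigen::MatrixXd R = qr.matrixQR().triangularView<Eigen::Upper>();

    // |det(A)| equals the product of R's diagonal entries
    std::cout << "det(A)             = " << A.determinant() << "\n"
              << "prod(diag(R))      = " << R.diagonal().prod() << "\n"
              << "reconstruction err = " << (Q * R - A).norm() << "\n";
}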

Fast matrix multiplication of XDX^T for D diagonal

Consider fast matrix multiplication of XDX^T for X an n by m matrix, and D an m by m diagonal matrix. Here m>>n (suppose n around 1000, m around 100000). In my application, X is a fixed matrix and values of D can change at every iteration.
What would be a fast way to calculate this? At the moment I am just doing simple multiplication in C++.
EDIT: I should clarify my current procedure; it is not "simple multiplication". In particular, I am column-wise multiplying X by the square roots of the diagonal entries of D to get A := X D^{1/2}. Then I directly calculate A * t(A) (the product of an n by m matrix with its transpose).
Thank you.
If you know that D is diagonal, then you can just do simple multiplication. Hopefully, you are not multiplying by the zeros.
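The D^{1/2} trick from the EDIT maps naturally onto Eigen. A minimal sketch, not from the answer above, assuming the diagonal of D is nonnegative so its square root is real (the function name is illustrative):

#include <Eigen/Dense>

// S = X * D * X^T, where D is given by its diagonal d.
// The symmetric rank update computes only one triangle of the result.
Eigen::MatrixXd xdxt(const Eigen::MatrixXd& X, const Eigen::VectorXd& d) {
    Eigen::MatrixXd A = X * d.cwiseSqrt().asDiagonal();   // A = X D^{1/2}
    Eigen::MatrixXd S = Eigen::MatrixXd::Zero(X.rows(), X.rows());
    S.selfadjointView<Eigen::Lower>().rankUpdate(A);      // S += A A^T, lower half only
    return S.selfadjointView<Eigen::Lower>();             // mirror into a full matrix
}

If D can have negative entries, X * d.asDiagonal() * X.transpose() works directly, at roughly twice the flops of the rank update.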

Eigen library, Jacobi SVD

I'm trying to estimate a 3D rotation matrix between two sets of points, and I want to do that by computing the SVD of the covariance matrix, say C, as follows:
U,S,V = svd(C)
R = V * U^T
C in my case is 3x3. I am using Eigen's JacobiSVD module for this, and I only recently found out that it stores matrices in column-major format. That has had me confused.
So, when using Eigen, should I do:
V*U.transpose() or V.transpose()*U ?
Additionally, the rotation is accurate only up to changing the sign of the column of U corresponding to the smallest singular value, such that the determinant of R is positive. Let's say the index of the smallest singular value is minIndex.
So when the determinant is negative, because of the column-major confusion, should I do:
U.col(minIndex) *= -1 or U.row(minIndex) *= -1
Thanks!
This has nothing to do with matrices being stored row-major or column-major. svd(C) gives you:
U * S.asDiagonal() * V.transpose() == C
so the closest rotation R to C is:
R = U * V.transpose();
If you want to apply R to a point p (stored as column-vector), then you do:
q = R * p;
Now whether you are interested in R or in its inverse R.transpose() == V * U.transpose() is up to you.
The singular values scale the columns of U, so the sign flip applies to a column, U.col(minIndex) *= -1, which makes det(R) positive. Again, this has nothing to do with the storage layout.
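A minimal sketch of the above (the function name is illustrative; Eigen sorts singular values in decreasing order, so the smallest one belongs to the last column):

#include <Eigen/Dense>

// Closest rotation to a 3x3 matrix C, with the sign of the determinant fixed up.
Eigen::Matrix3d closestRotation(const Eigen::Matrix3d& C) {
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(C, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d U = svd.matrixU();
    Eigen::Matrix3d V = svd.matrixV();
    if ((U * V.transpose()).determinant() < 0.0)
        U.col(2) *= -1.0;         // column of the smallest singular value
    return U * V.transpose();     // det(R) = +1
}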

Solving for Lx=b and Px=b when A=LLt

I am decomposing a sparse SPD matrix A using Eigen. It will be either an LLt or an LDLt decomposition (Cholesky), so we can assume the matrix is decomposed as A = P^-1 L D L^t P, where P is a permutation matrix, L is lower triangular and D is diagonal (possibly the identity). If I do
SolverClassName<SparseMatrix<double> > solver;
solver.compute(A);
To solve Lx = b, is it then efficient to do the following?
solver.matrixL().triangularView<Lower>().solve(b)
Similarly, to solve Px = b, is it efficient to do the following?
solver.permutationPinv() * b
I would like to do this in order to compute b^t A^-1 b efficiently and stably.
Have a look at how _solve_impl is implemented for SimplicialCholesky. Essentially, you can simply write:
Eigen::VectorXd x = solver.permutationP()*b; // P not Pinv!
solver.matrixL().solveInPlace(x); // matrixL is already a triangularView
// depending on LLt or LDLt use either:
double res_llt = x.squaredNorm();
double res_ldlt = x.dot(solver.vectorD().asDiagonal().inverse()*x);
Note that you need to multiply by P and not Pinv: the inverse of A = P^-1 L D L^t P is
A^-1 = P^-1 L^-t D^-1 L^-1 P
because the order of the factors reverses when taking the inverse of a product. Since P^-1 = P^t for a permutation matrix, b^t A^-1 b = (L^-1 P b)^t D^-1 (L^-1 P b), which is exactly what the code above computes with x = L^-1 P b.
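Put together, a minimal self-contained version of the above (assuming an LDLt factorization; for LLt, return x.squaredNorm() instead):

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>

// Compute b^t A^-1 b for a sparse SPD matrix A without forming A^-1.
double quadraticForm(const Eigen::SparseMatrix<double>& A, const Eigen::VectorXd& b) {
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.permutationP() * b;                // x = P b  (P, not Pinv!)
    solver.matrixL().solveInPlace(x);                             // x = L^-1 P b
    return x.dot(solver.vectorD().asDiagonal().inverse() * x);    // x^t D^-1 x
}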

c++ eigenvalue and eigenvector corresponding to the smallest eigenvalue

I am trying to find the eigenvalues and the eigenvector corresponding to the smallest eigenvalue. I have a matrix A (n x 2) and I have computed B = transpose(A) * A. When I use the C++ Eigen function compute() and print the eigenvalues of matrix B, it shows something like this:
(4.4, 0)
(72.1, 0)
Printing the eigenvectors it gives output:
(-0.97, 0) (0.209, 0)
(-0.209, 0) (-0.97, 0)
I am confused. Eigenvectors can't be zero I guess. So, for the smallest eigenvalue 4.4, is the corresponding eigenvector (-0.97, -0.209)?
P.S. - when I print
mysolution.eigenvalues()[0]
it prints (4.4, 0). And when I print
mysolution.eigenvectors().col(0)
it prints (-0.97, 0) (0.209, 0). That's why I guess I can assume that for eigenvalue 4.4, the corresponding eigenvector is (-0.97, -0.209).
I guess you are correct.
None of your eigenvalues is null, though. It seems that you are working with complex numbers.
Could it be that you selected a complex floating point matrix to do your computations? Something along the lines of MatrixX2cf or MatrixX2cd.
Every square matrix has a set of eigenvalues. But even if the matrix itself consists only of real numbers, the eigenvalues and eigenvectors might contain complex numbers (take (0 1; -1 0) for example).
If Eigen knows nothing about your matrix structure (i.e. is it symmetric/self-adjoint? Is it orthonormal/unitary?) but still wants to provide you with exact eigenvalues, the only general type that can hold all possible eigenvalues is a complex number.
Thus, Eigen always returns complex numbers, which are represented as pairs (a, b) for a + bi. Eigen will only return real numbers if the matrix is known to be self-adjoint, i.e. when a SelfAdjointView is used to access the matrix.
If you know for a fact that your matrix only has real eigenvalues, you can just extract the real part via eigenvalue.real(), since Eigen returns std::complex values.
EDIT: I just realized that if your matrix A has no complex entries, B = transpose(A) * A is self-adjoint, and thus you could just use a SelfAdjointView of the matrix to compute the real eigenvalues and eigenvectors.
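A minimal sketch of that EDIT (the sizes are illustrative): SelfAdjointEigenSolver returns real eigenvalues sorted in increasing order, so the eigenvector of the smallest eigenvalue is simply column 0.

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(10, 2);  // n x 2, real entries
    Eigen::MatrixXd B = A.transpose() * A;               // 2 x 2, self-adjoint by construction

    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(B);
    // Eigenvalues are real and sorted in increasing order, so index 0 is the smallest.
    std::cout << "smallest eigenvalue: " << es.eigenvalues()[0] << "\n"
              << "its eigenvector:\n"    << es.eigenvectors().col(0) << "\n";
}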