I want to use LAPACK to calculate Q * x and Q^T * x, where Q comes from the reduced QR factorization of an m by n matrix A (m > n), stored in the form of Householder reflectors and a vector tau, as obtained from DGEQRF, and x is a vector of length n in the case of Q * x and of length m in the case of Q^T * x.
The documentation of DORMQR states that x is overwritten with the result, which already confuses me, since x and Q * x obviously have different dimensions if the original matrix A, and hence its reduced Q, is not square. Furthermore, it states that
"Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'."
In my case, only the first half applies, and M refers to the length of x. What do they mean by order? I have rarely heard the term "order" in the context of non-square matrices, and if so, it would be something like m by n, not just a single number. Do they mean rank?
Can I even use DORMQR to calculate both Q * x and Q^T * x for a non-square Q, or is it not designed for this? Do I need to pad x with zeros?
DORMQR applies Q only as a square matrix. Although the input A to the routine holds elementary reflectors, such as the output of DGEQRF, which can be more general, the documentation adds the restriction that Q "is a real orthogonal matrix".
Of course, to be orthogonal, Q must be square; the "order" of a square matrix is simply its dimension, so here Q is the full m by m factor and M is its order. To get the reduced Q * x, pad x with m - n zeros before applying the full Q; for the reduced Q^T * x, apply the full Q^T and keep the first n entries of the result.
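To make the padding concrete, here is a minimal NumPy sketch with made-up data (it mimics the arithmetic rather than calling DORMQR itself, but the same logic applies when you hand the square Q to DORMQR):

import numpy as np

m, n = 6, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))

# Full QR: Q_full is m x m; the reduced Q is its first n columns.
Q_full, _ = np.linalg.qr(A, mode='complete')
Q_red = Q_full[:, :n]

x_n = rng.standard_normal(n)  # input for Q * x
x_m = rng.standard_normal(m)  # input for Q^T * x

# Reduced Q * x is the full Q applied to x padded with m - n zeros.
assert np.allclose(Q_red @ x_n, Q_full @ np.concatenate([x_n, np.zeros(m - n)]))

# Reduced Q^T * x is the first n entries of the full Q^T * x.
assert np.allclose(Q_red.T @ x_m, (Q_full.T @ x_m)[:n])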
Example: let
M = Matrix([[1,2],[3,4]]) # and
p = Poly(x**3 + x + 1) # then
p.subs(x,M).expand()
gives the error:
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.One'>
which is very plausible, since the first two terms become matrices but the last term (the constant term) is not a matrix but a scalar. To remedy this situation I changed the polynomial to
p = Poly(x**3 + x + x**0) # then
the same error persists. Am I obliged to type the expression by hand, replacing x by M? In this example the polynomial has only three terms, but in reality I encounter (multivariate polynomials with) dozens of terms.
So I think the question mainly revolves around the concept of a matrix polynomial:
P(A) = a_n A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 I
(where P is a polynomial, and A is a square matrix)
I think this is saying that the free term is a number and it cannot be added to the rest, which is a matrix; effectively, the addition operation is undefined between those two types.
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.One'>
However, this can be circumvented by defining a function that evaluates the matrix polynomial for a specific matrix. The difference here is that we're using matrix exponentiation, so we correctly compute the free term of the matrix polynomial as a_0 * I, where I = A**0 is the identity matrix of the required shape:
from sympy import *

x = symbols('x')
M = Matrix([[1, 2], [3, 4]])
p = Poly(x**3 + x + 1)

def eval_poly_matrix(P, A):
    # Sum a_i * A**i from the lowest degree up; A**0 is the identity,
    # so the constant term correctly becomes a_0 * I.
    res = zeros(*A.shape)
    for i, a_i in enumerate(P.all_coeffs()[::-1]):
        res += a_i * (A**i)
    return res

eval_poly_matrix(p, M)
Output:
Matrix([[39, 56], [84, 123]])
In this example the polynomial has only three terms but in reality I encounter (multivariate polynomials with) dozens of terms.
The function eval_poly_matrix above can be extended to work for multivariate polynomials by using the .monoms() method to extract monomials with nonzero coefficients, like so:
from sympy import *

x, y = symbols('x y')
M = Matrix([[1, 2], [3, 4]])
p = Poly(x**3 * y + x * y**2 + y)

def eval_poly_matrix(P, *M):
    # Sum coeff * M[0]**e0 * M[1]**e1 * ... over the monomials of P;
    # coeffs() and monoms() return the terms in matching order.
    res = zeros(*M[0].shape)
    for coeff, monom in zip(P.coeffs(), P.monoms()):
        term = eye(*M[0].shape)
        for i, e in enumerate(monom):
            term *= M[i]**e
        res += coeff * term
    return res

eval_poly_matrix(p, M, eye(M.rows))
Note: some sanity checks, edge-case handling, and optimizations are possible:
The number of variables present in the polynomial relates to the number of matrices passed as parameters (the former should never be greater than the latter, and if it's lower, some logic needs to be present to handle that; I've only handled the case where the two are equal)
All matrices need to be square, as per the definition of the matrix polynomial
A discussion about a multivariate version of Horner's rule features in the comments of this question. This might be useful to minimize the number of matrix multiplications.
Handle the fact that in a matrix polynomial x*y is different from y*x, because matrix multiplication is non-commutative. Apparently the poly functions in sympy do not support non-commutative variables, but you can define symbols with commutative=False and there seems to be a way forward there
About the 4th point above, there is support for Matrix expressions in SymPy, and that can help here:
from sympy import *
from sympy.matrices import MatrixSymbol
A = Matrix([[1,2],[3,4]])
B = Matrix([[2,3],[3,4]])
X = MatrixSymbol('X',2,2)
Y = MatrixSymbol('Y',2,2)
I = eye(X.rows)
p = X**2 * Y + Y * X ** 2 + X ** 3 - I
display(p)
p = p.subs({X: A, Y: B}).doit()
display(p)
Output (after the substitution):
Matrix([[139, 201], [258, 368]])
For more developments on this feature follow #18555
Consider fast matrix multiplication of X D X^T for X an n by m matrix and D an m by m diagonal matrix. Here m >> n (suppose n is around 1000 and m around 100000). In my application, X is a fixed matrix and the values of D can change at every iteration.
What would be a fast way to calculate this? At the moment I am just doing simple multiplication in C++.
EDIT: I should clarify my current procedure; it is not "simple multiplication". In particular, I am column-wise multiplying X by the square roots of the diagonal entries of D to get A := X D^{1/2}. Then I directly calculate A * t(A) (which is the multiplication of an n by m matrix with its transpose).
Thank you.
If you know that D is diagonal, then you can just do simple multiplication. Hopefully, you are not multiplying the zeros.
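For what it's worth, here is a minimal NumPy sketch of the column-scaling procedure described in the question's EDIT (small sizes and made-up data for the demo; the real problem has n around 1000 and m around 100000, and the square root assumes the diagonal of D is nonnegative):

import numpy as np

n, m = 4, 10  # small stand-ins for n ~ 1000, m ~ 100000
rng = np.random.default_rng(0)
X = rng.standard_normal((n, m))
d = rng.random(m)  # diagonal entries of D, nonnegative here

# A = X D^{1/2}: scale column j of X by sqrt(d_j) via broadcasting,
# so the zeros of D are never multiplied explicitly.
A = X * np.sqrt(d)
result = A @ A.T  # X D X^T as an n x m times m x n product

# Equivalent without the square root, valid for any diagonal D:
assert np.allclose(result, (X * d) @ X.T)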
I'm trying to estimate a 3D rotation matrix between two sets of points, and I want to do that by computing the SVD of the covariance matrix, say C, as follows:
U,S,V = svd(C)
R = V * U^T
C in my case is 3x3. I am using Eigen's JacobiSVD module for this, and I only recently found out that it stores matrices in column-major format. So that has had me confused.
So, when using Eigen, should I do:
V*U.transpose() or V.transpose()*U ?
Additionally, the rotation is accurate up to changing the sign of the column of U corresponding to the smallest singular value, such that the determinant of R is positive. Let's say the index of the smallest singular value is minIndex.
So when the determinant is negative, because of the column-major confusion, should I do:
U.col(minIndex) *= -1 or U.row(minIndex) *= -1
Thanks!
This has nothing to do with matrices being stored row-major or column-major. svd(C) gives you:
U * S.asDiagonal() * V.transpose() == C
so the closest rotation R to C is:
R = U * V.transpose();
If you want to apply R to a point p (stored as column-vector), then you do:
q = R * p;
Now whether you are interested in R or its inverse R.transpose() == V * U.transpose() is up to you.
The singular values scale the columns of U, so you should negate the column (U.col(minIndex) *= -1) to make the determinant positive. Again, nothing to do with storage layout.
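Here is a minimal NumPy sketch of the whole recipe (NumPy, like Eigen, returns the singular values in decreasing order, so the smallest one belongs to the last column):

import numpy as np

def closest_rotation(C):
    # C == U @ np.diag(S) @ Vt
    U, S, Vt = np.linalg.svd(C)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        # Negate the column of U for the smallest singular value
        # (the last column) so that det(R) becomes +1.
        U[:, -1] *= -1
        R = U @ Vt
    return R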
How can I multiply an NxN matrix A in Fortran by itself x times to get its power without amplifying rounding errors?
If A can be diagonalized as
A P = P D,
where P is some NxN matrix (each column is called 'eigenvector'), and D is an NxN diagonal matrix (the diagonal elements are called 'eigenvalues'), then
A = P D P^{-1},
where P^{-1} is the inverse matrix of P. Therefore the second power of A results in
A A = P D P^{-1} P D P^{-1} = P D D P^{-1}.
Repeating multiplication of A for x times yields
A^x = P D^x P^{-1}.
Note here that D^x is still a diagonal matrix. Let the i-th diagonal element of D be D_{ii}. Then, the i-th diagonal element of D^x is
[D^x]_{ii} = (D_{ii})^x.
That is, the elements of D^x are simply the x-th powers of the elements of D, and can be computed without much rounding error, I guess. Now, multiply this D^x by P from the left and by P^{-1} from the right to obtain A^x. The error in A^x depends on the errors in P and P^{-1}, which can be computed by subroutines in numerical packages such as LAPACK.
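As a sketch, the same recipe in NumPy terms (assuming A is diagonalizable; np.linalg.eig plays the role of the LAPACK eigensolver here):

import numpy as np

def matrix_power_eig(A, x):
    # A = P diag(w) P^{-1}  =>  A**x = P diag(w**x) P^{-1}
    w, P = np.linalg.eig(A)
    return P @ np.diag(w**x) @ np.linalg.inv(P)

A = np.array([[2.0, 1.0], [1.0, 2.0]])
assert np.allclose(matrix_power_eig(A, 5), np.linalg.matrix_power(A, 5))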
As alluded to in the answer by norio, one can in general employ the Jordan (or alternatively Schur) decomposition and proceed in a similar fashion; for details (including a brief error analysis) see, e.g., Chapter 11 of Matrix Computations by Golub and Van Loan.
I am trying to test the LAPACK routine CGESV, but I am encountering an issue: I want to reuse my A matrix in other parts of my code, but it changes when I pass it into the routine. The definition of A:
(input/output) COMPLEX array, dimension (LDA,N)
On entry, the N-by-N coefficient matrix A.
On exit, the factors L and U from the factorization
A = P*L*U; the unit diagonal elements of L are not stored.
Is there a way to keep the value of A after passing it into CGESV, short of creating a temp variable to store the value?
As you already noticed, the A matrix is overwritten with its P*L*U decomposition. If the size of the matrix is not too big, you can copy the contents of the A matrix and use the copy for the decomposition.
CALL CCOPY(N*N, A, 1, A_NEW, 1)
If the matrix size is so big that you cannot keep two copies of it in memory, you can perform the math operations with the decomposed matrix instead. For example, to compute y = A*x (note that A = P*L*U means applying U, then L, then P):
* y = x
CALL CCOPY(N, X, 1, Y, 1)
* y = U * y
CALL CTRMV('Upper', 'No transpose', 'Non-unit', N, A, N, Y, 1)
* y = L * y
CALL CTRMV('Lower', 'No transpose', 'Unit', N, A, N, Y, 1)
* y = P * y (CLASWP is the complex variant of DLASWP; INCX = -1
* applies the interchanges in reverse order, i.e. P rather than P**T)
CALL CLASWP( 1, Y, N, 1, N, IPIV, -1 )
The only additional memory needed is the integer array IPIV of length N, which CGESV returns anyway.
The routines do their work in-place, so the only way to keep the original array is to make a copy.
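For comparison, a minimal SciPy sketch: the higher-level wrapper lu_factor defaults to overwrite_a=False, i.e. it makes that copy for you and leaves A intact:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([1.0, 2.0])

lu, piv = lu_factor(A)  # factorizes an internal copy by default
x = lu_solve((lu, piv), b)

assert np.allclose(A @ x, b)  # A is unchanged and still usable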