Matrix diagonalization and basis change with geev - Fortran

I want to diagonalize a matrix and then be able to do basis changes. The aim in the end is to do matrix exponentiation, with exp(A) = P.exp(D).P^{-1}.
I use sgeev to diagonalize A. If I am not mistaken (and I probably am, since it's not working), sgeev gives me P in the vr matrix, and P^{-1} is transpose(vl). The diagonal matrix D can be reconstituted from the eigenvalues wr.
The problem is that when I verify the transformation by computing P * D * P^{-1}, I do not get A back.
Here's my code:
integer :: i,n, info
real::norm
real, allocatable:: A(:,:), B(:,:), C(:,:),D(:,:)
real, allocatable:: wr(:), wi(:), vl(:, :), vr(:, :), work(:)

n=3
allocate(vr(n,n), vl(n,n), wr(n), wi(n), work(4*n))
allocate(A(n,n),B(n,n), C(n,n),D(n,n))

A(1,:)=(/1,0,1/)
A(2,:)=(/0,2,1/)
A(3,:)=(/0,3,1/)

call sgeev('V','V',n,A,n,wr,wi,vl,n,vr,n,work,size(work,1),info)

print*,'eigenvalues'
do i=1,n
  print*,i,wr(i),wi(i)
enddo

D=0.0
D(1,1)=wr(1)
D(2,2)=wr(2)
D(3,3)=wr(3)

C = matmul(D,transpose(vl))
B = matmul(vr,C)

print*,'A'
do i=1, n
  print*, B(i,:)
enddo
The printed result is:
eigenvalues
1 1.00000000 0.00000000
2 3.30277562 0.00000000
3 -0.302775621 0.00000000
A
0.688247263 0.160159975 0.764021933
0.00000000 1.66581571 0.817408621
0.00000000 2.45222616 0.848407149
The printed matrix is not the original A, not even up to an overall factor.
I suppose I am mistaken somewhere, but I checked the eigenvectors: matmul(A,vr) equals matmul(vr,D), and matmul(transpose(vl),A) equals matmul(D,transpose(vl)), so both sets look correct.
Where am I wrong?

The problem is that transpose(vl) is not the inverse of vr. The normalisation used by sgeev is that each eigenvector (each column of vl or vr) is individually normalised to unit Euclidean norm. This means that dot_product(vl(:,i), vr(:,j)) is zero for i /= j, but is in general not equal to 1 for i == j.
If you want to get P^{-1}, you need to scale each column of vl by a factor of 1/dot_product(vl(:,i), vr(:,i)) before transposing it.
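A minimal sketch of that rescaling (assuming all eigenvalues are real, as in this example, and that a copy of the original matrix is kept, since sgeev overwrites its input; A0 and Pinv are names introduced here, not in the original code):
real, allocatable :: A0(:,:), Pinv(:,:)   ! new names for this sketch only
allocate(A0(n,n), Pinv(n,n))
A0 = A                                    ! keep a copy: sgeev overwrites A

call sgeev('V','V',n,A,n,wr,wi,vl,n,vr,n,work,size(work),info)

! scale each left eigenvector so that dot_product(vl(:,i), vr(:,i)) = 1
do i = 1, n
  vl(:,i) = vl(:,i) / dot_product(vl(:,i), vr(:,i))
end do
Pinv = transpose(vl)

! rebuild D and check P * D * P^{-1} against the saved copy
D = 0.0
do i = 1, n
  D(i,i) = wr(i)
end do
B = matmul(vr, matmul(D, Pinv))
print *, 'max deviation from A:', maxval(abs(B - A0))
With this scaling, matmul(Pinv, vr) is numerically the identity, so B should reproduce A0 to single-precision round-off.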

Related

Wrong eigenvector with MKL sgeev

I want to compute the eigenvalues and eigenvectors of a matrix. I'm using sgeev from MKL LAPACK.
I have this very simple test code:
integer :: i,n, info
real, allocatable:: A(:,:), B(:,:), C(:,:)
real, allocatable:: wr(:), wi(:), vl(:, :), vr(:, :), work(:)

n=3
allocate(vr(n,n), vl(n,n), wr(n), wi(n), work(4*n))
allocate(A(n,n),B(n,n), C(n,n))

A(1,:)=(/-1.0,3.0,-1.0/)
A(2,:)=(/-3.0,5.0,-1.0/)
A(3,:)=(/-3.0,3.0,1.0/)

call sgeev('V','V',n,A,n,wr,wi,vl,n,vr,n,work,size(work,1),info)

print*,info
do i=1,n
  print*,i,wr(i),wi(i)
enddo

print*,'vr'
do i=1, n
  print*, vr(i,:)
enddo

print*,'vl'
do i=1, n
  print*, vl(i,:)
enddo
It gives the right eigenvalues (2, 2, 1) but the wrong eigenvectors.
I have:
vr
-0.577350259 0.557844639 -0.539340019
-0.577350557 0.704232574 -0.273908198
-0.577349961 0.439164847 0.796295524
vl
-0.688247085 -0.617912114 -0.815013587
0.688247383 0.771166325 0.364909053
-0.229415640 -0.153254643 0.450104564
when vr should be
-1 1 1
0 1 1
3 0 1
What am I doing wrong?
Your matrix is degenerate (it has two equal eigenvalues), so any linear combination of the two corresponding eigenvectors is again an eigenvector, and sgeev may return an arbitrary such combination.
Also, sgeev normalises the eigenvectors it returns, whereas the eigenvectors you have given are not normalised.
The first eigenvalue given is 1, and the corresponding eigenvector is the first column of vr, l1=(-0.57..., -0.57..., -0.57...). This is proportional to the third eigenvector you have given, (1, 1, 1).
The second and third eigenvalues are both 2. The corresponding eigenvectors are the second and third columns of vr, l2=(0.55..., 0.70..., 0.43...) and l3=(-0.53..., -0.27..., 0.79...). Taking 0.27...*l2 + 0.70...*l3 gives (-0.22..., 0, 0.68...), which is proportional to (-1, 0, 3), and taking 0.79...*l2 - 0.43...*l3 gives (0.68..., 0.68..., 0), which is proportional to (1, 1, 0).
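As a quick standalone check (a sketch written for this answer, not part of the original code; c1 and c2 are arbitrary coefficients introduced here), any combination of those two columns is still an eigenvector for eigenvalue 2:
program check_degenerate
  implicit none
  real :: A(3,3), l2(3), l3(3), v(3), c1, c2
  A(1,:) = (/-1.0, 3.0, -1.0/)
  A(2,:) = (/-3.0, 5.0, -1.0/)
  A(3,:) = (/-3.0, 3.0,  1.0/)
  ! second and third columns of vr as returned by sgeev
  l2 = (/ 0.557844639, 0.704232574, 0.439164847/)
  l3 = (/-0.539340019,-0.273908198, 0.796295524/)
  c1 = 0.3; c2 = -1.2                ! any coefficients will do
  v = c1*l2 + c2*l3
  print *, 'residual:', maxval(abs(matmul(A, v) - 2.0*v))
end program check_degenerate
The residual is at single-precision round-off, which confirms that the columns returned by sgeev span the same degenerate eigenspace as (-1, 0, 3) and (1, 1, 0).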

Sign issues with Fortran Householder Transformation QR Decomposition

I am having trouble implementing the QR decomposition using the Householder reflection algorithm. I am getting the right numbers for the decomposition, but the signs are incorrect. I can't seem to pin down the cause, since I can't find good pseudocode for the algorithm elsewhere on the internet. I have a feeling it has something to do with how I am choosing the sign for the tau variable, but I am following the pseudocode given in my textbook.
I have chosen to write this algorithm as a Fortran subroutine to prepare for an upcoming test on this material, and for now, it only computes the upper triangular matrix and not Q. Here is the output that this algorithm gives me:
-18.7616634 -9.80723381 -15.7768545 -11.0864372
0.00000000 -9.99090481 -9.33576298 -7.53412533
0.00000000 0.00000000 -5.99453259 -9.80128098
0.00000000 0.00000000 0.00000000 -0.512612820
and here is the correct output:
18.7616634 9.80723286 15.7768517 11.0864372
0.00000000 9.99090672 9.33576488 7.53412533
0.00000000 0.00000000 5.99453354 9.80128098
0.00000000 0.00000000 0.00000000 0.512614250
So it seems to be basically just a sign issue, because the values themselves are otherwise correct.
! QR Decomposition with Householder reflections
! Takes a square N x N matrix A
! upper triangular matrix R is stored in A
SUBROUTINE qrdcmp(A, N)
  REAL, DIMENSION(10,10) :: A, Q, I_N, uu
  REAL gamma, tau, E
  INTEGER N, &  ! value of n for n x n matrix
          I,J,K ! loop indices
  E = EPSILON(E) ! smallest value recognized by compiler for real
  ! Identity matrix initialization
  DO I=1,N
    DO J=1,N
      IF(I == J) THEN
        I_N(I,J) = 1.0
      ELSE
        I_N(I,J) = 0.0
      END IF
    END DO
  END DO
  ! Main loop
  DO K=1,N
    IF(ABS(MAXVAL(A(K:N,K))) < E .and. ABS(MINVAL(A(K:N,K))) < E) THEN
      gamma = 0
    ELSE
      tau = 0
      DO J=K,N
        tau = tau + A(J,K)**2.0
      END DO
      tau = tau**(0.5) ! tau currently holds norm of the vector
      IF(A(K,K) < 0) tau = -tau
      A(K,K) = A(K,K) + tau
      gamma = A(K,K)/tau
      DO J=K+1,N
        A(J,K) = A(J,K)/A(K,K)
      END DO
      A(K,K) = 1
    END IF
    DO I=K,N
      DO J=K,N
        uu(I,J) = gamma*A(I,K)*A(J,K)
      END DO
    END DO
    Q(K:N,K:N) = -1*I_N(K:N,K:N) + uu(K:N,K:N)
    A(K:N,K+1:N) = MATMUL(TRANSPOSE(Q(K:N,K:N)), A(K:N,K+1:N))
    A(K,K) = -tau
  END DO
END SUBROUTINE qrdcmp
Any insight would be greatly appreciated. Also, I apologize if my code is not as easily readable as it could be.
In the construction of a reflector, you have the choice of two bisectors between the current column vector and the associated basis vector. Here this choice is made in the line IF(A(K,K) < 0) tau = -tau, as you already identified. "Unfortunately", the stable choice, which avoids cancellation issues, usually also produces a sign flip in R, as you observed. For instance, the decomposition of a matrix close to I will produce a result close to Q,R = -I,-I. If you want the R factor with a positive diagonal, you have to shift the signs from R to Q in an extra step after completing the decomposition.
LAPACK had a short period where it used the choice that produces a positive diagonal by default; this caused some strange errors/numerical-stability problems where there previously were none.
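A minimal sketch of that sign-shifting step, assuming Q and R are held as explicit n x n arrays with A = matmul(Q, R) (the posted subroutine does not build Q explicitly yet, so this is illustrative only):
! Flip signs so that R ends up with a non-negative diagonal.
do k = 1, n
  if (R(k,k) < 0.0) then
    R(k,k:n) = -R(k,k:n)   ! negate row k of R (only columns k:n are nonzero)
    Q(:,k)   = -Q(:,k)     ! negate column k of Q to compensate
  end if
end do
! The product matmul(Q, R) is unchanged because each pair of sign flips cancels.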

Evaluating the fast Fourier transform of Gaussian function in FORTRAN using FFTW3 library

I am trying to write a FORTRAN code to evaluate the fast Fourier transform of the Gaussian function f(r)=exp(-(r^2)) using FFTW3 library. As everyone knows, the Fourier transform of the Gaussian function is another Gaussian function.
I consider evaluating the Fourier-transform integral of the Gaussian function in spherical coordinates.
Hence the resulting integral simplifies to the integral of [r*exp(-(r^2))*sin(kr)]dr.
I wrote the following Fortran code to evaluate the discrete sine transform (DST), which is the discrete Fourier transform (DFT) of a purely real input array. The DST is performed by C_FFTW_RODFT00 from FFTW3, taking into account that the discrete values in position space are r=i*delta (i=1,2,...,1024) and that the input array for the DST is the function r*exp(-(r^2)), NOT the Gaussian itself. The sine function in the integral of [r*exp(-(r^2))*sin(kr)]dr comes from the integration over the spherical coordinates; it is NOT the imaginary part of exp(ik.r) that appears when taking the analytic Fourier transform in general.
However, the result is not a Gaussian function in the momentum space.
Module FFTW3
  use, intrinsic :: iso_c_binding
  include 'fftw3.f03'
end module

program sine_FFT_transform
  use FFTW3
  implicit none
  integer, parameter :: dp=selected_real_kind(8)
  real(kind=dp), parameter :: pi=acos(-1.0_dp)
  integer, parameter :: n=1024
  real(kind=dp) :: delta, k
  real(kind=dp) :: numerical_F_transform
  integer :: i
  type(C_PTR) :: my_plan
  real(C_DOUBLE), dimension(1024) :: y
  real(C_DOUBLE), dimension(1024) :: yy, yk
  integer(C_FFTW_R2R_KIND) :: C_FFTW_RODFT00

  my_plan= fftw_plan_r2r_1d(1024,y,yy,FFTW_FORWARD, FFTW_ESTIMATE)
  delta=0.0125_dp
  do i=1, n !inserting the input one-dimension position function
    y(i)= 2*(delta)*(i-1)*exp(-((i-1)*delta)**2)
    ! I multiplied by 2 due to the definition of C_FFTW_RODFT00 in FFTW3
  end do
  call fftw_execute_r2r(my_plan, y,yy)
  do i=2, n
    k = (i-1)*pi/n/delta
    yk(i) = 4*pi*delta*yy(i)/2 !I divide by 2 due to the definition of
                               !C_FFTW_RODFT00
    numerical_F_transform=yk(i)/k
    write(11,*) i,k,numerical_F_transform
  end do
  call fftw_destroy_plan(my_plan)
end program
Executing the previous code gives a result which is not a Gaussian function in momentum space.
Can anyone help me understand what the problem is? I guess the problem is mainly due to FFTW3. Maybe I did not use it properly, especially concerning the boundary conditions.
Looking at the related pages on the FFTW site (Real-to-Real Transforms, transform kinds, Real-odd DFT (DST)) and the header file for Fortran, it seems that FFTW expects FFTW_RODFT00 etc. rather than FFTW_FORWARD for specifying the kind of real-to-real transform. For example,
! my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_FORWARD, FFTW_ESTIMATE )
my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_RODFT00, FFTW_ESTIMATE )
performs the "type-I" discrete sine transform (DST-I) shown in the above page. This modification seems to fix the problem (i.e., makes the Fourier transform a Gaussian with positive values).
The following is a slightly modified version of OP's code to experiment the above modification:
! ... only the modified part is shown...
real(dp) :: delta, k, r, fftw, num, ana
integer :: i, j, n
type(C_PTR) :: my_plan
real(C_DOUBLE), allocatable :: y(:), yy(:)

delta = 0.0125_dp ; n = 1024 ! rmax = 12.8
! delta = 0.1_dp  ; n = 128  ! rmax = 12.8
! delta = 0.2_dp  ; n = 64   ! rmax = 12.8
! delta = 0.4_dp  ; n = 32   ! rmax = 12.8

allocate( y( n ), yy( n ) )

! my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_FORWARD, FFTW_ESTIMATE )
my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_RODFT00, FFTW_ESTIMATE )

! Loop over r-grid
do i = 1, n
  r = i * delta ! (2-a)
  y( i )= r * exp( -r**2 )
end do

call fftw_execute_r2r( my_plan, y, yy )

! Loop over k-grid
do i = 1, n
  ! Result of FFTW
  k = i * pi / ((n + 1) * delta) ! (2-b)
  fftw = 4 * pi * delta * yy( i ) / k / 2 ! the last 2 due to RODFT00

  ! Numerical result via quadrature
  num = 0
  do j = 1, n
    r = j * delta
    num = num + r * exp( -r**2 ) * sin( k * r )
  enddo
  num = num * 4 * pi * delta / k

  ! Analytical result
  ana = sqrt( pi )**3 * exp( -k**2 / 4 )

  ! Output
  write(10,*) k, fftw
  write(20,*) k, num
  write(30,*) k, ana
end do
Compile (with gfortran-8.2 + FFTW3.3.8 + OSX10.11):
$ gfortran -fcheck=all -Wall sine.f90 -I/usr/local/Cellar/fftw/3.3.8/include -L/usr/local/Cellar/fftw/3.3.8/lib -lfftw3
If we use FFTW_FORWARD as in the original code, we get a spectrum with a negative lobe (where fort.10, fort.20, and fort.30 correspond to the FFTW, quadrature, and analytical results). Modifying the code to use FFTW_RODFT00 gives a positive, Gaussian-shaped result instead, so the modification seems to be working (but please see below for the grid definition).
Additional notes
I have slightly modified the grid definition for r and k in my code (Lines (2-a) and (2-b)), which is found to improve the accuracy. But I'm still not sure whether the above definition matches the definition used by FFTW, so please read the manual for details...
The fftw3.f03 header file gives the interface for fftw_plan_r2r_1d
type(C_PTR) function fftw_plan_r2r_1d(n,in,out,kind,flags) bind(C, name='fftw_plan_r2r_1d')
  import
  integer(C_INT), value :: n
  real(C_DOUBLE), dimension(*), intent(out) :: in
  real(C_DOUBLE), dimension(*), intent(out) :: out
  integer(C_FFTW_R2R_KIND), value :: kind
  integer(C_INT), value :: flags
end function fftw_plan_r2r_1d
(Because of no TeX support, this part is very ugly...) The integral of 4 pi r^2 * exp(-r^2) * sin(kr)/(kr) for r = 0 -> infinity is pi^(3/2) * exp(-k^2 / 4) (obtained from Wolfram Alpha, or by noting that this is actually the 3-D Fourier transform of exp(-(x^2 + y^2 + z^2)) by exp(-i*(k1 x + k2 y + k3 z)) with k = (k1,k2,k3)). So, although a bit counter-intuitive, the result becomes a positive Gaussian.
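Written out in LaTeX notation, the identity above is
\int_0^{\infty} 4\pi r^2 \, e^{-r^2} \, \frac{\sin(kr)}{kr} \, dr = \pi^{3/2} \, e^{-k^2/4}.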
I guess the r-grid can be chosen much coarser (e.g. delta up to 0.4), which gives almost the same accuracy as long as it covers the frequency domain of the transformed function (here exp(-r^2)).
Of course there are negative components in the real part of the FFT of a limited Gaussian spectrum. You are just using the real part of the transform. So your plot is absolutely correct.
You seem to be mistaking the real part for the magnitude, which of course would not be negative. For that you would need to use fftw_plan_dft_r2c_1d and then calculate the absolute values of the complex coefficients. Or you might be mistaking the Fourier transform for a limited DFT.
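For illustration only, a rough sketch of that r2c route (this is not the OP's spherical-coordinate setup; it assumes the same FFTW3 module as in the question, and the array names in_r and out_c are made up for this example):
real(C_DOUBLE), dimension(1024) :: in_r
complex(C_DOUBLE_COMPLEX), dimension(1024/2+1) :: out_c
type(C_PTR) :: plan
integer :: i

plan = fftw_plan_dft_r2c_1d(1024, in_r, out_c, FFTW_ESTIMATE)
! ... fill in_r with the sampled function ...
call fftw_execute_dft_r2c(plan, in_r, out_c)
do i = 1, size(out_c)
  write(12,*) i, abs(out_c(i))   ! magnitude of each complex coefficient, never negative
end do
call fftw_destroy_plan(plan)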
You might want to check here to convince yourself of the correctness of your calculation above:
http://docs.mantidproject.org/nightly/algorithms/FFT-v1.html
Please do keep in mind that the plots on the above page are shifted, so that the 0 frequency is in the middle of the spectrum.
Quoting your own integrand, the numeric integration of [r*exp(-(r^2))*sin(kr)]dr would have negative components for all k>1 if normalised to 0 at the highest frequency.
TLDR: Your plot is perfectly fine and in line with discrete, finite-length Fourier analysis.

Discrete Fourier Transform seems to be printing incorrect answers?

I am attempting to write a program that calculates the discrete Fourier transform of a set of given data. I've sampled a sine wave, so my set is (pi/2, pi, 3*pi/2, 2*pi). Here is my program:
program DFT
  implicit none
  integer :: k, N, x, y, j, r, l
  integer, parameter :: dp = selected_real_kind(15,300)
  real, allocatable,dimension(:) :: h, rst
  integer, dimension(:,:), allocatable :: W
  real(kind=dp) :: pi

  open(unit=100, file="dft.dat",status='replace')
  N = 4
  allocate(h(N))
  allocate(rst(N))
  allocate(W(-N/2:N/2,1:N))
  pi = 3.14159265359

  do k=1,N
    h(k) = k*(pi*0.5)
  end do

  do j = -N/2,N/2
    do k = 1, N
      W(j,k) = EXP((2.0_dp*pi*cmplx(0.0_dp,1.0_dp)*j*k)/N)
    end do
  end do

  rst = matmul(W,h)
  !print *, h, w
  write(100,*) rst
end program
And this prints out the array rst as:
0.00000000 0.00000000 15.7079639 0.00000000 0.00000000
Using an online calculator, the results should be:
15.7+0j -3.14+3.14j -3.14+0j -3.14-3.14j
I'm not sure why rst is 1 entry too long either.
Can anyone spot why it's printing out 0 for 3/4 of the results? I notice that 15.7 appears in both the actual answers and my result.
Thank you
Even though the question has been answered and accepted, the given program has so many problems that I have to comment on them...
The input given is not a sine wave, it's a linear function of time. Kind of like a 1-based ramp input.
For DFTs the indices normally are considered to go from 0:N-1, not 1:N.
For W the Nyquist frequency is represented twice, as -N/2 and N/2. Again it would have been normal to number the rows 0:N-1. BTW, this is why you have an extra element in your rst vector.
pi is double precision but only initialized to 12 significant figures. It's hard to tell if there's a typo in your value of pi, which is why many would use 4*atan(1.0_dp) or acos(-1.0_dp).
Notice that h(N) is actually going to end up as the zero-time input, which is one reason the whole world indexes DFT vectors from zero.
The expression cmplx(0.0_dp,1.0_dp) is sort of futile because the CMPLX intrinsic always returns a single precision result if the third optional KIND= argument is not present. As a complex literal, (0.0_dp,1.0_dp) would be double precision. However, you could as well use (0,1) because it's exactly representable in single precision and would be converted to double precision when it gets multiplied by the growing product on its left. Also 2.0_dp could have been represented successfully as 2 with less clutter.
The expression EXP((2.0_dp*pi*cmplx(0.0_dp,1.0_dp)*j*k)/N) is appropriate for inverse DFT, disregarding normalization. Thus I would have written the whole thing more cleanly and correctly as EXP(-2*pi*(0,1)*j*k/N). Then the output should have been directly comparable to what the online calculator printed out.
Fortran does complex numbers for you but you must declare the appropriate variables as complex. Try
complex, allocatable,dimension(:) :: rst
complex, dimension(:,:), allocatable :: W
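Putting those points together, a corrected sketch of the whole program could look like the following (my rewrite, not the original poster's code; it uses 0-based indexing, complex W and rst, and the forward-DFT sign):
program DFT
  implicit none
  integer, parameter :: dp = selected_real_kind(15,300)
  integer :: j, k, N
  real(dp) :: pi
  real(dp), allocatable :: h(:)
  complex(dp), allocatable :: rst(:), W(:,:)

  N = 4
  pi = acos(-1.0_dp)
  allocate(h(0:N-1), rst(0:N-1), W(0:N-1,0:N-1))

  ! Sample input (still the ramp from the original post, shifted to 0-based time).
  do k = 0, N-1
    h(k) = (k+1)*(pi*0.5_dp)
  end do

  ! Forward DFT matrix: note the minus sign in the exponent.
  do j = 0, N-1
    do k = 0, N-1
      W(j,k) = exp(-2*pi*(0,1)*j*k/N)
    end do
  end do

  rst = matmul(W, cmplx(h, kind=dp))

  ! Expected output (cf. the online calculator): 15.7+0j, -3.14+3.14j, -3.14+0j, -3.14-3.14j
  do j = 0, N-1
    print *, j, rst(j)
  end do
end program DFT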

zgeev giving eigenvectors which are not orthogonal

I am trying to diagonalize a matrix using zgeev. It gives the correct eigenvalues, but the eigenvectors are not orthogonal.
program complex_diagonalization
  implicit none
  integer,parameter :: N=3
  integer::i,j
  integer,parameter :: LDA=N,LDVL=N,LDVR=N
  real(kind=8),parameter::q=dsqrt(2.0d0),q1=1.0d0/q
  integer,parameter :: LWMAX=1000
  integer :: INFO,LWORK
  real(kind=8) :: RWORK(2*N)
  complex(kind=8) :: B(LDA,N),VL(LDVL,N),VR(LDVR,N),W(N),WORK(LWMAX)
  external::zgeev

  !matrix defining
  B(1,1)=0.0d0;B(1,2)=-q1;B(1,3)=-q1
  B(2,1)=-q1;B(2,2)=0.50d0;B(2,3)=-0.50d0
  B(3,1)=-q1;B(3,2)=-0.5d0;B(3,3)=0.50d0

  LWORK=-1
  CALL ZGEEV('Vectors','Vectors',N,B,LDA,W,VL,LDVL,VR,LDVR,WORK,LWORK,RWORK,INFO)
  LWORK=MIN(LWMAX,INT(WORK(1)))
  CALL ZGEEV('Vectors','Vectors',N,B,LDA,W,VL,LDVL,VR,LDVR,WORK,LWORK,RWORK,INFO)
  IF( INFO.GT.0 ) THEN
    WRITE(*,*)'The algorithm failed to compute eigenvalues.'
    STOP
  END IF

  !eigenvalues
  do i=1,N
    WRITE(*,*)W(i)
  enddo
  !eigenvectors
  do i=1,N
    WRITE(*,*)(VR(i,j),j=1,N)
  ENDDO
end
and the results I am getting are these:
eigenvalues:
( 0.99999999999999978,0.0000000000000000)
(-0.99999999999999978,0.0000000000000000)
( 0.99999999999999978,0.0000000000000000)
eigenvectors
(0.70710678118654746,0.0000000000000000)
(-0.50000000000000000,0.0000000000000000)
(-0.50000000000000000,0.0000000000000000)
(0.70710678118654746,0.0000000000000000)
(0.50000000000000000,0.0000000000000000)
(0.50000000000000000,0.0000000000000000)
(-0.11982367636731203,0.0000000000000000)
( 0.78160853028734012,0.0000000000000000)
(-0.61215226207528295,0.0000000000000000)
You can see that the third eigenvector is not orthogonal to one of the other two. What I expect is that, in the third eigenvector, the first entry should be zero and the second entry should be minus the third; because it is a unit vector, each of those entries should have magnitude 0.707.
A real symmetric matrix has three orthogonal eigenvectors if the three eigenvalues are unique; only the eigenvectors corresponding to distinct eigenvalues have to be orthogonal. https://math.stackexchange.com/a/1368948/134138
The Hermitian-specialized routine ZHEEV should guarantee orthogonality of the eigenvectors, as suggested by Ian Bush. Or, in your case, you can also consider DSYEV (because your matrix is real).
The situation is well described in this post from LAPACK Forum http://icl.cs.utk.edu/lapack-forum/archives/lapack/msg01352.html
From the documentation:
DSYEV:
* On exit, if JOBZ = 'V', then if INFO = 0, A contains the
* orthonormal eigenvectors of the matrix A.
ZHEEV:
* On exit, if JOBZ = 'V', then if INFO = 0, A contains the
* orthonormal eigenvectors of the matrix A.
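For completeness, a minimal sketch of the DSYEV route for this particular matrix (a rewrite for illustration, not the original program; it reuses the same two-step workspace query as the ZGEEV calls above):
program real_symmetric_diagonalization
  implicit none
  integer, parameter :: N = 3, LDA = N, LWMAX = 1000
  real(kind=8), parameter :: q1 = 1.0d0/sqrt(2.0d0)
  integer :: i, j, INFO, LWORK
  real(kind=8) :: B(LDA,N), W(N), WORK(LWMAX)
  external :: dsyev

  ! same real symmetric matrix as in the question
  B(1,:) = (/ 0.0d0, -q1,     -q1    /)
  B(2,:) = (/-q1,     0.50d0, -0.50d0/)
  B(3,:) = (/-q1,    -0.50d0,  0.50d0/)

  LWORK = -1
  CALL DSYEV('Vectors', 'Upper', N, B, LDA, W, WORK, LWORK, INFO)  ! workspace query
  LWORK = MIN(LWMAX, INT(WORK(1)))
  CALL DSYEV('Vectors', 'Upper', N, B, LDA, W, WORK, LWORK, INFO)

  ! On exit B holds orthonormal eigenvectors (one per column), W the eigenvalues.
  do i = 1, N
    write(*,*) 'eigenvalue', W(i), ' eigenvector', (B(j,i), j=1,N)
  end do
end program real_symmetric_diagonalization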