lapack stemr segmentation fault with a particular matrix - fortran

I am trying to find the first (smallest) k eigenvalues of a real symmetric tridiagonal matrix using the appropriate lapack routine.
I am new to both Fortran and the LAPACK library, but (d)stemr seemed to me a good choice, so I tried calling it but keep getting segmentation faults.
After some trials I noticed the problem was my input matrix, which has:
diagonal entries = 2 * (1 + a small variable correction of order 1e-5 to 1e-3)
subdiagonal entries all equal to -1 (if I use e.g. 0.95 instead, everything works)
I reduced the code to a single M(not)WE (minimal, non-working example) program, shown below.
So the questions are:
why is stemr failing with such a matrix, while e.g. stev works?
why a segmentation fault?
program mwe
implicit none
integer, parameter :: n = 10
integer, parameter :: iu = 3
integer :: k
double precision :: d(n), e(n)
double precision :: vals(n), vecs(n,iu)
integer :: m, ldz, nzc, lwk, liwk, info
integer, allocatable :: isuppz(:), iwk(:)
double precision, allocatable :: wk(:)
do k = 1, n
d(k) = +2d0 + ((k-5.5d0)*1d-2)**2
e(k) = -1d0 ! e(n) not really needed
end do
ldz = n
nzc = iu
allocate(wk(1), iwk(1), isuppz(2*iu))
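! workspace query: lwork = liwork = -1 (and nzc = -1); required sizes come back in wk(1) and iwk(1)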
! ifort -mkl gives SIGSEGV at this call <----------------
call dstemr( &
'V', 'I', n, d, e, 0d0, 0d0, 1, iu, &
m, vals, vecs, ldz, -1, isuppz, .true., &
wk, -1, iwk, -1, info)
lwk = ceiling(wk(1)); deallocate(wk); allocate(wk(lwk))
liwk = iwk(1); deallocate(iwk); allocate(iwk(liwk))
print *, info, lwk, liwk ! ok with gfortran
! gfortran -llapack gives SIGSEGV at this call <---------
call dstemr( &
'V', 'I', n, d, e, 0d0, 0d0, 1, iu, &
m, vals, vecs, ldz, nzc, isuppz, .true., &
wk, lwk, iwk, liwk, info)
end program
Compilers are invoked via:
gfortran [(GCC) 9.2.0]: gfortran -llapack -o o.x mwe.f90
ifort [(IFORT) 19.0.5.281 20190815]: ifort -mkl -o o.x mwe.f90

According to the manual, one issue seems to be that the argument TRYRAC needs to be a variable (rather than a constant), because it can be overwritten by dstemr(); writing into a literal .true. that the compiler may have placed in read-only memory is a likely explanation for the segmentation fault:
[in,out] TRYRAC : ... On exit, a .TRUE. TRYRAC will be set to .FALSE. if the matrix
does not define its eigenvalues to high relative accuracy.
So, for example, a modified code may look like:
logical :: tryrac
...
tryrac = .true.
call dstemr( &
'V', 'I', n, d, e, 0d0, 0d0, 1, iu, &
m, vals, vecs, ldz, -1, isuppz, tryrac, & !<--
wk, -1, iwk, -1, info)
...
tryrac = .true.
call dstemr( &
'V', 'I', n, d, e, 0d0, 0d0, 1, iu, &
m, vals, vecs, ldz, nzc, isuppz, tryrac, & !<--
wk, lwk, iwk, liwk, info)
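For completeness, here is a sketch of the full MWE with that change folded in (the only differences from the original program are the tryrac variable and a final print of the computed eigenvalues; whether anything else still needs adjusting may depend on the LAPACK implementation):
program mwe
  implicit none
  integer, parameter :: n = 10
  integer, parameter :: iu = 3
  integer :: k
  double precision :: d(n), e(n)
  double precision :: vals(n), vecs(n,iu)
  integer :: m, ldz, nzc, lwk, liwk, info
  integer, allocatable :: isuppz(:), iwk(:)
  double precision, allocatable :: wk(:)
  logical :: tryrac
  do k = 1, n
    d(k) = +2d0 + ((k-5.5d0)*1d-2)**2
    e(k) = -1d0 ! e(n) not really needed
  end do
  ldz = n
  nzc = iu
  allocate(wk(1), iwk(1), isuppz(2*iu))
  tryrac = .true. ! must be a variable: dstemr may overwrite it
  call dstemr( &
    'V', 'I', n, d, e, 0d0, 0d0, 1, iu, &
    m, vals, vecs, ldz, -1, isuppz, tryrac, &
    wk, -1, iwk, -1, info)
  lwk = ceiling(wk(1)); deallocate(wk); allocate(wk(lwk))
  liwk = iwk(1); deallocate(iwk); allocate(iwk(liwk))
  tryrac = .true.
  call dstemr( &
    'V', 'I', n, d, e, 0d0, 0d0, 1, iu, &
    m, vals, vecs, ldz, nzc, isuppz, tryrac, &
    wk, lwk, iwk, liwk, info)
  print *, info, vals(1:m)
end program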

Related

Fortran's MKL dgemm is giving zero result

In the following Fortran program I use Intel's MKL library to perform matrix multiplications using dgemm. Initially I used the matmul intrinsic and got correct results. When I translated the matmul calls to dgemm calls in the loop below, I got all-zero vectors instead of the correct output. I appreciate your help.
program spectral_norm
implicit none
!
integer, parameter :: n = 5500, dp = kind(0.0d0)
real(dp), allocatable :: A(:, :), u(:), v(:), Au(:), Av(:)
integer :: i, j
allocate(u(n), v(n), A(n, n), Au(n), Av(n))
do j = 1, n
do i = 1, n
A(i, j) = Ac(i, j)
end do
end do
u = 1
do i = 1, 10
call dgemm('N','N', n, 1, n, 1.0, A, n, u, n, 0.0, Au, n)
call dgemm('N','N', n, 1, n, 1.0, Au, n, A, n, 0.0, v, n)
call dgemm('N','N', n, 1, n, 1.0, A, n, v, n, 0.0, Av, n)
call dgemm('N','N', n, 1, n, 1.0, Av, n, A, n, 0.0, u, n)
!v = matmul(matmul(A, u), A)
!u = matmul(matmul(A, v), A)
end do
write(*, "(f0.9)") sqrt(dot_product(u, v) / dot_product(v, v))
contains
pure real(dp) function Ac(i, j) result(r)
integer, intent(in) :: i, j
r = 1._dp / ((i+j-2) * (i+j-1)/2 + i)
end function
end program spectral_norm
This gives NaN, while the correct output from matmul is 1.274224153.
Well, thank you all for your suggestions. I think I figured out the source of the error. The order of multiplication was reversed in two cases: A is n x n and both Au and Av are n x 1, so the products Au * A and Av * A do not conform; what matmul(Au, A) actually computes is A**T * Au. The corrected version below therefore uses the 'T' flag in those two dgemm calls, and it also passes double-precision constants (1._dp, 0._dp) to dgemm instead of default reals.
program spectral_norm
implicit none
!
integer, parameter :: n = 5500, dp = kind(0.d0)
real(dp), allocatable :: A(:,:), u(:), v(:), Au(:), Av(:)
integer :: i, j
allocate(u(n), v(n), A(n, n), Au(n), Av(n))
do j = 1, n
do i = 1, n
A(i, j) = Ac(i, j)
end do
end do
u = 1
do i = 1, 10
call dgemm('N', 'N', n, 1, n, 1._dp, A, n, u, n, 0._dp, Au, n)
call dgemm('T', 'N', n, 1, n, 1._dp, A, n, Au, n, 0._dp, v, n)
call dgemm('N', 'N', n, 1, n, 1._dp, A, n, v, n, 0._dp, Av, n)
call dgemm('T', 'N', n, 1, n, 1._dp, A, n, Av, n, 0._dp, u, n)
end do
write(*, "(f0.9)") sqrt(dot_product(u, v) / dot_product(v, v))
contains
pure real(dp) function Ac(i, j) result(r)
integer, intent(in) :: i, j
r = 1._dp / ((i+j-2) * (i+j-1)/2 + i)
end function
end program spectral_norm
This gives the correct results:
1.274224153
Elapsed time 0.5150000 seconds
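As an aside, each of these dgemm calls multiplies the matrix by a single column, which is exactly what the level-2 BLAS routine dgemv does. A minimal sketch of an equivalent loop body, assuming the same declarations (A, u, v, Au, Av, n, dp) as in the corrected program above:
do i = 1, 10
  call dgemv('N', n, n, 1._dp, A, n, u, 1, 0._dp, Au, 1)  ! Au = A*u
  call dgemv('T', n, n, 1._dp, A, n, Au, 1, 0._dp, v, 1)  ! v = A**T * Au
  call dgemv('N', n, n, 1._dp, A, n, v, 1, 0._dp, Av, 1)  ! Av = A*v
  call dgemv('T', n, n, 1._dp, A, n, Av, 1, 0._dp, u, 1)  ! u = A**T * Av
end do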

Eigenvector discontinuities in hermitian matrix diagonalization

I need to diagonalize a 2x2 Hermitian matrix that depends on a parameter x, which varies continuously. For the diagonalization I use EISPACK. When I plot the real and imaginary components of the eigenvectors as a function of x, I notice that they have discontinuities. The eigenvalue calculation is OK. When I plot the eigenvectors in Maxima, the solutions appear continuous. I need continuous eigenvectors because in the next step I will need to calculate their derivatives.
Below is the f77 code I use as a test (compiled with gfortran on MinGW).
program Eigenvalue
implicit none
integer n, m
parameter (n=2)
integer ierr, matz, i, j
double precision x, dx, xf, amp, xin
double precision w(n)
double precision Ar(n,n), Ai(n,n)
double precision xr(n,n), xi(n,n)
double precision fm1(2,n) ! f77
double precision fv1(n) ! f77
double precision fv2(n) ! f77
double complex psi1a, psi1b, psi2a, psi2b
m = 51
xf = 10.d0
xin = 0.0d0
amp = 2.d0
dx = (xf - xin)/(m-1)
do i = 1, m
x = dx*(i-m) + xf
Ar(1,1) = dsin(x)**2
Ar(1,2) = amp*dcos(x)
Ar(2,1) = amp*dcos(x)
Ar(2,2) = dcos(x)**2
Ai(1,1) = 0.0d0
Ai(1,2) = amp*dsin(x)
Ai(2,1) = -amp*dsin(x)
Ai(2,2) = 0.0d0
matz = 1
call ch ( n, n, ar, ai, w, matz, xr, xi, fv1, fv2, fm1, ierr ) !f77
write(20,*) x, w(1), w(2)
write(21,*) x, xr(1,1), xi(1,1)
write(22,*) x, xr(2,1), xi(2,1)
write(23,*) x, xr(1,2), xi(1,2)
write(24,*) x, xr(2,2), xi(2,2)
! autovetor 1
psi1a = cmplx(xr(1,1),xi(1,1))
psi1b = cmplx(xr(1,2),xi(1,2))
! autovetor 2
psi2a = cmplx(xr(2,1),xi(2,1))
psi2b = cmplx(xr(2,2),xi(2,2))
end do
end
While not really an answer, what follows is the code I used with LAPACK.
I used the latest versions of LAPACK and BLAS, with the following compiler options:
gfortran -Og -std=f2008 -Wall -Wextra {location_of_lapack}/liblapack.a {location_of_blas}/blas_LINUX.a main.f90 -o main
I'm compiling on Mac OS X with gfortran 6.3.0 from homebrew.
As Ian mentioned above, things like dcos are replaced with cos, and I have used the KIND= formulation to ensure the same precision.
Ian also mentioned above the arbitrary phase of the eigenvectors.
This problem is answered here; I have translated this solution into my code below.
The "magic" happens after the call to ZHEEV.
With this fix, I see no discontinuities.
program Eigenvalue
!> This can be used with the f2008 call
use, intrinsic :: iso_fortran_env
implicit none
!> dp contains the kind value for double precision.
!> Use below if compiling to f2008
integer, parameter :: dp = REAL64
!> Use below if compiling with f95 up and comment out iso_fortran_env
!>integer, parameter :: dp = SELECTED_REAL_KIND(15, 300)
!> Set wp to the desired precision.
integer, parameter :: wp = dp
integer, parameter :: n = 2
integer :: i, j, k, m
real(kind=wp) x, dx, xf, amp, xin
real(kind=wp), dimension(n) :: w
real(kind=wp), dimension(n, n) :: Ar, Ai
complex(kind=wp), dimension(n, n) :: A
complex(kind=wp), dimension(max(1,2*n-1)) :: WORK
integer, parameter :: lwork = max(1,2*n-1)
real(kind=wp), dimension(max(1, 3*n-2)) :: RWORK
integer :: info
complex(kind=wp) :: psi1a, psi1b, psi2a, psi2b
real(kind=wp) :: mag
m = 51
xf = 10.0_wp
xin = 0.0_wp
amp = 2.0_wp
if (m .eq. 1) then
dx = 0.0_wp
else
dx = (xf - xin)/(m-1)
end if
do i = 1, m
x = dx*(i-m) + xf
Ar(1,1) = sin(x)**2
Ar(1,2) = amp*cos(x)
Ar(2,1) = amp*cos(x)
Ar(2,2) = cos(x)**2
Ai(1,1) = 0.0_wp
Ai(1,2) = amp*sin(x)
Ai(2,1) = -amp*sin(x)
Ai(2,2) = 0.0_wp
do j = 1, n
do k = 1, n
A(j, k) = cmplx(Ar(j, k), Ai(j, k), kind=wp)
end do
end do
call ZHEEV('V', 'U', N, A, N, W, WORK, LWORK, RWORK, INFO)
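! Remove the arbitrary phase: divide each eigenvector by its first component, then renormalize to unit length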
do j = 1, n
A(:, j) = A(:, j) / A(1, j)
mag = sqrt(real(A(1, j)*conjg(A(1, j)))+ real(A(2, j)*conjg(A(2, j))))
A(:, j) = A(:, j)/mag
end do
psi1a = A(1, 1)
psi1b = A(1, 2)
psi2a = A(2, 1)
psi2b = A(2, 2)
end do
end program Eigenvalue

Segmentation fault using SCALAPACK in Fortran? No backtrace?

I'm trying to find the eigenvalues and eigenvectors of a Hermitian matrix using SCALAPACK and MPI in Fortran. For bug-squashing, I made this program as simple as possible, but am still getting a segmentation fault. Per the answers given to people with similar questions, I've tried changing all of my integers to integer*8, and all of my reals to real*8 or real*16, but I still get this issue. Most interestingly, I don't even get a backtrace for the segmentation fault: the program hangs up when trying to give me a backtrace and has to be aborted manually.
Also, please forgive my lack of knowledge -- I'm not familiar with most program-y things but I've done my best. Here is my code:
PROGRAM easydiag
IMPLICIT NONE
INCLUDE 'mpif.h'
EXTERNAL BLACS_EXIT, BLACS_GET, BLACS_GRIDEXIT, BLACS_GRIDINFO
EXTERNAL BLACS_GRIDINIT, BLACS_PINFO,BLACS_SETUP, DESCINIT
INTEGER,EXTERNAL::NUMROC,ICEIL
REAL*8,EXTERNAL::PDLAMCH
INTEGER,PARAMETER::XNDIM=4 ! MATRIX WILL BE XNDIM BY XNDIM
INTEGER,PARAMETER::EXPND=XNDIM
INTEGER,PARAMETER::NPROCS=1
INTEGER COMM,MYID,ROOT,NUMPROCS,IERR,STATUS(MPI_STATUS_SIZE)
INTEGER NUM_DIM
INTEGER NPROW,NPCOL
INTEGER CONTEXT, MYROW, MYCOL
COMPLEX*16,ALLOCATABLE::HH(:,:),ZZ(:,:),MATTODIAG(:,:)
REAL*8:: EIG(2*XNDIM) ! EIGENVALUES
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,myid,ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,ierr)
ROOT=0
NPROW=INT(SQRT(REAL(NPROCS)))
NPCOL=NPROCS/NPROW
NUM_DIM=2*EXPND/NPROW
CALL SL_init(CONTEXT,NPROW,NPCOL)
CALL BLACS_GRIDINFO( CONTEXT, NPROW, NPCOL, MYROW, MYCOL )
ALLOCATE(MATTODIAG(XNDIM,XNDIM),HH(NUM_DIM,NUM_DIM),ZZ(NUM_DIM,NUM_DIM))
MATTODIAG=0.D0
CALL MAKEHERMMAT(XNDIM,MATTODIAG)
CALL MPIDIAGH(EXPND,MATTODIAG,ZZ,MYROW,MYCOL,NPROW,NPCOL,NUM_DIM,CONTEXT,EIG)
DEALLOCATE(MATTODIAG,HH,ZZ)
CALL MPI_FINALIZE(IERR)
END
!****************************************************
SUBROUTINE MAKEHERMMAT(XNDIM,MATTODIAG)
IMPLICIT NONE
INTEGER:: XNDIM, I, J, COUNTER
COMPLEX*16:: MATTODIAG(XNDIM,XNDIM)
REAL*8:: RAND
COUNTER = 1
DO J=1,XNDIM
DO I=J,XNDIM
MATTODIAG(I,J)=COUNTER
COUNTER=COUNTER+1
END DO
END DO
END
!****************************************************
SUBROUTINE MPIDIAGH(EXPND,A,Z,MYROW,MYCOL,NPROW,NPCOL,NUM_DIM,CONTEXT,W)
IMPLICIT NONE
EXTERNAL DESCINIT
REAL*8,EXTERNAL::PDLAMCH
INTEGER EXPND,NUM_DIM
INTEGER CONTEXT
INTEGER MYCOL,MYROW,NPROW,NPCOL
COMPLEX*16 A(NUM_DIM,NUM_DIM), Z(NUM_DIM,NUM_DIM)
REAL*8 W(2*EXPND)
INTEGER N
CHARACTER JOBZ, RANGE, UPLO
INTEGER IL,IU,IA,JA,IZ,JZ
INTEGER LIWORK,LRWORK,LWORK
INTEGER M, NZ, INFO
REAL*8 ABSTOL, ORFAC, VL, VU
INTEGER DESCA(50), DESCZ(50)
INTEGER IFAIL(2*EXPND), ICLUSTR(2*NPROW*NPCOL)
REAL*8 GAP(NPROW*NPCOL)
INTEGER,ALLOCATABLE:: IWORK(:)
REAL*8,ALLOCATABLE :: RWORK(:)
COMPLEX*16,ALLOCATABLE::WORK(:)
N=2*EXPND
JOBZ='V'
RANGE='I'
UPLO='U' ! This should be U rather than L
VL=0.d0
VU=0.d0
IL=1 ! EXPND/2+1
IU=2*EXPND ! EXPND+(EXPND/2) ! HERE IS FOR THE CUTTING OFF OF THE STATE
M=IU-IL+1
ORFAC=-1.D0
IA=1
JA=1
IZ=1
JZ=1
ABSTOL=PDLAMCH( CONTEXT, 'U')
CALL DESCINIT( DESCA, N, N, NUM_DIM, NUM_DIM, 0, 0, CONTEXT, NUM_DIM, INFO )
CALL DESCINIT( DESCZ, N, N, NUM_DIM, NUM_DIM, 0, 0, CONTEXT, NUM_DIM, INFO )
LWORK = -1
LRWORK = -1
LIWORK = -1
ALLOCATE(WORK(LWORK))
ALLOCATE(RWORK(LRWORK))
ALLOCATE(IWORK(LIWORK))
CALL PZHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, &
VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, &
JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, &
LIWORK, IFAIL, ICLUSTR, GAP, INFO )
LWORK = INT(ABS(WORK(1)))
LRWORK = INT(ABS(RWORK(1)))
LIWORK =INT (ABS(IWORK(1)))
DEALLOCATE(WORK)
DEALLOCATE(RWORK)
DEALLOCATE(IWORK)
ALLOCATE(WORK(LWORK))
ALLOCATE(RWORK(LRWORK))
ALLOCATE(IWORK(LIWORK))
PRINT*, LWORK, LRWORK, LIWORK
CALL PZHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, &
VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, &
JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, &
LIWORK, IFAIL, ICLUSTR, GAP, INFO )
RETURN
END
The problem is with the second PZHEEVX call. I'm fairly certain that I'm using it correctly, since this code is a simpler version of another, more complicated code that works fine. For this purpose I'm only using one processor.
Help!
According to this page, setting LWORK = -1 seems to request that the PZHEEVX routine return the necessary sizes of all the work arrays, for example,
If LWORK = -1, then LWORK is global input and a workspace query
is assumed; the routine only calculates the optimal size for
all work arrays. Each of these values is returned in the first
entry of the corresponding work array, and no error message is
issued by PXERBLA.
Similar explanations can be found for LRWORK = -1. As for IWORK,
IWORK (local workspace) INTEGER array
On return, IWORK(1) contains the amount of integer workspace
required.
but in your program the work arrays are allocated as
LWORK = -1
LRWORK = -1
LIWORK = -1
ALLOCATE(WORK(LWORK))
ALLOCATE(RWORK(LRWORK))
ALLOCATE(IWORK(LIWORK))
and after the first call of PZHEEVX, the sizes of the work arrays are obtained as
LWORK = INT(ABS(WORK(1)))
LRWORK = INT(ABS(RWORK(1)))
LIWORK =INT (ABS(IWORK(1)))
which is inconsistent: the arrays were allocated with size -1, so they have no element 1 to read. It is better to modify the allocation as (*)
allocate( WORK(1), RWORK(1), IWORK(1) )
An example in this page also seems to allocate the work arrays this way. Another point of concern is that INT() is used in several places (for example, NPROW=INT(SQRT(REAL(NPROCS)))), but I guess it might be better to use NINT() to avoid the effect of round-off errors.
(*) More precisely, allocating an array with an upper bound of -1 gives a zero-size array, so referencing its first element is invalid (thanks to @francescalus). You can verify this by printing size(WORK) or WORK(:). To catch this kind of error it is very useful to attach compiler options like -fcheck=all (for gfortran) or -check (for ifort).
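Putting this together, the workspace query in MPIDIAGH might be restructured as in the following sketch (reusing the variable names already declared in the subroutine above):
LWORK = -1
LRWORK = -1
LIWORK = -1
! allocate with size 1 so that WORK(1), RWORK(1) and IWORK(1) actually exist
ALLOCATE(WORK(1), RWORK(1), IWORK(1))
CALL PZHEEVX( JOBZ, RANGE, UPLO, N, A, IA, JA, DESCA, VL, &
              VU, IL, IU, ABSTOL, M, NZ, W, ORFAC, Z, IZ, &
              JZ, DESCZ, WORK, LWORK, RWORK, LRWORK, IWORK, &
              LIWORK, IFAIL, ICLUSTR, GAP, INFO )
! required sizes are returned in the first element of each work array
LWORK = INT(ABS(WORK(1)))
LRWORK = INT(ABS(RWORK(1)))
LIWORK = INT(ABS(IWORK(1)))
DEALLOCATE(WORK, RWORK, IWORK)
ALLOCATE(WORK(LWORK), RWORK(LRWORK), IWORK(LIWORK))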
There's a fishy piece of dimensioning in your code which can easily be responsible for the segfault. In your main program you set
EXPND=XNDIM=4
NUM_DIM=2*EXPND !NPROW==1 for a single-process test
ALLOCATE(MATTODIAG(XNDIM,XNDIM)) ! MATTODIAG(4,4)
Then you pass your MATTODIAG, the Hermitian matrix, to
CALL MPIDIAGH(EXPND,MATTODIAG,ZZ,MYROW,...)
which is in turn defined as
SUBROUTINE MPIDIAGH(EXPND,A,Z,MYROW,...)
COMPLEX*16 A(NUM_DIM,NUM_DIM) ! A(8,8)
This is already an inconsistency, which can mess up the computations in that subroutine (even without causing a segfault). Furthermore, the subroutine, and ScaLAPACK with it, thinks that A is of size (8,8) instead of the (4,4) you allocated in the main program, which lets the subroutine overrun the allocated memory.

Generalized diagonalization with Lapack

I am having some problems using the subroutine DSYGV of Lapack:
DSYGV( ITYPE, JOBZ, UPLO, N, A, LDA, B, LDB, W, WORK,LWORK, INFO )
This is the diagonalization I want to carry out:
v_mat*x = eig*t_mat*x
This is the crucial piece of my code:
program pruebadiago
real, dimension(:,:), allocatable :: v_mat, t_mat
real, dimension(:), allocatable :: eig,WORK
real , parameter :: k=3.0,m=4.0
integer, parameter :: n=2
integer :: i
! EXPECTED EIGENVALUES AND EIGENVECTORS
!eig = 0.286475 ------> u = (0.262866 , 0.425325)
!eig = 1.96353 ------> u = (0.425325, -0.262866)
allocate(v_mat(n,n),t_mat(n,n),eig(n))
!--------------------------
v_mat(1,1:2) = (/2.0*k,-k/)
v_mat(2,1:2) = (/-k,k/)
!--------------------------
!--------------------------
t_mat(1,1:2) = (/m,0.0/)
t_mat(2,1:2) = (/0.0,m/)
!---------------------------
!diagonalizacion
call DSYGV( 1, 'v', 'u', n , v_mat, 2 , t_mat, 2, eig, WORK,-1, INF )
LWORK=WORK(1)
allocate(WORK(LWORK))
call DSYGV( 1, 'v', 'u', n , v_mat, 2 , t_mat, 2, eig, WORK,LWORK, INF )
open(unit=100,file="pruebadiago.dat",status="replace",action="write")
do i = 1,n
write(unit=100,fmt=*) "E",i,"=",eig(i),(v_mat(i,j),j=1,n)
!autofuntzioak zutabeka doaz"(100f12.6)"
enddo
close(unit=100)
deallocate(v_mat,t_mat,eig,WORK)
end program pruebadiago
I think I understood everything given in this document:
http://www.netlib.org/lapack/lapack-3.1.1/html/dsygv.f.html
But I did not understand the argument LWORK, so I just tried different values.
I know something is wrong because I know the eigenvalues and eigenvectors of this matrix, and I am getting wrong ones. I am doing such a simple calculation in order to understand how the routine works before computing huge diagonalizations.
Does anybody see what the problem is?
Thank you
Some elements are missing from the code you posted, basically the array declarations of WORK, eig, v_mat and t_mat.
Anyway, the LWORK argument is simply the size of the WORK array, e.g.
DOUBLE PRECISION WORK(100)
LWORK=100
LAPACK specifies a minimum value of LWORK = 3*N-1; in your case N = 2.
For this example case I would suggest using a big WORK array (e.g. of size 100) so that you will not encounter any problems from it.
For large matrices you should use a double call of DSYGV:
First call with LWORK = -1 and get the suggested size from WORK(1).
Allocate a NEW WORK array with the suggested size.
Finally, solve the eigenproblem with DSYGV.
Sample code is:
CALL DSYGV( 1, 'V', 'U', 2 , v_mat, 2 , t_mat, 2, eig, W, -1, INF )
LWORK=W(1)
ALLOCATE ( WORK(LWORK) )
CALL DSYGV( 1, 'V', 'U', 2 , v_mat, 2 , t_mat, 2, eig, WORK, LWORK, INF )
Additionally, you need to check the INF value after each DSYGV call. If it is not zero, then an error occurred.
EDIT: Fixed source code
program pruebadiago
double precision, dimension(:,:), allocatable :: v_mat, t_mat
double precision, dimension(:), allocatable :: eig,WORK
double precision :: W
double precision , parameter :: k=3.0,m=4.0
integer, parameter :: n=2
integer :: i
! EXPECTED EIGENVALUES AND EIGENVECTORS
!eig = 0.286475 ------> u = (0.262866 , 0.425325)
!eig = 1.96353 ------> u = (0.425325, -0.262866)
allocate(v_mat(n,n),t_mat(n,n),eig(n))
!--------------------------
v_mat(1,1:2) = (/2.0*k,-k/)
v_mat(2,1:2) = (/-k,k/)
!--------------------------
t_mat(1,1:2) = (/m,0.0d0/)
t_mat(2,1:2) = (/0.0d0,m/)
!---------------------------
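! workspace query: the first call with LWORK = -1 returns the optimal size in W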
call DSYGV( 1, 'v', 'u', n , v_mat, 2 , t_mat, 2, eig, W,-1, INF )
LWORK=W
allocate(WORK(LWORK))
call DSYGV( 1, 'v', 'u', n , v_mat, 2 , t_mat, 2, eig, WORK,LWORK, INF )
open(unit=100,file="pruebadiago.dat",status="replace",action="write")
do i = 1,n
write(unit=100,fmt=*) "E",i,"=",eig(i),(v_mat(i,j),j=1,n)
enddo
close(unit=100)
deallocate(v_mat,t_mat,eig,WORK)
end program pruebadiago

FORTRAN ZGEEV, all 0 eigenvalues

I am trying to get the ZGEEV routine in LAPACK to work for a test problem and am having some difficulties. I just started coding in Fortran a week ago, so I think it is probably something very trivial that I am missing.
I have to diagonalize rather large complex symmetric matrices. To get started, using Matlab I created a 200 by 200 matrix, which I have verified is diagonalizable. When I run the code, it brings up no errors and INFO = 0, suggesting success. However, all the eigenvalues are (0,0), which I know is wrong.
Attached is my code.
PROGRAM speed_zgeev
IMPLICIT NONE
INTEGER(8) :: N
COMPLEX*16, DIMENSION(:,:), ALLOCATABLE :: MAT
INTEGER(8) :: INFO, I, J
COMPLEX*16, DIMENSION(:), ALLOCATABLE :: RWORK
COMPLEX*16, DIMENSION(:), ALLOCATABLE :: D
COMPLEX*16, DIMENSION(1,1) :: VR, VL
INTEGER(8) :: LWORK = -1
COMPLEX*16, DIMENSION(:), ALLOCATABLE :: WORK
DOUBLE PRECISION :: RPART, IPART
EXTERNAL ZGEEV
N = 200
ALLOCATE(D(N))
ALLOCATE(RWORK(2*N))
ALLOCATE(WORK(N))
ALLOCATE(MAT(N,N))
OPEN(UNIT = 31, FILE = "newmat.txt")
OPEN(UNIT = 32, FILE = "newmati.txt")
DO J = 1,N
DO I = 1,N
READ(31,*) RPART
READ(32,*) IPART
MAT(I,J) = CMPLX(RPART, IPART)
END DO
END DO
CLOSE(31)
CLOSE(32)
CALL ZGEEV('N','N', N, MAT, N, D, VL, 1, VR, 1, WORK, LWORK, RWORK, INFO)
INFO = WORK(1)
DEALLOCATE(WORK)
ALLOCATE(WORK(INFO))
CALL ZGEEV('N','N', N, MAT, N, D, VL, 1, VR, 1, WORK, LWORK, RWORK, INFO)
IF (INFO .EQ. 0) THEN
PRINT*, D(1:10)
ELSE
PRINT*, INFO
END IF
DEALLOCATE(MAT)
DEALLOCATE(D)
DEALLOCATE(RWORK)
DEALLOCATE(WORK)
END PROGRAM speed_zgeev
I have tried the same code on smaller matrices, of size 30 by 30, and it works fine. Any help would be appreciated! Thanks.
I forgot to mention that I am loading the matrix from test files, which I have verified are being read correctly.
Maybe LWORK = WORK(1) instead of INFO = WORK(1)? Also change ALLOCATE(WORK(INFO)) to ALLOCATE(WORK(LWORK)), and pass the updated LWORK to the second call.
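A minimal sketch of the corrected sequence, reusing the declarations from the program above:
LWORK = -1
CALL ZGEEV('N','N', N, MAT, N, D, VL, 1, VR, 1, WORK, LWORK, RWORK, INFO) ! workspace query
LWORK = INT(WORK(1)) ! optimal workspace size is returned in WORK(1)
DEALLOCATE(WORK)
ALLOCATE(WORK(LWORK))
CALL ZGEEV('N','N', N, MAT, N, D, VL, 1, VR, 1, WORK, LWORK, RWORK, INFO) ! actual computation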