I am trying to diagonalize a matrix using zgeev. It gives the correct eigenvalues, but the eigenvectors it returns are not orthogonal.
program complex_diagonalization
  implicit none
  integer,parameter :: N=3
  integer::i,j
  integer,parameter :: LDA=N,LDVL=N,LDVR=N
  real(kind=8),parameter::q=dsqrt(2.0d0),q1=1.0d0/q
  integer,parameter :: LWMAX=1000
  integer :: INFO,LWORK
  real(kind=8) :: RWORK(2*N)
  complex(kind=8) :: B(LDA,N),VL(LDVL,N),VR(LDVR,N),W(N),WORK(LWMAX)
  external::zgeev
  !matrix defining
  B(1,1)=0.0d0;B(1,2)=-q1;B(1,3)=-q1
  B(2,1)=-q1;B(2,2)=0.50d0;B(2,3)=-0.50d0
  B(3,1)=-q1;B(3,2)=-0.5d0;B(3,3)=0.50d0
  LWORK=-1
  CALL ZGEEV('Vectors','Vectors',N,B,LDA,W,VL,LDVL,VR,LDVR,WORK,LWORK,RWORK,INFO)
  LWORK=MIN(LWMAX,INT(WORK(1)))
  CALL ZGEEV('Vectors','Vectors',N,B,LDA,W,VL,LDVL,VR,LDVR,WORK,LWORK,RWORK,INFO)
  IF( INFO.GT.0 ) THEN
     WRITE(*,*)'The algorithm failed to compute eigenvalues.'
     STOP
  END IF
  !eigenvalues
  do i=1,N
     WRITE(*,*)W(i)
  enddo
  !eigenvectors
  do i=1,N
     WRITE(*,*)(VR(i,j),j=1,N)
  ENDDO
end
The results I am getting are:
eigenvalues:
( 0.99999999999999978,0.0000000000000000)
(-0.99999999999999978,0.0000000000000000)
( 0.99999999999999978,0.0000000000000000)
eigenvectors
(0.70710678118654746,0.0000000000000000)
(-0.50000000000000000,0.0000000000000000)
(-0.50000000000000000,0.0000000000000000)
(0.70710678118654746,0.0000000000000000)
(0.50000000000000000,0.0000000000000000)
(0.50000000000000000,0.0000000000000000)
(-0.11982367636731203,0.0000000000000000)
( 0.78160853028734012,0.0000000000000000)
(-0.61215226207528295,0.0000000000000000)
You can see that the third eigenvector is not orthogonal to one of the other two. What I expect is that the first entry of the third eigenvector should be zero and the second entry should be minus the third entry; since it is a unit vector, those entries should have magnitude 0.707.
A real symmetric matrix has three orthogonal eigenvectors if the three eigenvalues are unique; only the eigenvectors corresponding to distinct eigenvalues have to be orthogonal. https://math.stackexchange.com/a/1368948/134138
The specialized Hermitian routine ZHEEV guarantees orthogonality of the eigenvectors, as suggested by Ian Bush. In your case you could also consider DSYEV, because your matrix is real symmetric. (A minimal calling sketch follows the documentation excerpts below.)
The situation is well described in this post from LAPACK Forum http://icl.cs.utk.edu/lapack-forum/archives/lapack/msg01352.html
From the documentation:
DSYEV:
* On exit, if JOBZ = 'V', then if INFO = 0, A contains the
* orthonormal eigenvectors of the matrix A.
ZHEEV:
* On exit, if JOBZ = 'V', then if INFO = 0, A contains the
* orthonormal eigenvectors of the matrix A.
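For reference, here is a minimal sketch of the corresponding ZHEEV call, adapted from the question's program (it is not part of the original answer). ZHEEV overwrites B with the orthonormal eigenvectors (one per column) and returns the eigenvalues in ascending order in W:
program hermitian_diagonalization
  implicit none
  integer, parameter :: N=3, LDA=N, LWMAX=1000
  real(kind=8), parameter :: q1=1.0d0/dsqrt(2.0d0)
  integer :: i, j, INFO, LWORK
  real(kind=8) :: W(N), RWORK(3*N-2)
  complex(kind=8) :: B(LDA,N), WORK(LWMAX)
  external :: zheev
  ! same matrix as in the question
  B(1,1)=0.0d0; B(1,2)=-q1;    B(1,3)=-q1
  B(2,1)=-q1;   B(2,2)=0.50d0; B(2,3)=-0.50d0
  B(3,1)=-q1;   B(3,2)=-0.5d0; B(3,3)=0.50d0
  ! workspace query, then the actual call
  LWORK=-1
  CALL ZHEEV('Vectors','Upper',N,B,LDA,W,WORK,LWORK,RWORK,INFO)
  LWORK=MIN(LWMAX,INT(WORK(1)))
  CALL ZHEEV('Vectors','Upper',N,B,LDA,W,WORK,LWORK,RWORK,INFO)
  IF (INFO.NE.0) STOP 'ZHEEV failed'
  do i=1,N
     WRITE(*,*) W(i), (B(j,i), j=1,N)   ! eigenvalue followed by its eigenvector
  enddo
end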
Related
I want to diagonalize a matrix and then be able to do basis changes. The aim in the end is to do matrix exponentiation, with exp(A) = P.exp(D).P^{-1}.
I use sgeev to diagonalize A. If I am not mistaken (and I probably am, since it's not working), sgeev gives me P in the vr matrix and P^{-1} is transpose(vl). The diagonal matrix D can be reconstituted from the eigenvalues wr.
The problem is that when I try to verify the matrix transformation by computing P * D * P^{-1} it's not giving A back.
Here's my code:
integer :: i,n, info
real::norm
real, allocatable:: A(:,:), B(:,:), C(:,:),D(:,:)
real, allocatable:: wr(:), wi(:), vl(:, :), vr(:, :), work(:)
n=3
allocate(vr(n,n), vl(n,n), wr(n), wi(n), work(4*n))
allocate(A(n,n),B(n,n), C(n,n),D(n,n))
A(1,:)=(/1,0,1/)
A(2,:)=(/0,2,1/)
A(3,:)=(/0,3,1/)
call sgeev('V','V',n,A,n,wr,wi,vl,n,vr,n,work,size(work,1),info)
print*,'eigenvalues'
do i=1,n
print*,i,wr(i),wi(i)
enddo
D=0.0
D(1,1)=wr(1)
D(2,2)=wr(2)
D(3,3)=wr(3)
C = matmul(D,transpose(vl))
B = matmul(vr,C)
print*,'A'
do i=1, n
print*, B(i,:)
enddo
The printed result is:
eigenvalues
1 1.00000000 0.00000000
2 3.30277562 0.00000000
3 -0.302775621 0.00000000
A
0.688247263 0.160159975 0.764021933
0.00000000 1.66581571 0.817408621
0.00000000 2.45222616 0.848407149
The printed matrix is not the original A, not even up to an overall factor.
I guess I am somehow mistaken since I checked the eigenvectors by computing matmul(A,vr) = matmul(vr,D) and matmul(transpose(vl),A) = matmul(D, transpose(vl)), and it worked.
Where am I wrong?
The problem is that transpose(vl) is not the inverse of vr. The normalisation given by sgeev is that each eigenvector (each column of vl or vr) is individually normalised. This means that dot_product(vl(:,i), vr(:,j)) is zero if i/=j, but is in general <1 if i==j.
If you want to get P^{-1}, you need to scale each column of vl by a factor of 1/dot_product(vl(:,i),vr(:,i)) before transposing it.
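A minimal sketch of that rescaling, continuing the question's code (the name p_inv is mine, and this assumes all eigenvalues are real, as they are in this example):
real, allocatable :: p_inv(:,:)
allocate(p_inv(n,n))
do i = 1, n
   ! scale column i of vl so that dot_product(vl(:,i), vr(:,i)) = 1 and
   ! store it as row i, i.e. p_inv is the transpose of the rescaled vl
   p_inv(i,:) = vl(:,i) / dot_product(vl(:,i), vr(:,i))
end do
B = matmul(vr, matmul(D, p_inv))   ! should now reproduce the original A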
I want to compute the eigenvalues and eigenvectors of a matrix. I'm using sgeev of MKL lapack.
I have this very simple test code:
integer :: i,n, info
real, allocatable:: A(:,:), B(:,:), C(:,:)
real, allocatable:: wr(:), wi(:), vl(:, :), vr(:, :), work(:)
n=3
allocate(vr(n,n), vl(n,n), wr(n), wi(n), work(4*n))
allocate(A(n,n),B(n,n), C(n,n))
A(1,:)=(/-1.0,3.0,-1.0/)
A(2,:)=(/-3.0,5.0,-1.0/)
A(3,:)=(/-3.0,3.0,1.0/)
call sgeev('V','V',n,A,n,wr,wi,vl,n,vr,n,work,size(work,1),info)
print*,info
do i=1,n
print*,i,wr(i),wi(i)
enddo
print*,'vr'
do i=1, n
print*, vr(i,:)
enddo
print*,'vl'
do i=1, n
print*, vl(i,:)
enddo
It gives the right eigenvalues (2, 2, 1) but the wrong eigenvectors.
I have:
vr
-0.577350259 0.557844639 -0.539340019
-0.577350557 0.704232574 -0.273908198
-0.577349961 0.439164847 0.796295524
vl
-0.688247085 -0.617912114 -0.815013587
0.688247383 0.771166325 0.364909053
-0.229415640 -0.153254643 0.450104564
when vr should be
-1 1 1
0 1 1
3 0 1
What am I doing wrong?
Your matrix is degenerate (has two eigenvalues which are the same as one another), so the corresponding eigenvectors can be an arbitrary linear combination of the two degenerate eigenvectors.
Also, the output of sgeev normalises the eigenvectors, whereas the eigenvectors you have given are not normalised.
The first eigenvalue given is 1, and the corresponding eigenvector is the first column of vr, l1=(-0.57..., -0.57..., -0.57...). This is proportional to the third eigenvector you have given, (1, 1, 1).
The second and third eigenvalues are both 2. The corresponding eigenvectors are the second and third columns of vr, l2=(0.55..., 0.70..., 0.43...) and l3=(-0.53..., -0.27..., 0.79...). Taking 0.27...*l2+0.70...*l3 gives (-0.22..., 0, 0.66...), proportional to (-1, 0, 3), and taking 0.79...*l2-0.43...*l3 gives (0.66..., 0.66..., 0), proportional to (1, 1, 0).
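A small check of this claim (not part of the original answer; A0 is assumed to be a copy of the matrix saved before the sgeev call, since sgeev overwrites A):
real :: v(3), A0(3,3)
! any nonzero combination of the two degenerate columns of vr is still
! an eigenvector for eigenvalue 2
v = 0.27*vr(:,2) + 0.70*vr(:,3)
print *, maxval(abs(matmul(A0, v) - 2.0*v))   ! ~0 up to round-off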
In Newton's method, to solve a nonlinear system of equations we need to find the Jacobian matrix and the determinant of the inverse of the Jacobian matrix.
Here are my component functions,
real function f1(x,y)
parameter (pi = 3.141592653589793)
f1 = log(abs(x-y**2)) - sin(x*y) - sin(pi)
end function f1
real function f2(x,y)
f2 = exp(x*y) + cos(x-y) - 2
end function f2
For the 2x2 case I am computing the Jacobian matrix and determinant of the inverse of Jacobian matrix like this,
x = [2,2]
h = 0.00001
.
.
! calculate approximate partial derivative
! you can make it more accurate by reducing the
! value of h
j11 = (f1(x(1)+h,x(2))-f1(x(1),x(2)))/h
j12 = (f1(x(1),x(2)+h)-f1(x(1),x(2)))/h
j21 = (f2(x(1)+h,x(2))-f2(x(1),x(2)))/h
j22 = (f2(x(1),x(2)+h)-f2(x(1),x(2)))/h
! calculate the Jacobian
J(1,:) = [j11,j12]
J(2,:) = [j21,j22]
! calculate inverse Jacobian
inv_J(1,:) = [J(2,2),-J(1,2)]
inv_J(2,:) = [-J(2,1),J(1,1)]
DET=J(1,1)*J(2,2) - J(1,2)*J(2,1)
inv_J = inv_J/DET
.
.
How do I extend this in Fortran to evaluate a Jacobian for m functions evaluated at n points?
Here is a more flexible Jacobian calculator. The results for the 2×2 test case are what you expect:
arguments (x)
2.00000000000000
2.00000000000000
values (y)
1.44994967586787
53.5981500331442
Jacobian
0.807287239448229 3.30728724371454
109.196300248300 109.196300248300
I checked the results against a symbolic calculation for the given inputs.
Console.f90
program Console1
use ISO_FORTRAN_ENV
implicit none
! Variables
integer, parameter :: wp = real64
real(wp), parameter :: pi = 3.141592653589793d0
! Interfaces
interface
function fun(x,n,m) result(y)
import
integer, intent(in) :: n,m
real(wp), intent(in) :: x(m)
real(wp) :: y(n)
end function
end interface
real(wp) :: h
real(wp), allocatable :: x(:), y(:), J(:,:)
! Body of Console1
x = [2d0, 2d0]
h = 0.0001d0
print *, "arguments"
print *, x(1)
print *, x(2)
y = test(x,2,2)
print *, "values"
print *, y(1)
print *, y(2)
J = jacobian(test,x,2,h)
print *, "Jacobian"
print *, J(1,:)
print *, J(2,:)
contains
function test(x,n,m) result(y)
! Test case per original question
integer, intent(in) :: n,m
real(wp), intent(in) :: x(m)
real(wp) :: y(n)
y(1) = log(abs(x(1)-x(2)**2)) - sin(x(1)*x(2)) - sin(pi)
y(2) = exp(x(1)*x(2)) + cos(x(1)-x(2)) - 2
end function
function jacobian(f,x,n,h) result(u)
procedure(fun), pointer, intent(in) :: f
real(wp), allocatable, intent(in) :: x(:)
integer, intent(in) :: n
real(wp), intent(in) :: h
real(wp), allocatable :: u(:,:)
integer :: j, m
real(wp), allocatable :: y1(:), y2(:), e(:)
m = size(x)
allocate(u(n,m))
do j=1, m
e = element(j, m) ! Get kronecker delta for j-th value
y1 = f(x-e*h/2,n,m)
y2 = f(x+e*h/2,n,m)
u(:,j) = (y2-y1)/h ! Finite difference for each column
end do
end function
function element(i,n) result(e)
! Kronecker delta vector. All zeros, except the i-th value.
integer, intent(in) :: i, n
real(wp) :: e(n)
e(:) = 0d0
e(i) = 1d0
end function
end program Console1
I will answer the part about evaluation at different points. This is quite simple: you just need an array of points, and if the points lie on some regular grid you may not even need that.
You can have an array of x values and an array of y values, or an array of a derived data type with x and y components (a sketch of the latter follows the code below).
For the former:
real, allocatable :: x(:), y(:)
x = [... !probably read from some data file
y = [...
do i = 1, size(x)
J(i) = Jacobian(f, x(i), y(i))
end do
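And a sketch of the latter, derived-type variant (the type and component names are just an illustration):
type :: point
   real :: x, y
end type point
type(point), allocatable :: pts(:)
! pts = ... probably read from some data file
do i = 1, size(pts)
   J(i) = Jacobian(f, pts(i)%x, pts(i)%y)
end do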
If you want to handle many functions at the same time, the problem is always how to represent the functions. Even if you have an array of procedure pointers, you still need to code each function manually. A different approach is a full algebra module, where you enter a string representing a function and can then evaluate it and even compute its derivatives symbolically. That requires a parser and an evaluator, so it is a large task; there are libraries for this. Evaluation of such a derivative will be slow unless further optimization steps (compiling to machine code) are undertaken.
Numerical evaluation of the derivative is certainly possible. It will slow the convergence somewhat, depending on the order of the approximation of the derivative. The simplest numerical derivative is a difference of values at two points; you can build an interpolating polynomial through several points to get a higher-order finite-difference approximation, but that costs machine cycles.
Normally one can use automatic differentiation tools, as @John Alexiou mentioned. In practice, however, I prefer to derive the Jacobian analytically in MATLAB and then use its built-in function fortran() to convert the result to an .f90 file. Take your function as an example; just type this into MATLAB:
syms x y
Fval=sym(zeros(2,1));
Fval(1)=log(abs(x-y^2)) - sin(x*y) - sin(pi);
Fval(2)=exp(x*y) + cos(x-y) - 2;
X=[x;y];
Fjac=jacobian(Fval,X);
fortran(Fjac)
which will yield
Fjac(1,1) = -y*cos(x*y)-((-(x-y**2)/abs(-x+y**2)))/abs(-x+y**2)
Fjac(1,2) = -x*cos(x*y)+(y*((-(x-y**2)/abs(-x+y**2)))*2.0D0)/abs(-
&x+y**2)
Fjac(2,1) = -sin(x-y)+y*exp(x*y)
Fjac(2,2) = sin(x-y)+x*exp(x*y)
You thus get an analytical Jacobian as a Fortran function.
Meanwhile, it is impossible to invert an m×n matrix because the ranks do not match; you should simplify the system of equations so that you end up with an n×n Jacobian.
Additionally, when we use the Newton-Raphson method we do not compute the inverse of the Jacobian, which is time-consuming and inaccurate for a large system. An easy way is to use dgesv from LAPACK for a dense Jacobian, since we only need to solve for the vector x in the system of linear equations
Jx=-F
dgesv uses LU decomposition and Gaussian elimination to solve the above system of equations, which is much faster than computing the inverse matrix.
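A minimal sketch of one such Newton step (not from the original answer; it assumes the Jacobian J, the residual F, the unknowns x and the size n are already set up in double precision):
integer :: info
integer, allocatable :: ipiv(:)
real(8), allocatable :: dx(:)
allocate(ipiv(n), dx(n))
! solve J*dx = -F; dgesv overwrites J with its LU factors and dx with the solution
dx = -F
call dgesv(n, 1, J, n, ipiv, dx, n, info)
if (info /= 0) stop 'dgesv failed'
x = x + dx   ! take the Newton step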
If the system of equations is large, you can use UMFPACK and its Fortran interface module mUMFPACK to solve the system with a sparse Jacobian J, or use the subroutines ILUD and LUSOL from the widespread sparse matrix library SPARSEKIT2.
Beyond these, there are many other methods that try to solve Jx=-F faster and more accurately, such as the Generalized Minimal Residual method (GMRES) and the Stabilized Bi-Conjugate Gradient method (BiCGSTAB), each with its own body of literature.
I am trying to write a FORTRAN code to evaluate the fast Fourier transform of the Gaussian function f(r)=exp(-(r^2)) using FFTW3 library. As everyone knows, the Fourier transform of the Gaussian function is another Gaussian function.
I consider evaluating the Fourier-transform integral of the Gaussian function in the spherical coordinate.
Hence the resulting integral can be simplified to be integral of [r*exp(-(r^2))*sin(kr)]dr.
I wrote the following FORTRAN code to evaluate the discrete sine transform (DST), which is the discrete Fourier transform (DFT) of a purely real input array. The DST is performed with C_FFTW_RODFT00 from FFTW3, taking into account that the discrete values in position space are r=i*delta (i=1,2,...,1024) and that the input array for the DST is the function r*exp(-(r^2)), NOT the Gaussian itself. The sine function in the integral of [r*exp(-(r^2))*sin(kr)]dr results from the integration over the spherical coordinates; it is NOT the imaginary part of exp(ik.r) that appears when taking the analytic Fourier transform in general.
However, the result is not a Gaussian function in the momentum space.
Module FFTW3
use, intrinsic :: iso_c_binding
include 'fftw3.f03'
end module
program sine_FFT_transform
use FFTW3
implicit none
integer, parameter :: dp=selected_real_kind(8)
real(kind=dp), parameter :: pi=acos(-1.0_dp)
integer, parameter :: n=1024
real(kind=dp) :: delta, k
real(kind=dp) :: numerical_F_transform
integer :: i
type(C_PTR) :: my_plan
real(C_DOUBLE), dimension(1024) :: y
real(C_DOUBLE), dimension(1024) :: yy, yk
integer(C_FFTW_R2R_KIND) :: C_FFTW_RODFT00
my_plan= fftw_plan_r2r_1d(1024,y,yy,FFTW_FORWARD, FFTW_ESTIMATE)
delta=0.0125_dp
do i=1, n !inserting the input one-dimension position function
y(i)= 2*(delta)*(i-1)*exp(-((i-1)*delta)**2)
! I multiplied by 2 due to the definition of C_FFTW_RODFT00 in FFTW3
end do
call fftw_execute_r2r(my_plan, y,yy)
do i=2, n
k = (i-1)*pi/n/delta
yk(i) = 4*pi*delta*yy(i)/2 !I divide by 2 due to the definition of
!C_FFTW_RODFT00
numerical_F_transform=yk(i)/k
write(11,*) i,k,numerical_F_transform
end do
call fftw_destroy_plan(my_plan)
end program
Executing the previous code gives the following plot, which is not a Gaussian function.
Can anyone help me understand what the problem is? I guess the problem is mainly due to FFTW3; maybe I did not use it properly, especially concerning the boundary conditions.
Looking at the related pages in the FFTW site (Real-to-Real Transforms, transform kinds, Real-odd DFT (DST)) and the header file for Fortran, it seems that FFTW expects FFTW_RODFT00 etc rather than FFTW_FORWARD for specifying the kind of
real-to-real transform. For example,
! my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_FORWARD, FFTW_ESTIMATE )
my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_RODFT00, FFTW_ESTIMATE )
performs the "type-I" discrete sine transform (DST-I) shown in the above page. This modification seems to fix the problem (i.e., makes the Fourier transform a Gaussian with positive values).
The following is a slightly modified version of OP's code to experiment the above modification:
! ... only the modified part is shown...
real(dp) :: delta, k, r, fftw, num, ana
integer :: i, j, n
type(C_PTR) :: my_plan
real(C_DOUBLE), allocatable :: y(:), yy(:)
delta = 0.0125_dp ; n = 1024 ! rmax = 12.8
! delta = 0.1_dp ; n = 128 ! rmax = 12.8
! delta = 0.2_dp ; n = 64 ! rmax = 12.8
! delta = 0.4_dp ; n = 32 ! rmax = 12.8
allocate( y( n ), yy( n ) )
! my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_FORWARD, FFTW_ESTIMATE )
my_plan= fftw_plan_r2r_1d( n, y, yy, FFTW_RODFT00, FFTW_ESTIMATE )
! Loop over r-grid
do i = 1, n
r = i * delta ! (2-a)
y( i )= r * exp( -r**2 )
end do
call fftw_execute_r2r( my_plan, y, yy )
! Loop over k-grid
do i = 1, n
! Result of FFTW
k = i * pi / ((n + 1) * delta) ! (2-b)
fftw = 4 * pi * delta * yy( i ) / k / 2 ! the last 2 due to RODFT00
! Numerical result via quadrature
num = 0
do j = 1, n
r = j * delta
num = num + r * exp( -r**2 ) * sin( k * r )
enddo
num = num * 4 * pi * delta / k
! Analytical result
ana = sqrt( pi )**3 * exp( -k**2 / 4 )
! Output
write(10,*) k, fftw
write(20,*) k, num
write(30,*) k, ana
end do
Compile (with gfortran-8.2 + FFTW3.3.8 + OSX10.11):
$ gfortran -fcheck=all -Wall sine.f90 -I/usr/local/Cellar/fftw/3.3.8/include -L/usr/local/Cellar/fftw/3.3.8/lib -lfftw3
If we use FFTW_FORWARD as in the original code, we get
which has a negative lobe (where fort.10, fort.20, and fort.30 correspond to FFTW, quadrature, and analytical results). Modifying the code to use FFTW_RODFT00 changes the result as below, so the modification seems to be working (but please see below for the grid definition).
Additional notes
I have slightly modified the grid definition for r and k in my code (Lines (2-a) and (2-b)), which is found to improve the accuracy. But I'm still not sure whether the above definition matches the definition used by FFTW, so please read the manual for details...
The fftw3.f03 header file gives the interface for fftw_plan_r2r_1d
type(C_PTR) function fftw_plan_r2r_1d(n,in,out,kind,flags) bind(C, name='fftw_plan_r2r_1d')
import
integer(C_INT), value :: n
real(C_DOUBLE), dimension(*), intent(out) :: in
real(C_DOUBLE), dimension(*), intent(out) :: out
integer(C_FFTW_R2R_KIND), value :: kind
integer(C_INT), value :: flags
end function fftw_plan_r2r_1d
(Because of no TeX support, this part is very ugly...) The integral of 4 pi r^2 * exp(-r^2) * sin(kr)/(kr) for r from 0 to infinity is pi^(3/2) * exp(-k^2 / 4) (obtained from Wolfram Alpha, or by noting that this is actually the 3-D Fourier transform of exp(-(x^2 + y^2 + z^2)) against exp(-i*(k1 x + k2 y + k3 z)) with k = (k1,k2,k3)). So, although a bit counter-intuitive, the result becomes a positive Gaussian.
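Written out as an equation, the statement above is
$$\int_0^\infty 4\pi r^2\, e^{-r^2}\, \frac{\sin(kr)}{kr}\, \mathrm{d}r \;=\; \pi^{3/2}\, e^{-k^2/4}.$$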
I guess the r-grid can be chosen much coarser (e.g. delta up to 0.4), which gives almost the same accuracy as long as it covers the frequency domain of the transformed function (here exp(-r^2)).
Of course there are negative components in the real part of the FFT of a limited Gaussian spectrum. You are only using the real part of the transform, so your plot is absolutely correct.
You seem to be confusing the real part with the magnitude, which of course would not be negative. For that you would need fftw_plan_dft_r2c_1d and then calculate the absolute values of the complex coefficients. Or you might be confusing the Fourier transform with a limited DFT.
You might want to check here to convince yourself of the correctness of your calculation above:
http://docs.mantidproject.org/nightly/algorithms/FFT-v1.html
Please do keep in mind that the plots on the above page are shifted, so that the 0 frequency is in the middle of the spectrum.
Citing yourself, the numerical integration of [r*exp(-(r^2))*sin(kr)]dr would have negative components for all k>1 if normalised to 0 at the highest frequency.
TLDR: Your plot is absolute state of the art and in line with discrete and limited functional analysis.
I have never done any programming in my life, and this is my very first code for a uni assignment. I get no errors in the compiling stage, but my program does not run, saying that I have the error in the title. I guess the problem is when I call the subroutine. Can anyone help me? It is my first code and it is really frustrating. Thank you.
!NUMERICAL COMPUTATION OF INCOMPRESSIBLE COUETTE FLOW USING FINITE DIFFERENCE METHOD
!IMPLICIT APPROACH
!MODEL EQUATION
!PARTIAL(U)/PARTIAL(T)=1/RE*(PARTIAL(U) SQUARE/PARTIAL(Y) SQUARE)
!DEFINE VARIABLES
IMPLICIT NONE
!VELOCITY U AT TIME T, VELOCITY UNEW AT TIME T+1, TIME T
!MAXIMUM 1000 POINTS
REAL V(1000)
REAL VNEW(1000)
REAL T
!GRID SPACING DY, GRID POINTS N+1
REAL DY
INTEGER N
!TIME STEP
REAL DT
!FLOW REYNOLDS NUMBER IN THE MODEL EQUATION
REAL ALPHA
!TOTAL SIMULATION TIME - LOOP NUMBER
INTEGER REP, I, J
!COEFFICIENTS IN LINEAR EQUATION MATRIX, SOURCE TERM K, DIAGONAL B, NON-DIAGONAL A
REAL S(1000), B, A
!INITIALIZATION OF DATA
DATA ALPHA/5000.0/
DATA N/100/
DATA REP/3000/
!CALCULATION OF GRID SPACING
DY=1.0/N
!CALCULATION OF TIME STEP DELTA T, CAN BE LARGER THAN THAT IN AN EXPLICIT METHOD
DT=0.5*RE*DY*DY
DT=ALPHA*DY*DY
!INITIAL CONDITIONS OF VELOCITY PROFILE
!BOTTOM AND INNER POINTS
DO I=1,N
V(I)=0.0
ENDDO
!POINT AT MOVING PLATE
V(N+1)=1.0
!BOUNDARY CONDITIONS AT LOWER AND UPPER POINTS ON PLATE
V(1)=0.0
V(N+1)=1.0
!CALCULATION OF DIAGONAL B AND NON-DIAGONAL A IN LINEAR EQUATION MATRIX
B=1.0+DT/DY/DY/ALPHA
A=-(DT)/2.0/DY/DY/ALPHA
!INITIAL COMPUTATION TIME
T=0.0
!ENTER MAIN LOOP TO MARCH IN TIME DIRECTION
DO I=1,REP
!SIMULATION TIME INCREASE BY DELTA T EACH STEP
T=T+DT
!USE IMPLICIT METHOD TO UPDATE GRID POINT VALUES FOR ALL INTERNAL GRIDS ONLY
!TWO BOUNDARY GRID POINTS VALUES ARE CONSTANT WITHIN THE WHOLE SIMULATION
!CALCULATION OF SOURCE TERM IN LINEAR EQUATION
DO J=2,N
S(J)=(1.0-DT/DY/DY/ALPHA)*V(J)+DT/2.0/DY/DY/ALPHA*V(J+1)+V(J-1)
ENDDO
!INCLUDE BOUNDARY CONDITIONS FOR TWO POINTS NEAR BOUDNARY
S(2)=S(2)-A*V(1)
S(N)=S(N)-A*V(N+1)
!USE SOURCE TERM K, DIAGONAL B, NON-DIAGONAL A, ORDER OF MATRIX N, TO SOLVE LINEAR EQUATION TO GET UPDATED VELOCITY
!CHECK ON INTERNET HOW TO SOLVE THIS BECUASE THIS COMPILER
!DOES NOT SOLVE IT, SOLVE LINEAR EQUATIONS BY A LINEAR SOLVER, FIND AND DOWNLOAD THE MATH LIBRARY FOR THIS COMPILER
CALL SR1(A,B,N,S,VNEW)
!REPLACE OLD VELOCITY VALUES WITH NEW VALUES.
!SINCE UNEW IS FROM UNEW(1), UNEW(2)......., UNEW(N-1), WE SHOULD RE-ARRANGE NUMBERS AS FOLLOWS
DO J=1,N-1
V(J+1)=VNEW(J)
ENDDO
!RETURN TO MAIN LOOP HERE
ENDDO
PRINT*,'HERE'
!OUTPUT VELOCITY PROFILES AT THE END OF COMPUTATION
!CREATE OUPUT FILE NAME
OPEN(15,FILE='PLEASEWORK')
!WRITE GRID POINTS AND VELOCITY VALUES
DO I=1,N+1
WRITE(15,10) V(I),(I-1)*DY
10 FORMAT(2F12.3)
ENDDO
CLOSE(15)
!DISPLAY INFORMATION ON SCREEN
!WRITE(*,*) 'THE OUTPUT VELOCITY IS AFTER', ITER, ' TIME STEPS'
!TERMINATION OF COMPUTER PROGRAM
STOP
END
!!!!!!!!
!!!!!!!!!!!!
!!!!!!!!!
SUBROUTINE SR1(A,B,N,S,VNEW)
REAL DIAGM(N), DIAGU(N), DIAGL(N)
REAL SS(N)
DO J=1,N-1
SS(J)=S(J+1)
ENDDO
DO I=1,N
DIAGM(i)=B
!Sets main diagonal as B for every value of i
IF (I==0) then
DIAGU(I)=A
DIAGL(I)=0
! No lower diagonal coefficient when i = 0
ELSE IF (I==N) THEN
DIAGU(I)=0
! No upper diagonal coefficient when i = Num
DIAGL(I)=A
ELSE
DIAGU(I)=A
! For all other points there is an upper diagonal coefficient
DIAGL(I)=A
! For all other points there is a lower diagonal coefficient
ENDIF
ENDDO
!CALL STANDARD FORTRAN MATH LIBRARY TO SOLVE LINEAR EQUATION AND GET SOLUTION VECTOR X(N-1)
CALL SR2 (DIAGL,DIAGM,DIAGU,SS,VNEW,N-2)
!RETURN TO MAIN PROGRAM AND X(N-1) IS FEEDED INTO UNEW(N-1)
RETURN
END SUBROUTINE
!!!!!!!!!!!!!!!
!!!!!!!!!!!
!!!!!!!!!!!
SUBROUTINE SR2 (A,B,C,D,Z,N)
!a - sub-diagonal (means it is the diagonal below the main diagonal)
!b - the main diagonal
!c - sup-diagonal (means it is the diagonal above the main diagonal)
!K - right part
!UNEW - the answer
!E - number of equations
INTEGER N
REAL A(N), B(N), C(N), D(N)
REAL CP(N), DP(N), Z(N)
REAL M
INTEGER I
DATA M/1/
!initialize c-prime and d-prime
CP(1) = C(1)/B(1)
DP(1) = D(1)/B(1)
!solve for vectors c-prime and d-prime
DO I=2,N
M=b(i)-CP(I-1)*(A(I))
CP(I)=C(I)/M
DP(I)=(D(I)-DP(I-1)*A(I))/M
ENDDO
!initialize UNEW
Z(N)=DP(N)
!solve for x from the vectors c-prime and d-prime
DO I=N-1, 1, -1
Z(I)=DP(I)-CP(I)*Z(I+1)
ENDDO
END SUBROUTINE
As george says in a comment, the problem is with the subroutine SR1. So that this isn't just a CW-stealing-a-comment answer I'll also expand a bit.
The way things are structured, SR1 is in a different scope from the main program. The IMPLICIT NONE in the main program doesn't apply to the subroutine, so A, B, N, S and VNEW are all implicitly typed. Apart from N, which is an integer, they are (scalar) reals.
The reference to S(J+1), as george says, means that S is treated not as an array but as a function returning a real. Remember that SR1 is a different scope and no information about types, shapes, etc. is passed from the caller to the callee. Further, the fact that the dummy argument in SR1 called A happens to have the same name as the actual argument in the call doesn't mean that the callee "knows" anything about it. Your call to SR2 with VNEW is a problem for the same reason.
The question is tagged "fortran77", so there isn't much you can do within the language to get a lot of checking, but there may well be compiler options, and since you can evidently use IMPLICIT NONE (which is not Fortran 77), using it in the subroutines too would help detect your problems.
But, the question is also tagged "fortran" and "fortran95" so I'll point out that there are far better ways to detect the issues, using more modern features. Look at interfaces, modules and internal procedures.
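For example, a sketch of the module approach (this is not the assignment code; the argument names just follow the original SR1): with an explicit interface the compiler can then reject the scalar/array argument mismatches at compile time.
module couette_solvers
   implicit none                      ! applies to every procedure in the module
contains
   subroutine sr1(a, b, n, s, vnew)
      real,    intent(in)  :: a, b
      integer, intent(in)  :: n
      real,    intent(in)  :: s(:)    ! now visibly an array to every caller
      real,    intent(out) :: vnew(:)
      ! ... set up the tridiagonal system and call the solver here ...
   end subroutine sr1
end module couette_solvers
A main program that starts with "use couette_solvers" then gets a compile-time error if it passes, say, a scalar where s or vnew is expected.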