GVCSP in Fortran 90 - fortran

I want to find the eigenvalues and eigenvectors of a matrix that is symmetric positive definite. However, I cannot use the function GVCSP in Fortran on Linux. Is there another function in Fortran that is equivalent to the GVCSP function?
Here is my code:
program main
   integer, parameter :: n = 4
   real*8 A(n,n), eigval(n), eigvec(n,n), eigvalmatrix(n,n)
   integer i, j
   do i = 1, n
      eigval(i) = 0
      do j = 1, n
         eigvec(i,j) = 0
         eigvalmatrix(i,j) = 0
      end do
   end do
   A(1,1) = -1
   A(1,2) = 0
   A(1,3) = 4
   A(1,4) = 2
   A(2,1) = 0
   A(2,2) = 2
   A(2,3) = 3
   A(2,4) = -2
   A(3,1) = 4
   A(3,2) = 3
   A(3,3) = 0
   A(3,4) = -4
   A(4,1) = 2
   A(4,2) = -2
   A(4,3) = -4
   A(4,4) = 1
   call GVCSP(A, eigvalmatrix, eigval, eigvec)
end program
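(Not part of the original post.) GVCSP appears to be an IMSL routine for the generalized symmetric eigenproblem A x = lambda B x, so it is not available with a plain gfortran/LAPACK setup on Linux. For the ordinary symmetric eigenproblem, LAPACK's DSYEV is a standard replacement; for the generalized problem, DSYGV plays a similar role. A minimal sketch using DSYEV with the matrix from the question (program and variable names are mine; link with -llapack):
program eig_lapack
   implicit none
   integer, parameter :: n = 4
   real*8 :: A(n,n), eigval(n), work(3*n-1)
   integer :: info
   ! same symmetric test matrix as above, filled column by column
   A = reshape([ -1d0,  0d0,  4d0,  2d0, &
                  0d0,  2d0,  3d0, -2d0, &
                  4d0,  3d0,  0d0, -4d0, &
                  2d0, -2d0, -4d0,  1d0 ], [n,n])
   ! 'V': also compute eigenvectors (returned in the columns of A)
   ! 'U': reference the upper triangle of A
   call dsyev('V', 'U', n, A, n, eigval, work, size(work), info)
   if (info /= 0) print *, 'dsyev failed, info =', info
   print *, 'eigenvalues:', eigval
end program eig_lapack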

Related

Why does lapack dtrmm.f seem to not work properly?

I decided to use the LAPACK subroutine dtrmm instead of matmul to multiply a lower triangular (d,d) matrix by a general (d,n) matrix. However, it doesn't seem to work correctly. The following code compares the results of matmul (top) and dtrmm:
Program test
   implicit none
   integer, parameter :: n = 3, d = 3 ! arbitrary numbers, no luck with other values
   integer :: i
   real(8) :: a(d,d), b(d,n), c(d,n)
   call random_number(a)
   call random_number(b)
   do i = 2, d
      a(1:i-1,i) = 0
   end do
   c = matmul(a,b)
   call dtrmm('L','L','N','N',d,n,1,a,d,b,d) ! documentation linked in the question
   print*, 'correct result : '
   do i = 1, d
      print*, c(i,:)
   end do
   print*, 'dtrmm yields : '
   do i = 1, d
      print*, b(i,:)
   end do
End Program test
This returns:
correct result :
0.75678922130735249 0.51830058846965921 0.51177304237548271
1.1974740765385026 0.46115246753697681 0.98237114765741340
0.98027798241945430 0.53718796235743815 1.0328498570683342
dtrmm yields :
6.7262070844500211E+252 4.6065628207770121E+252 4.5485471599475983E+252
1.0642935166471391E+253 4.0986405551607272E+252 8.7311388520015068E+252
8.7125351741793473E+252 4.7744304178222945E+252 9.1797845822711462E+252
Other LAPACK subroutines I used work fine. What could be causing this to misbehave? Is this a bug, or have I just badly misunderstood something?
It is a simple data type error. The factor alpha must be of type double precision whereas you supplied an integer of default kind.
Thus
...
! call dtrmm('L','L','N','N',d,n,1,a,d,b,d) WRONG
call dtrmm('L','L','N','N',d,n,1d0,a,d,b,d) ! note 1d0 instead of 1
...
gives the correct result.
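(Not part of the original answer.) One way to have the compiler catch this kind of mistake is to give DTRMM an explicit interface, for example in a small module of your own; with it in scope, passing an integer literal for alpha becomes a compile-time error rather than silent garbage. A sketch, assuming the reference BLAS argument list (the module name is mine):
module blas_d_interfaces
   implicit none
   interface
      subroutine dtrmm(side, uplo, transa, diag, m, n, alpha, a, lda, b, ldb)
         implicit none
         character :: side, uplo, transa, diag
         integer :: m, n, lda, ldb
         double precision :: alpha, a(lda,*), b(ldb,*)
      end subroutine dtrmm
   end interface
end module blas_d_interfaces
With use blas_d_interfaces in Program test, the original call dtrmm(...,1,...) is rejected at compile time instead of producing garbage at run time.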

Fortran Array Entries Corrupted?

I'm writing a subroutine that transforms a regular vector into one holding only its non-zero elements. Say the vector is a=(0,0,1,2,3)' (n by 1). Then the non-zero vector is c=(1,2,3), and the row index is recorded as ic=(0,0,0,1,2,3), where ic(1)=0 and ic(i+1)-ic(i) is the number of non-zero elements in the i-th row. The index vector jc=(1,1,1) has size 3, as there are 3 non-zero entries. See the Wikipedia article on sparse matrices for reference: https://en.wikipedia.org/wiki/Sparse_matrix.
Despite its simplicity, I'm having trouble running the following code, named sparsem.f90:
!This subroutine converts a regular sparse matrix a into a CSR form
MODULE SPARSEM
CONTAINS
   SUBROUTINE vsparse(a,c,jc,ic,counta,ierr,myid)
      IMPLICIT NONE
      REAL(8), INTENT(IN):: a(:)
      INTEGER, INTENT(IN):: counta,myid
      REAL(8), INTENT(OUT):: c(counta)
      INTEGER, INTENT(OUT):: jc(counta),ic(size(a)+1)
      INTEGER:: ierr,countaa,i
      character(len=90):: filename
      ierr=0
      jc=0
      c=0.0d0
      ic=0
      PRINT *, 'SIZE OF A IN VSPARSE', size(a),count(a>0.0d0),counta
      IF (COUNT(a>0.0d0) /= counta) THEN
         ierr=1
         PRINT *, 'ERROR: number count of non-zero a(i,j) is not', counta
      ELSE
         countaa=0
         ic(1)=0
         DO i=1,size(a)
            IF (a(i) > 0.0d0 ) THEN
               countaa=countaa+1
               c(countaa)=a(i)
               ic(i+1)=ic(i)+1
               jc(countaa)=1
               IF (countaa<100) PRINT *,'checkcheckcheck', a(i), &
                  countaa,jc(countaa),c(countaa),jc(1:5)
            ELSE
               ic(i+1)=ic(i)
            END IF
         END DO
         PRINT *, 'JCJCJCJC',jc(1:5)
      END IF
      IF (myid==7) THEN
         WRITE(filename,'("sparsedens_dcheck",I1,".txt")') myid+1
         OPEN(UNIT=212101, FILE="/home/wenya/Workspace/Model4/valuef/"//filename,ACTION='write',status='replace')
         DO i=1,counta+1
            IF (i<=counta) THEN
               WRITE(212101,*) c(i),jc(i)
            ELSE
               WRITE(212101,*) 0.0D0,0
            END IF
         END DO
         CLOSE(212101)
      END IF
      return
   END SUBROUTINE vsparse
END MODULE SPARSEM
So the three PRINT statements involving jc should give 1 1 1 1 1 .... Yet starting from the second one, the result is 6750960 6750691 6750692 .... The array jc has size 9,000,000, and I know the first 2,250,000 elements are 0.
To replicate this problem, here is the main program
PROGRAM MAIN
   USE SPARSEM
   IMPLICIT NONE
   REAL(8):: dens_last(9000000)
   REAL(8), ALLOCATABLE :: dens(:)
   INTEGER, ALLOCATABLE :: ic(:),jc(:)
   INTEGER:: i, ncount, ierr
   dens_last(1:2250000)=0.0d0
   dens_last(2250001:9000000)=1.0d0/6750000.0d0
   ncount=count(dens_last>0.0d0)
   ALLOCATE(dens(ncount), ic(9000000+1), jc(ncount))
   CALL VSPARSE(dens_last, dens, jc, ic, ncount, ierr, 0)
   DEALLOCATE(dens,ic,jc)
END PROGRAM MAIN
I am using gfortran 6.3.0 and the latest version of OpenMPI on an Ubuntu 17.04 computer. Although OpenMPI is not used in this example, it is used in the rest of the program. Any thoughts? Thanks!
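(Not from the original post.) As an aside, the same compaction can be expressed with the COUNT and PACK intrinsics, which removes most of the manual bookkeeping; a minimal sketch with hypothetical names:
program pack_sketch
   implicit none
   real(8) :: a(5) = [0d0, 0d0, 1d0, 2d0, 3d0]
   real(8), allocatable :: c(:)
   integer, allocatable :: ic(:), jc(:)
   integer :: i, n
   n = size(a)
   allocate(c(count(a > 0d0)), jc(count(a > 0d0)), ic(n+1))
   c = pack(a, a > 0d0)      ! non-zero values: 1 2 3
   jc = 1                    ! column index is always 1 for an n-by-1 vector
   ic(1) = 0
   do i = 1, n
      ic(i+1) = ic(i) + merge(1, 0, a(i) > 0d0)   ! running count of non-zeros
   end do
   print *, c
   print *, ic               ! 0 0 0 1 2 3
end program pack_sketch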

How can I update this Runge-Kutta code for second order ODEs?

I've written a piece of Fortran code that solves first-order differential equations, for example the one currently in the function f. However, I want to use it for second-order ODEs (and eventually coupled second-order ODEs), but I'm struggling to adapt it. Here's the code:
program rungekutta
   implicit none
   integer :: istat
   real(8) :: a, b, h, y_0, t
   write(*,*) "Enter interval a,b, step-size h and y_0"
   read(*,*) a, b, h, y_0
   open(unit=1, file="rungekutta.dat", status='replace', iostat=istat)
   if (istat .ne. 0) stop 'error opening rungekutta.dat'
   call RungeKuttaSub(y_0, a, b, h)
   close(1)
contains
   subroutine RungeKuttaSub(y_0, a, b, h)
      implicit none
      real(8), intent(inout) :: a, h, b, y_0
      real(8) :: y, t, F_1, F_2, F_3, F_4
      t = a
      y = y_0
      do while (t <= b)
         F_1 = h*f(y, t)
         F_2 = h*f(y + (1d0/2d0)*F_1, t + h*(1d0/2d0))
         F_3 = h*f(y + (1d0/2d0)*F_2, t + h*(1d0/2d0))
         F_4 = h*f(y + F_3, t + h)
         y = y + (F_1 + 2d0*F_2 + 2d0*F_3 + F_4)/6d0
         write(1,*) t, y
         t = t + h
      end do
   end subroutine
   function f(y, t)
      implicit none
      real(8) :: f, y, t, pi
      pi = acos(-1d0)
      f = -y + sin(pi*2d0*t)
   end function
end program
So, as you can see, the differential equation in the function at the moment is y'(t) = -y(t) + sin(2*pi*t). How can I update this for second-order ODEs?
Thanks.
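(Not from the original post.) The usual approach is to rewrite y''(t) = g(t, y, y') as a first-order system u1' = u2, u2' = g(t, u1, u2) and apply exactly the same four RK4 stages to a length-2 array. A minimal sketch for the example y'' = -y, with the same structure as the code above (program name and initial data are mine):
program rk4_second_order
   implicit none
   real(8) :: a, b, h, t
   real(8), dimension(2) :: u, F_1, F_2, F_3, F_4
   a = 0d0; b = 10d0; h = 1d-3
   u = [0d0, 1d0]                 ! u(1) = y(a), u(2) = y'(a)
   t = a
   do while (t <= b)
      F_1 = h*f(u, t)
      F_2 = h*f(u + 0.5d0*F_1, t + 0.5d0*h)
      F_3 = h*f(u + 0.5d0*F_2, t + 0.5d0*h)
      F_4 = h*f(u + F_3, t + h)
      u = u + (F_1 + 2d0*F_2 + 2d0*F_3 + F_4)/6d0
      t = t + h
   end do
   print *, 'y(b) =', u(1), ' y''(b) =', u(2)
contains
   function f(u, t) result(du)
      real(8), intent(in) :: u(2), t
      real(8) :: du(2)
      du(1) = u(2)        ! y'  = u2
      du(2) = -u(1)       ! y'' = -y  (replace with your right-hand side)
   end function f
end program rk4_second_order
For coupled second-order ODEs the state vector simply grows to length 4 (two positions and two derivatives).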

What's wrong with my Hermite Interpolation in Fortran?

Hermite Interpolation woes
I am trying to find the Newton divided differences for the function and derivative values of a given set of x's. I'm running into serious problems: my code works for tiny examples but fails on bigger ones. As is clearly visible, my answers are very much larger than the original function values.
Does anybody have any idea what I'm doing wrong?
program inter
   implicit none
   integer :: n, m
   integer :: i
   real(kind=8), allocatable :: xVals(:), fxVals(:), newtonDivDiff(:), dxVals(:), zxVals(:), zdxVals(:), zfxVals(:)
   real(kind=8) :: Px
   real(kind=8) :: x
   Open(Unit=8, File="data/xVals")
   Open(Unit=9, File="data/fxVals")
   Open(Unit=10, File="data/dxVals")
   n = 4 ! literal number of data pts
   m = n*2+1
   !after we get the data points allocate the space
   allocate(xVals(0:n))
   allocate(fxVals(0:n))
   allocate(dxVals(0:n))
   allocate(newtonDivDiff(0:n))
   !allocate the zvalue arrays
   allocate(zxVals(0:m))
   allocate(zdxVals(0:m))
   allocate(zfxVals(0:m))
   !since the size is the same we can read in one loop
   do i = 0, n
      Read(8,*) xVals(i)
      Read(9,*) fxVals(i)
      Read(10,*) dxVals(i)
   end do
   ! construct the z illusion
   do i = 0, m, 2
      zxVals(i) = xVals(i/2)
      zxVals(i+1) = xVals(i/2)
      zdxVals(i) = dxVals(i/2)
      zdxVals(i+1) = dxVals(i/2)
      zfxVals(i) = fxVals(i/2)
      zfxVals(i+1) = fxVals(i/2)
   end do
   !slightly modified business as usual
   call getNewtonDivDiff(zxVals, zdxVals, zfxVals, newtonDivDiff, m)
   do i = 0, n
      call evaluatePolynomial(m, newtonDivDiff, xVals(i), Px, zxVals)
      print*, xVals(i), Px
   end do
   close(8)
   close(9)
   close(10)
   stop
   deallocate(xVals, fxVals, dxVals, newtonDivDiff, zxVals, zdxVals, zfxVals)
end program inter
subroutine getNewtonDivDiff(xVals, dxVals, fxVals, newtonDivDiff, n)
   implicit none
   integer :: i, k
   integer, intent(in) :: n
   real(kind=8), allocatable, dimension(:,:) :: table
   real(kind=8), intent(in) :: xVals(0:n), dxVals(0:n), fxVals(0:n)
   real(kind=8), intent(inout) :: newtonDivDiff(0:n)
   allocate(table(0:n,0:n))
   table = 0.0d0
   do i = 0, n
      table(i,0) = fxVals(i)
   end do
   do k = 1, n
      do i = k, n
         if (k .eq. 1 .and. mod(i,2) .eq. 1) then
            table(i,k) = dxVals(i)
         else
            table(i,k) = (table(i,k-1) - table(i-1,k-1))/(xVals(i) - xVals(i-k))
         end if
      end do
   end do
   do i = 0, n
      newtonDivDiff(i) = table(i,i)
      !print*, newtonDivDiff(i)
   end do
   deallocate(table)
end subroutine getNewtonDivDiff
subroutine evaluatePolynomial(n, newtonDivDiff, x, Px, xVals)
   implicit none
   integer, intent(in) :: n
   real(kind=8), intent(in) :: newtonDivDiff(0:n), xVals(0:n)
   real(kind=8), intent(in) :: x
   real(kind=8), intent(out) :: Px
   integer :: i
   Px = newtonDivDiff(n)
   do i = n, 1, -1
      Px = Px * (x - xVals(i-1)) + newtonDivDiff(i-1)
   end do
end subroutine evaluatePolynomial
Values
x f(x) f'(x)
1.16, 1.2337, 2.6643
1.32, 1.6879, 2.9989
1.48, 2.1814, 3.1464
1.64, 2.6832, 3.0862
1.8, 3.1553, 2.7697
Output
1.1599999999999999 62.040113431002474
1.3200000000000001 180.40121445431600
1.4800000000000000 212.36319446149312
1.6399999999999999 228.61845650513027
1.8000000000000000 245.11610836104515
You are accessing array newtonDivDiff out of bounds.
You first allocate it as 0:n (the main program's n), then you pass it to subroutine getNewtonDivDiff, which declares it as 0:n (the subroutine's n), but you pass m (m = n*2+1) as the actual argument for n. That means you tell the subroutine that the array has bounds 0:m, which is 0:9, but it only has bounds 0:4.
It is quite difficult to debug the program as it stands; I had to use valgrind. If you move your subroutines to a module and change the dummy arguments to assumed-shape arrays ((:) for these 1-D dummies), then the bound checking in gfortran (-fcheck=all) will catch the error.
Other notes:
kind=8 is ugly; 8 can mean different things for different compilers. If you want 64-bit variables, you can use kind=real64 (real64 comes from the module iso_fortran_env in Fortran 2008) or use selected_real_kind() (the Fortran 90 way of getting a kind parameter).
You do not have to deallocate your local allocatable arrays in the subroutines; they are deallocated automatically.
Your deallocate statement in the main program is after the stop statement, so it will never be executed. I would just delete the stop; there is no reason to have it.
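(Not part of the original answer.) A minimal sketch of the module/assumed-shape arrangement suggested above, using real64 from iso_fortran_env; the divided-difference table construction itself is elided, only the interface changes are shown (module name is mine):
module hermite_mod
   use iso_fortran_env, only: real64
   implicit none
contains
   subroutine getNewtonDivDiff(xVals, dxVals, fxVals, newtonDivDiff, n)
      integer, intent(in) :: n
      ! assumed-shape dummies: the compiler now knows the true extents,
      ! so -fcheck=all can flag newtonDivDiff(i) running past its last element
      real(real64), intent(in)    :: xVals(0:), dxVals(0:), fxVals(0:)
      real(real64), intent(inout) :: newtonDivDiff(0:)
      real(real64), allocatable   :: table(:,:)
      integer :: i, k
      allocate(table(0:n, 0:n))
      table = 0.0_real64
      ! ... same table construction as in the question ...
      do i = 0, n
         newtonDivDiff(i) = table(i,i)
      end do
   end subroutine getNewtonDivDiff
end module hermite_mod
Called as before with m as the last argument, the write to newtonDivDiff(0:m) now trips gfortran's run-time bound check instead of silently corrupting memory; the likely fix is then to allocate newtonDivDiff(0:m) in the main program so it can hold all 2n+2 Hermite coefficients.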

How to call subroutines in fortran parallelized by MPI?

My problem is that I don't know how to call subroutines when I use MPI in Fortran.
I have written this small code named TRY.f90, in which there is a subroutine named CONCENTRATION.f90. How should I change CONCENTRATION.f90 in order to make the code work?
PROGRAM TRY
   USE MPI
   integer status(mpi_status_size)
   INTEGER I, J, K, II, IERR, MY_ID, NUM_PROCS, PSP
   INTEGER , PARAMETER :: GRIDX =64, GRIDY=64
   REAL , DIMENSION(gridx,gridy) :: PSI
   PSI=0
   PRINT*, 'VARIABLE'
   CALL MPI_INIT(IERR)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD,MY_ID,IERR)
   CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NUM_PROCS,IERR)
   CALL CONCENTRATION(GRIDX, GRIDY, NUM_PROCS, MY_ID , PSI)
   IF (MY_ID .NE. 0) THEN
      CALL mpi_send( PSI(1+MY_ID*GRIDX/NUM_PROCS:(MY_ID+1)*GRIDX/NUM_PROCS:1,1:GRIDY:1),&
                     (GRIDX/NUM_PROCS)*GRIDY,mpi_real, 0,10,mpi_comm_world,ierr)
   END IF
   IF (MY_ID .EQ. 0) THEN
      DO II=1,NUM_PROCS-1
         CALL mpi_recv(PSI(1+II*GRIDX/NUM_PROCS:(II+1)*GRIDX/NUM_PROCS:1,1:GRIDY:1),&
                       (GRIDX/NUM_PROCS)*GRIDY,mpi_real, &
                       II,10,mpi_comm_world,status,ierr)
      END DO
   END IF
   CALL MPI_FINALIZE(IERR)
END PROGRAM TRY
I am using a subroutine named CONCENTRATION.f90 which is:
SUBROUTINE CONCENTRATION(GRIDX, GRIDY, NUM_PROCS, MY_ID , PSI)
   implicit none
   INTEGER*8, INTENT(IN) :: GRIDX, GRIDY
   INTEGER , INTENT(IN) :: NUM_PROCS, MY_ID
   REAL*8 , DIMENSION(GRIDX,GRIDY), INTENT(OUT) :: PSI
   INTEGER*8 I, J
   DO I=1+MY_ID*GRIDX/NUM_PROCS, (MY_ID+1)*GRIDX/NUM_PROCS
      DO J=1,GRIDY
         PSI(I,J)=2.0
      END DO
   END DO
END SUBROUTINE CONCENTRATION
The code currently gives me an error, so I think I need to make some changes to the subroutine CONCENTRATION.f90, or change the way I call it.
Could you please tell me what those changes are? Thanks for your help in advance.
Your program segfaults because of a type mismatch. In the main program you have declared PSI as an array of REAL:
REAL , DIMENSION(gridx,gridy) :: PSI
while in the CONCENTRATION subroutine you use another type of REAL*8:
REAL*8 , DIMENSION(GRIDX,GRIDY), INTENT(OUT) :: PSI
By default REAL is 4 bytes long while REAL*8 (or DOUBLE PRECISION or REAL(KIND=8)) is 8 bytes long. So you are giving CONCENTRATION an array that is half the size it believes it to be, and all ranks from NUM_PROCS/2 onwards write past the end of the PSI array and thus cause segfaults. If you run with one process only, then even rank 0 will segfault.
You should also read about MPI collective operations. MPI_GATHER and MPI_GATHERV do exactly what you are trying to achieve with multiple sends and receives here.
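(Not part of the original answer.) A minimal sketch of the collective approach, assuming NUM_PROCS divides GRIDY evenly and using a column-block decomposition so that each rank's piece is contiguous in Fortran's column-major storage (program and variable names are mine):
PROGRAM GATHER_SKETCH
   USE MPI
   IMPLICIT NONE
   INTEGER, PARAMETER :: GRIDX = 64, GRIDY = 64
   REAL(KIND=8) :: PSI(GRIDX, GRIDY)
   REAL(KIND=8), ALLOCATABLE :: LOCAL(:,:)
   INTEGER :: IERR, MY_ID, NUM_PROCS, NYLOC
   CALL MPI_INIT(IERR)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, MY_ID, IERR)
   CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NUM_PROCS, IERR)
   NYLOC = GRIDY/NUM_PROCS                 ! assumes NUM_PROCS divides GRIDY
   ALLOCATE(LOCAL(GRIDX, NYLOC))
   LOCAL = 2.0D0                           ! each rank fills only its own block
   ! one collective call assembles PSI on rank 0; note the matching kinds
   ! (REAL(KIND=8) on the Fortran side, MPI_DOUBLE_PRECISION on the MPI side)
   CALL MPI_GATHER(LOCAL, GRIDX*NYLOC, MPI_DOUBLE_PRECISION, &
                   PSI, GRIDX*NYLOC, MPI_DOUBLE_PRECISION, &
                   0, MPI_COMM_WORLD, IERR)
   IF (MY_ID == 0) PRINT *, 'PSI assembled, PSI(1,1) =', PSI(1,1)
   CALL MPI_FINALIZE(IERR)
END PROGRAM GATHER_SKETCH
The row-block layout of the original code is not contiguous in memory, so gathering it directly would need derived datatypes; switching the decomposition (or copying through a contiguous buffer) keeps the collective call simple.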
The only change would be to declare concentration as reentrant. That could be the default for Fortran 90. (The bulk of my experience is F77, and reentrant is not the default there.)