Process exchange (Fortran + MPI)

For my coursework I have to write an MPI program (in Fortran) that solves a PDE from continuum mechanics.
In the parallel program a large 3D computational domain (an MxNxL grid) is divided among the processes. Every process has its own subdomain (an NXxNYxNZ = (M/P1)x(N/P2)x(L/P3) grid, with P1*P2*P3 = P, the number of processes). After every computation step I have to exchange two border layers with the neighboring processes, or (this part is not important) apply the boundary conditions where a process has no neighbor in that direction. I have written the implementation below, but I don't know how to get rid of the extra data shuffling, and I want to make the code as fast as possible.
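For reference, COMM_CART and the neighbor ranks XB1/XB2, YB1/YB2, ZB1/ZB2 that appear below come from a standard 3D Cartesian topology; the setup looks roughly like this (a simplified sketch, not my exact code):
integer :: dims(3), COMM_CART, ierr
integer :: XB1, XB2, YB1, YB2, ZB1, ZB2
logical :: periods(3)
dims    = [P1, P2, P3]                  ! P1*P2*P3 = P processes
periods = [.false., .true., .false.]    ! the grid is cyclic in Y only
call MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, .true., COMM_CART, ierr)
call MPI_Cart_shift(COMM_CART, 0, 1, XB1, XB2, ierr)   ! left/right X neighbors
call MPI_Cart_shift(COMM_CART, 1, 1, YB1, YB2, ierr)   ! left/right Y neighbors
call MPI_Cart_shift(COMM_CART, 2, 1, ZB1, ZB2, ierr)   ! left/right Z neighbors
! MPI_Cart_shift returns MPI_PROC_NULL where a neighbor does not exist.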
if (XB2 /= MPI_PROC_NULL) then ! If process has right-X neighbor process
   call CopyToSb(NX-1,NX,1,NY,1,NZ,XPkgSize)
else
   call CopyXLevel(NX,NX+1,-1) ! BORDER PROCESSING (NOT IMPORTANT)
end if
call MPI_SENDRECV_REPLACE(BPtmp, 7*XPkgSize, MPI_REAL4, XB2, 1, XB1, 1, COMM_CART, MPI_STATUS_IGNORE, ierr)
if (XB1 /= MPI_PROC_NULL) then ! If process has left-X neighbor process
   call CopyFromSb(-1,0,1,NY,1,NZ,XPkg,XPkgSize)
   call CopyToSb(1,2,1,NY,1,NZ,XPkgSize)
else
   call CopyXLevel(1,0,1) ! BORDER PROCESSING (NOT IMPORTANT)
end if
call MPI_SENDRECV_REPLACE(BPtmp, 7*XPkgSize, MPI_REAL4, XB1, 2, XB2, 2, COMM_CART, MPI_STATUS_IGNORE, ierr)
if (XB2 /= MPI_PROC_NULL) call CopyFromSb(NX+1,NX+2,1,NY,1,NZ,XPkg,XPkgSize) ! If process has right-X neighbor process
if (iam /= YB1) then ! Because we have cyclic grid on Y direction
   if (YB2 /= MPI_PROC_NULL) call CopyToSb(1,NX,NY-1,NY,1,NZ,YPkgSize) ! If process has right-Y neighbor process
   call MPI_SENDRECV_REPLACE(BPtmp, 7*YPkgSize, MPI_REAL4, YB2, 3, YB1, 3, COMM_CART, MPI_STATUS_IGNORE, ierr)
   if (YB1 /= MPI_PROC_NULL) then ! If process has left-Y neighbor process
      call CopyFromSb(1,NX,-1,0,1,NZ,YPkg,YPkgSize)
      call CopyToSb(1,NX,1,2,1,NZ,YPkgSize)
   end if
   call MPI_SENDRECV_REPLACE(BPtmp, 7*YPkgSize, MPI_REAL4, YB1, 4, YB2, 4, COMM_CART, MPI_STATUS_IGNORE, ierr)
   if (YB2 /= MPI_PROC_NULL) call CopyFromSb(1,NX,NY+1,NY+2,1,NZ,YPkg,YPkgSize) ! If process has right-Y neighbor process
else
   call CopyYLevel(1,NY+1,1)   ! BORDER PROCESSING (NOT IMPORTANT)
   call CopyYLevel(NY,0,1)     ! BORDER PROCESSING (NOT IMPORTANT)
   call CopyYLevel(2,NY+2,1)   ! BORDER PROCESSING (NOT IMPORTANT)
   call CopyYLevel(NY-1,-1,1)  ! BORDER PROCESSING (NOT IMPORTANT)
end if
if (ZB2 /= MPI_PROC_NULL) then
   call CopyToSb(1,NX,1,NY,NZ-1,NZ,ZPkgSize)
else
   call CopyZLevel(NZ,NZ+1,-1) ! BORDER PROCESSING (NOT IMPORTANT)
end if
call MPI_SENDRECV_REPLACE(BPtmp, 7*ZPkgSize, MPI_REAL4, ZB2, 5, ZB1, 5, COMM_CART, MPI_STATUS_IGNORE, ierr)
if (ZB1 /= MPI_PROC_NULL) then
   call CopyFromSb(1,NX,1,NY,-1,0,ZPkg,ZPkgSize)
   call CopyToSb(1,NX,1,NY,1,2,ZPkgSize)
else
   call CopyZLevel(1,0,-1) ! BORDER PROCESSING (NOT IMPORTANT)
end if
call MPI_SENDRECV_REPLACE(BPtmp, 7*ZPkgSize, MPI_REAL4, ZB1, 6, ZB2, 6, COMM_CART, MPI_STATUS_IGNORE, ierr)
if (ZB2 /= MPI_PROC_NULL) call CopyFromSb(1,NX,1,NY,NZ+1,NZ+2,ZPkg,ZPkgSize)
contains
   ! These are the copy-to-send-buffer (CopyToSb) and copy-from-send-buffer (CopyFromSb) routines.
   ! I think this is the most problematic part of my code.
   subroutine CopyToSb(x1,x2,y1,y2,z1,z2,PkgSize)
      integer :: x1,x2,y1,y2,z1,z2,PkgSize
      BPtmp(1:PkgSize)             = RESHAPE(R(x1:x2,y1:y2,z1:z2),[PkgSize])
      BPtmp(PkgSize+1:2*PkgSize)   = RESHAPE(U(x1:x2,y1:y2,z1:z2),[PkgSize])
      BPtmp(2*PkgSize+1:3*PkgSize) = RESHAPE(V(x1:x2,y1:y2,z1:z2),[PkgSize])
      BPtmp(3*PkgSize+1:4*PkgSize) = RESHAPE(W(x1:x2,y1:y2,z1:z2),[PkgSize])
      BPtmp(4*PkgSize+1:5*PkgSize) = RESHAPE(P(x1:x2,y1:y2,z1:z2),[PkgSize])
      BPtmp(5*PkgSize+1:6*PkgSize) = RESHAPE(H(x1:x2,y1:y2,z1:z2),[PkgSize])
      BPtmp(6*PkgSize+1:7*PkgSize) = RESHAPE(S(x1:x2,y1:y2,z1:z2),[PkgSize])
   end subroutine CopyToSb
   subroutine CopyFromSb(x1,x2,y1,y2,z1,z2,Pkg,PkgSize)
      integer, dimension(3) :: Pkg
      integer :: x1,x2,y1,y2,z1,z2,PkgSize
      R(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(1:PkgSize),Pkg)
      U(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(PkgSize+1:2*PkgSize),Pkg)
      V(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(2*PkgSize+1:3*PkgSize),Pkg)
      W(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(3*PkgSize+1:4*PkgSize),Pkg)
      P(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(4*PkgSize+1:5*PkgSize),Pkg)
      H(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(5*PkgSize+1:6*PkgSize),Pkg)
      S(x1:x2,y1:y2,z1:z2) = RESHAPE(BPtmp(6*PkgSize+1:7*PkgSize),Pkg)
   end subroutine CopyFromSb
end
The problem is that I have to exchange array slices of size 2*NY*NZ, NX*2*NZ or NX*NY*2, but the arrays themselves are dimensioned (-1:NX+2, -1:NY+2, -1:NZ+2), so these slices are noncontiguous in memory. I know what MPI derived datatypes are, but I need some advice (and examples) on how to apply them here.
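As far as I understand, a subarray datatype could describe one of these noncontiguous slices in place, so the exchange could use the field arrays directly instead of packing into BPtmp. An untested sketch for the right-X send slice R(NX-1:NX,1:NY,1:NZ) of an array dimensioned (-1:NX+2,-1:NY+2,-1:NZ+2) (the type names are mine):
integer :: sizes(3), subsizes(3), starts(3)
integer :: xSendHi, xRecvLo, ierr
sizes    = [NX+4, NY+4, NZ+4]   ! full allocated extents
subsizes = [2,    NY,   NZ  ]   ! two X layers over the interior Y/Z range
starts   = [NX,   2,    2   ]   ! zero-based offset of element (NX-1,1,1)
call MPI_Type_create_subarray(3, sizes, subsizes, starts, MPI_ORDER_FORTRAN, &
                              MPI_REAL4, xSendHi, ierr)
call MPI_Type_commit(xSendHi, ierr)
! the matching receive type for the ghost slice R(-1:0,1:NY,1:NZ) uses starts = [0, 2, 2]
! then, per field and per step:
! call MPI_SENDRECV(R, 1, xSendHi, XB2, 1, R, 1, xRecvLo, XB1, 1, &
!                   COMM_CART, MPI_STATUS_IGNORE, ierr)
Whether seven such SENDRECVs per face (one per field) end up faster than packing everything into BPtmp is something I would still have to measure.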

Related

PARPACK implementation runs into memory errors

I am writing a module in Fortran 90 to run PARPACK on a given matrix. I have an existing ARPACK code which functions normally, as expected. I tried converting it to PARPACK, and it runs into memory errors during cleanup. I am fairly new to coding and Fortran, so please excuse any blunders I have made.
The code:
!ARPACK module
module parpack
implicit none
contains
subroutine parp
! use mpi
include '/usr/lib/x86_64-linux-gnu/openmpi/include/mpif.h'
integer comm, myid, nprocs, rc, nloc, status(MPI_STATUS_SIZE)
integer, parameter :: pres=8
integer nev, ncv, maxn, maxnev, maxncv
parameter (maxn=10**7, maxnev=maxn-1, maxncv=maxn)
! Arrays for SNAUPD
integer iparam(11), ipntr(14)
logical, allocatable :: select(:)
real(kind=pres), allocatable :: workd(:), workl(:), worktmp1(:), worktmp2(:)
! Scalars for SNAUPD
character bmat*1, which*2
integer ido, n, info, ierr, ldv
integer i, j, ishfts, maxitr, mode1, nconv
integer(kind=pres) lworkl
real(kind=pres) tol
! Arrays for SNEUPD
real(kind=pres), allocatable :: d(:,:), resid(:), v(:,:), workev(:), z(:,:)
! Scalars for SNEUPD
logical rvec, first
real sigmar, sigmai
!==============================================
real(kind=pres), allocatable :: mat(:,:)
open (11, file = 'matrix.dat', status = 'old')
read (11,*) n
!=============================================
! Dimension of the problem
nev = n/10
ncv = nev+2
ldv = n
bmat = 'I'
which = 'LM'
! Additional environment variables
ido = 0
tol = 0.0E+0
info = 0
lworkl = 3*ncv**2+6*ncv
! Algorithm Mode specifications:
ishfts = 1
maxitr = 300
mode1 = 1
iparam(1) = ishfts
iparam(3) = maxitr
iparam(7) = mode1
! Distribution to nodes
!=============================================
! Matrix allocation
allocate (mat(n,n))
! PDNAUPD
allocate (workd(5*n))
allocate (workl(lworkl))
allocate (resid(n))
allocate (worktmp1(n))
allocate (worktmp2(n))
! PDNEUPD
allocate (d(n,3))
allocate (v(ldv,ncv))
allocate (workev(3*n))
allocate (z(ldv,ncv))
allocate (select(ncv))
!===========================================
! Read Matrix from the provided file
mat = 0
read(11,*) mat
mat = transpose(mat)
!===========================================
! MPI Calling
call MPI_INIT(ierr)
comm = MPI_COMM_WORLD
call MPI_COMM_RANK(comm, myid, ierr)
call MPI_COMM_SIZE(comm, nprocs, ierr)
nloc = n/nprocs
! if ( mod(n, nprocs) .gt. myid ) nloc = nloc + n
!===============================================
20 continue
call pdnaupd(comm, ido, bmat, nloc, which, nev, tol, resid, ncv, v, ldv, iparam, ipntr, workd, workl, lworkl, info) !Top level solver
call MPI_BARRIER(comm,ierr)
print *, ido, info, iparam(5) !for testing
!===============================================
if (ido .eq. -1 .or. ido .eq. 1) then
worktmp1 = 0
if (myid .ne. 0) then !It is slave
call MPI_SEND(workd(ipntr(1)), nloc, MPI_DOUBLE_PRECISION, 0, 0, comm, ierr)
else !It is host
worktmp1(1:nloc) = workd(ipntr(1):ipntr(1)+nloc-1)
i = nprocs
if (i .gt. 1) then
do i=1,nprocs-1
call MPI_RECV(worktmp1(i*nloc+1), nloc, MPI_DOUBLE_PRECISION, i, 0, comm, status, ierr)
end do
endif
endif
call MPI_BARRIER(comm,ierr)
if (myid .eq. 0) then !It is host
! Matrix multiplication
worktmp2 = 0
call matmultiply(n, mat, worktmp1, worktmp2)
workd(ipntr(2):ipntr(2)+nloc-1) = worktmp2(1:nloc)
i = nprocs
if (i .gt. 1) then
do i=1,nprocs-1
call MPI_SEND(worktmp2(i*nloc+1), nloc, MPI_DOUBLE_PRECISION, i, 100*i, comm, ierr)
end do
endif
else !It is slave
call MPI_RECV(workd(ipntr(2)), nloc, MPI_DOUBLE_PRECISION, 0, 100*myid, comm, status, ierr)
endif
go to 20
! call matmultiply(n, mat, workd(ipntr(1):ipntr(1)+n-1), workd(ipntr(2):ipntr(2)+n-1))
! go to 20
endif
! print *, info !for testing
!===============================================================
! Post-processing for eigenvalues
rvec = .true.
if (myid .eq. 0) then
call pdneupd ( comm, rvec, 'A', select, d, d(1,2), z, ldv, sigmar, sigmai, &
workev, bmat, n, which, nev, tol, resid, ncv, v, ldv, iparam, ipntr, &
workd, workl, lworkl, info)
endif
! print *, info !for testing
close(11)
call MPI_FINALIZE(ierr)
return
end subroutine
!==============================================================================================
! Additional Function definitions
subroutine matmultiply(n, mat, v, w)
integer n, i, j
integer, parameter :: pres=8
real(kind = pres) mat(n,n), temp(n)
real(kind = pres) v(n), w(n)
temp = 0
do j = 1,n
do i = 1,n
temp(j) = temp(j) + mat(i,j)*v(i)
end do
end do
w = temp
return
end subroutine
end module
I apologize for the ton of redundant lines and comments; I have yet to clean it up for finalization.
When I run the code on a single thread with ./a.out, I get the following output:
Invalid MIT-MAGIC-COOKIE-1 key 1 0 1629760560
1 0 1629760560
1 0 1629760560
1 0 1629760560
.
.
. <a long chain, as the code exhausts all iterations>
. <the first number is ido, which starts at 1 instead of -1 for some reason; the second is
.  info, and the third is iparam(5), which is a random number until the final iteration>
.
99 1 1
munmap_chunk(): invalid pointer
Program received signal SIGABRT: Process abort signal.
Backtrace for this error:
#0 0x7f5a863d0d01 in ???
#1 0x7f5a863cfed5 in ???
#2 0x7f5a8620420f in ???
#3 0x7f5a8620418b in ???
#4 0x7f5a861e3858 in ???
#5 0x7f5a8624e3ed in ???
#6 0x7f5a8625647b in ???
#7 0x7f5a862566cb in ???
#8 0x560f05ac1819 in ???
#9 0x560f05abd7bc in checker
at /home/srivatsank/Desktop/fortran/lap_vs_arp/ptest/ptest.f90:45
#10 0x560f05abd8d9 in main
at /home/srivatsank/Desktop/fortran/lap_vs_arp/ptest/ptest.f90:3
Aborted (core dumped)
Line 45 in ptest is call parp.
Line 3 in ptest is use parpack (the name of the module).
The main code is as follows:
program checker
use parpack
use arpack
! use lapack
implicit none
!Program to test LAPACK and ARPACK
! 1. Variable definition
integer a,n,i
real, allocatable :: mat(:,:)
real t0, t1
a=2
! Loop
! do 20 a = 1,3
! Open File
open(unit=10, file = 'matrix.dat', status = 'replace')
! 2. Generate Symmetric matrices
n = 10**a
allocate (mat(n,n))
call RANDOM_NUMBER(mat)
! 3. Save symmetric matrices to r.dat
write (10,*) n
do 30 i=1,n
write(10,*) mat(i,:)
30 end do
deallocate(mat)
close(10)
! 4. Test time taken by each of the routines
! call cpu_time(t0)
! call arp
! call cpu_time(t1)
! print *, 'n:', n, 'ARPACK time taken:', t1-t0
call cpu_time(t0)
call parp
call cpu_time(t1)
print *, 'n:', n, 'PARPACK time taken:', t1-t0
!20 end do
end program checker
The memory error occurs at the very end of the subroutine, when the main program tries to return from the subroutine. I have verified this by adding print statements as the last lines of the subroutine.
And when running mpirun -np 4 a.out, the code just enters the pdneupd routine and sits there for eternity. Could anyone help?
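For comparison, here is the call pattern from the parallel ARPACK example drivers (pdndrv1-style, simplified; not my code and not a verified fix): there, both pdnaupd and pdneupd use the local dimension nloc, and pdneupd is called by every rank rather than only by rank 0:
call pdnaupd(comm, ido, bmat, nloc, which, nev, tol, resid, ncv, v, ldv, &
             iparam, ipntr, workd, workl, lworkl, info)
! ... reverse-communication loop: apply the operator to the local block ...
rvec = .true.
call pdneupd(comm, rvec, 'A', select, d, d(1,2), z, ldv, sigmar, sigmai, &
             workev, bmat, nloc, which, nev, tol, resid, ncv, v, ldv, &
             iparam, ipntr, workd, workl, lworkl, info)   ! executed by all ranks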

Question about MPI_SEND and MPI_RECV command

I modified my code and put MPI_RECV before MPI_SEND. This time I did not receive any error message; however, it seems the code still runs into a deadlock. I think so because I open some files (UNIT=11, 12, 13, 14) before the MPI_RECV and MPI_SEND calls, then collect data through these two calls and write it into those files, but no data is ever written to them. I paste my modified code below. Would you please have a look at it and give me some suggestions? Thank you so much.
PROGRAM MAIN
USE MPI
USE CAL
IMPLICIT NONE
INTEGER :: nb !Number of valence band
DOUBLE PRECISION :: me !Minimum eigen value
DOUBLE COMPLEX, ALLOCATABLE :: u_s1(:,:) !Array to store the contribution of each eigen state to the total spin orbit torque
DOUBLE COMPLEX, ALLOCATABLE :: u_s2(:,:) !Array to store the contribution of each eigen state to the total spin orbit torque
DOUBLE COMPLEX, ALLOCATABLE :: u_t1(:,:) !Array to collect the contribution of each eigen state to the total spin orbit torque from all processors
DOUBLE COMPLEX, ALLOCATABLE :: u_t2(:,:) !Array to collect the contribution of each eigen state to the total spin orbit torque from all processors
DOUBLE COMPLEX :: sr1 !Sum of Fermi surface part for spin orbit torque on all km(1) k points
DOUBLE COMPLEX :: sr2 !Sum of Fermi surface part for spin orbit torque on all km(1) k points
DOUBLE PRECISION, ALLOCATABLE, TARGET :: nme(:) !Array to store the minimum eigen value
INTEGER, ALLOCATABLE, TARGET :: nnb(:) !Array to store the number of valence band
INTEGER :: world_size !MPI
INTEGER :: world_rank, ierr !MPI
INTEGER :: irank, j0 !MPI
!
!Initializing MPI
CALL MPI_Init(ierr)
CALL MPI_Comm_size(MPI_COMM_WORLD, world_size, ierr)
CALL MPI_Comm_rank(MPI_COMM_WORLD, world_rank, ierr)
!
!Opening file that stores the total spin orbit torque value for the Fermi surface part
OPEN (UNIT=11, FILE='SOT_Surface.dat', STATUS='UNKNOWN')
!
!Opening file that stores the spin orbit torque for the Fermi surface part versus energy
OPEN (UNIT=12, FILE='SOT_Surface_sve_xz.dat', STATUS='UNKNOWN')
OPEN (UNIT=13, FILE='SOT_Surface_sve_yz.dat', STATUS='UNKNOWN')
!
!Opening file that stores the minimum eigen value and number of valence band
OPEN (UNIT=14, FILE='SOT_mineig_numval.dat', STATUS='UNKNOWN')
!
!Allocating the array used to store the contribution of each eigen state to the total spin orbit torque
ALLOCATE (u_s1(2,nu_wa*km(1)))
ALLOCATE (u_s2(2,nu_wa*km(1)))
!
!Allocating array to collect the contribution of each eigen state to the total spin orbit torque from all processors
IF (world_rank .EQ. 0) THEN
ALLOCATE (u_t1(2,nu_wa*km(1)*km(2)))
ALLOCATE (u_t2(2,nu_wa*km(1)*km(2)))
END IF
u_t1 = CMPLX(0.0d0, 0.0d0)
u_t2 = CMPLX(0.0d0, 0.0d0)
!
!Allocating array to collect the number of valence band and the minimum eigen value
IF (world_rank .EQ. 0) THEN
ALLOCATE (nme(km(2)))
ALLOCATE (nnb(km(2)))
END IF
nme = 0.0d0
nnb = 0
!
!opening test files
open (unit=21,file='normalisedprefactor.dat',status='unknown')
open (unit=22,file='gd.dat',status='unknown')
open (unit=23,file='con.dat',status='unknown')
open (unit=24,file='par.dat',status='unknown')
open (unit=25,file='grga.dat',status='unknown')
open (unit=26,file='nfdk.dat',status='unknown')
!Reading the Cartesian coordinates of k-point mesh
DO j = 1, km(2), 1
IF (mod(j-1, world_size) .NE. world_rank) CYCLE
DO k = 1, km(1), 1
kp(k,:) = ka(j,k,:)
END DO
!Building up Hamiltonian matrix on k points and diagonalising the matrix to obtain Eigen vectors and values
CALL HAMSUR(vd,kp,nu_wa,nu_nr,km(1),nd1,nd2,nd3,nd4,nd5,hr1,hr2,hr3,hr4,hr5,tb,ec,ev,fermi,an,wf,bv,dk,u_s1,u_s2,sr1,sr2,nb,me)
!
CALL MPI_Barrier(MPI_COMM_WORLD, ierr)
IF (WORLD_RANK .EQ. 0) THEN
u_t1(1:2,1+nu_wa*km(1)*(j-1):nu_wa*km(1)*j) = u_s1
u_t2(1:2,1+nu_wa*km(1)*(j-1):nu_wa*km(1)*j) = u_s2
DO k = 1, WORLD_SIZE-1, 1
IF (j-1+k .EQ. km(2)) EXIT
l = k + 101
n = k + 102
CALL MPI_RECV(u_t1(1,1+nu_wa*km(1)*(j-1+k)), 2*nu_wa*km(1), MPI_DOUBLE_COMPLEX, k,l, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
CALL MPI_RECV(u_t2(1,1+nu_wa*km(1)*(j-1+k)), 2*nu_wa*km(1), MPI_DOUBLE_COMPLEX, k,n, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
END DO
ELSE
l = WORLD_RANK + 101
n = WORLD_RANK + 102
CALL MPI_SEND(u_s1,2*nu_wa*km(1), MPI_DOUBLE_COMPLEX, 0, l, MPI_COMM_WORLD, ierr)
CALL MPI_SEND(u_s2,2*nu_wa*km(1), MPI_DOUBLE_COMPLEX, 0, n, MPI_COMM_WORLD, ierr)
END IF
crr1 = crr1 + sr1
crr2 = crr2 + sr2
IF (WORLD_RANK .EQ. 0) THEN
nme(j-1) = me
nnb(j-1) = nb
DO k = 1, WORLD_SIZE-1, 1
IF (j-1+k .EQ. km(2)) EXIT
l = k + 103
n = k + 104
CALL MPI_RECV(nme(j-1+k), 1, MPI_DOUBLE, k,l, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
CALL MPI_RECV(nnb(j-1+k), 1, MPI_INT, k,n, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
END DO
ELSE
l = WORLD_RANK + 103
n = WORLD_RANK + 104
CALL MPI_SEND(me, 1, MPI_DOUBLE, 0, l, MPI_COMM_WORLD, ierr)
CALL MPI_SEND(nb, 1, MPI_INT, 0, n, MPI_COMM_WORLD, ierr)
END IF
END DO
!
CALL MPI_Barrier(MPI_COMM_WORLD, ierr)
IF (world_rank .EQ. 0) THEN
ALLOCATE (crr1_all(world_size))
ALLOCATE (crr2_all(world_size))
END IF
crr1_all = CMPLX(0.0d0, 0.0d0)
crr2_all = CMPLX(0.0d0, 0.0d0)
CALL MPI_Gather(crr1, 1, MPI_double_complex, crr1_all, 1, MPI_double_complex, 0, MPI_COMM_WORLD, ierr)
CALL MPI_Gather(crr2, 1, MPI_double_complex, crr2_all, 1, MPI_double_complex, 0, MPI_COMM_WORLD, ierr)
!Writing total conductivity value into the file
IF (world_rank .EQ. 0) THEN
crr1_total = CMPLX(0.0d0, 0.0d0)
crr2_total = CMPLX(0.0d0, 0.0d0)
DO i = 1, world_size, 1
crr1_total = crr1_total + crr1_all(i)
crr2_total = crr2_total + crr2_all(i)
END DO
!Finding the minimum eigen value
NULLIFY (p1, p2)
p1 => nme(1)
p2 => nnb(1)
DO i = 2, km(2), 1
IF (p1 .GE. nme(i)) THEN
p1 => nme(i)
END IF
IF (p2 .LE. nnb(i)) THEN
p2 => nnb(i)
END IF
END DO
WRITE (UNIT=14, FMT='(A27,$)') 'The minimum eigen value is:'
WRITE (UNIT=14, FMT=*) p1
WRITE (UNIT=14, FMT='(A30,$)') 'The number of valence band is:'
WRITE (UNIT=14, FMT=*) p2
!
!Constant for the coefficient
pi = DACOS(-1.0d0)
hb = 1.054571817d-34 !(unit - J)
es = 1.602176634d-19 !(unit - J*s)
!
WRITE (UNIT=11, FMT='(A55,$)') 'Spin Orbit Torque without coeffieicnt within x-z plane:'
WRITE (UNIT=11, FMT=*) crr1_total
WRITE (UNIT=11, FMT='(A52,$)') 'Spin Orbit Torque with coefficient within x-z plane:'
WRITE (UNIT=11, FMT=*) crr1_total * es ** 2 * hb / 4.0d0 / pi
WRITE (UNIT=11, FMT='(A55,$)') 'Spin Orbit Torque without coeffieicnt within y-z plane:'
WRITE (UNIT=11, FMT=*) crr2_total
WRITE (UNIT=11, FMT='(A52,$)') 'Spin Orbit Torque with coefficient within y-z plane:'
WRITE (UNIT=11, FMT=*) crr2_total * es ** 2 * hb / 4.0d0 / pi
DO i = 1, nu_wa*km(1)*km(2), 1
WRITE (UNIT=12, FMT=*) u_t1(1:2,i)
WRITE (UNIT=13, FMT=*) u_t2(1:2,i)
END DO
END IF
!
!Finalising MPI
CALL MPI_Finalize(ierr)
!
!Deallocating arrays that store and collect the Fermi-surface-part contribution of each eigen state to the total spin orbit torque
DEALLOCATE (u_s1)
DEALLOCATE (u_s2)
DEALLOCATE (u_t1)
DEALLOCATE (u_t2)
!
STOP
END PROGRAM MAIN

Gathering 2D Arrays in Fortran90, MPI_Gather

I have a problem with combining several 2D arrays into one big 2D array using MPI in Fortran.
I have equal-size 2D arrays containing real numbers; each array is held by a different process:
numerical(subdsize,nt)
I want to combine them to one big array
numerical_final(nx,nt)
I am using the following command
CALL MPI_Gather(numerical(1:subdsize,nt),subdsize*nt,MPI_DOUBLE_PRECISION,numerical_final,subdsize*nt,MPI_DOUBLE_PRECISION,0, MPI_COMM_WORLD, mpierror)
Unfortunately, the data in the numerical_final array is a complete mess. I have been looking for a solution everywhere. I read this topic, but it did not help me:
sending blocks of 2D array in C using MPI
I am using Intel Fortran 2018 compiler and Ubuntu 16.04.
Full code below.
I will be grateful for the help.
PROGRAM Advection
IMPLICIT NONE
INCLUDE 'mpif.h'
INTEGER :: nt,nx,i,steptime,tag,j
DOUBLE PRECISION :: R_dx, R_dt, R_c, R_cfl, R_t
DOUBLE PRECISION, DIMENSION(3) :: R_input
DOUBLE PRECISION, ALLOCATABLE, DIMENSION(:) :: xcoord
DOUBLE PRECISION, ALLOCATABLE, DIMENSION(:,:) :: numerical, numerical_final
DOUBLE PRECISION :: time_begin,time_end,time_elapsed
INTEGER:: myrank,nproc,mpierror,xdomains,subdsize
INTEGER:: status(MPI_STATUS_SIZE)
CALL MPI_Init(mpierror)
CALL MPI_Comm_size(MPI_COMM_WORLD,nproc,mpierror)
CALL MPI_Comm_rank(MPI_COMM_WORLD,myrank,mpierror)
IF (nproc<2) THEN
PRINT*, "Error, only more than 1"
CALL MPI_ABORT
END IF
IF (myrank .EQ. 0) THEN
OPEN(UNIT = 1, FILE = 'inputdata.dat')
READ(1,*) R_input(1)
READ(1,*) R_input(2)
READ(1,*) R_input(3)
READ(1,*) nx
CLOSE(1)
END IF
CALL MPI_Bcast(R_input, 3, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, mpierror)
CALL MPI_Bcast(nx, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, mpierror)
R_c=R_input(1)
R_cfl=R_input(2)
R_t=R_input(3)
R_dx=80./(nx-1)
nt=15
R_dt=R_t/(nt-1)
IF (myrank .EQ. 0) THEN
PRINT*, R_c*R_dt/R_dx
END IF
xdomains = nproc
IF ((MOD(nx,xdomains))==0) THEN
subdsize =nx/xdomains
ELSE
DO
nx=nx+1
IF ((MOD(nx,xdomains)) .EQ. 0) THEN
subdsize=nx/xdomains
EXIT
END IF
END DO
END IF
! ARRAYS
ALLOCATE(xcoord(0:subdsize+1))
ALLOCATE(numerical(0:subdsize+1,nt))
DO i=0,subdsize+1
xcoord(i) = -40.-R_dx+i*R_dx+myrank*R_dx*subdsize
END DO
DO i = 0,subdsize+1
numerical(i,1)=0.5*(sign(1.,xcoord(i))+1.0)
END DO
IF (myrank .EQ. 0) THEN
DO i=1,nt
numerical(0:1,i)=0.
END DO
END IF
IF (myrank .EQ. nproc-1) THEN
DO i=1,nt
numerical(subdsize:subdsize+1,i)=1.
END DO
END IF
DO steptime=1, nt-1
tag = 1
IF (myrank .LT. nproc-1) THEN
CALL MPI_Send (numerical(subdsize,steptime),1,MPI_DOUBLE_PRECISION,myrank+1,tag,MPI_COMM_WORLD,mpierror)
END IF
IF (myrank .GT. 0) THEN
CALL MPI_Recv (numerical(0,steptime),1,MPI_DOUBLE_PRECISION,myrank-1,tag,MPI_COMM_WORLD,status,mpierror )
END IF
IF (myrank .EQ. 0) THEN
DO i = 2, subdsize+1
numerical(i,steptime+1)=numerical(i,steptime)-R_c*R_dt/R_dx*(numerical(i,steptime)-numerical(i-1,steptime))
END DO
ELSE
DO i = 1, subdsize+1
numerical(i,steptime+1)=numerical(i,steptime)-R_c*R_dt/R_dx*(numerical(i,steptime)-numerical(i-1,steptime))
END DO
END IF
END DO
ALLOCATE(numerical_final(nx,nt))
CALL MPI_Gather(numerical(1:subdsize,nt),subdsize*nt,MPI_DOUBLE_PRECISION,numerical_final,subdsize*nt,MPI_DOUBLE_PRECISION,0, MPI_COMM_WORLD, mpierror)
CALL MPI_Finalize(mpierror)
DEALLOCATE (numerical,numerical_final)
END PROGRAM
And the input file:
1.5 !c
0.5 !Courant
5.0 !time
100 !x points
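For what it is worth, the only layout I can think of that keeps both the send and the receive side contiguous is to gather one time step at a time (untested sketch; it assumes nx is a multiple of nproc, which the padding loop above enforces):
DO steptime = 1, nt
   CALL MPI_Gather(numerical(1:subdsize,steptime), subdsize, MPI_DOUBLE_PRECISION, &
                   numerical_final(1,steptime),    subdsize, MPI_DOUBLE_PRECISION, &
                   0, MPI_COMM_WORLD, mpierror)
END DO
Each rank would then contribute the subdsize interior points of column steptime, and rank 0 would receive them in rank order into column steptime of numerical_final.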

Write several distributed arrays with MPI IO

I am rewriting a numerical simulation code that is parallelized using MPI in one direction.
So far, the arrays containing the data have been saved by the master MPI process, which implies transferring the data from all MPI processes to one and allocating huge arrays to store the whole thing. It is not very efficient or classy, and it becomes a problem at large resolutions.
I am therefore trying to use MPI-IO to write the file directly from the distributed arrays. One of the constraints I have is that the written file needs to respect the Fortran "unformatted" format (i.e. a 4-byte integer before and after each field, indicating its size).
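Concretely, my understanding of that format (for a sequential unformatted file written by gfortran or ifort with default settings and records smaller than 2 GiB; this is my assumption, not something specified in the code) is that each record is laid out as:
4-byte integer : payload size in bytes (here (imax+1)*(jmax+2)*(kmax+2)*8)
payload        : the raw array data
4-byte integer : the same payload size again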
I wrote a simple test program that works when I write only one distributed array to the file. However, when I write several arrays, the total size of the file is wrong, and when comparing it to the equivalent Fortran 'unformatted' file, the files are different.
Here is the sample code:
module arrays_dim
implicit none
INTEGER, PARAMETER :: dp = kind(0.d0)
integer, parameter :: imax = 500
integer, parameter :: jmax = 50
integer, parameter :: kmax = 10
end module arrays_dim
module mpi_vars
use mpi
implicit none
integer, save :: ierr, myID, numprocs
integer, save :: i_start, i_end, i_mean, i_loc
integer, save :: subArray, fileH
integer(MPI_OFFSET_KIND), save :: offset, currPos
end module mpi_vars
program test
use mpi
use arrays_dim
use mpi_vars
real(dp), dimension(0:imax,0:jmax+1,0:kmax+1) :: v, w
real(dp), dimension(:,:,:), allocatable :: v_loc, w_loc
integer :: i, j, k
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, myID, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
i_mean = (imax+1)/numprocs
i_start = myID*i_mean
i_end = i_start+i_mean-1
if(i_mean*numprocs<imax+1) then
if(myID == numprocs-1) i_end = imax
endif
i_loc = i_end - i_start + 1
allocate(v_loc(i_start:i_end,0:jmax+1,0:kmax+1))
allocate(w_loc(i_start:i_end,0:jmax+1,0:kmax+1))
print*, 'I am:', myID, i_start, i_end, i_loc
do k=0,kmax+1
do j=0,jmax+1
do i=0,imax
v(i,j,k) = i+j+k
w(i,j,k) = i*j*k
enddo
enddo
enddo
if(myID==0) then
open(10,form='unformatted')
write(10) v
!write(10) w
close(10)
endif
do k=0,kmax+1
do j=0,jmax+1
do i=i_start,i_end
v_loc(i,j,k) = i+j+k
w_loc(i,j,k) = i*j*k
enddo
enddo
enddo
call MPI_Type_create_subarray (3, [imax+1, jmax+2, kmax+2], [i_loc, jmax+2, kmax+2], &
[i_start, 0, 0], &
MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, subArray, ierr)
call MPI_Type_commit(subArray, ierr)
call MPI_File_open(MPI_COMM_WORLD, 'mpi.dat', &
MPI_MODE_WRONLY + MPI_MODE_CREATE + MPI_MODE_APPEND, &
MPI_INFO_NULL, fileH, ierr )
call saveMPI(v_loc, (i_loc)*(jmax+2)*(kmax+2))
!call saveMPI(w_loc, (i_loc)*(jmax+2)*(kmax+2))
call MPI_File_close(fileH, ierr)
deallocate(v_loc,w_loc)
call MPI_FINALIZE(ierr)
end program test
!
subroutine saveMPI(array, n)
use mpi
use arrays_dim
use mpi_vars
implicit none
real(dp), dimension(n) :: array
integer :: n
offset = (imax+1)*(jmax+2)*(kmax+2)*8
if(myID==0) then
call MPI_File_seek(fileH, int(0,MPI_OFFSET_KIND), MPI_SEEK_CUR, ierr)
call MPI_File_write(fileH, [(imax+1)*(jmax+2)*(kmax+2)*8], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_seek(fileH, offset, MPI_SEEK_CUR, ierr)
call MPI_File_write(fileH, [(imax+1)*(jmax+2)*(kmax+2)*8], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
endif
call MPI_File_set_view(fileH, int(4,MPI_OFFSET_KIND), MPI_DOUBLE_PRECISION, subArray, 'native', MPI_INFO_NULL, ierr)
call MPI_File_write_all(fileH, array, (i_loc)*(jmax+2)*(kmax+2), MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
end subroutine saveMPI
When the lines !write(10) w and !call saveMPI(w_loc, (i_loc)*(jmax+2)*(kmax+2)) are commented out (i.e. I write only the v array), the code works fine:
mpif90.openmpi -O3 -o prog main.f90
mpirun.openmpi -np 4 ./prog
cmp mpi.dat fort.10
cmp does not generate an output, so the files are identical.
If, however, I uncomment these lines, the resulting files (mpi.dat and fort.10) are different. I am sure the problem lies in the way I define the offset used to write the data at the right position in the file, but I do not know how to tell the second call of saveMPI that the initial position should be the end of the file. What am I missing?
Only the first call to saveMPI works as you expect it to. Everything gets messed up from the second call on. Here are a few indications of what is happening:
MPI_File_set_view resets the independent file pointers and the shared file pointer to zero. See MPI_File_set_view for more details. So you are actually overwriting the v data with the w data when you call MPI_File_set_view in saveMPI.
With MPI_File_write, the data is written into those parts of the file specified by the current view. This means that the way you are adding the size information to the file is not really compatible with the view previously set for v.
Calling MPI_File_seek with MPI_SEEK_CUR sets the position relative to the current position of the individual pointer. So, for the second call, it is relative to the individual pointer of process 0.
I do not use parallel IO that much, so I cannot help more than this unless I dig into the docs, which I do not have time to do. The hints I can give are to:
add an additional parameter to saveMPI that will contain the absolute displacement of the data to write; this can be an [in out] argument. For the first call it will be zero, and for subsequent calls it will be the size of all data already written to the file, including the size information. It can be updated in saveMPI.
before writing the size information (by process 0), call MPI_File_set_view to reset the view to a linear byte stream, as originally given by MPI_File_open. This can be done by setting both the etype and the filetype to MPI_BYTE in the call to MPI_File_set_view. Look into the doc of MPI_File_open for more information. You will then have two calls to MPI_File_set_view in saveMPI.
Your saveMPI subroutine could look like
subroutine saveMPI(array, n, disp)
use mpi
use arrays_dim
use mpi_vars
implicit none
real(dp), dimension(n) :: array
integer :: n, disp
offset = (imax+1)*(jmax+2)*(kmax+2)*8
call MPI_File_set_view(fileH, int(disp,MPI_OFFSET_KIND), MPI_BYTE, MPI_BYTE, 'native', MPI_INFO_NULL, ierr)
if(myID==0) then
call MPI_File_seek(fileH, int(0,MPI_OFFSET_KIND), MPI_SEEK_END, ierr)
call MPI_File_write(fileH, [(imax+1)*(jmax+2)*(kmax+2)*8], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
call MPI_File_seek(fileH, int(offset,MPI_OFFSET_KIND), MPI_SEEK_CUR, ierr)
call MPI_File_write(fileH, [(imax+1)*(jmax+2)*(kmax+2)*8], 1, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)
endif
call MPI_File_set_view(fileH, int(disp+4,MPI_OFFSET_KIND), MPI_DOUBLE_PRECISION, subArray, 'native', MPI_INFO_NULL, ierr)
call MPI_File_write_all(fileH, array, (i_loc)*(jmax+2)*(kmax+2), MPI_DOUBLE_PRECISION, MPI_STATUS_IGNORE, ierr)
disp = disp+offset+8
end subroutine saveMPI
and called like:
disp = 0
call saveMPI(v_loc, (i_loc)*(jmax+2)*(kmax+2), disp)
call saveMPI(w_loc, (i_loc)*(jmax+2)*(kmax+2), disp)
Finally, make sure that you delete the file between two runs, because you are using MPI_MODE_APPEND.

MPI struct datatype with an array

I would like to easily send a someObject in one MPI_SEND/RECV call with MPI.
type someObject
integer :: foo
real :: bar,baz
double precision :: a,b,c
double precision, dimension(someParam) :: x, y
end type someObject
I started using an MPI_TYPE_STRUCT, but then realized that the sizes of the arrays x and y depend on someParam. I initially thought of nesting an MPI_TYPE_CONTIGUOUS in the struct to represent the arrays, but I cannot seem to get this to work, if it is even possible.
! Setup description of the 1 MPI_INTEGER field
offsets(0) = 0
oldtypes(0) = MPI_INTEGER
blockcounts(0) = 1
! Setup description of the 2 MPI_REAL fields
call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
offsets(1) = blockcounts(0) * extent
oldtypes(1) = MPI_REAL
blockcounts(1) = 2
! Setup descripton of the 3 MPI_DOUBLE_PRECISION fields
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
offsets(2) = offsets(1) + blockcounts(1) * extent
oldtypes(2) = MPI_DOUBLE_PRECISION
blockcounts(2) = 3
! Setup x and y MPI_DOUBLE_PRECISION array fields
call MPI_TYPE_CONTIGUOUS(someParam, MPI_DOUBLE_PRECISION, sOarraytype, ierr)
call MPI_TYPE_COMMIT(sOarraytype, ierr)
call MPI_TYPE_EXTENT(sOarraytype, extent, ierr)
offsets(3) = offsets(2) + blockcounts(2) * extent
oldtypes(3) = sOarraytype
blockcounts(3) = 2 ! x and y
! Now Define structured type and commit it
call MPI_TYPE_STRUCT(4, blockcounts, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)
What I would like to do:
...
type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
do i=1,someParam
x(i) = i
y(i) = i
end do
newObject = someObject(1,0.0,1.0,2.0,3.0,4.0,x,y)
MPI_SEND(newObject, 1, sOtype, 1, 1, MPI_COMM_WORLD, ierr) ! master
...
! slave would:
MPI_RECV(rcvObject, 1, sOtype, master, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
WRITE(*,*) rcvObject%foo
do i=1,someParam
WRITE(*,*) rcvObject%x(i), rcvObject%y(i)
end do
...
So far I am just getting segmentation faults, without much indication of what I'm doing wrong or whether this is even possible. The documentation never said I couldn't use a contiguous datatype inside a struct datatype.
From what I can tell, you can't nest those kinds of datatypes, and that approach was a completely wrong solution.
Thanks to: http://static.msi.umn.edu/tutorial/scicomp/general/MPI/mpi_data.html and http://www.osc.edu/supercomputing/training/mpi/Feb_05_2008/mpi_0802_mod_datatypes.pdf for guidance.
The right way to define the MPI_TYPE_STRUCT is as follows:
type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
data x/someParam * 0/, y/someParam * 0/
integer sOtype, oldtypes(0:7), blocklengths(0:7), offsets(0:7), iextent, rextent, dpextent
! Define MPI datatype for someObject object
! set up extents
call MPI_TYPE_EXTENT(MPI_INTEGER, iextent, ierr)
call MPI_TYPE_EXTENT(MPI_REAL, rextent, ierr)
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, dpextent, ierr)
! setup blocklengths /foo,bar,baz,a,b,c,x,y/
data blocklengths/1,1,1,1,1,1,someParam,someParam/
! setup oldtypes
oldtypes(0) = MPI_INTEGER
oldtypes(1) = MPI_REAL
oldtypes(2) = MPI_REAL
oldtypes(3) = MPI_DOUBLE_PRECISION
oldtypes(4) = MPI_DOUBLE_PRECISION
oldtypes(5) = MPI_DOUBLE_PRECISION
oldtypes(6) = MPI_DOUBLE_PRECISION
oldtypes(7) = MPI_DOUBLE_PRECISION
! setup offsets
offsets(0) = 0
offsets(1) = iextent * blocklengths(0)
offsets(2) = offsets(1) + rextent*blocklengths(1)
offsets(3) = offsets(2) + rextent*blocklengths(2)
offsets(4) = offsets(3) + dpextent*blocklengths(3)
offsets(5) = offsets(4) + dpextent*blocklengths(4)
offsets(6) = offsets(5) + dpextent*blocklengths(5)
offsets(7) = offsets(6) + dpextent*blocklengths(6)
! Now Define structured type and commit it
call MPI_TYPE_STRUCT(8, blocklengths, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)
That allows me to send and receive the object the way I originally wanted!
The MPI struct type is a big headache. If this code is not in a performance-critical part of your program, look into MPI_PACKED. The packing call is relatively slow (basically one function call per element you're sending!), so don't use it for very large messages, but it is fairly easy to use and very flexible in what you can send.
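A rough sketch of what the MPI_PACKED route could look like for someObject (illustrative only; in real code the buffer size should be computed with MPI_PACK_SIZE rather than guessed):
integer, parameter :: bufsize = 100000        ! generous guess; use MPI_PACK_SIZE in practice
character(len=1) :: buf(bufsize)
integer :: position, ierr, status(MPI_STATUS_SIZE)
! sender
position = 0
call MPI_PACK(newObject%foo, 1,         MPI_INTEGER,          buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%bar, 1,         MPI_REAL,             buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%baz, 1,         MPI_REAL,             buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%a,   1,         MPI_DOUBLE_PRECISION, buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%b,   1,         MPI_DOUBLE_PRECISION, buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%c,   1,         MPI_DOUBLE_PRECISION, buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%x,   someParam, MPI_DOUBLE_PRECISION, buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%y,   someParam, MPI_DOUBLE_PRECISION, buf, bufsize, position, MPI_COMM_WORLD, ierr)
call MPI_SEND(buf, position, MPI_PACKED, 1, 1, MPI_COMM_WORLD, ierr)
! receiver
call MPI_RECV(buf, bufsize, MPI_PACKED, 0, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
position = 0
call MPI_UNPACK(buf, bufsize, position, rcvObject%foo, 1,         MPI_INTEGER,          MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%bar, 1,         MPI_REAL,             MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%baz, 1,         MPI_REAL,             MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%a,   1,         MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%b,   1,         MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%c,   1,         MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%x,   someParam, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, position, rcvObject%y,   someParam, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)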