Using mpi_scatterv with a 4D Fortran array

I'm trying to break up a 4D array over its third dimension and send the pieces to each node using MPI. Basically, I'm computing derivatives of a matrix, Cpq, with respect to the atom positions in each of the three Cartesian directions. Cpq is of size nat_sl x nat_sl, so dCpqdR is of size nat_sl x nat_sl x nat x 3. At the end of the day, for every s, i pair, I have to compute the matrix product of dCpqdR between the transpose of the eigenvectors of Cpq and the eigenvectors of Cpq, like so:
temp = MATMUL(TRANSPOSE(Cpq), MATMUL(dCpqdR(:, :, s, i), Cpq))
This is fine, but as it turns out, the loop over s and i is now by far the slowest part of my code. Because each iteration can be done independently, I was hoping I could break up dCpqdR and give each task its own s, i pair to compute the derivative for. That is, I'd like task 1 to get dCpqdR(:,:,1,1), task 2 to get dCpqdR(:,:,1,2), etc.
I've got this working in some sense by using a buffered send/recv pair of calls: the root node allocates a temporary array, fills it, and sends it to the relevant nodes, and those nodes do their computations as they wish. This works, but it can be slow and memory inefficient. I'd ideally like to break it up in a more memory-efficient way.
The logical thing to do, then, is to use mpi_scatterv, but here is where I start running into trouble, as I'm having difficulty figuring out the memory layout. I've written this so far:
call mpi_type_create_subarray(4, (/ nat_sl, nat_sl, nat, 3 /), (/ nat_sl, nat_sl, n_pairs(me_image+1), 3 /),&
                              (/ 0, 0, 0, 0 /), mpi_order_fortran, mpi_double_precision, subarr_typ, ierr)
call mpi_type_commit(subarr_typ, ierr)

call mpi_scatterv(dCpqdR, n_pairs(me_image+1), f_displs, subarr_typ,&
                  my_dCpqdR, 3*nat_sl*3*nat_sl*3*n_pairs(me_image+1), subarr_typ,&
                  root_image, intra_image_comm, ierr)
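As an aside, a quick sanity check on what the committed type actually describes can be done with mpi_type_size and mpi_type_get_extent; this is only a sketch (it assumes the usual mpi module so that mpi_address_kind is available), but it shows the kind of numbers I've been trying to reason about:

    integer :: type_size
    integer(kind=mpi_address_kind) :: lb, type_extent

    ! For the subarray above, type_size should come out as
    ! 8 * nat_sl * nat_sl * n_pairs(me_image+1) * 3 bytes, while the extent
    ! (which, if I understand it right, is the unit the scatterv displacements
    ! are measured in) spans the whole nat_sl x nat_sl x nat x 3 array.
    call mpi_type_size(subarr_typ, type_size, ierr)
    call mpi_type_get_extent(subarr_typ, lb, type_extent, ierr)
    write(*,*) 'type size (bytes):', type_size, ' lower bound:', lb, ' extent:', type_extent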
I've computed n_pairs using this subroutine:
subroutine mbdvdw_para_init_int_forces()
    implicit none
    integer :: p, s, i, counter, k, cpu_ind
    integer :: num_unique_rpq, n_pairs_per_proc, cpu
    real(dp) :: Rpq(3), Rpq_norm, current_val

    num_pairs = nat
    if(.not.allocated(f_cpu_id)) allocate(f_cpu_id(nat, 3))
    n_pairs_per_proc = floor(dble(num_pairs)/nproc_image)
    cpu = 0
    n_pairs = 0
    counter = 1
    p = 1

    do counter = 0, num_pairs-1, 1
        n_pairs(modulo(counter, nproc_image)+1) = n_pairs(modulo(counter, nproc_image)+1) + 1
    end do

    do s = 1, nat, 1
        f_cpu_id(s) = cpu
        if((counter.lt.num_pairs)) then
            if(p.eq.n_pairs(cpu+1)) then
                cpu = cpu + 1
                p = 0
            end if
        end if
        p = p + 1
    end do

    call mp_set_displs(n_pairs, f_displs, num_pairs, nproc_image)
    f_displs = f_displs*nat_sl*nat_sl*3
end subroutine mbdvdw_para_init_int_forces
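(As a concrete example of the distribution this produces: with num_pairs = 5 and nproc_image = 2, the first loop yields n_pairs = (/ 3, 2 /), so the first image owns three s values and the second owns two.)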
and the full method for the matrix multiplication is
subroutine mbdvdw_interacting_energy(energy, forcedR, forcedh, forcedV)
    implicit none
    real(dp), intent(out) :: energy
    real(dp), dimension(nat, 3), intent(out) :: forcedR
    real(dp), dimension(3,3), intent(out) :: forcedh
    real(dp), dimension(nat), intent(out) :: forcedV
    real(dp), dimension(3*nat_sl, 3*nat_sl) :: temp
    real(dp), dimension(:,:,:,:), allocatable :: my_dCpqdR
    integer :: num_negative, i_atom, s, i, j, counter
    integer, parameter :: eigs_check = 200
    integer :: subarr_typ, ierr
    ! lapack work variables
    integer :: LWORK, errorflag
    real(dp) :: WORK((3*nat_sl)*(3+(3*nat_sl)/2)), eigenvalues(3*nat_sl)

    call start_clock('mbd_int_energy')
    call mp_sum(Cpq, intra_image_comm)
    eigenvalues = 0.0_DP
    forcedR = 0.0_DP
    energy = 0.0_DP
    num_negative = 0
    forcedV = 0.0_DP

    errorflag = 0
    LWORK = 3*nat_sl*(3+(3*nat_sl)/2)
    call DSYEV('V', 'U', 3*nat_sl, Cpq, 3*nat_sl, eigenvalues, WORK, LWORK, errorflag)

    if(errorflag.eq.0) then
        do i_atom = 1, 3*nat_sl, 1
            !open (unit=eigs_check, file="eigs.tmp",action="write",status="unknown",position="append")
            !    write(eigs_check, *) eigenvalues(i_atom)
            !close(eigs_check)
            if(eigenvalues(i_atom).ge.0.0_DP) then
                energy = energy + dsqrt(eigenvalues(i_atom))
            else
                num_negative = num_negative + 1
            end if
        end do

        if(num_negative.ge.1) then
            write(stdout, '(3X," WARNING: Found ", I3, " Negative Eigenvalues.")'), num_negative
        end if
    else
    end if

    energy = energy*nat/nat_sl

    !!!!!!!!!!!!!!!!!!!!
    ! Forces below here. There's going to be some long parallelization business.
    !!!!!!!!!!!!!!!!!!!!
    call start_clock('mbd_int_forces')
    if(.not.allocated(my_dCpqdR)) allocate(my_dCpqdR(nat_sl, nat_sl, n_pairs(me_image+1), 3))
    my_dCpqdR = 0.0_DP
    if(mbd_vdw_forces) then
        do s = 1, nat, 1
            if(me_image.eq.(f_cpu_id(s)+1)) then
                do i = 1, 3, 1
                    temp = MATMUL(TRANSPOSE(Cpq), MATMUL(my_dCpqdR(:, :, counter, i), Cpq))
                    do j = 1, 3*nat_sl, 1
                        if(eigenvalues(j).ge.0.0_DP) then
                            forcedR(s, i) = forcedR(s, i) + 1.0_DP/(2.0_DP*dsqrt(eigenvalues(j)))*temp(j,j)
                        end if
                    end do
                end do
                counter = counter + 1
            end if
        end do
        forcedR = forcedR*nat/nat_sl

        do s = 1, 3, 1
            do i = 1, 3, 1
                temp = MATMUL(TRANSPOSE(Cpq), MATMUL(dCpqdh(:, :, s, i), Cpq))
                do j = 1, 3*nat_sl, 1
                    if(eigenvalues(j).ge.0.0_DP) then
                        forcedh(s, i) = forcedh(s, i) + 1.0_DP/(2.0_DP*dsqrt(eigenvalues(j)))*temp(j,j)
                    end if
                end do
            end do
        end do
        forcedh = forcedh*nat/nat_sl

        call mp_sum(forcedR, intra_image_comm)
        call mp_sum(forcedh, intra_image_comm)
    end if
    call stop_clock('mbd_int_forces')
    call stop_clock('mbd_int_energy')
    return
end subroutine mbdvdw_interacting_energy
But when I run it, it complains:
[MathBook Pro:58100] *** An error occurred in MPI_Type_create_subarray
[MathBook Pro:58100] *** reported by process [2560884737,2314885530279477248]
[MathBook Pro:58100] *** on communicator MPI_COMM_WORLD
[MathBook Pro:58100] *** MPI_ERR_ARG: invalid argument of some other kind
[MathBook Pro:58100] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[MathBook Pro:58100] *** and potentially your MPI job)
so something is going wrong, but I have no idea what. I know my description is somewhat sparse to start with, so please let me know what information would be necessary to help.

Related

MPI fortran 90 - Abort errors on nodes during execution

I am trying to execute this MPI program that I have written in Fortran:
Program CartesianGrid2D
Implicit None
Include 'mpif.h'
!----------------------------------------!
! Setting of the computational grid !
Integer, Parameter :: NDIM = 2 ! number of space dimensions
Integer, Parameter :: IMAX = 200 ! number of grid points in x-direction
Integer, Parameter :: JMAX = 200 ! number of grid points in y-direction
Real, Parameter :: x0=0.0 !min x-coord
Real, Parameter :: x1=1.0 !max x-coord
Real, Parameter :: y0=0.0 !min y-coord
Real, Parameter :: y1=1.0 !max y-coord
!----------------------------------------!
! Local variable declaration
TYPE tMPI
Integer :: myrank
Integer :: nCPU
Integer :: status(MPI_STATUS_SIZE)
Integer :: iStart, iEnd !idx of starting and ending cell in x-dir
Integer :: jStart, jEnd !idx of starting and ending cell in y-dir
Integer :: imax, jmax !number of cells within each rank
Integer, Allocatable :: mycoords(:) !point coords of the subgrid
Integer :: iErr !flag for errors in x-dir
Integer :: x_thread !number of CPUs in x-dir
Integer :: y_thread !number of CPUs in y-dir
End TYPE tMPI
TYPE(tMPI) :: MPI
Logical, Allocatable :: periods(:)
Integer, Allocatable :: dims(:)
Integer :: TCPU, BCPU, RCPU, LCPU !neighbor ranks of myrank
Integer :: i, j, idx, jdx, source
Integer :: COMM_CART !cartesian MPI communicator
Real :: dx, dy
Real, Allocatable :: x(:), y(:) ! grid coordinates
!----------------------------------------!
! 1) MPI initialization
CALL MPI_INIT(MPI%iErr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, MPI%myrank, MPI%iErr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,MPI%nCPU,MPI%iErr)
! check the number of CPUs
If(MOD(MPI%nCPU,2).ne.0) then
print *, 'ERROR. Number of CPU must be even!'
CALL MPI_FINALIZE(MPI%iErr)
Stop
End if
CALL MPI_BARRIER(MPI_COMM_WORLD,MPI%iErr)
! 2) Create a Cartesian topology
! check the number of cells
If(MOD(IMAX,2).ne.0) then
print *, 'ERROR. Number of x-cells must be even!'
CALL MPI_FINALIZE(MPI%iErr)
Stop
End if
If(MOD(JMAX,2).ne.0) then
print *, 'ERROR. Number of y-cells must be even!'
CALL MPI_FINALIZE(MPI%iErr)
Stop
End if
! Domain decomposition
MPI%x_thread = MPI%nCPU/2
MPI%y_thread = MPI%nCPU - MPI%x_thread
Allocate(dims(NDIM), periods(NDIM), MPI%mycoords(NDIM))
dims = (/ MPI%x_thread, MPI%y_thread /)
periods = .FALSE.
CALL MPI_CART_CREATE(MPI_COMM_WORLD,NDIM,dims,periods,.TRUE.,COMM_CART,MPI%iErr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, MPI%myrank, MPI%iErr)
! 2.3) Find CPU neighbors
CALL MPI_CART_SHIFT(COMM_CART,0,1,source,RCPU,MPI%iErr)
CALL MPI_CART_SHIFT(COMM_CART,0,-1,source,LCPU,MPI%iErr)
CALL MPI_CART_SHIFT(COMM_CART,1,0,source,TCPU,MPI%iErr)
CALL MPI_CART_SHIFT(COMM_CART,-1,0,source,BCPU,MPI%iErr)
! coordinates of the subgrid
CALL MPI_CART_COORDS(COMM_CART,MPI%myrank,NDIM,MPI%mycoords,MPI%iErr)
MPI%imax = IMAX/MPI%x_thread
MPI%jmax = JMAX/MPI%y_thread
MPI%iStart = 1 + MPI%mycoords(1)*MPI%imax
MPI%iEnd = MPI%iStart + MPI%imax - 1
MPI%jStart = 1 + MPI%mycoords(2)*MPI%jmax
MPI%jEnd = MPI%jStart + MPI%jmax - 1
! 3) Compute the real mesh
dx = (x1-x0)/Real(IMAX-1)
dy = (y1-y0)/Real(JMAX-1)
Allocate(x(MPI%IMAX))
Allocate(y(MPI%JMAX))
idx = 0
Do i = MPI%iStart, MPI%iEnd
idx = idx + 1
x(idx) = (x0-dx/2.) + (i-1)*dx
End do
jdx = 0
Do i = MPI%jStart, MPI%jEnd
jdx = jdx + 1
y(jdx) = (y0-dy/2.) + (j-1)*dy
End do
! 4) Plot the output and finalize the program
CALL ASCII_Output(x,y,MPI%imax,MPI%jmax,MPI%myrank)
CALL MPI_FINALIZE(MPI%iErr)
End program CartesianGrid2D
Subroutine ASCII_Output(x,y,imax,jmax,myrank)
!-----------------------------------------!
Implicit None
!-----------------------------------------!
Integer :: imax, jmax, myrank
Real :: x(imax), y(jmax)
Integer :: i, j, DataUnit
Character(len=10) :: cmyrank
Character(len=200) :: IOFileName
!-----------------------------------------!
Write(cmyrank,'(I4.4)') myrank
IOFileName = 'CartesianGrid_output-'//TRIM(cmyrank)//'.dat'
DataUnit = 100+myrank
Open(Unit=DataUnit, File=Trim(IOFileName), Status='Unknown', Action='Write')
Write(DataUnit,*) imax
Write(DataUnit,*) jmax
Do i = 1, imax
Write(DataUnit,*) x(i)
End do
Do j = 1, jmax
Write(DataUnit,*) y(j)
End do
Close(DataUnit)
End Subroutine ASCII_Output
However, whenever I try to execute it, I get this list of errors popping up:
Abort(795947788) on node 0 (rank 0 in comm 0): Fatal error in internal_Cart_create: Invalid argument, error stack:
internal_Cart_create(102): MPI_Cart_create(MPI_COMM_WORLD, ndims=2, dims=0x60000384c9f0, periods=0x60000384c9e0, reorder=1, comm_cart=0x16f8532a0) failed
MPIR_Cart_create_impl(43): Size of the communicator (6) is smaller than the size of the Cartesian topology (9)
Abort(863056652) on node 1 (rank 1 in comm 0): Fatal error in internal_Cart_create: Invalid argument, error stack:
internal_Cart_create(102): MPI_Cart_create(MPI_COMM_WORLD, ndims=2, dims=0x600003030760, periods=0x600003030860, reorder=1, comm_cart=0x16ef332a0) failed
MPIR_Cart_create_impl(43): Size of the communicator (6) is smaller than the size of the Cartesian topology (9)
Abort(460403468) on node 2 (rank 2 in comm 0): Fatal error in internal_Cart_create: Invalid argument, error stack:
internal_Cart_create(102): MPI_Cart_create(MPI_COMM_WORLD, ndims=2, dims=0x6000039647d0, periods=0x6000039647e0, reorder=1, comm_cart=0x16d05f2a0) failed
MPIR_Cart_create_impl(43): Size of the communicator (6) is smaller than the size of the Cartesian topology (9)
Abort(191968012) on node 3 (rank 3 in comm 0): Fatal error in internal_Cart_create: Invalid argument, error stack:
internal_Cart_create(102): MPI_Cart_create(MPI_COMM_WORLD, ndims=2, dims=0x600000b78470, periods=0x600000b784e0, reorder=1, comm_cart=0x16f75f2a0) failed
MPIR_Cart_create_impl(43): Size of the communicator (6) is smaller than the size of the Cartesian topology (9)
Abort(997274380) on node 4 (rank 4 in comm 0): Fatal error in internal_Cart_create: Invalid argument, error stack:
internal_Cart_create(102): MPI_Cart_create(MPI_COMM_WORLD, ndims=2, dims=0x6000039843e0, periods=0x6000039843d0, reorder=1, comm_cart=0x16ba532a0) failed
MPIR_Cart_create_impl(43): Size of the communicator (6) is smaller than the size of the Cartesian topology (9)
Abort(259076876) on node 5 (rank 5 in comm 0): Fatal error in internal_Cart_create: Invalid argument, error stack:
internal_Cart_create(102): MPI_Cart_create(MPI_COMM_WORLD, ndims=2, dims=0x6000016cab90, periods=0x6000016caaa0, reorder=1, comm_cart=0x16d3232a0) failed
MPIR_Cart_create_impl(43): Size of the communicator (6) is smaller than the size of the Cartesian topology (9)
I first compile with mpif90 -cpp -lmpi NameOfTheProgram.f90 and then execute the resulting a.out with mpirun -np 6 ./a.out.
I am running this on a MacBook Air M1 (whenever I have to run Fortran I usually use the gfortran compiler).
Your computation
MPI%x_thread = MPI%nCPU/2
MPI%y_thread = MPI%nCPU - MPI%x_thread
makes no sense. As the error message indicates, the product of x_thread and y_thread is not equal to your communicator size.
Please use MPI_Dims_create to set these parameters.
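For example, a minimal sketch using the question's own variables (untested) could look like this:

    ! Let MPI factor nCPU into a balanced grid; zero entries mean 'free to choose'.
    dims = 0
    CALL MPI_DIMS_CREATE(MPI%nCPU, NDIM, dims, MPI%iErr)
    MPI%x_thread = dims(1)
    MPI%y_thread = dims(2)
    CALL MPI_CART_CREATE(MPI_COMM_WORLD, NDIM, dims, periods, .TRUE., COMM_CART, MPI%iErr)

This way the product of the grid dimensions always equals the communicator size, which is exactly the condition MPI_Cart_create is complaining about.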

MPI_WTIME is not giving me speedup as required

Program Main
implicit none
include 'mpif.h'
!Define parameters
integer::my_rank,p2,n2,ierr,source
integer, parameter :: n=3,m=3,o=m*n
real(kind=8) aaa(n),ddd(n),bbb(n),ccc(n),xxx(n),b(m,n),start, finish
integer i, j
real h
real(kind=8),dimension(:),allocatable::sol1
h=0.25
b=0
do i=1,m
b(i,i)=1/(1.2**i)
b(i,i-1)=-b(i,i)
enddo
call MPI_INIT(ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,p2,ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,my_rank,ierr)
allocate(sol1(o))
start=MPI_WTIME()
do i=1,n
aaa(i)=-1/h**2
bbb(i)=2/h**2+b(my_rank+1,my_rank+1)
ccc(i)=-1/h**2
ddd(i)=1/h**2
enddo
call thomas(aaa,bbb,ccc,ddd,xxx,n)
finish=MPI_WTIME()
print*, finish-start
write(*,*) xxx, my_rank
call MPI_GATHER(xxx,n, MPI_REAL, sol1,n,MPI_REAL8,0, MPI_COMM_WORLD,ierr)
print*,sol1
call MPI_FINALIZE(ierr)
end program main
subroutine thomas(ld,md,ud,rh,solution,n)
implicit none
integer,parameter :: r8 = kind(1.d0)
integer,intent(in) :: n
real(r8),dimension(n),intent(in) :: ld,md,ud,rh
real(r8),dimension(n),intent(out) :: solution
real(r8),dimension(n) :: P,Q
real(r8) :: m
integer i
P(1) = ud(1)/md(1)
Q(1) = rh(1)/md(1)
do i = 2,n
m = md(i)-p(i-1)*ld(i)
P(i) = ud(i)/m
Q(i) = (rh(i)-Q(i-1)*ld(i))/m
end do
solution(n) = Q(n)
do i = n-1, 1, -1
solution(i) = Q(i)-P(i)*solution(i+1)
end do
end subroutine thomas
Here I used MPI_WTIME() to find the execution time. It seems like when I increase the number of processors I am not getting any speedup. In this code I have m=3 (I make m equal to the number of processors) and run with mpirun -np 3 sp.exe. Now say I change to m=10 and run with mpirun -np 10 sp.exe; I should get a smaller time, shouldn't I? Or am I missing something here? The community helped me before with some issues and now I am running into another one. I would really appreciate it if somebody could point something out. Isn't the chunk of code starting with the do loop done by individual processors (which is what I want)?
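A side note on the measurement itself, independent of the scaling question: the usual pattern is to synchronize all ranks before starting the clock and to report the maximum time across ranks. A rough sketch of that pattern, reusing the variables already declared above (local_time and max_time are new names introduced only for this illustration):

    real(kind=8) :: local_time, max_time

    call MPI_BARRIER(MPI_COMM_WORLD, ierr)   ! make sure every rank starts timing together
    start = MPI_WTIME()
    ! ... the work being timed ...
    finish = MPI_WTIME()
    local_time = finish - start

    ! the job is only as fast as its slowest rank, so report the maximum
    call MPI_REDUCE(local_time, max_time, 1, MPI_REAL8, MPI_MAX, 0, MPI_COMM_WORLD, ierr)
    if (my_rank == 0) print *, 'max elapsed time: ', max_time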

Storing a Variable with a Multi-Dimensional Index in Fortran

Question
Consider the following code:
program example
    implicit none
    integer, parameter :: n_coeffs = 1000
    integer, parameter :: n_indices = 5
    integer :: i
    real(8), dimension(n_coeffs) :: coeff
    integer, dimension(n_coeffs,n_indices) :: index
    do i = 1, n_coeffs
        coeff(i) = real(i*3,8)
        index(i,:) = [2,4,8,16,32]*i
    end do
end
For any 5 dimensional index I need to obtain the associated coefficient, without knowing or calculating i. For instance, given [2,4,8,16,32] I need to obtain 3.0 without computing i.
Is there a reasonable solution, perhaps using sparse matrices, that would work for n_indices in the order of 100 (though n_coeffs still in the order of 1000)?
A Bad Solution
One solution would be to define a 5-dimensional array as in
real(8), dimension(2000,4000,8000,16000,32000) :: coeff2
do i = 1, n_coeffs
    coeff2(index(i,1),index(i,2),index(i,3),index(i,4),index(i,5)) = coeff(i)
end do
then, to get the coefficient associated with [2,4,8,16,32], call
coeff2(2,4,8,16,32)
However, besides being very wasteful of memory, this solution would not allow n_indices to be set to a number higher than 7 given the limit of 7 dimensions to an array.
OBS: This question is a spin-off of this one. I tried to ask the question more precisely after failing in the first attempt, an effort that greatly benefited from the answer of @Rodrigo_Rodrigues.
Actual Code
In case it helps, here is the code for the actual problem I am trying to solve. It is an adaptive sparse grid method for approximating a function. The main goal is to make the interpolation at the end as fast as possible:
MODULE MOD_PARAMETERS
IMPLICIT NONE
SAVE
INTEGER, PARAMETER :: d = 2 ! number of dimensions
INTEGER, PARAMETER :: L_0 = 4 ! after this adaptive grid kicks in, for L <= L_0 usual sparse grid
INTEGER, PARAMETER :: L_max = 9 ! maximum level
INTEGER, PARAMETER :: bound = 0 ! 0 -> for f = 0 at boundary
! 1 -> adding grid points at boundary
! 2 -> extrapolating close to boundary
INTEGER, PARAMETER :: max_error = 1
INTEGER, PARAMETER :: L2_error = 1
INTEGER, PARAMETER :: testing_sample = 1000000
REAL(8), PARAMETER :: eps = 0.01D0 ! epsilon for adaptive grid
END MODULE MOD_PARAMETERS
PROGRAM MAIN
USE MOD_PARAMETERS
IMPLICIT NONE
INTEGER, DIMENSION(d,d) :: ident
REAL(8), DIMENSION(d) :: xd
INTEGER, DIMENSION(2*d) :: temp
INTEGER, DIMENSION(:,:), ALLOCATABLE :: grid_index, temp_grid_index, grid_index_new, J_index
REAL(8), DIMENSION(:), ALLOCATABLE :: coeff, temp_coeff, J_coeff
REAL(8) :: temp_min, temp_max, V, T, B, F, x1
INTEGER :: k, k_1, k_2, h, i, j, L, n, dd, L1, L2, dsize, count, first, repeated, add, ind
INTEGER :: time1, time2, clock_rate, clock_max
REAL(8), DIMENSION(L_max,L_max,2**(L_max),2**(L_max)) :: coeff_grid
INTEGER, DIMENSION(d) :: level, LL, ii
REAL(8), DIMENSION(testing_sample,d) :: x_rand
REAL(8), DIMENSION(testing_sample) :: interp1, interp2
! ============================================================================
! EXECUTABLE
! ============================================================================
ident = 0
DO i = 1,d
ident(i,i) = 1
ENDDO
! Initial grid point
dsize = 1
ALLOCATE(grid_index(dsize,2*d),grid_index_new(dsize,2*d))
grid_index(1,:) = 1
grid_index_new = grid_index
ALLOCATE(coeff(dsize))
xd = (/ 0.5D0, 0.5D0 /)
CALL FF(xd,coeff(1))
CALL FF(xd,coeff_grid(1,1,1,1))
L = 1
n = SIZE(grid_index_new,1)
ALLOCATE(J_index(n*2*d,2*d))
ALLOCATE(J_coeff(n*2*d))
CALL SYSTEM_CLOCK (time1,clock_rate,clock_max)
DO WHILE (L .LT. L_max)
L = L+1
n = SIZE(grid_index_new,1)
count = 0
first = 1
DEALLOCATE(J_index,J_coeff)
ALLOCATE(J_index(n*2*d,2*d))
ALLOCATE(J_coeff(n*2*d))
J_index = 0
J_coeff = 0.0D0
DO k = 1,n
DO i = 1,d
DO j = 1,2
IF ((bound .EQ. 0) .OR. (bound .EQ. 2)) THEN
temp = grid_index_new(k,:)+(/ident(i,:),ident(i,:)*(grid_index_new(k,d+i)-(-1)**j)/)
ELSEIF (bound .EQ. 1) THEN
IF (grid_index_new(k,i) .EQ. 1) THEN
temp = grid_index_new(k,:)+(/ident(i,:),ident(i,:)*(-(-1)**j)/)
ELSE
temp = grid_index_new(k,:)+(/ident(i,:),ident(i,:)*(grid_index_new(k,d+i)-(-1)**j)/)
ENDIF
ENDIF
CALL XX(d,temp(1:d),temp(d+1:2*d),xd)
temp_min = MINVAL(xd)
temp_max = MAXVAL(xd)
IF ((temp_min .GE. 0.0D0) .AND. (temp_max .LE. 1.0D0)) THEN
IF (first .EQ. 1) THEN
first = 0
count = count+1
J_index(count,:) = temp
V = 0.0D0
DO k_1 = 1,SIZE(grid_index,1)
T = 1.0D0
DO k_2 = 1,d
CALL XX(1,temp(k_2),temp(d+k_2),x1)
CALL BASE(x1,grid_index(k_1,k_2),grid_index(k_1,k_2+d),B)
T = T*B
ENDDO
V = V+coeff(k_1)*T
ENDDO
CALL FF(xd,F)
J_coeff(count) = F-V
ELSE
repeated = 0
DO h = 1,count
IF (SUM(ABS(J_index(h,:)-temp)) .EQ. 0) THEN
repeated = 1
ENDIF
ENDDO
IF (repeated .EQ. 0) THEN
count = count+1
J_index(count,:) = temp
V = 0.0D0
DO k_1 = 1,SIZE(grid_index,1)
T = 1.0D0
DO k_2 = 1,d
CALL XX(1,temp(k_2),temp(d+k_2),x1)
CALL BASE(x1,grid_index(k_1,k_2),grid_index(k_1,k_2+d),B)
T = T*B
ENDDO
V = V+coeff(k_1)*T
ENDDO
CALL FF(xd,F)
J_coeff(count) = F-V
ENDIF
ENDIF
ENDIF
ENDDO
ENDDO
ENDDO
ALLOCATE(temp_grid_index(dsize,2*d))
ALLOCATE(temp_coeff(dsize))
temp_grid_index = grid_index
temp_coeff = coeff
DEALLOCATE(grid_index,coeff)
ALLOCATE(grid_index(dsize+count,2*d))
ALLOCATE(coeff(dsize+count))
grid_index(1:dsize,:) = temp_grid_index
coeff(1:dsize) = temp_coeff
DEALLOCATE(temp_grid_index,temp_coeff)
grid_index(dsize+1:dsize+count,:) = J_index(1:count,:)
coeff(dsize+1:dsize+count) = J_coeff(1:count)
dsize = dsize + count
DO i = 1,count
coeff_grid(J_index(i,1),J_index(i,2),J_index(i,3),J_index(i,4)) = J_coeff(i)
ENDDO
IF (L .LE. L_0) THEN
DEALLOCATE(grid_index_new)
ALLOCATE(grid_index_new(count,2*d))
grid_index_new = J_index(1:count,:)
ELSE
add = 0
DO h = 1,count
IF (ABS(J_coeff(h)) .GT. eps) THEN
add = add + 1
J_index(add,:) = J_index(h,:)
ENDIF
ENDDO
DEALLOCATE(grid_index_new)
ALLOCATE(grid_index_new(add,2*d))
grid_index_new = J_index(1:add,:)
ENDIF
ENDDO
CALL SYSTEM_CLOCK (time2,clock_rate,clock_max)
PRINT *, 'Elapsed real time1 = ', DBLE(time2-time1)/DBLE(clock_rate)
PRINT *, 'Grid Points = ', SIZE(grid_index,1)
! ============================================================================
! Compute interpolated values:
! ============================================================================
CALL RANDOM_NUMBER(x_rand)
CALL SYSTEM_CLOCK (time1,clock_rate,clock_max)
DO i = 1,testing_sample
V = 0.0D0
DO L1=1,L_max
DO L2=1,L_max
IF (L1+L2 .LE. L_max+1) THEN
level = (/L1,L2/)
T = 1.0D0
DO dd = 1,d
T = T*(1.0D0-ABS(x_rand(i,dd)/2.0D0**(-DBLE(level(dd)))-DBLE(2*FLOOR(x_rand(i,dd)*2.0D0**DBLE(level(dd)-1))+1)))
ENDDO
V = V + coeff_grid(L1,L2,2*FLOOR(x_rand(i,1)*2.0D0**DBLE(L1-1))+1,2*FLOOR(x_rand(i,2)*2.0D0**DBLE(L2-1))+1)*T
ENDIF
ENDDO
ENDDO
interp2(i) = V
ENDDO
CALL SYSTEM_CLOCK (time2,clock_rate,clock_max)
PRINT *, 'Elapsed real time2 = ', DBLE(time2-time1)/DBLE(clock_rate)
END PROGRAM
For any 5 dimensional index I need to obtain the associated
coefficient, without knowing or calculating i. For instance, given
[2,4,8,16,32] I need to obtain 3.0 without computing i.
function findloc_vector(matrix, vector) result(out)
    integer, intent(in) :: matrix(:, :)
    integer, intent(in) :: vector(size(matrix, dim=2))
    integer :: out, i
    do i = 1, size(matrix, dim=1)
        if (all(matrix(i, :) == vector)) then
            out = i
            return
        end if
    end do
    stop "No match for this vector"
end
And that's how you use it:
print*, coeff(findloc_vector(index, [2,4,8,16,32])) ! outputs 3.0
I must confess I was reluctant to post this code because, even though this answers your question, I honestly think it is not what you really want/need, but you didn't provide enough information for me to know what you really do want/need.
Edit (After actual code from OP):
If I decrypted your code correctly (and considering what you said in your previous question), you are declaring:
REAL(8), DIMENSION(L_max,L_max,2**(L_max),2**(L_max)) :: coeff_grid
(where L_max = 9, so size(coeff_grid) = 21233664, which is roughly 160 MB of real(8) storage) and then populating it with:
DO i = 1,count
coeff_grid(J_index(i,1),J_index(i,2),J_index(i,3),J_index(i,4)) = J_coeff(i)
ENDDO
(where count is of the order of 1000, i.e. about 0.005% of its elements), so that you can fetch the values by their 4 indices with array notation.
Please, don't do that. You don't need a sparse matrix in this case either. The new approach you proposed is much better: storing the indices in each row of a smaller array, and fetching from the array of coefficients at the corresponding location of those indices in their own array. This is way faster (it avoids the large allocation) and much more memory-efficient.
PS: Is it mandatory for you to stick to Fortran 90? It's a very old version of the standard and chances are that the compiler you're using implements a more recent one. You could improve the quality of your code a lot with the intrinsic move_alloc (for fewer array copies), the kind constants from the intrinsic module iso_fortran_env (for portability), the [], >, <, <=, ... notation (for readability)...
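To make the move_alloc suggestion concrete, a rough sketch of growing grid_index without the temp_grid_index copy-and-deallocate dance (assuming a Fortran 2003 compiler) could look like this:

INTEGER, DIMENSION(:,:), ALLOCATABLE :: tmp

ALLOCATE(tmp(dsize+count, 2*d))
tmp(1:dsize, :) = grid_index                       ! copy the old rows once
tmp(dsize+1:dsize+count, :) = J_index(1:count, :)  ! append the new rows
CALL MOVE_ALLOC(tmp, grid_index)                   ! grid_index adopts tmp's storage; tmp is deallocated

The same pattern would apply to coeff, and it saves one full copy per growth step compared to the temp-array version in the posted code.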

Fortran character format string as subroutine argument

I am struggling with reading a text string in. I am using gfortran 4.9.2.
Below I have written a little subroutine to which I would like to pass the write format as an argument.
Ideally I'd like to be able to call it with
call printarray(mat1, "F8.3")
to print out a matrix mat1 in that format, for example. The number of columns should be determined automatically inside the subroutine.
subroutine printarray(x, udf_temp)
    implicit none
    real, dimension(:,:), intent(in) :: x     ! array to be printed
    integer, dimension(2) :: dims             ! array for shape of x
    integer :: i, j
    character(len=10) :: udf_temp             ! user defined format, eg "F8.3", ...
    character(len = :), allocatable :: udf    ! trimmed udf_temp
    character(len = 10) :: udf2
    character(len = 10) :: txt1, txt2
    integer :: ncols                          ! no. of columns of array
    integer :: udf_temp_length
    udf_temp_length = len_trim(udf_temp)
    allocate(character(len=udf_temp_length) :: udf)
    dims = shape(x)
    ncols = dims(2)
    write (txt1, '(I5)') ncols
    udf2 = trim(txt1)//adjustl(udf)
    txt2 = "("//trim(udf2)//")"
    do i = 1, dims(1)
        write (*, txt2) (x(i, j), j = 1, dims(2)) ! this is line 38
    end do
end subroutine printarray
When I set len = 10:
character(len=10) :: udf_temp
I get a compile-time warning:
call printarray(mat1, "F8.3")
1
Warning: Character length of actual argument shorter than of dummy argument 'udf_temp' (4/10) at (1)
When I set len = *
character(len=*) :: udf_temp
it compiles, but at runtime I get:
At line 38 of file where2.f95 (unit = 6, file = 'stdout')
Fortran runtime error: Unexpected element '( 8
What am I doing wrong?
Is there a neater way to do this?
Here's a summary of your question that I will try to address: You want to have a subroutine that will print a specified two-dimensional array with a specified format, such that each row is printed on a single line. For example, assume we have the real array:
real, dimension(2,8) :: x
x = reshape([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16], shape=[2,8], order=[2,1])
! Then the array is:
! 1.000 2.000 3.000 4.000 5.000 6.000 7.000 8.000
! 9.000 10.000 11.000 12.000 13.000 14.000 15.000 16.000
We want to use the format "F8.3", which prints floating point values (reals) with a field width of 8 and 3 decimal places.
Now, you are making a couple of mistakes when creating the format within your subroutine. First, you try to use udf to create the udf2 string. This is a problem because although you have allocated the size of udf, nothing has been assigned to it (pointed out in a comment by @francescalus). Thus, you see the error message you reported: Fortran runtime error: Unexpected element '( 8.
In the following, I make a couple of simplifying changes and demonstrate a few (slightly) different techniques. As shown, I suggest the use of * to indicate that the format can be applied an unlimited number of times, until all elements of the output list have been visited. Of course, explicitly stating the number of times to apply the format (i.e., "(8F8.3)" instead of "(*(F8.3))") is fine, but the latter is slightly less work.
program main
    implicit none
    real, dimension(2,8) :: x
    character(len=:), allocatable :: udf_in

    x = reshape([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16], shape=[2,8], order=[2,1])
    udf_in = "F8.3"

    call printarray(x, udf_in)

contains

    subroutine printarray(x, udf_in)
        implicit none
        real, dimension(:,:), intent(in) :: x
        character(len=*), intent(in) :: udf_in
        integer :: ncols                            ! size(x,dim=2)
        character(len=10) :: ncols_str              ! ncols, stringified
        integer, dimension(2) :: dims               ! shape of x
        character(len=:), allocatable :: udf0, udf1 ! format codes
        integer :: i, j                             ! index counters

        dims = shape(x)                             ! or just use: ncols = size(x, dim=2)
        ncols = dims(2)
        write (ncols_str, '(i0)') ncols             ! use 'i0' for min. size

        udf0 = "(" // ncols_str // udf_in // ")"    ! create string: "(8F8.3)"
        udf1 = "(*(" // udf_in // "))"              ! create string: "(*(F8.3))"

        print *, "Version 1:"
        do i = 1, dims(1)
            write (*, udf0) (x(i, j), j = 1,ncols)  ! implied do-loop over j.
        end do

        print *, "Version 2:"
        do i = 1, dims(1)
            ! udf1: "(*(F8.3))"
            write (*, udf1) (x(i, j), j = 1,ncols)  ! implied do-loop over j
        end do

        print *, "Version 3:"
        do i = 1, size(x,dim=1)     ! no need to create nrows/ncols vars.
            write(*, udf1) x(i,:)   ! let the compiler handle the extents.
        enddo
    end subroutine printarray
end program main
Observe: the final do-loop ("Version 3") is very simple. It does not need an explicit count of ncols because the * takes care of it automatically. Due to its simplicity, there is really no need for a subroutine at all.
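For instance, something along these lines directly at the call site (a minimal sketch) prints the same thing with no helper routine at all:

do i = 1, size(x, dim=1)
    write (*, "(*(F8.3))") x(i, :)   ! one row per line; the format repeats as needed
end do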
Besides the actual error (not using the input argument), this whole thing can be done much more simply:
subroutine printarray(m,f)
    implicit none
    character(len=*) f
    real m(:,:)
    character*10 n
    write(n,'(i0)') size(m(1,:))
    write(*,'('//n//f//')') transpose(m)
end subroutine
end
Note there is no need for the loop constructs, as Fortran will automatically write the whole array, line-wrapping as you reach the length of data specified by your format.
Alternately, you can use a loop construct; then you can use a '*' repeat count in the format and obviate the need for the internal write that constructs the format string.
subroutine printarray(m,f)
    implicit none
    character(len=*) f
    real m(:,:)
    integer :: i
    do i=1,size(m(:,1))
        write(*,'(*('//f//'))') m(i,:)
    enddo
end subroutine
end

Prevent changing variables with intent(in)

So, reading the following question (Correct use of FORTRAN INTENT() for large arrays), I learned that declaring a variable with intent(in) isn't enough, since when the variable is passed to another subroutine/function it can be changed again. So how can I avoid this? In the original thread they talked about putting the subroutine into a module, but that doesn't help in my case. For example, I want to calculate the determinant of a matrix with an LU factorization. Therefore I use the LAPACK function zgetrf; however, this function alters my input matrix and the compiler doesn't display any warnings. So what can I do?
module matHelper
implicit none
contains
subroutine initMat(AA)
real*8 :: u
double complex, dimension(:,:), intent(inout) :: AA
integer :: row, col, counter
counter = 1
do row=1,size(AA,1)
do col=1,size(AA,2)
AA(row,col)=cmplx(counter ,0)
counter=counter+1
end do
end do
end subroutine initMat
!subroutine to write a Matrix to file
!Input: AA - double complex matrix
! fid - integer file id
! fname - file name
! stat - integer status =replace[0] or old[1]
subroutine writeMat(AA,fid, fname, stat)
integer :: fid, stat
character(len=*) :: fname
double complex, dimension(:,:), intent(in) :: AA
integer :: row, col
character (len=64) :: fmtString
!opening file with given options
if(fid /= 0) then
if(stat == 0) then
open(unit=fid, file=fname, status='replace', &
action='write')
else if(stat ==1) then
open(unit=fid, file=fname, status='old', &
action='write')
else
print*, 'Error while trying to open file with Id', fid
return
end if
end if
!initializing matrix print format
write(fmtString,'(I0)') size(aa,2)
fmtString = '('// trim(fmtString) //'("{",ES10.3, ",", 1X, ES10.3,"}",:,1X))'
!write(*,*) fmtString
!writing matrix to file by iterating through each row
do row=1,size(aa,1)
write(fid,fmt = fmtString) AA(row,:)
enddo
write(fid,*) ''
end subroutine writeMat
!function to calculate the determinant of the input
!Input: AA - double complex matrix
!Output determinantMat - double complex,
! 0 if AA not a square matrix
function determinantMat(AA)
double complex, dimension(:,:), intent(in) :: AA
double complex :: determinantMat
integer, dimension(min(size(AA,1),size(AA,2)))&
:: ipiv
integer :: ii, info
!check if not square matrix, then set determinant to 0
if(size(AA,1)/= size(AA,2)) then
determinantMat = 0
return
end if
!compute LU factorization with LAPACK function
call zgetrf(size(AA,1),size(AA,2), AA,size(AA,1), ipiv,info)
if(info /= 0) then
determinantMat = cmplx(0.D0, 0.D0)
return
end if
determinantMat = cmplx(1.D0, 0.D0)
!determinant of triangular matrix is product of diagonal elements
do ii=1,size(AA,1)
if(ipiv(ii) /= ii) then
!a permutation was done, so a factor of -1
determinantMat = -determinantMat *AA(ii,ii)
else
!no permutation, so no -1
determinantMat = determinantMat*AA(ii,ii)
end if
end do
end function determinantMat
end module matHelper
!***********************************************************************
!module which stores matrix elements, dimension, trace, determinant
program test
use matHelper
implicit none
double complex, dimension(:,:), allocatable :: AA, BB
integer :: n, fid
fid = 0;
allocate(AA(3,3))
call initMat(AA)
call writeMat(AA,0,' ', 0)
print*, 'Determinante: ',determinantMat(AA) !changes AA
call writeMat(AA,0, ' ', 0)
end program test
PS: I am using the ifort compiler v15.0.3 20150407
I do not have ifort at home, but you may want to try compiling with '-check interfaces' and maybe with '-ipo'. You may need the path to 'zgetrf' for '-check interfaces' to work, and if that is not available as source then it may not help.
If you declare 'function determinantMat' as 'PURE FUNCTION determinantMat' then I am pretty sure it would complain because 'zgetrf' is not known to be PURE nor ELEMENTAL. Try ^this stuff^ first.
If LAPACK has a module, then zgetrf could be known to be, or not be, PURE/ELEMENTAL. https://software.intel.com/en-us/articles/blas-and-lapack-fortran95-mod-files
I would suggest you add to your compile line:
-check interfaces -ipo
During initial build I like (Take it out for speed once it works):
-check all -warn all
Making a temporary array is one way around it. (I have not compiled this, so it is only a conceptual exemplar.)
PURE FUNCTION determinantMat(AA)
    USE LAPACK95      !--New Line--!
    IMPLICIT NONE     !--New Line--!
    double complex, dimension(:,:), intent(IN) :: AA
    double complex :: determinantMat   !<- output
    !--internals--
    integer, dimension(min(size(AA,1),size(AA,2))) :: ipiv
    !!--Next line is new--
    double complex, dimension(size(AA,1),size(AA,2)) :: AA_Temp !!<- I have no idea if this will work, you may need an allocatable??
    integer :: ii, info

    !check if not square matrix, then set determinant to 0
    if(size(AA,1) /= size(AA,2)) then
        determinantMat = 0
        return
    end if

    !compute LU factorization with LAPACK function
    !!--Next line is new--
    AA_Temp = AA   !--Initialise AA_Temp to be the same as AA--!
    call zgetrf(size(AA_Temp,1), size(AA_Temp,2), AA_Temp, size(AA_Temp,1), ipiv, info)
    if(info /= 0) then
        determinantMat = cmplx(0.D0, 0.D0)
        return
    end if

    determinantMat = cmplx(1.D0, 0.D0)
    !determinant of triangular matrix is product of diagonal elements
    do ii = 1, size(AA_Temp,1)
        if(ipiv(ii) /= ii) then
            !a permutation was done, so a factor of -1
            determinantMat = -determinantMat*AA_Temp(ii,ii)
        else
            !no permutation, so no -1
            determinantMat = determinantMat*AA_Temp(ii,ii)
        end if
    end do
end function determinantMat
With the 'USE LAPACK95' you probably do not need PURE, but if you wanted it to be PURE then you want to explicitly say so.
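If the LAPACK95 module is not available to you, a hand-written explicit interface is another way to get the compiler involved. This is only a sketch of what such an interface block might look like (untested); with it in scope, the compiler can flag an attempt to pass an intent(in) array as the A argument:

interface
    subroutine zgetrf(m, n, a, lda, ipiv, info)
        integer, intent(in)           :: m, n, lda
        double complex, intent(inout) :: a(lda, *)   ! zgetrf overwrites A with its LU factors
        integer, intent(out)          :: ipiv(*)
        integer, intent(out)          :: info
    end subroutine zgetrf
end interface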