Inconsistent row allocation in ScaLAPACK - Fortran

Consider the following simple Fortran program:
program test_vec_allocation
   use mpi
   implicit none
   integer(kind=8) :: N
   ! =========================BLACS and MPI=======================
   integer :: ierr, size, rank, dims(2)
   ! -------------------------------------------------------------
   integer, parameter :: block_size = 100
   integer :: context, nprow, npcol, local_nprow, local_npcol
   integer :: numroc, indxl2g, descmat(9), descvec(9)
   integer :: mloc_mat, nloc_mat, mloc_vec, nloc_vec

   call blacs_pinfo(rank, size)
   dims = 0
   call MPI_Dims_create(size, 2, dims, ierr)
   nprow = dims(1); npcol = dims(2)

   call blacs_get(0, 0, context)
   call blacs_gridinit(context, 'R', nprow, npcol)
   call blacs_gridinfo(context, nprow, npcol, local_nprow, local_npcol)

   N = 700
   mloc_vec = numroc(N, block_size, local_nprow, 0, nprow)
   nloc_vec = numroc(1, block_size, local_npcol, 0, npcol)
   print *, "Rank", rank, mloc_vec, nloc_vec

   call blacs_gridexit(context)
   call blacs_exit(0)
end program test_vec_allocation
When I run it with 11 MPI ranks I get:
Rank 0 100 1
Rank 4 100 1
Rank 2 100 1
Rank 1 100 1
Rank 3 100 1
Rank 10 0 1
Rank 6 100 1
Rank 5 100 1
Rank 9 0 1
Rank 8 0 1
Rank 7 0 1
which is how I would expect ScaLAPACK to divide this array. However, for an even number of ranks I get:
Rank 0 200 1
Rank 8 200 0
Rank 9 100 1
Rank 10 100 0
Rank 1 200 0
Rank 6 200 1
Rank 11 100 0
Rank 3 200 1
Rank 4 200 0
Rank 2 200 0
Rank 7 200 0
Rank 5 200 0
which makes no sense: why would rank 0 get 200 elements when the block size is 100 and ranks * block_size > N?
Because of this my program works for 1, 2, 3, 5, 7, and 11 MPI ranks, but fails for 4, 6, 8, 9, 10, 12, etc. (I don't know why it fails for 9!). Can anyone explain what is wrong with my approach?
GFortran version: 6.1.0
ScaLAPACK version: 2.1.0
macOS version: 10.11

There are a number of things wrong with your code:
1) Firstly, don't use integer(8). As Vladimir put it, please unlearn this. Not only is it non-portable and therefore very bad practice (please see the many examples here, e.g. Fortran 90 kind parameter), here it is also wrong, as numroc expects an integer of default kind as its first argument (see e.g. https://software.intel.com/content/www/us/en/develop/documentation/mkl-developer-reference-fortran/top/scalapack-routines/scalapack-utility-functions-and-routines/numroc.html)
2) You call an MPI routine before you call MPI_Init. With a handful of exceptions (and this isn't one of them) this results in undefined behaviour. Note that the description at https://www.netlib.org/blacs/BLACS/QRef.html#BLACS_PINFO makes no reference to actually calling MPI_Init. As such I also prefer to call MPI_Init and MPI_Finalize explicitly.
3) You have misunderstood MPI_Dims_create. You seem to assume you will get a one-dimensional distribution, but you actually ask it for a two-dimensional one. Quoting from the standard at https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf:
The entries in the array dims are set to describe a Cartesian grid with ndims dimensions and a total of nnodes nodes. The dimensions are set to be as close to each other as possible, using an appropriate divisibility algorithm. The caller may further constrain the operation of this routine by specifying elements of array dims. If dims[i] is set to a positive number, the routine will not modify the number of nodes in dimension i; only those entries where dims[i] = 0 are modified by the call.
You set dims equal to zero, so the routine is free to set both dimensions. Thus for 11 processes you will get a 1x11 or 11x1 grid, which is what you seem to expect. However for 12 processes, as "the dimensions are set to be as close to each other as possible", you will get either a 3x4 or a 4x3 grid, NOT 12x1. If it is 3x4, then along each row you expect numroc to return 3 processes with 200 elements (2 blocks) and 1 with 100. As there are 3 rows, you therefore expect 3x3=9 processes returning 200 and 3x1=3 returning 100. This is what you see. Also try 15 processes - you will see an odd number of processes that according to you "does not work"; this is because (advanced maths alert) 15=3x5. Incidentally, on my machine 9 processes does NOT return a 3x3 grid - this looks like a bug in Open MPI to me.
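Putting these points together, a corrected sketch (my reconstruction, not the exact code from the original answer; it forces a one-dimensional nranks x 1 process grid by pre-setting dims(2), and avoids using the intrinsic name size as a variable) might look like:

program test_vec_allocation
   use mpi
   implicit none
   integer :: n                              ! default kind, as numroc expects
   integer, parameter :: block_size = 100
   integer :: ierr, nranks, rank, dims(2)
   integer :: context, nprow, npcol, local_nprow, local_npcol
   integer :: mloc_vec, nloc_vec
   integer, external :: numroc

   call MPI_Init(ierr)                       ! initialise MPI before any BLACS call
   call blacs_pinfo(rank, nranks)

   dims = 0
   dims(2) = 1                               ! fix one dimension so the grid is nranks x 1
   call MPI_Dims_create(nranks, 2, dims, ierr)
   nprow = dims(1); npcol = dims(2)

   call blacs_get(0, 0, context)
   call blacs_gridinit(context, 'R', nprow, npcol)
   call blacs_gridinfo(context, nprow, npcol, local_nprow, local_npcol)

   n = 700
   mloc_vec = numroc(n, block_size, local_nprow, 0, nprow)
   nloc_vec = numroc(1, block_size, local_npcol, 0, npcol)
   print *, "Rank", rank, mloc_vec, nloc_vec

   call blacs_gridexit(context)
   call blacs_exit(1)                        ! nonzero: BLACS must not finalise MPI for us
   call MPI_Finalize(ierr)
end program test_vec_allocation

With this, the 700-element vector is always distributed down a single process column in blocks of 100, which is the behaviour the question expected.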

Related

gfortran - Variable format expressions workaround [duplicate]

I wrote a program to calculate a square finite difference matrix, where you can enter the number of rows (which equals the number of columns); this is stored in the variable matrix. The program works fine:
program fin_diff_matrix
   implicit none
   integer, dimension(:,:), allocatable :: A
   integer :: matrix, i, j
   print *, 'Enter elements:'
   read *, matrix
   allocate(A(matrix,matrix))
   A = 0
   A(1,1) = 2
   A(1,2) = -1
   A(matrix,matrix) = 2
   A(matrix,matrix-1) = -1
   do j = 2, matrix-1
      A(j,j-1) = -1
      A(j,j) = 2
      A(j,j+1) = -1
   end do
   print *, 'Matrix A: '
   write(*,1) A
 1 format(6i10)
end program fin_diff_matrix
For the output I want the matrix to be formatted, e.g. if the user enters 6 rows the output should look like:
2 -1 0 0 0 0
-1 2 -1 0 0 0
0 -1 2 -1 0 0
0 0 -1 2 -1 0
0 0 0 -1 2 -1
0 0 0 0 -1 2
The output of the format should also be variable, e.g. if the user enters 10, the output should also be formatted in 10 columns. Research on the Internet gave the following solution for the format statement with angle brackets:
1 format(<matrix>i10)
If I compile with gfortran in Linux I always get the following error in the terminal:
fin_diff_matrix.f95:37.12:
1 format(<matrix>i10)
1
Error: Unexpected element '<' in format string at (1)
fin_diff_matrix.f95:35.11:
write(*,1) A
1
Error: FORMAT label 1 at (1) not defined
Why doesn't that work, and what is my mistake?
The syntax you are trying to use is non-standard; it works only in some compilers and I discourage using it.
Also, forget the FORMAT() statements for good, they are obsolete.
You can get your own number inside the format string when you construct it yourself from several parts:
character(80) :: form
form = '(          (i10,1x))'
write(form(2:11),'(i10)') matrix   ! write the row length into the reserved blanks
write(*,form) A
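As an aside (not part of the original answer), the same format string can be built in one step with an internal write and the i0 edit descriptor:
character(32) :: form
write(form,'(a,i0,a)') '(', matrix, '(i10,1x))'   ! e.g. matrix = 6 gives '(6(i10,1x))'
write(*,form) A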
You can also write your matrix in a loop per row and then you can use an arbitrarily large count number or a * in Fortran 2008.
do i = 1, matrix
   write(*,'(999(i10,1x))') A(:,i)
end do

do i = 1, matrix
   write(*,'(*(i10,1x))') A(:,i)
end do
Just check if I did not transpose the matrix inadvertently.

Why isn't the periods argument in MPI_Cart_create working as expected?

I am trying to parallelize a distributed hydrological model using MPI (in Fortran). The model takes its input data from a file and most of the inputs are 2D arrays. My goal is to decompose the global 2D data into small 2D chunks and send each chunk to a different core, then do some computation on the chunks and finally gather the results back to the root.
I just followed the wonderful answer by Jonathan Dursi at Sending 2D arrays in Fortran with MPI_Gather, and I wanted to add more flexibility to cover the cases where the 2D data domain is not equally divisible by the number of processor cores in each dimension.
I have also seen the solution for the same problem based on MPI_Alltoallw at Scatter Matrix Blocks of Different Sizes using MPI, again by Jonathan Dursi. That method needs the creation of 4 types of blocks. For me it is a bit difficult to implement because I don't know the exact size of the global domain in advance; the model is intended to be applied to different domain sizes, i.e. to different river basins. The model also has many input arrays (some subroutines take up to 15 different 2D arrays) to communicate between cores.
So, I was trying to solve the problem this way:
1. Create an optimal division of processors in each dimension of the Cartesian grid
2. Define a new Cartesian communicator (with periods=.true., this is very important)
3. Create a new datatype with MPI_Type_create_subarray (with an equally sized type. Actually it is not equally sized, but I forced it with periods=.true. in both dimensions, and here is where the problem occurred)
4. Scatter the data with MPI_Scatterv
5. Do the computation
6. Gather the results with MPI_Gatherv (I know the gatherv truncates some data, but that is not a problem as that part of the data is not needed)
The problem is that periods=.true. doesn't have any effect, or I misunderstand what periods means in MPI_Cart_create. My understanding was that if a dimension is periodic, then after the end of the data in that dimension the distribution wraps around to its beginning (i.e. the data at the beginning is taken twice: first by, say, process 0, and then again by the last process in that dimension). But that is not happening for me at the moment: the last process in that dimension picks up the data at the beginning of the next row, and after that everything goes wrong. Had periods been functioning as I expected, I think the approach would have been correct.
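For reference (this sketch is not part of the original question), periods controls whether the Cartesian topology wraps around for neighbour queries such as MPI_Cart_shift; a minimal 1-D illustration:

program periods_demo
   use mpi_f08
   implicit none
   integer :: nproc, rank, left, right, ierr
   integer :: dims(1)
   logical :: periodic(1)
   type(MPI_Comm) :: cart

   call MPI_Init(ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
   dims = 0
   periodic = .true.                                    ! try .false. to see MPI_PROC_NULL at the edges
   call MPI_Dims_create(nproc, 1, dims, ierr)
   call MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periodic, .false., cart, ierr)
   call MPI_Comm_rank(cart, rank, ierr)
   call MPI_Cart_shift(cart, 0, 1, left, right, ierr)   ! wrap-around neighbours if periodic
   print *, 'rank', rank, 'left', left, 'right', right
   call MPI_Finalize(ierr)
end program periods_demo

With periodic = .true. rank 0 reports the last rank as its left neighbour; with .false. it reports MPI_PROC_NULL.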
System and Environment (Ubuntu 21.10, GFortran 11.2.0, Open MPI 4.1.1)
I would be very happy if you have any idea why periods=.true. is not behaving as expected.
I am playing with it using a 10 by 10 array of random numbers, and I paste everything here. Please pardon me if 10x10 is a big matrix for your eyes. For me it works fine when the number of processes is 4 and gives an incorrect answer with 6 processes.
program topo
USE mpi_f08
implicit none
!VARIABLE DECLARATION ==> related to the application
integer, parameter :: rows = 10, cols = 10
integer, dimension(rows*cols) :: file
integer, dimension(rows, cols) :: array_a
integer, allocatable, dimension(:,:) :: array_b
integer :: i, j, p
!VARIABLE DECLARATION ==> related to MPI
TYPE(MPI_Comm) :: comcart, comm=MPI_COMM_WORLD
integer :: comrank, comsize, ierror
!VARIABLE DECLARATION ==> related to Topology
integer, parameter :: ndims = rank(array_a)
integer :: dims(ndims)
logical :: periodic(ndims), reorder
!VARIABLE DECLARATION ==> related to SUBARRAY
TYPE(MPI_Datatype) :: newtype, resizedtype
integer(kind=MPI_ADDRESS_KIND) :: extent, lb
integer, dimension(ndims) :: starts, subsizes, gsizes
integer :: intsize
!VARIABLE DECLARATION ==> related to MPI_Scatterv
integer, allocatable, dimension(:) :: displs, counts
!Initialize MPI environment
call MPI_Init(ierror)
call MPI_Comm_rank(comm, comrank, ierror)
call MPI_Comm_size(comm, comsize, ierror)
!Read input data from file (please consider as it is read from file)
if (comrank==0) then
file=(/7,4,5,5,6,1,7,6,8,9,&
8,7,4,1,8,0,0,5,0,8,&
1,7,8,3,2,7,5,5,6,6,&
7,1,8,8,2,4,5,1,3,0,&
3,3,0,4,0,3,4,5,9,2,&
1,9,7,5,8,2,0,7,9,7,&
0,1,0,2,1,7,3,7,2,1,&
9,3,0,1,3,0,5,2,0,2,&
6,7,8,0,6,6,0,2,6,8,&
0,2,3,4,2,8,3,6,2,3/)
!open(unit=20, file="random_num.txt")
! read(20, 100) array_a
!close(20)
array_a=reshape(file,[10, 10])
write(*,*) "The original array from file"
write(*,100) array_a
write(*,*) "=============================================="
100 format(10i10)
end if
!Specifying the number of cores in each dimension. Put (0, 0) to let MPI decide
dims = 0
!Create optimal division of processors in each dimension of the Cartesian grid
Call MPI_Dims_create(comsize, ndims, dims, ierror)
!!Check the allocated cores in each dimension
!if (comrank==0) then
! write(*,*) "The number of cores in Y and X directions are: ", dims
!end if
!Initialize some of the inputs for MPI_Cart_create
periodic = .true.
reorder = .false.
!Define a new communicator
Call MPI_Cart_create(comm, ndims, dims, periodic, reorder, comcart, ierror)
!Update ranks and size based on the new communicator (just in case)
call MPI_Comm_rank(comcart, comrank, ierror)
call MPI_Comm_size(comcart, comsize, ierror)
!Initialize some of the inputs for MPI_Type_create_subarray
gsizes=shape(array_a)
!Here is the trick to make equal-sized chunks (of course the periodic must be true)
subsizes=ceiling(real(gsizes)/real(dims)) ! Or you can activate the alternatives below if you don't like this expression
!subsizes(1)=ceiling(real(gsizes(1))/real(dims(1)))
!subsizes(2)=ceiling(real(gsizes(2))/real(dims(2)))
starts = 0
!!Check the global and local sizes in each dimension
!if (comrank==0) then
! write(*,*) "The number of global grids in Y and X directions are: ", gsizes
! write(*,*) "The number of subgrids in Y and X directions are: ", subsizes
!end if
!Create new datatype with MPI_Type_create_subarray
call MPI_Type_create_subarray(ndims, gsizes, subsizes, starts, MPI_ORDER_FORTRAN, MPI_INTEGER, newtype, ierror)
!Set the extent of the newtype
call MPI_Type_size(MPI_INTEGER, intsize, ierror)
extent = subsizes(1)*intsize
lb = 0
!Resize the newtype based on the new lower bound(lb), and its upper bound is set to be lb + extent.
call MPI_Type_create_resized(newtype, lb, extent, resizedtype, ierror)
!Prepare the new datatype (resizedtype) for use
call MPI_Type_commit(resizedtype, ierror)
!Initialize some of the inputs for MPI_Scatterv
allocate(array_b(subsizes(1), subsizes(2)))
allocate (displs(comsize))
allocate (counts(comsize))
counts = 1 ! we will send one of these new types to everyone
do j = 1, dims(2)
do i = 1, dims(1)
displs(1+(i-1)+dims(1)*(j-1))=(i-1)+subsizes(2)*dims(1)*(j-1)
end do
end do
!write(*,*) "Displs are: ", displs
!Everything is ready. Let's scatter the the data
call MPI_Scatterv(array_a, counts, displs, resizedtype, array_b, subsizes(1)*subsizes(2), MPI_INTEGER, 0, comcart, ierror)
!Check the received chunk of data at each core
do p=1, comsize
if (comrank == p-1) then
write (*, *) 'Rank ', comrank, ' received: '
do i=1, subsizes(2)
write(*, *) array_b(:, i)
end do
end if
call MPI_Barrier(comcart, ierror)
end do
!Do some computation in each core
where (array_b==0)
array_b=999
end where
!Check the chunk of data at each core after computation, i.e. the data soon to be send
do p=1, comsize
if (comrank == p-1) then
write(*, *) 'Rank ', comrank, ' sending: '
do i=1, subsizes(2)
write(*, *) array_b(:,i)
end do
end if
call MPI_Barrier(comcart, ierror)
end do
!Gather the computation results back into the root
call MPI_Gatherv(array_b, subsizes(1)*subsizes(2), MPI_INTEGER, array_a, counts, displs, resizedtype, 0, comcart, ierror)
!Check the final result collected at the root
if (comrank == 0) then
write(*, *) ' Root received: '
do i=1,gsizes(2)
write(*, *) array_a(:, i)
end do
end if
!Deallocate resources associated with the committed type
call MPI_Type_free(resizedtype,ierror)
!Deallocate allocated arrays
deallocate(array_b)
deallocate(displs)
deallocate(counts)
!Terminate MPI environment
call MPI_Finalize(ierror)
end program topo
THE RESULT FROM 6 PROCESSORS LOOKS LIKE
The original array from file
7 4 5 5 6 1 7 6 8 9
8 7 4 1 8 0 0 5 0 8
1 7 8 3 2 7 5 5 6 6
7 1 8 8 2 4 5 1 3 0
3 3 0 4 0 3 4 5 9 2
1 9 7 5 8 2 0 7 9 7
0 1 0 2 1 7 3 7 2 1
9 3 0 1 3 0 5 2 0 2
6 7 8 0 6 6 0 2 6 8
0 2 3 4 2 8 3 6 2 3
==================================================================
Rank 0 received:
7 4 5 5
8 7 4 1
1 7 8 3
7 1 8 8
3 3 0 4
Rank 1 received:
6 1 7 6
8 0 0 5
2 7 5 5
2 4 5 1
0 3 4 5
Rank 2 received:
8 9 8 7
0 8 1 7
6 6 7 1
3 0 3 3
9 2 1 9
Rank 3 received:
0 1 0 2
9 3 0 1
6 7 8 0
0 2 3 4
0 0 3 4
Rank 4 received:
1 7 3 7
3 0 5 2
6 6 0 2
2 8 3 6
0 0 640473088 22028
Rank 5 received:
2 1 9 3
0 2 6 7
6 8 0 2
2 3 0 0
640467440 22028 -487235792 32764
Rank 0 sending:
7 4 5 5
8 7 4 1
1 7 8 3
7 1 8 8
3 3 999 4
Rank 1 sending:
6 1 7 6
8 999 999 5
2 7 5 5
2 4 5 1
999 3 4 5
Rank 2 sending:
8 9 8 7
999 8 1 7
6 6 7 1
3 999 3 3
9 2 1 9
Rank 3 sending:
999 1 999 2
9 3 999 1
6 7 8 999
999 2 3 4
999 999 3 4
Rank 5 sending:
2 1 9 3
999 2 6 7
6 8 999 2
2 3 999 999
640467440 22028 -487235792 32764
Rank 4 sending:
1 7 3 7
3 999 5 2
6 6 999 2
2 8 3 6
999 999 640473088 22028
Root received:
7 4 5 5 6 1 7 6 8 9
8 7 4 1 8 999 999 5 999 8
1 7 8 3 2 7 5 5 6 6
7 1 8 8 2 4 5 1 3 999
3 3 999 4 999 3 4 5 9 2
1 9 7 5 8 2 0 7 9 7
999 1 999 2 1 7 3 7 2 1
9 3 999 1 3 999 5 2 999 2
6 7 8 999 6 6 999 2 6 8
999 2 3 4 2 8 3 6 2 3
RUN FINISHED; exit value 0; real time: 200ms; user: 210ms; system: 210ms
I just reformatted the code and the results for better visualization. Thanks

Can we really avoid extra space when all the values are non-negative?

This question is a follow-up of another one I had asked quite a while ago:
We have been given an array of integers and another number k and we need to find the total number of contiguous subarrays whose sum equals k. For example, for the input [1,1,1] and k=2, the expected output is 2.
In the accepted answer, @talex says:
PS: BTW if all values are non-negative there is a better algorithm. It doesn't require extra memory.
While I didn't think much about it then, I am curious about it now. IMHO, we will require extra memory. In the event that all the input values are non-negative, our running (prefix) sum will keep increasing, and as such, sure, we don't need an unordered_map to store the frequency of a particular sum. But we will still need extra memory (perhaps an unordered_set) to store the running (prefix) sums that we get along the way. This apparently contradicts what @talex said.
Could someone please confirm if we absolutely do need extra memory or if it could be avoided?
Thanks!
Let's start with a slightly simpler problem: all values are positive (no zeros). In this case the subarrays can overlap, but they cannot contain one another.
I.e.: arr = 2 1 5 1 1 5 1 2, Sum = 8
2 1 5 1 1 5 1 2
|---|
|-----|
|-----|
|---|
But this situation can never occur:
* * * * * * *
|-------|
|---|
With this in mind there is an algorithm that doesn't require extra space (well... O(1) space) and has O(n) time complexity. The idea is to have left and right indexes indicating the current sequence, together with the sum of the current sequence.
if the sum is k increment the counter, advance left and right
if the sum is less than k then advance right
else advance left
Now if there are zeros the intervals can contain one another, but only if the zeros are on the margins of the interval.
To adapt to non-negative numbers:
Do as above, except:
skip zeros when advancing left
if sum is k:
count the consecutive zeros to the right of right, let's say zeros_right_count
count the consecutive zeros to the left of left, let's say zeros_left_count
instead of incrementing the count as before, increase the counter by (zeros_left_count + 1) * (zeros_right_count + 1)
Example:
... 7 0 0 5 1 2 0 0 0 9 ...
^ ^
left right
Here we have 2 zeros to the left and 3 zeros to the right. This makes (2 + 1) * (3 + 1) = 12 sequences with sum 8 here:
5 1 2
5 1 2 0
5 1 2 0 0
5 1 2 0 0 0
0 5 1 2
0 5 1 2 0
0 5 1 2 0 0
0 5 1 2 0 0 0
0 0 5 1 2
0 0 5 1 2 0
0 0 5 1 2 0 0
0 0 5 1 2 0 0 0
I think this algorithm would work, using O(1) space.
We maintain two pointers to the beginning and end of the current subsequence, as well as the sum of the current subsequence. Initially, both pointers point to array[0], and the sum is obviously set to array[0].
Advance the end pointer (thus extending the subsequence to the right), and increase the sum by the value it points to, until that sum exceeds k. Then advance the start pointer (thus shrinking the subsequence from the left), and decrease the sum, until that sum gets below k. Keep doing this until the end pointer reaches the end of the array. Keep track of the number of times the sum was exactly k.
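Both answers describe the same two-pointer idea. Here is a minimal Fortran sketch of it for strictly positive values (my illustration, not code from either answer; zeros would be handled by multiplying counts as described above):

program count_subarrays
   implicit none
   integer, parameter :: n = 8, k = 8
   integer :: arr(n) = [2, 1, 5, 1, 1, 5, 1, 2]
   integer :: left, right, s, cnt

   cnt = 0
   left = 1
   right = 0                 ! the window is arr(left:right), initially empty
   s = 0
   do
      if (s == k) then
         cnt = cnt + 1
         s = s - arr(left)   ! found one: slide the whole window right
         left = left + 1
         if (right == n) exit
         right = right + 1
         s = s + arr(right)
      else if (s < k) then
         if (right == n) exit
         right = right + 1   ! too small: extend to the right
         s = s + arr(right)
      else
         s = s - arr(left)   ! too big: shrink from the left
         left = left + 1
      end if
   end do
   print *, 'subarrays summing to', k, ':', cnt   ! prints 4 for this input
end program count_subarrays

For the example arr = 2 1 5 1 1 5 1 2 and k = 8 this reports a count of 4, matching the four intervals drawn above.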

Do loop is stuck at the first subarray in MPI SCATTER and GATHER

I have two arrays: array global has 8 values and it will be scattered into array local, 2 values per task. What I was trying to do is take the big array, split it into small arrays, do some work, then put it back together.
Problem:
Even though I successfully scattered the data, the do loop as written only seems to work for the first sub-array local. What I want is for all of the integers in the scattered local array to be multiplied by 2 and then gathered into the global array.
Code for the do loop (some work has been done here):
do j = 1,2
   local(j) = j*2
   print *, j
end do
Here's the full code. If you go down below you'll notice the part which I need your help.
MODULE MPI
   IMPLICIT NONE
   INCLUDE 'mpif.h'
   INTEGER :: MYID, TOTPS, IERR, MPISTTS
CONTAINS
   SUBROUTINE MPIINIT
      IMPLICIT NONE
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, MYID, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, TOTPS, IERR)
      RETURN
   END SUBROUTINE MPIINIT
END MODULE MPI

PROGRAM SCATTER
   USE MPI
   IMPLICIT NONE
   CALL MPIINIT
   CALL TEST
   CALL MPI_FINALIZE(IERR)
CONTAINS
   SUBROUTINE TEST
      USE MPI
      IMPLICIT NONE
      INTEGER :: I, J
      INTEGER, DIMENSION(8) :: GLOBAL
      INTEGER, DIMENSION(2) :: LOCAL
      if (myid .eq. 0) then
         do i = 1,8
            global(i) = i
         end do
      end if
      call mpi_scatter(global,2,mpi_integer,local,2,mpi_integer,0, &
                       mpi_comm_world,ierr)
      print*,"task",myid,":",local
      call mpi_barrier(mpi_comm_world,ierr)
      !!!!!!! do some work here
      do j = 1,2
         local(j) = j*2
         print*,j
      end do
      !!!!!! end work
      call mpi_gather(local,2,mpi_integer,global,2,mpi_integer,0, &
                      mpi_comm_world,ierr)
      if(myid .eq. 0) then
         print*,"task",myid,":",global
      end if
   END SUBROUTINE TEST
END PROGRAM SCATTER
Notes:
(1) I've been reading & learning from this thread but it looks challenging for now.
(2) I compile and run the code with mpif90 SCATTER.f90 followed by mpirun -np 4 ./a.out
Output:
task 0 : 1 2
task 1 : 3 4
task 2 : 5 6
task 3 : 7 8
1
2
1
2
1
2
1
2
task 0 : 2 4 2 4 2 4 2 4
What I want to get is: task 0 : 2 4 6 8 10 12 14 16
You wrote
local(j) = j * 2
print*, j
I don't think that does what you think it does.
You probably meant to write
local(j) = local(j) * 2
print*, local(j)
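With that fix, the work section of TEST becomes (a sketch of the corrected fragment only):

!!!!!!! do some work here
do j = 1,2
   local(j) = local(j)*2    ! double the value this rank received, not the loop index
   print*,local(j)
end do
!!!!!! end work

After mpi_gather, rank 0 then prints task 0 : 2 4 6 8 10 12 14 16, which is the desired result.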

Random sampling in fortran

I have the following data
X Y INFTIME
1 1 0
1 2 4
1 3 4
1 4 3
2 1 3
2 2 1
2 3 3
2 4 4
3 1 2
3 2 2
3 3 0
3 4 2
4 1 4
4 2 3
4 3 3
4 4 0
X and Y represent the X and Y components of the square 4 by 4 grid.
Here I want to randomly sample 10% of the population that is infected, i.e. those whose INFTIME is nonzero. I had no idea how to code this, so I could not start it.
Any suggestions and ideas would be great for me.
Thanks
EDIT:
DO T = 1,10
   DO i = 1, 625
      IF(INFTIME(i)/=0 .AND. INFTIME(i) .LE. T)THEN
         CALL RANDOM_NUMBER(u(i))
         u(i) = 1+aint(u(i)*25)
         CALL RANDOM_NUMBER(v(i))
         v(i) = 1+aint(v(i)*25)
         CALL RANDOM_NUMBER(w(i))
         w(i) = 1+aint(w(i)*10)
      ENDIF
   ENDDO
ENDDO
do p = 1,625
   WRITE(*,*) u(p),v(p),w(p)
enddo
This is the code I tried, but it only gives random numbers with no connection to the data. I used data on a 25 by 25 grid, i.e. 625 individuals, and infection times from 1 to 10.
Follow what ja72 said. You have three 1D arrays of the same size (16). All you need to do is pick a number between 1 and 16, check whether INFTIME is nonzero and accept the value if it is, then repeat until you've taken 10% of the samples (which would be 1.6 values, so I presume you'd just take 2? Or do you have more data than the 4x4 you presented?)
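A minimal sketch of that idea for the 4x4 example (my illustration; it hard-codes the table from the question instead of reading a file and, like the approach above, samples with replacement):

program sample_infected
   implicit none
   integer, parameter :: n = 16             ! the 4x4 grid, flattened row by row
   integer :: x(n) = [1,1,1,1, 2,2,2,2, 3,3,3,3, 4,4,4,4]
   integer :: y(n) = [1,2,3,4, 1,2,3,4, 1,2,3,4, 1,2,3,4]
   integer :: inftime(n) = [0,4,4,3, 3,1,3,4, 2,2,0,2, 4,3,3,0]
   integer :: nsample, taken, idx
   real :: r

   nsample = ceiling(0.10*n)                ! 10% of 16 -> 2 samples
   taken = 0
   call random_seed()
   do while (taken < nsample)
      call random_number(r)                 ! r in [0,1)
      idx = 1 + int(r*n)                    ! random index in 1..16
      if (inftime(idx) /= 0) then           ! accept only infected cells
         print *, x(idx), y(idx), inftime(idx)
         taken = taken + 1
      end if
   end do
end program sample_infected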
Edit: You need to call the random number generator before the if statement:
do t=1,10
   do i=1,625
      ind = 1+int(625*rand(seed))
      if(inftime(ind).ne.0 .and. inftime(ind).le.t) then
         ! stuff
      endif
   enddo
enddo
The call ind=1+int(625*rand(seed)) will pick a random integer between 1 (when rand(seed) returns 0) and 625 (when rand(seed) returns just under 1). Then you can do what you need if the if statement is satisfied.
EDIT:
program epimatrix
   IMPLICIT NONE
   INTEGER :: l, i, T, K
   REAL, DIMENSION(1:625) :: X, y, inftime
   INTEGER :: seed, my_cnt
   INTEGER, DIMENSION(8) :: time1

   CALL DATE_AND_TIME(values=time1)
   seed = 1000*time1(7) + time1(8)
   call srand(seed)

   OPEN(10, FILE = 'epidemicSIR.txt', FORM = 'FORMATTED')
   DO l = 1, 625
      READ(10,*,END = 200) X(l), Y(l), INFTIME(l)
      ! WRITE(*,*) X(l), Y(l), INFTIME(l)
      ! if you know how it was formatted, you should use
      ! read(10,20) X(l), Y(l), INFTIME(l)
      ! where 20 is the format
   ENDDO
200 CONTINUE
   CLOSE(10)

   DO T = 1, 10
      my_cnt = 0
      write(*,*) "T=", T
      DO while (my_cnt.le.63)
         K = 1+int(625*rand())
         IF(INFTIME(K)/=0 .AND. INFTIME(K) .LE. T)THEN
            write(*,*) X(k), Y(k), INFTIME(k)
            my_cnt = my_cnt+1
         ENDIF
      enddo
      write(*,*) " "
   ENDDO
end program
EDIT 2
I've adjusted the program to fix some of the issues. I've tried to keep my edits in lowercase so that you can see the difference. The do-while loop keeps drawing random indices until the condition my_cnt.le.63 stops holding (so you get about 63 lines of X, Y, inftime per T). I've added a line to output T and another to print a blank line so that the data is easier to read in the output.
This should take care of all the issues you've been running into. If not, I'll keep checking this page.