mpi_waitall in mpich2 with null values in array_of_requests - fortran

I get the following error with MPICH-2.1.5 and the PGI compiler:
Fatal error in PMPI_Waitall: Invalid MPI_Request, error stack:
PMPI_Waitall(311): MPI_Waitall(count=4, req_array=0x2ca0ae0, status_array=0x2c8d220) failed
PMPI_Waitall(288): The supplied request in array element 0 was invalid (kind=0)
in the following example Fortran code for a stencil-based algorithm:
Subroutine data_exchange
! data declaration
integer request(2*neighbor),status(MPI_STATUS_SIZE,2*neighbor)
integer n(neighbor),iflag(neighbor)
integer itag(neighbor),neigh(neighbor)
! Data initialization
request = 0; n = 0; iflag = 0;
! Create data buffers to send and recv
! Define values of n,iflag,itag,neigh based on boundary values
! Isend/Irecv look like this
ir=0
do i=1,neighbor
  if(iflag(i).eq.1) then
    ir=ir+1
    call MPI_Isend(buf_send(i),n(i),MPI_REAL,neigh(i),itag(i),MPI_COMM_WORLD,request(ir),ierr)
    ir=ir+1
    call MPI_Irecv(buf_recv(i),nsize,MPI_REAL,neigh(i),MPI_ANY_TAG,MPI_COMM_WORLD,request(ir),ierr)
  endif
enddo
! Calculations
call MPI_Waitall(2*neighbor,request,status,ierr)
end subroutine
The error occurs when array_of_requests in MPI_Waitall contains a null value (request(i) = 0). The null values appear for the entries whose conditional iflag(i) == 1 is not satisfied. The straightforward solution is to comment out the conditional, but that would introduce the overhead of sending and receiving zero-size messages, which is not feasible for large-scale systems (1000s of cores).
As per the MPI-forum link, the array_of_requests list may contain null or inactive handles.
I have tried the following:
not initializing array_of_requests,
resizing array_of_request to match the MPI_isend + MPI_irecv count,
assigning dummy values to array_of_request
I also tested the very same code with MPICH-1 as well as OpenMPI 1.4 and the code works without any issue.
Any insights would be really appreciated!

Since both increments of ir happen inside the conditional, you have all valid handles in request(1:ir) at the end of the loop and can issue:
call MPI_Waitall(ir,request(1:ir),status(:,1:ir),ierr)
This makes sure that all requests passed to MPI_Waitall have been properly initialized.
Another thing: does n(i) in MPI_Isend hold the same value as nsize in the corresponding MPI_Irecv?
EDIT:
After consulting the MPI Standard (3.0, Ch. 3.7.3) I think you need to initialize the request array to MPI_REQUEST_NULL if you want to give the whole request array to MPI_Waitall.
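A minimal sketch of that fix, reusing the variable names from the question (everything else is assumed unchanged): preset every handle to MPI_REQUEST_NULL, which MPI_Waitall accepts and skips, or alternatively wait only on the first ir handles.
! Sketch only: preset all slots to the null request so that entries never
! filled by MPI_Isend/MPI_Irecv are still legal inputs to MPI_Waitall.
request(:) = MPI_REQUEST_NULL
ir = 0
do i = 1, neighbor
  if (iflag(i) == 1) then
    ir = ir + 1
    call MPI_Isend(buf_send(i), n(i), MPI_REAL, neigh(i), itag(i), &
                   MPI_COMM_WORLD, request(ir), ierr)
    ir = ir + 1
    call MPI_Irecv(buf_recv(i), nsize, MPI_REAL, neigh(i), MPI_ANY_TAG, &
                   MPI_COMM_WORLD, request(ir), ierr)
  endif
enddo
! Either wait on the whole array (null handles are simply skipped) ...
call MPI_Waitall(2*neighbor, request, status, ierr)
! ... or only on the handles that were actually created:
! call MPI_Waitall(ir, request(1:ir), status(:,1:ir), ierr)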

Related

Seg fault in fortran MPI_COMM_CREATE_GROUP if using a group not directly created from MPI_COMM_WORLD

I'm having a segmentation fault that I cannot really understand in a simple code that just:
calls the MPI_INIT
duplicates the global communicator, via MPI_COMM_DUP
creates a group with half of processes of the global communicator, via MPI_COMM_GROUP
finally from this group creates a new communicator via MPI_COMM_CREATE_GROUP
Specifically I use this last call, instead of just using MPI_COMM_CREATE, because it's only collective over the group of processes contained in group, while MPI_COMM_CREATE is collective over every process in COMM.
The code is the following
program mpi_comm_create_grp
use mpi
IMPLICIT NONE
INTEGER :: mpi_size, mpi_err_code
INTEGER :: my_comm_dup, mpi_new_comm, mpi_group_world, mpi_new_group
INTEGER :: rank_index
INTEGER, DIMENSION(:), ALLOCATABLE :: rank_vec
CALL mpi_init(mpi_err_code)
CALL mpi_comm_size(mpi_comm_world, mpi_size, mpi_err_code)
!! allocate and fill the vector for the new group
allocate(rank_vec(mpi_size/2))
rank_vec(:) = (/ (rank_index , rank_index=0, mpi_size/2-1) /)
!! create the group directly from the comm_world: this way works
! CALL mpi_comm_group(mpi_comm_world, mpi_group_world, mpi_err_code)
!! duplicating the comm_world and creating the group from the dup: this way fails
CALL mpi_comm_dup(mpi_comm_world, my_comm_dup, mpi_err_code)
!! creating the group of all processes from the duplicated comm_world
CALL mpi_comm_group(my_comm_dup, mpi_group_world, mpi_err_code)
!! create a new group with just half of processes in comm_world
CALL mpi_group_incl(mpi_group_world, mpi_size/2, rank_vec,mpi_new_group, mpi_err_code)
!! create a new comm from the comm_world using the new group created
CALL mpi_comm_create_group(mpi_comm_world, mpi_new_group, 0, mpi_new_comm, mpi_err_code)
!! deallocate and finalize mpi
if(ALLOCATED(rank_vec)) DEALLOCATE(rank_vec)
CALL mpi_finalize(mpi_err_code)
end program !mpi_comm_create_grp
If instead of duplicating the COMM_WORLD, I directly create the group from the global communicator (commented line), everything works just fine.
The parallel debugger I'm using traces back the seg fault to a call to MPI_GROUP_TRANSLATE_RANKS, but, as far as I know, MPI_COMM_DUP duplicates all the attributes of the copied communicator, rank numbering included.
I am using ifort version 18.0.5, but I also tried 17.0.4 and 19.0.2 with no better results.
Well, the thing is a little tricky, at least for me, but after some tests and help the root of the problem was found.
In the code
CALL mpi_comm_create_group(mpi_comm_world, mpi_new_group, 0, mpi_new_comm, mpi_err_code)
a new communicator is created for the group mpi_new_group created previously. However, mpi_comm_world, which is used as the first argument, is not in the same context as mpi_new_group, as explained in the MPICH reference:
MPI_COMM_DUP will create a new communicator over the same group as
comm but with a new context
So the correct call would be:
CALL mpi_comm_create_group(my_comm_dup, mpi_new_group, 0, mpi_new_comm, mpi_err_code)
I.e., replacing mpi_comm_world with my_comm_dup, the communicator from which mpi_group_world was created.
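Put together, a minimal sketch of the consistent sequence (using the variable names from the question) would be:
! The duplicate has its own context, distinct from mpi_comm_world.
CALL mpi_comm_dup(mpi_comm_world, my_comm_dup, mpi_err_code)
! Derive the group from the duplicate ...
CALL mpi_comm_group(my_comm_dup, mpi_group_world, mpi_err_code)
CALL mpi_group_incl(mpi_group_world, mpi_size/2, rank_vec, mpi_new_group, mpi_err_code)
! ... and create the new communicator from that same parent communicator.
CALL mpi_comm_create_group(my_comm_dup, mpi_new_group, 0, mpi_new_comm, mpi_err_code)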
I am still not sure why it is working with OpenMPI, but it is generally more tolerant of this sort of thing.
As suggested in the comments, I wrote to the OpenMPI user list, and they replied:
That is perfectly valid. The MPI processes that make up the group are all part of comm world. I would file a bug with Intel MPI.
So I went and posted a question on the Intel forum.
It is a bug, which they solved in the latest version of the library, 19.3.

Retrospectively closing a NetCDF file created with Fortran

I'm running a distributed model stripped to its bare minimum below:
integer, parameter :: &
nx = 1200,& ! Number of columns in grid
ny = 1200,& ! Number of rows in grid
nt = 6000 ! Number of timesteps
integer :: it ! Loop counter
real :: var1(nx,ny), var2(nx,ny), var3(nx,ny), etc(nx,ny)
! Create netcdf to write model output
call check( nf90_create(path="out.nc",cmode=nf90_clobber, ncid=nc_out_id) )
! Loop over time
do it = 1,nt
  ! Calculate a lot of variables
  ...
  ! Write some variables in out.nc at each timestep
  CALL check( nf90_put_var(ncid=nc_out_id, varid=var1_varid, values=var1, &
              start = (/ 1, 1, it /), count = (/ nx, ny, 1 /)) )
  ! Close the netcdf otherwise it is not readable:
  if (it == nt) call check( nf90_close(nc_out_id) )
enddo
I'm in the development stage of the model, so it inevitably crashes at unexpected points (usually at the "Calculate a lot of variables" stage), which means that, if the model crashes at timestep it = 3000, 2999 timesteps will have been written to the netcdf output file, but I will not be able to read the file because the file has not been closed. Still, the data have been written: I currently have a 2 GB out.nc file that I can't read. When I ncdump the file it shows
netcdf out.nc {
dimensions:
x = 1400 ;
y = 1200 ;
time = UNLIMITED ; // (0 currently)
variables:
float var1 (time, y, x) ;
data:
}
My questions are: (1) Is there a way to close the file retrospectively, even outside Fortran, to be able to read the data that have already been written? (2) Alternatively, is there another way to write the file in Fortran that would make the file readable even without closing it?
When nf90_close is called, buffered output is written to disk and the file ID is relinquished so it can be reused. The problem is most likely due to buffered output not having been written to the disk when the program terminates due to a crash, meaning that only the changes you made in "define mode" are present in the file (as shown by ncdump).
You therefore need to force the data to be written to the disk more often. There are three ways of doing this (as far as I am aware).
nf90_sync - which synchronises the buffered data to disk when called. This gives you the most control over when to output data (every loop step, or every n loop steps, for example), which can allow you to optimize for speed vs robustness, but introduces more programming and checking overhead for you (a sketch of this approach follows the list of options).
Thanks to @RussF for this idea. Creating or opening the file using the nf90_share flag. This is the recommended approach if the netCDF file is intended to be used by multiple readers/writers simultaneously. It is essentially the same as an automatic implementation of nf90_sync for writing data. It gives less control, but also less programming overhead. Note that:
This only applies to netCDF-3 classic or 64-bit offset files.
Finally, an option I wouldn't recommend, but am including for completeness (and I guess there may be situations where this is the best option, although none spring to mind) - closing and reopening the file. I don't recommend this, because it will slow down your program, and adds greater possibility of causing errors.
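As an illustration of the nf90_sync option, here is a minimal sketch based on the loop from the question; the nsync parameter is hypothetical and only controls how often the buffers are committed. Alternatively, the file could be created with cmode=ior(nf90_clobber, nf90_share) to get roughly the same behaviour automatically (netCDF-3 files only).
integer, parameter :: nsync = 10   ! hypothetical: commit every 10 timesteps
call check( nf90_create(path="out.nc", cmode=nf90_clobber, ncid=nc_out_id) )
! ... define dimensions/variables, end define mode ...
do it = 1, nt
  ! ... calculations ...
  call check( nf90_put_var(ncid=nc_out_id, varid=var1_varid, values=var1, &
              start = (/ 1, 1, it /), count = (/ nx, ny, 1 /)) )
  ! Push buffered data and the updated record count to disk, so the file
  ! stays readable even if a later timestep crashes the program.
  if (mod(it, nsync) == 0) call check( nf90_sync(nc_out_id) )
enddo
call check( nf90_close(nc_out_id) )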

Trouble using MPI_BCAST with MPI_CART_CREATE

I am having trouble with MPI_BCAST in Fortran. I create a new communicator using MPI_CART_CREATE (say 'COMM_NEW'). When I broadcast data from the root using the old communicator (i.e. MPI_COMM_WORLD) it works fine. But when I use the new communicator that I just created, it gives the error:
[compute-4-15.local:15298] *** An error occurred in MPI_Bcast
[compute-4-15.local:15298] *** on communicator MPI_COMM_WORLD
[compute-4-15.local:15298] *** MPI_ERR_COMM: invalid communicator
[compute-4-15.local:15298] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
I do get the result from the processors involved in COMM_NEW, but I also get the above error; I think the problem is with the other processors, which are not included in COMM_NEW but are present in MPI_COMM_WORLD. Any help will be greatly appreciated. Is it because the number of processors in COMM_NEW is less than the total number of processors? If so, how do I broadcast among a set of processors that is smaller than the total? Thanks.
My sample code is:
!PROGRAM TO BROADCAST THE DATA FROM ROOT TO DEST PROCESSORS
PROGRAM MAIN
IMPLICIT NONE
INCLUDE 'mpif.h'
!____________________________________________________________________________________
!-------------------------------DECLARE VARIABLES------------------------------------
INTEGER :: ERROR, RANK, NPROCS, I
INTEGER :: SOURCE, TAG, COUNT, NDIMS, COMM_NEW
INTEGER :: A(10), DIMS(1)
LOGICAL :: PERIODS(1), REORDER
!____________________________________________________________________________________
!-------------------------------DEFINE VARIABLES-------------------------------------
SOURCE = 0; TAG = 1; COUNT = 10
PERIODS(1) = .FALSE.
REORDER = .FALSE.
NDIMS = 1
DIMS(1) = 6
!____________________________________________________________________________________
!--------------------INITIALIZE MPI, DETERMINE SIZE AND RANK-------------------------
CALL MPI_INIT(ERROR)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROCS, ERROR)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, ERROR)
!
CALL MPI_CART_CREATE(MPI_COMM_WORLD, NDIMS, DIMS, PERIODS, REORDER, COMM_NEW, ERROR)
IF(RANK==SOURCE)THEN
DO I=1,10
A(I) = I
END DO
END IF
!____________________________________________________________________________________
!----------------BROADCAST VECTOR A FROM ROOT TO DESTINATIONS------------------------
CALL MPI_BCAST(A,10,MPI_INTEGER,SOURCE,COMM_NEW,ERROR)
!PRINT*, RANK
!WRITE(*, "(10I5)") A
CALL MPI_FINALIZE(ERROR)
END PROGRAM
I think the error you give at the top of your question doesn't match up with the code at the bottom since it's complaining about a Bcast on MPI_COMM_WORLD and you don't actually do one in your code.
Anyway, if you're running with more processes than dimensions, some of the processes won't be included in COMM_NEW. Instead, when the call to MPI_CART_CREATE returns, they'll get MPI_COMM_NULL for COMM_NEW instead of the new communicator with the topology. You just need to do a check to make sure you have a real communicator instead of MPI_COMM_NULL before doing the Bcast (or just have all of the ranks above DIMS(1) not enter the Bcast).
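A minimal sketch of that check, using the names from the question:
CALL MPI_CART_CREATE(MPI_COMM_WORLD, NDIMS, DIMS, PERIODS, REORDER, COMM_NEW, ERROR)
! Ranks that did not fit into the 1-D topology get MPI_COMM_NULL back and
! must not take part in collectives on COMM_NEW.
IF (COMM_NEW /= MPI_COMM_NULL) THEN
  CALL MPI_BCAST(A, 10, MPI_INTEGER, SOURCE, COMM_NEW, ERROR)
END IF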
To elaborate on Wesley Bland's answer and to clarify the apparent discrepancy in the error message. When the number of MPI processes in MPI_COMM_WORLD is larger than the number of processes in the created Cartesian grid, some of the processes won't become members of the new Cartesian communicator and will get MPI_COMM_NULL -- the invalid communicator handle -- as a result. Calling a collective communication operation requires a valid inter- or intra-communicator handle. Unlike the allowed usage of MPI_PROC_NULL in point-to-point operations, using the invalid communicator handle in collective calls is erroneous. The last statement is not explicitly written in the MPI standard - instead, the language used is:
If comm is an intracommunicator, then ... If comm is an intercommunicator, then ...
Since MPI_COMM_NULL is neither an intra-, nor an inter-communicator, it doesn't fall in any of the two categories of defined behaviour and hence leads to an error condition.
Since communication errors have to occur in some context (i.e. in a valid communicator), Open MPI substitutes MPI_COMM_WORLD in the call to the error handler and hence the error message says "*** on communicator MPI_COMM_WORLD". This is the relevant code section from ompi/mpi/c/bcast.c, where MPI_Bcast is implemented:
if (ompi_comm_invalid(comm)) {
return OMPI_ERRHANDLER_INVOKE(MPI_COMM_WORLD, MPI_ERR_COMM,
FUNC_NAME);
}
...
if (MPI_IN_PLACE == buffer) {
return OMPI_ERRHANDLER_INVOKE(comm, MPI_ERR_ARG, FUNC_NAME);
}
Your code triggers the error handler inside the first check. In all other error checks comm is used instead (since it is determined to be a valid communicator handle) and the error message will state something like "*** on communicator MPI COMMUNICATOR 5 SPLIT FROM 0".

Retrieve data from file written in FORTRAN during program run

I am trying to write a series of values for time (real values) into a dat file in FORTRAN. This is part of an MPI code and the code runs for a long time. So I would like to extract data at every time step, print it into a file, and read the file at any time during the execution of the program. Currently, the problem I am facing is that the values of time are not written to the file until the program ends. I have put the open statement before the do loop and the close statement after the end of the do loop.
The parts of my code look like:
open(unit=57,file='inst.dat')
do loop starts
.
.
.
write(57,*) time
.
.
.
end do
close(57)
Try call flush(unit). Check your compiler docs, as this is, I think, an extension.
You mention MPI: for parallel codes I think you need to give each process its own file/unit, or take other measures to avoid conflicts.
From Gfortran manual:
Beginning with the Fortran 2003 standard, there is a FLUSH statement that should be preferred over the FLUSH intrinsic.
The FLUSH intrinsic and the Fortran 2003 FLUSH statement have identical effect: they flush the runtime library's I/O buffer so that the data becomes visible to other processes. This does not guarantee that the data is committed to disk.
On POSIX systems, you can request that all data is transferred to the storage device by calling the fsync function, with the POSIX file descriptor of the I/O unit as argument (retrieved with GNU intrinsic FNUM). The following example shows how:
! Declare the interface for POSIX fsync function
interface
function fsync (fd) bind(c,name="fsync")
use iso_c_binding, only: c_int
integer(c_int), value :: fd
integer(c_int) :: fsync
end function fsync
end interface
! Variable declaration
integer :: ret
! Opening unit 10
open (10,file="foo")
! ...
! Perform I/O on unit 10
! ...
! Flush and sync
flush(10)
ret = fsync(fnum(10))
! Handle possible error
if (ret /= 0) stop "Error calling FSYNC"
How about closing the file after every time step (assuming a reasonable amount of time elapses between time steps)?
do loop starts
.
.
!Note: an if statement should wrap the following so that it is
!only called by one processor.
open(unit=57,file='inst.dat',position='append')
write(57,*) time
close(57)
.
.
end do
Alternatively if the time between time steps is short, writing the data after blocks of 10, 100, ... iterations may be more efficient.
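A minimal sketch of that batching idea, under the assumption that values can be buffered in memory between commits (nblock, it and nt are hypothetical names here, chosen only for illustration):
integer, parameter :: nblock = 100   ! hypothetical: commit every 100 steps
real :: tbuf(nblock)
integer :: k
k = 0
do it = 1, nt
  ! ... computation ...
  k = k + 1
  tbuf(k) = time
  if (k == nblock .or. it == nt) then
    ! Reopen in append mode, dump the buffered block, and close so the
    ! data are committed to disk before the next block starts.
    open(unit=57, file='inst.dat', position='append')
    write(57,*) tbuf(1:k)
    close(57)
    k = 0
  end if
end do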

catching a deadlock in a simple odd-even sending

I'm trying to solve a simple problem with MPI; my implementation is MPICH2 and my code is in Fortran. I have used blocking send and receive; the idea is simple, but when I run it it crashes! I have absolutely no idea what is wrong. Can anyone comment on this issue, please? Here is a piece of the code:
integer, parameter :: IM=100, JM=100
REAL, ALLOCATABLE :: T(:,:), TF(:,:)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,RNK,IERR)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,SIZ,IERR)
prv = rnk-1
nxt = rnk+1
LIM = INT(IM/SIZ)
IF (rnk==0) THEN
  ALLOCATE(TF(IM,JM))
  prv = MPI_PROC_NULL
ELSEIF(rnk==siz-1) THEN
  NXT = MPI_PROC_NULL
  LIM = LIM+MOD(IM,SIZ)
END IF
IF (MOD(RNK,2)==0) THEN
  CALL MPI_SEND(T(2,:),JM+2,MPI_REAL,PRV,10,MPI_COMM_WORLD,IERR)
  CALL MPI_RECV(T(1,:),JM+2,MPI_REAL,PRV,20,MPI_COMM_WORLD,STAT,IERR)
ELSE
  CALL MPI_RECV(T(LIM+2,:),JM+2,MPI_REAL,NXT,10,MPI_COMM_WORLD,STAT,IERR)
  CALL MPI_SEND(T(LIM+1,:),JM+2,MPI_REAL,NXT,20,MPI_COMM_WORLD,IERR)
END IF
As I understand it, the even processes are not receiving anything while the odd ones finish sending successfully. In some cases, when I added some prints to observe what is going on, I saw that the variable NXT was changing during the sending procedure! For example, all the odd processes were sending their message to process 0 rather than to their next neighbour!
The array T is not allocated, so reading from or writing to it is an error.
I can't see the whole program, but here are some observations on what I can see:
1) Make sure that rnk, siz, and prv are integers. Likely, prv is real (by the default typing rules) and you are passing a real where an integer rank is expected, so the messages don't match up, hence the deadlock.
2) I'd use MPI_SENDRECV rather than the send/recv; recv/send code sections. Two sendrecvs are cleaner (2 lines of code vs. 7), guaranteed not to deadlock, and faster when you have bi-directional links (which is almost always true).
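A minimal sketch of the MPI_SENDRECV version of the exchange, keeping the names, tags, and counts from the question (it assumes T has been allocated with halo rows, e.g. something like T(LIM+2,JM+2), and that IMPLICIT NONE plus explicit INTEGER declarations for rnk, siz, prv, and nxt take care of point 1):
! Each rank sends its first interior row to PRV and receives the halo row
! coming from NXT, then the reverse; MPI_PROC_NULL neighbours turn the
! corresponding half of the exchange into a no-op, so there is no deadlock.
CALL MPI_SENDRECV(T(2,:),     JM+2, MPI_REAL, PRV, 10, &
                  T(LIM+2,:), JM+2, MPI_REAL, NXT, 10, &
                  MPI_COMM_WORLD, STAT, IERR)
CALL MPI_SENDRECV(T(LIM+1,:), JM+2, MPI_REAL, NXT, 20, &
                  T(1,:),     JM+2, MPI_REAL, PRV, 20, &
                  MPI_COMM_WORLD, STAT, IERR)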