In MPI, how to get a communicator when using several executables? - fortran

In MPI we work with multiple processes that do not share anything but communicate with recv/send operations. The recv/send operations are done with respect to a communicator, which can be the whole set of processes or a subset of them. The basic commands are:
call MPI_Comm_size ( MPI_COMM_WORLD, nproc, ierr )
call MPI_Comm_rank ( MPI_COMM_WORLD, myrank, ierr )
where MPI_COMM_WORLD is the communicator associated with the set of all processes. One interesting feature of MPI is that we can run several executables together with the command:
mpirun -n 3 prog1 : -n 2 prog2
with 3 processes assigned to the first executable and 2 to the second. However, for practical work, one would like to have a communicator associated with prog1 or prog2. Is there a way to get this directly without using MPI_COMM_SPLIT?

There is no such predefined communicator specified by the standard.
The great philosopher Jagger once said “you can’t always get what you want” and your best bet here is indeed to use MPI_Comm_split() and the value of MPI_COMM_WORLD's MPI_APPNUM attribute as the color argument.
From the MPI 3.1 standard, chapter 10.5.3:

10.5.3 MPI_APPNUM

There is a predefined attribute MPI_APPNUM of MPI_COMM_WORLD. In Fortran, the attribute is an integer value. In C, the attribute is a pointer to an integer value. If a process was spawned with MPI_COMM_SPAWN_MULTIPLE, MPI_APPNUM is the command number that generated the current process. Numbering starts from zero. If a process was spawned with MPI_COMM_SPAWN, it will have MPI_APPNUM equal to zero. Additionally, if the process was not started by a spawn call, but by an implementation-specific startup mechanism that can handle multiple process specifications, MPI_APPNUM should be set to the number of the corresponding process specification. In particular, if it is started with

mpiexec spec0 [: spec1 : spec2 : ...]

MPI_APPNUM should be set to the number of the corresponding specification.

If an application was not spawned with MPI_COMM_SPAWN or MPI_COMM_SPAWN_MULTIPLE, and MPI_APPNUM does not make sense in the context of the implementation-specific startup mechanism, MPI_APPNUM is not set.

MPI implementations may optionally provide a mechanism to override the value of MPI_APPNUM through the info argument. MPI reserves the following key for all SPAWN calls.

appnum: Value contains an integer that overrides the default value for MPI_APPNUM in the child.

Rationale. When a single application is started, it is able to figure out how many processes there are by looking at the size of MPI_COMM_WORLD. An application consisting of multiple SPMD sub-applications has no way to find out how many sub-applications there are and to which sub-application the process belongs. While there are ways to figure it out in special cases, there is no general mechanism. MPI_APPNUM provides such a general mechanism. (End of rationale.)
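Putting this together, a minimal Fortran sketch of the split (assuming the launcher sets MPI_APPNUM; the fallback color 0 for single-application runs is an arbitrary choice):
program split_by_appnum
  use mpi
  implicit none
  integer :: ierr, myrank, app_comm
  integer(kind=MPI_ADDRESS_KIND) :: appnum
  logical :: flag
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  ! Fetch the MPI_APPNUM attribute of MPI_COMM_WORLD
  call MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, appnum, flag, ierr)
  if (.not. flag) appnum = 0   ! attribute not set: single-application launch
  ! All ranks started from the same executable get the same color
  call MPI_Comm_split(MPI_COMM_WORLD, int(appnum), myrank, app_comm, ierr)
  ! app_comm now spans only the ranks of this executable
  call MPI_Finalize(ierr)
end program split_by_appnum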

For anybody interested in splitting the MPI_COMM_WORLD, I wrote a small utility library (one header). It splits the communicator into local communicators and establishes intercommunicators between them, taking care of many of the technicalities.
You can find the header at: https://github.com/cfd-go/MPI_MPMD
It also gives the same syntax for running multiple programs at once and for spawning programs. If you have any questions or ideas for extending it, open an issue on GitHub.
For example, in your first program you do:
MPMDHelper MPMD;
MPI_Init(&argc, &argv);
MPMD.Init(MPI_COMM_WORLD, "programA");
MPMD.local //<-- this is your local Comm.
In the other program you do:
MPMDHelper MPMD;
MPI_Init(&argc, &argv);
MPMD.Init(MPI_COMM_WORLD, "programB");
MPMD.local //<-- this is your local Comm.
MPMD["programA"].local //<-- intercommunicator for communication with programA
Hope it helps.

An alternative option is to use the client-server mechanism described by the MPI standard (in the chapter on process creation and management). The idea is that you compile two independent MPI applications. One of them eventually becomes a server and opens a port for connections. The other one is the client that has to connect to that port. The code will look something like this:
Server:
program server
  use mpi_f08
  implicit none
  integer :: error
  type(MPI_Comm) :: intercomm
  real, dimension(5) :: data = [1,2,3,4,5]
  character(len=MPI_MAX_PORT_NAME) :: port_name
  ! Normal MPI initialization
  call MPI_Init(error)
  ! Here we open a port for incoming connections
  call MPI_Open_port(MPI_INFO_NULL, port_name, error)
  ! Print it so the address can be passed to a client
  print*, "PORT NAME:", port_name
  ! Accept the incoming connection, creating the intercommunicator
  call MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, intercomm, error)
  ! Send test data (MPI_REAL is the Fortran datatype for default real)
  call MPI_Send(data, 5, MPI_REAL, 0, 0, intercomm, error)
  print*, "DATA SENT"
  ! Close connection
  call MPI_Comm_disconnect(intercomm, error)
  call MPI_Finalize(error)
end program server
Client:
program client
  use mpi_f08
  implicit none
  integer :: error
  type(MPI_Comm) :: intercomm
  type(MPI_Status) :: status
  real, dimension(5) :: data = [0,0,0,0,0]
  character(len=MPI_MAX_PORT_NAME) :: port_name
  call MPI_Init(error)
  ! Here we paste the port name obtained from the server
  print*, "Type in port name"
  read(*,'(A)') port_name   ! format (A) keeps the whole line, including separators
  ! Establish a connection, creating the intercommunicator
  call MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, intercomm, error)
  ! Receive test data
  call MPI_Recv(data, 5, MPI_REAL, 0, 0, intercomm, status, error)
  print*, "DATA RECEIVED", data
  ! Close connection
  call MPI_Comm_disconnect(intercomm, error)
  call MPI_Finalize(error)
end program client
In a real program you may find some other way of transferring the information about the port address to the client (e.g. name publishing or the file system). Then you can just run your code like this:
mpirun -n <N1> server &
mpirun -n <N2> client
A few things to note about this approach. If you create only one intercommunicator, you will have only one MPI task on each side communicating with the partner code. If you need to send a lot of data, you may want to consider creating multiple intercommunicators. In addition, the implementation of this part of the MPI standard may be somewhat finicky (for instance, in Open MPI 2.x there was a bug preventing its usage completely).
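For reference, the name-publishing alternative mentioned above spares you from copying the port string by hand. A rough sketch (the service name "my_service" is arbitrary, and whether the lookup works across separate mpirun invocations is implementation-dependent; Open MPI, for instance, may require a running ompi-server):
! On the server, after MPI_Open_port:
call MPI_Publish_name("my_service", MPI_INFO_NULL, port_name, error)
! ... accept connections, then clean up:
call MPI_Unpublish_name("my_service", MPI_INFO_NULL, port_name, error)
! On the client, instead of reading the port name from stdin:
call MPI_Lookup_name("my_service", MPI_INFO_NULL, port_name, error)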

Related

SIGBUS occurs when fortran code reads file on linux cluster

When I run my fortran code in parallel on a linux cluster with mpirun I get a sigbus error.
It occurs while reading a file, the timing is irregular, and sometimes it proceeds without error.
I have tried debug compilation options like -g, but I haven't gotten any information on what line the error is coming from.
Actually, the code ran previously on three different clusters without this error; it only occurs on this machine.
I personally suspect this is related to the performance of the machine (especially storage I/O), but I am not sure.
The program code is simple. Each process executed by mpirun reads the file corresponding to its rank as follows.
!!!!!!!!!! start of code
OPEN(11, FILE='FILE_NAME_WITH_RANK', FORM='UNFORMATTED')
READ(11) ISIZE
ALLOCATE(SOME_VARIABLE(ISIZE))
DO I = 1, ISIZE
  READ(11) SOME_VARIABLE(I)
ENDDO
READ(11) ISIZE2
ALLOCATE(SOME_VARIABLE2(ISIZE2))
DO I = 1, ISIZE2
  READ(11) SOME_VARIABLE2(I)
ENDDO
! MORE VARIABLES
CLOSE(11)
!!!!!!!!!! end of code
I used 191 CPUs, and the total size of the 191 files they load is about 11 GB.
The cluster used for execution consists of 24 nodes with 16 CPUs each (384 CPUs total) and uses common storage that is shared with another cluster.
I ran the code in parallel, specifying nodes 1 through 12 in the hostfile.
Initially, I had all 191 CPUs read their files at the same time, out of sequence.
After doing so, the program ended with a sigbus error. Also, for some nodes the ssh connection was delayed, and the .bashrc file could not be found, with a stale file handle error.
The stale file handle error seemed to recover by itself after a while, but I'm not sure what the system administrator did.
So I changed the code as follows, so that only one CPU reads a file at a time.
!!!!!!!!!! start of code
DO ICPU = 0, NUMBER_OF_PROCESS-1
  IF(ICPU.EQ.MY_PROCESS) CALL READ_FILE
  CALL MPI_BARRIER(MPI_COMMUNICATOR,IERR)
ENDDO
!!!!!!!!!! end of code
This seemed to work fine for a single run, but if I started more than one of these programs at the same time, the first mpirun stalled and both eventually ended with a sigbus error.
My next attempt is to minimize the number of read statements by reading each array with a single READ instead of an element-by-element DO loop. However, due to limited time, I couldn't test the effectiveness of this modification.
Here is some additional information.
If I run a search or copy a file with a file manager such as Nautilus while the parallel program is running, Nautilus stops responding, or the running program raises sigbus. In severe cases, I wasn't able to connect to the VNC server because of stale file handle errors.
I use Open MPI 2.1.1 and GNU Fortran 4.9.4.
I compile the program with the following:
$OPENMPIHOME/bin/mpif90 -mcmodel=large -fmax-stack-var-size-64 -cpp -O3 $SOURCE -o $EXE
I execute the program with the following in a GNOME terminal:
$OPENMPIHOME/bin/mpirun -np $NP -x $LD_LIBRARY_PATH --hostfile $HOSTFILE $EXE
The cluster is said to run commercial software like FLUENT without problems.
Summing up, my personal suspicion is that the cluster's storage is unmounted due to the excessive disk I/O generated by my code, but I don't know if this makes sense, because I have no cluster expertise.
If so, I wonder whether there is a way to minimize the disk I/O, whether the whole-array I/O mentioned above is enough, or whether something more is needed.
I would appreciate it if you could tell me anything about the problem.
Thanks in advance.
!!!
I wrote example code. As mentioned above, it may not be easy to reproduce, because the occurrence varies from machine to machine.
PROGRAM BUSWRITE
  IMPLICIT NONE
  INTEGER, PARAMETER :: ISIZE1 = 10000, ISIZE2 = 20000, ISIZE3 = 30000
  DOUBLE PRECISION, ALLOCATABLE :: ARRAY1(:), ARRAY2(:), ARRAY3(:)
  INTEGER :: I
  INTEGER :: I1, I2, I3
  CHARACTER*3 CPUNUM
  INCLUDE 'mpif.h'
  INTEGER ISTATUS(MPI_STATUS_SIZE)
  INTEGER :: IERR, NPES, MYPE
  CALL MPI_INIT(IERR)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NPES,IERR)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD,MYPE,IERR)
  ! Build a three-digit rank string, e.g. rank 7 -> '007'
  I1=MOD(MYPE/100,10)+48
  I2=MOD(MYPE/10 ,10)+48
  I3=MOD(MYPE    ,10)+48
  CPUNUM=CHAR(I1)//CHAR(I2)//CHAR(I3)
  OPEN(11, FILE=CPUNUM//'.DAT', FORM='UNFORMATTED')
  ALLOCATE(ARRAY1(ISIZE1))
  ALLOCATE(ARRAY2(ISIZE2))
  ALLOCATE(ARRAY3(ISIZE3))
  DO I = 1, ISIZE1
    ARRAY1(I) = I
    WRITE(11) ARRAY1(I)
  ENDDO
  DO I = 1, ISIZE2
    ARRAY2(I) = I**2
    WRITE(11) ARRAY2(I)
  ENDDO
  DO I = 1, ISIZE3
    ARRAY3(I) = DBLE(I)**3   ! DBLE avoids 32-bit integer overflow in I**3
    WRITE(11) ARRAY3(I)
  ENDDO
  CLOSE(11)
  CALL MPI_FINALIZE(IERR)
END PROGRAM
mpif90 -ffree-line-length-0 ./buswrite.f90 -o ./buswrite
mpirun -np 32 ./buswrite
I've got 32 files, 000.DAT through 031.DAT.
PROGRAM BUSREAD
  IMPLICIT NONE
  INTEGER, PARAMETER :: ISIZE1 = 10000, ISIZE2 = 20000, ISIZE3 = 30000
  DOUBLE PRECISION, ALLOCATABLE :: ARRAY1(:), ARRAY2(:), ARRAY3(:)
  INTEGER :: I
  INTEGER :: I1, I2, I3
  CHARACTER*3 CPUNUM
  INCLUDE 'mpif.h'
  INTEGER ISTATUS(MPI_STATUS_SIZE)
  INTEGER :: IERR, NPES, MYPE
  CALL MPI_INIT(IERR)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NPES,IERR)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD,MYPE,IERR)
  ! Build the same three-digit rank string used by buswrite
  I1=MOD(MYPE/100,10)+48
  I2=MOD(MYPE/10 ,10)+48
  I3=MOD(MYPE    ,10)+48
  CPUNUM=CHAR(I1)//CHAR(I2)//CHAR(I3)
  OPEN(11, FILE=CPUNUM//'.DAT', FORM='UNFORMATTED')
  ALLOCATE(ARRAY1(ISIZE1))
  ALLOCATE(ARRAY2(ISIZE2))
  ALLOCATE(ARRAY3(ISIZE3))
  DO I = 1, ISIZE1
    READ(11) ARRAY1(I)
    IF(ARRAY1(I).NE.I) STOP
  ENDDO
  DO I = 1, ISIZE2
    READ(11) ARRAY2(I)
    IF(ARRAY2(I).NE.I**2) STOP
  ENDDO
  DO I = 1, ISIZE3
    READ(11) ARRAY3(I)
    IF(ARRAY3(I).NE.DBLE(I)**3) STOP   ! match the DBLE computation in buswrite
  ENDDO
  CLOSE(11)
  CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)
  IF(MYPE.EQ.0) WRITE(*,*) 'GOOD'
  CALL MPI_FINALIZE(IERR)
END PROGRAM
mpif90 -ffree-line-length-0 ./busread.f90 -o ./busread
mpirun -np 32 ./busread
I got the 'GOOD' output text from the terminal as expected, but the machine in question terminated with a sigbus error while running busread.
The issue was not observed after a reboot of the machine. Even though I ran 4 programs at the same time under the same conditions, no problem occurred. In addition, other teams that used the machine had similar problems, which were also resolved after the reboot. The conclusion is a bit ridiculous, but for anyone experiencing similar problems, I would like to summarize it as follows.
If your program terminates abnormally due to a memory error (like sigbus or sigsegv) while reading or writing a file, you can check the following.
1. Make sure there are no errors in your program. Check whether the error occurs at a constant or an irregular point, whether other programs show the same symptoms, whether the code runs well on other machines, and whether there is a problem when run under a memory-error-checking tool such as valgrind.
2. Optimize the file I/O part. In Fortran, processing an entire array is tens of times faster than processing it element by element (see the sketch after this list).
3. Immediately after an error occurs, try an ssh connection to the machine (or node) to check whether the connection is smooth and the file system is accessible. If you cannot access the .bashrc file, or an error such as stale file handle occurs, contact the system administrator with the information gathered above.
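A minimal sketch of point 2, whole-array unformatted I/O (note this assumes the file is also written one whole array per record, unlike the element-per-record files in the question):
program whole_array_io
  implicit none
  integer, parameter :: n = 10000
  double precision :: a(n)
  integer :: i
  a = [(dble(i), i = 1, n)]
  open(11, file='whole.dat', form='unformatted')
  write(11) a    ! one record and one library call for the whole array
  rewind(11)
  read(11) a     ! the entire array comes back in a single call
  close(11)
end program whole_array_io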
If someone has anything to add or if this post isn't appropriate, please let me know.

Seg fault in fortran MPI_COMM_CREATE_GROUP if using a group not directly created from MPI_COMM_WORLD

I'm having a segmentation fault that I cannot really understand in a simple code that just:
calls the MPI_INIT
duplicates the global communicator, via MPI_COMM_DUP
creates a group with half of processes of the global communicator, via MPI_COMM_GROUP
finally from this group creates a new communicator via MPI_COMM_CREATE_GROUP
Specifically I use this last call, instead of just using MPI_COMM_CREATE, because it's only collective over the group of processes contained in group, while MPI_COMM_CREATE is collective over every process in COMM.
The code is the following
program mpi_comm_create_grp
  use mpi
  IMPLICIT NONE
  INTEGER :: mpi_size, mpi_err_code
  INTEGER :: my_comm_dup, mpi_new_comm, mpi_group_world, mpi_new_group
  INTEGER :: rank_index
  INTEGER, DIMENSION(:), ALLOCATABLE :: rank_vec
  CALL mpi_init(mpi_err_code)
  CALL mpi_comm_size(mpi_comm_world, mpi_size, mpi_err_code)
  !! allocate and fill the vector for the new group
  !! (ranks 0 .. mpi_size/2-1, matching the allocated size)
  allocate(rank_vec(mpi_size/2))
  rank_vec(:) = (/ (rank_index, rank_index=0, mpi_size/2-1) /)
  !! create the group directly from the comm_world: this way works
  ! CALL mpi_comm_group(mpi_comm_world, mpi_group_world, mpi_err_code)
  !! duplicate the comm_world and create the group from the dup: this way fails
  CALL mpi_comm_dup(mpi_comm_world, my_comm_dup, mpi_err_code)
  !! create the group of all processes from the duplicated comm_world
  CALL mpi_comm_group(my_comm_dup, mpi_group_world, mpi_err_code)
  !! create a new group with just half of the processes in comm_world
  CALL mpi_group_incl(mpi_group_world, mpi_size/2, rank_vec, mpi_new_group, mpi_err_code)
  !! create a new comm from the comm_world using the new group
  CALL mpi_comm_create_group(mpi_comm_world, mpi_new_group, 0, mpi_new_comm, mpi_err_code)
  !! deallocate and finalize mpi
  if(ALLOCATED(rank_vec)) DEALLOCATE(rank_vec)
  CALL mpi_finalize(mpi_err_code)
end program !mpi_comm_create_grp
If instead of duplicating the COMM_WORLD, I directly create the group from the global communicator (commented line), everything works just fine.
The parallel debugger I'm using traces the seg fault back to a call to MPI_GROUP_TRANSLATE_RANKS, but, as far as I know, MPI_COMM_DUP duplicates all the attributes of the copied communicator, rank numbering included.
I am using ifort version 18.0.5, but I also tried 17.0.4 and 19.0.2, with no better results.
Well, the thing is a little tricky, at least for me, but after some tests and help the root of the problem was found.
In the code, the call
CALL mpi_comm_create_group(mpi_comm_world, mpi_new_group, 0, mpi_new_comm, mpi_err_code)
creates a new communicator for the previously created group mpi_new_group. However, mpi_comm_world, which is used as the first argument, is not in the same context as mpi_new_group, as explained in the MPICH reference:
MPI_COMM_DUP will create a new communicator over the same group as
comm but with a new context
So the correct call would be:
CALL mpi_comm_create_group(my_comm_dup, mpi_new_group, 0, mpi_new_comm, mpi_err_code)
i.e., replacing mpi_comm_world with my_comm_dup, the communicator from which mpi_group_world was created.
I am still not sure why it works with Open MPI, which is generally more tolerant of this sort of thing.
As suggested in the comments, I wrote to the Open MPI users list, and they replied:
That is perfectly valid. The MPI processes that make up the group are all part of comm world. I would file a bug with Intel MPI.
So I posted a question on the Intel forum.
It was a bug, which they solved in version 19.3 of the library.
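In the meantime, a workaround sketch for the affected Intel MPI versions, reusing the variable names from the question, is simply to pass the communicator the group was actually derived from:
CALL mpi_comm_dup(mpi_comm_world, my_comm_dup, mpi_err_code)
CALL mpi_comm_group(my_comm_dup, mpi_group_world, mpi_err_code)
CALL mpi_group_incl(mpi_group_world, mpi_size/2, rank_vec, mpi_new_group, mpi_err_code)
CALL mpi_comm_create_group(my_comm_dup, mpi_new_group, 0, mpi_new_comm, mpi_err_code)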

Multiple communicators in MPI

The background of this question is in some computational areas such as Computational Fluid Dynamics (CFD). We often need a finer mesh/grid in some critical regions, while the background mesh can be coarser, for example adaptive mesh refinement to track shock waves, or nested domains in meteorology.
A Cartesian topology is used, and the domain decomposition is shown in the following sketch. In this case 4*2=8 processors are used; the single number is the processor's rank and (x,y) its topological coordinate.
Assume the mesh is refined in the regions with ranks 2, 3, 4, 5 (in the middle), and the local refinement ratio is defined as R=D_coarse/D_fine=2 in this case. Since the mesh is refined, the time advancement should also be refined: in the refined region the time steps t, t+1/2*dt, t+dt should be computed, while only the time steps t and t+dt are computed in the global region. This requires a smaller communicator which includes only the ranks in the middle for the extra computation. A sketch of the global ranks + coordinates and the corresponding local ones (in red) is as follows:
However, I get some errors in my implementation of this scheme, and a snippet of the Fortran code (not complete) is shown below:
integer :: global_comm, local_comm   ! global and local communicators
integer :: global_rank, local_rank
integer :: global_grp, local_grp     ! global and local groups
integer :: ranks(4)                  ! ranks in the refined region
integer :: dim                       ! dimension
integer :: left(-2:2), right(-2:2)   ! ranks of neighbouring processors in 2 directions
ranks = [2,3,4,5]
!---- Make global communicator and its topological relationship
call mpi_init(ierr)
call mpi_cart_create(MPI_COMM_WORLD, 2, [4,2], [.false., .false.], .true., global_comm, ierr)
call mpi_comm_rank(global_comm, global_rank, ierr)
do dim = 1, 2
  call mpi_cart_shift(global_comm, dim-1, 1, left(-dim), right(dim), ierr)
end do
!---- Make local communicator and its topological relationship
! Here I use a group to create the communicator
! create global group
call mpi_comm_group(MPI_COMM_WORLD, global_grp, ierr)
! extract 4 ranks from the global group to make a local group
call mpi_group_incl(global_grp, 4, ranks, local_grp, ierr)
! make a new communicator based on the local group
call mpi_comm_create(MPI_COMM_WORLD, local_grp, local_comm, ierr)
! make topology for the local communicator
call mpi_cart_create(global_comm, 2, [2,2], [.false., .false.], .true., local_comm, ierr)
! **** get rank for local communicator
call mpi_comm_rank(local_comm, local_rank, ierr)
! Do the same as before to make the topological relationship in the local communicator.
...
When I run the program, the problem comes from the '**** get rank for local communicator' step. My idea is to build two communicators: a global and a local communicator, with the local one embedded in the global one. Then I create their corresponding topological relationships in the global and local communicators respectively. I do not know if my concept is wrong or if some syntax is wrong. Thank you very much if you can give me some suggestions.
The error message is
*** An error occurred in MPI_Comm_rank
*** reported by process [817692673,4]
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_COMM: invalid communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
You are creating a 2x2 Cartesian topology from the group of the global communicator, which contains eight ranks. Therefore, in four of them the value of local_comm as returned by MPI_Cart_create will be MPI_COMM_NULL. Calling MPI_Comm_rank on the null communicator results in the error.
If I understand your logic correctly, you should instead do something like:
if (local_comm /= MPI_COMM_NULL) then
  ! make topology for local communicator
  call mpi_cart_create(local_comm, 2, [2,2], [.false., .false.], .true., &
                       local_cart_comm, ierr)
  ! **** get rank for local communicator
  call mpi_comm_rank(local_cart_comm, local_rank, ierr)
  ...
end if

Trouble using MPI_BCAST with MPI_CART_CREATE

I am having trouble with MPI_BCAST in Fortran. I create a new communicator using MPI_CART_CREATE (say COMM_NEW). When I broadcast data from the root using the old communicator (i.e. MPI_COMM_WORLD), it works fine. But when I use the new communicator that I just created, it gives the error:
[compute-4-15.local:15298] *** An error occurred in MPI_Bcast
[compute-4-15.local:15298] *** on communicator MPI_COMM_WORLD
[compute-4-15.local:15298] *** MPI_ERR_COMM: invalid communicator
[compute-4-15.local:15298] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
I do get the result from the processors involved in COMM_NEW, along with the above error. I think the problem is with the other processors, which are not included in COMM_NEW but are present in MPI_COMM_WORLD. Any help will be greatly appreciated. Is it because the number of processors in COMM_NEW is less than the total number of processors? If so, how do I broadcast among a set of processors which is less than the total? Thanks.
My sample code is:
!PROGRAM TO BROADCAST THE DATA FROM ROOT TO DEST PROCESSORS
PROGRAM MAIN
  IMPLICIT NONE
  INCLUDE 'mpif.h'
  !____________________________________________________________________________________
  !-------------------------------DECLARE VARIABLES------------------------------------
  INTEGER :: ERROR, RANK, NPROCS, I
  INTEGER :: SOURCE, TAG, COUNT, NDIMS, COMM_NEW
  INTEGER :: A(10), DIMS(1)
  LOGICAL :: PERIODS(1), REORDER
  !____________________________________________________________________________________
  !-------------------------------DEFINE VARIABLES-------------------------------------
  SOURCE = 0; TAG = 1; COUNT = 10
  PERIODS(1) = .FALSE.
  REORDER = .FALSE.
  NDIMS = 1
  DIMS(1) = 6
  !____________________________________________________________________________________
  !--------------------INITIALIZE MPI, DETERMINE SIZE AND RANK-------------------------
  CALL MPI_INIT(ERROR)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROCS, ERROR)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, ERROR)
  !
  CALL MPI_CART_CREATE(MPI_COMM_WORLD, NDIMS, DIMS, PERIODS, REORDER, COMM_NEW, ERROR)
  IF(RANK==SOURCE)THEN
    DO I=1,10
      A(I) = I
    END DO
  END IF
  !____________________________________________________________________________________
  !----------------BROADCAST VECTOR A FROM ROOT TO DESTINATIONS------------------------
  CALL MPI_BCAST(A,10,MPI_INTEGER,SOURCE,COMM_NEW,ERROR)
  !PRINT*, RANK
  !WRITE(*, "(10I5)") A
  CALL MPI_FINALIZE(ERROR)
END PROGRAM
I think the error you give at the top of your question doesn't match up with the code at the bottom since it's complaining about a Bcast on MPI_COMM_WORLD and you don't actually do one in your code.
Anyway, if you're running with more processes than dimensions, some of the processes won't be included in COMM_NEW. Instead, when the call to MPI_CART_CREATE returns, they'll get MPI_COMM_NULL for COMM_NEW instead of the new communicator with the topology. You just need to do a check to make sure you have a real communicator instead of MPI_COMM_NULL before doing the Bcast (or just have all of the ranks above DIMS(1) not enter the Bcast).
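A minimal sketch of that check, using the names from the question's code:
IF (COMM_NEW /= MPI_COMM_NULL) THEN
  ! Only ranks that are members of the Cartesian communicator participate
  CALL MPI_BCAST(A, 10, MPI_INTEGER, SOURCE, COMM_NEW, ERROR)
END IF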
To elaborate on Wesley Bland's answer and to clarify the apparent discrepancy in the error message. When the number of MPI processes in MPI_COMM_WORLD is larger than the number of processes in the created Cartesian grid, some of the processes won't become members of the new Cartesian communicator and will get MPI_COMM_NULL -- the invalid communicator handle -- as a result. Calling a collective communication operation requires a valid inter- or intra-communicator handle. Unlike the allowed usage of MPI_PROC_NULL in point-to-point operations, using the invalid communicator handle in collective calls is erroneous. The last statement is not explicitly written in the MPI standard - instead, the language used is:
If comm is an intracommunicator, then ... If comm is an intercommunicator, then ...
Since MPI_COMM_NULL is neither an intra-, nor an inter-communicator, it doesn't fall in any of the two categories of defined behaviour and hence leads to an error condition.
Since communication errors have to occur in some context (i.e. in a valid communicator), Open MPI substitutes MPI_COMM_WORLD in the call to the error handler and hence the error message says "*** on communicator MPI_COMM_WORLD". This is the relevant code section from ompi/mpi/c/bcast.c, where MPI_Bcast is implemented:
if (ompi_comm_invalid(comm)) {
    return OMPI_ERRHANDLER_INVOKE(MPI_COMM_WORLD, MPI_ERR_COMM,
                                  FUNC_NAME);
}
...
if (MPI_IN_PLACE == buffer) {
    return OMPI_ERRHANDLER_INVOKE(comm, MPI_ERR_ARG, FUNC_NAME);
}
Your code triggers the error handler inside the first check. In all other error checks, comm is used instead (since it has been determined to be a valid communicator handle), and the error message will state something like "*** on communicator MPI COMMUNICATOR 5 SPLIT FROM 0".

Retrieve data from file written in FORTRAN during program run

I am trying to write a series of time values (reals) to a .dat file in Fortran. This is part of an MPI code that runs for a long time, so I would like to write the data at every time step and be able to read the file at any point during the execution of the program. Currently, the problem I am facing is that the time values are not written to the file until the program ends. I have put the open statement before the do loop and the close statement after the end of the do loop.
The parts of my code look like:
open(unit=57,file='inst.dat')
do loop starts
.
.
.
write(57,*) time
.
.
.
end do
close(57)
Try call flush(unit). Check your compiler docs, as this is, I think, an extension.
You mention MPI: for parallel codes I think you need to give each process its own file/unit,
or take other measures to avoid conflicts.
From Gfortran manual:
Beginning with the Fortran 2003 standard, there is a FLUSH statement that should be preferred over the FLUSH intrinsic.
The FLUSH intrinsic and the Fortran 2003 FLUSH statement have identical effect: they flush the runtime library's I/O buffer so that the data becomes visible to other processes. This does not guarantee that the data is committed to disk.
On POSIX systems, you can request that all data is transferred to the storage device by calling the fsync function, with the POSIX file descriptor of the I/O unit as argument (retrieved with GNU intrinsic FNUM). The following example shows how:
! Declare the interface for the POSIX fsync function
interface
  function fsync (fd) bind(c, name="fsync")
    use iso_c_binding, only: c_int
    integer(c_int), value :: fd
    integer(c_int) :: fsync
  end function fsync
end interface
! Variable declaration
integer :: ret
! Opening unit 10
open (10, file="foo")
! ...
! Perform I/O on unit 10
! ...
! Flush and sync
flush(10)
ret = fsync(fnum(10))
! Handle possible error
if (ret /= 0) stop "Error calling FSYNC"
How about closing the file after every time step (assuming a reasonable amount of time elapses between time steps)?
do loop starts
  .
  .
  ! Note: an if statement should wrap the following so that it is
  ! only called by one processor.
  ! position='append' keeps earlier records; without it the file
  ! may be rewritten from the start on each open.
  open(unit=57, file='inst.dat', position='append')
  write(57,*) time
  close(57)
  .
  .
end do
Alternatively, if the time between time steps is short, writing the data after blocks of 10, 100, ... iterations may be more efficient.
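A middle ground is to keep the unit open and use the Fortran 2003 FLUSH statement every so many steps. A sketch (istep, nsteps, and the flush interval nflush are placeholder names):
integer :: istep
integer, parameter :: nflush = 100   ! arbitrary flush interval
open(unit=57, file='inst.dat')
do istep = 1, nsteps
  ! ... computation ...
  write(57,*) time
  if (mod(istep, nflush) == 0) flush(57)   ! make the data visible to readers
end do
close(57)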