Invalid pointer and segmentation fault when using MPI_Gather in Fortran

I have a simple program, which is supposed to gather a number of small arrays into one big one using MPI.
PROGRAM main
    include 'mpif.h'
    integer ierr, i, myrank, thefile, n_procs
    integer, parameter :: BUFSIZE = 3
    complex*16, allocatable :: loc_arr(:), glob_arr(:)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, n_procs, ierr)

    allocate(loc_arr(BUFSIZE))
    loc_arr = 0.7 * myrank - cmplx(0.3, 0, kind=8)

    allocate(glob_arr(n_procs * BUFSIZE))
    write (*,*) myrank, shape(glob_arr)

    call MPI_Gather(loc_arr, BUFSIZE, MPI_DOUBLE_COMPLEX,&
                    glob_arr, n_procs * BUFSIZE, MPI_DOUBLE_COMPLEX,&
                    0, MPI_COMM_WORLD, ierr)

    write (*,*) myrank, "Errorcode:", ierr
    call MPI_FINALIZE(ierr)
END PROGRAM main
I have some experience with MPI in C, but for Fortran 90 nothing seems to work. Here is how I compile (I use ifort) and run it:
mpif90 test.f90 -check all && mpirun -np 4 ./a.out
1 12
3 12
3 Errorcode: 0
1 Errorcode: 0
0 12
2 12
2 Errorcode: 0
0 Errorcode: 0
*** Error in `./a.out': free(): invalid pointer: 0x0000000000a25790 ***
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 10889 RUNNING AT LenovoX1kabel
= EXIT CODE: 6
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
What am I doing wrong? Sometimes I get this invalid-pointer error, sometimes a segmentation fault, but none of the ifort checks seem to complain.
All the error codes are 0, so I'm not sure where I go wrong.

As a simple rule of thumb: you should never multiply by the number of processes when specifying counts in MPI collectives, because the counts are per process.
Therefore the recvcount of n_procs * BUFSIZE is clearly wrong.
And indeed the manual states: recvcount: number of elements for any single receive (integer, significant only at root).
You should just use BUFSIZE. This is the same in C and Fortran.
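A minimal sketch of the corrected call (only the recvcount changes; the receive buffer is still allocated with n_procs * BUFSIZE elements):
! recvcount is the number of elements received from EACH rank,
! not the total size of the receive buffer on the root
call MPI_Gather(loc_arr, BUFSIZE, MPI_DOUBLE_COMPLEX,&
                glob_arr, BUFSIZE, MPI_DOUBLE_COMPLEX,&
                0, MPI_COMM_WORLD, ierr)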

Related

Using MPI_AllReduce with MPI_IN_PLACE in Fortran

I am trying to calculate the sum of a column vector using MPI.
I have read the following thread
Can MPI sendbuf and recvbuf be the same thing?
and found that using MPI_IN_PLACE would be a good way to minimize memory usage for very large data.
However, in my trivial test MPI_Allreduce keeps reporting a segmentation fault:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
I could not figure out what went wrong.
Any help would be greatly appreciated.
Here's my code:
program mpitest
    implicit none
    include "mpif.h"

    integer :: i, mprank, mpsize, mpierr
    real(KIND=8), dimension(5) :: Qin

    call MPI_INIT(mpierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, mpsize, mpierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, mprank, mpierr)

    write(*,*) mprank, '/', mpsize

    do i = 1, 5
        Qin(i) = (mprank+1)*i
        write(*,*) Qin(i), mprank
    enddo

    call MPI_AllReduce(MPI_IN_PLACE, Qin, 5, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD)

    if (mprank == 0) then
        do i = 1, 5
            write(*,*) Qin(i)
        enddo
    endif

    call MPI_FINALIZE(mpierr)
end program mpitest

Strange occurrence with a send/recv MPI pair

I have an application where the root rank is sending messages to all ranks in the following way:
tag = 22
if (myrankid == 0) then
    do i = 1, nproc
        if (i == 1) then
            do j = 1, nvert
                xyz((j-1)*3+1) = data((j-1)*3+1,1)
                xyz((j-1)*3+2) = data((j-1)*3+2,1)
                xyz((j-1)*3+3) = data((j-1)*3+3,1)
            enddo
        else
            call mpi_send(data, glb_nvert(i)*3, mpi_real, i-1, tag, comm, ierr)
        endif
    enddo
else
    call mpi_recv(data, glb_nvert(i)*3, mpi_real, 0, tag, comm, stat, ierr)
endif
My problem is that only when running with more than 3000 ranks does this send/recv pair hang at a certain MPI rank (in my specific app it is rank 2009).
Now, I do check that the sizes and arrays are consistent, and the only thing I found interesting was the comm. The comm is a communicator which I have duplicated from another MPI communicator.
When I print it with print *, comm, all ranks except the root print the same integer; the root prints a different one.
E.g.
The root prints:
-1006632941
while the remaining 2999 ranks print:
-1006632951
Is that really what is causing the problem?
I have tried both Intel MPI and Cray MPI.

Is MPI_IBcast guaranteed to send even if some ranks don't participate

I am creating an MPI program where I am trying to send the same data to all processes as soon as they finish their calculation. The processes can have large differences in their computation time, so I don't want one process to wait for another.
The root process is guaranteed to always send first.
I know that MPI_Bcast acts as a barrier, so I experimented with MPI_IBcast:
program main
    use mpi
    implicit none

    integer rank, nprcos, ierror, a(10), req

    call MPI_INIT(ierror)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprcos, ierror)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)

    a = rank
    if (rank /= 2) then
        call MPI_IBCAST(a, size(a), MPI_INTEGER, 0, MPI_COMM_WORLD, req, ierror)
        call MPI_WAIT(req, MPI_STATUS_IGNORE, ierror)
    endif

    write (*,*) 'Hello World from process: ', rank, 'of ', nprcos, "a = ", a(1)

    call MPI_FINALIZE(ierror)
end program main
From my experiments it seems that, regardless of which rank is "boycotting" the MPI_IBCAST, it always works on all the others:
> $ mpifort test.f90 && mpiexec --mca btl tcp,self -np 4 ./a.out
Hello World from process: 2 of 4 a = 2
Hello World from process: 1 of 4 a = 0
Hello World from process: 0 of 4 a = 0
Hello World from process: 3 of 4 a = 0
Is this guaranteed behavior, or is it just specific to my OpenMPI implementation? How else could I implement this? I can only think of a loop over MPI_Isend calls.
No, this is not guaranteed: all ranks in the communicator have to participate. Within MPI, that is the definition of a collective communication.
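A minimal sketch of a compliant version, assuming every rank can post the non-blocking broadcast early and defer the wait until it actually needs the data (so a slow rank does not hold up the root):
! every rank posts the broadcast; none is excluded
call MPI_IBCAST(a, size(a), MPI_INTEGER, 0, MPI_COMM_WORLD, req, ierror)
! ... independent computation can proceed here ...
! block only when the broadcast data is actually needed
call MPI_WAIT(req, MPI_STATUS_IGNORE, ierror)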

use mpi_cart_create with master and slave

I want to create a topology that does not include processor 0. My idea is to make processor 0 the master and build the topology from the remaining (slave) processes. After certain computations in the topology, I will send data to the master. Here is my code:
include "mpif.h"
integer maxn
integer myid,Root,numprocs,numtasks,taskid
integer comm2d, ierr
integer dims(2)
logical periods(2)
data periods/2*.false./
Root = 0
CALL MPI_INIT( ierr )
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr )
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr )
numtasks = numprocs-1
if(myid .eq. Root) then
print *, 'Hello I am master'
endif
c Get a new communicator for a decomposition of the domain.
c Let MPI find a "good" decomposition
dims(1) = 0
dims(2) = 0
CALL MPI_DIMS_CREATE(numtasks,2,dims,ierr)
CALL MPI_CART_CREATE(MPI_COMM_WORLD,2,dims,periods,.true.,
* comm2d,ierr)
c Get my position in this communicator
CALL MPI_COMM_RANK( comm2d, taskid, ierr )
c
print *, 'task ID= ',taskid
if (myid .eq. master) then
print *,dims(1),dims(2)
endif
CALL MPI_Comm_free( comm2d, ierr )
30 CALL MPI_FINALIZE(ierr)
STOP
END
But when I run the above code, I get the following error.
Fatal error in PMPI_Comm_rank: Invalid communicator, error stack:
PMPI_Comm_rank(121): MPI_Comm_rank(MPI_COMM_NULL, rank=0x7fff08bf960c)
failed PMPI_Comm_rank(73).: Null communicator
How can I eliminate the error? What am I doing wrong?
You start your MPI job with numprocs processes. Then you create a Cartesian topology with numtasks = numprocs-1 processes. Therefore, one of the processes ends up not being part of the Cartesian communicator and receives MPI_COMM_NULL in comm2d. Calling MPI_COMM_RANK with a null communicator is an error. The solution is to fix your code to first check the value of comm2d:
      CALL MPI_CART_CREATE(MPI_COMM_WORLD, 2, dims, periods, .true.,
     *                     comm2d, ierr)
      IF (comm2d.NE.MPI_COMM_NULL) THEN
         ...
      ENDIF
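A slightly fuller sketch of what the guarded section could look like, assuming only the rank query, the per-task work, and the cleanup need protecting (MPI_COMM_FREE must likewise not be called on a null communicator):
c     only processes that belong to the Cartesian topology touch comm2d
      IF (comm2d.NE.MPI_COMM_NULL) THEN
         CALL MPI_COMM_RANK( comm2d, taskid, ierr )
         print *, 'task ID= ', taskid
         CALL MPI_Comm_free( comm2d, ierr )
      ENDIF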

request array in MPI non-blocking send/recv

I am trying to reproduce this C example in Fortran. My code so far:
use mpi
implicit none

integer, parameter :: maxn = 8
integer, allocatable :: xlocal(:,:)
integer :: i, j, lsize, errcnt, toterr, buff
integer :: ierror, nproc, pid, root = 0, nreq = 0
integer, allocatable :: request(:), status(:,:)

call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, pid, ierror)

if (mod(maxn, nproc) /= 0) then
    write(*,*) 'Array size (maxn) should be a multiple of the number of processes'
    call MPI_ABORT(MPI_COMM_WORLD, 1, ierror)
end if

lsize = maxn/nproc

allocate(xlocal(0:lsize+1, maxn))
allocate(request(nproc))
allocate(status(MPI_STATUS_SIZE,nproc))

xlocal(0,:) = -1
xlocal(1:lsize,:) = pid
xlocal(lsize+1,:) = -1

! send down unless on bottom
if (pid < nproc-1) then
    nreq = nreq + 1
    call MPI_ISEND(xlocal(lsize,:), maxn, MPI_INTEGER, &
                   pid+1, 0, MPI_COMM_WORLD, request(nreq), ierror)
    write(*,'(2(A,I1),A)') 'process ', pid, ' sent to process ', pid+1, ':'
    write(*,*) xlocal(lsize,:)
end if

if (pid > 0) then
    nreq = nreq + 1
    call MPI_IRECV(xlocal(0,:), maxn, MPI_INTEGER, &
                   pid-1, 0, MPI_COMM_WORLD, request(nreq), ierror)
    write(*,'(2(A,I1),A)') 'process ', pid, ' received from process ', pid-1, ':'
    write(*,*) xlocal(0,:)
end if

! send up unless on top
if (pid > 0) then
    nreq = nreq + 1
    call MPI_ISEND(xlocal(1,:), maxn, MPI_INTEGER, &
                   pid-1, 1, MPI_COMM_WORLD, request(nreq), ierror)
    write(*,'(2(A,I1),A)') 'process ', pid, ' sent to process ', pid-1, ':'
    write(*,*) xlocal(1,:)
end if

if (pid < nproc-1) then
    nreq = nreq + 1
    call MPI_IRECV(xlocal(lsize+1,:), maxn, MPI_INTEGER, &
                   pid+1, 1, MPI_COMM_WORLD, request(nreq), ierror)
    write(*,'(2(A,I1),A)') 'process ', pid, ' received from process ', pid+1, ':'
    write(*,*) xlocal(lsize+1,:)
end if

call MPI_WAITALL(nreq, request, status, ierror)

! check results
errcnt = 0
do i = 1, lsize
    do j = 1, maxn
        if (xlocal(i,j) /= pid) errcnt = errcnt + 1
    end do
end do
do j = 1, maxn
    if (xlocal(0,j) /= pid-1) errcnt = errcnt + 1
    if ((pid < nproc-1) .and. (xlocal(lsize+1,j) /= pid+1)) errcnt = errcnt + 1
end do

call MPI_REDUCE(errcnt, toterr, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD)

if (pid == root) then
    if (toterr == 0) then
        write(*,*) "no errors found"
    else
        write(*,*) "found ", toterr, " errors"
    end if
end if

deallocate(xlocal)
deallocate(request)
deallocate(status)

call MPI_FINALIZE(ierror)
but I am running into segmentation faults and cannot figure out why. I have a feeling it is due to the request array. Can someone explain the correct way of using the request array in Fortran? None of the references I found clarify this.
Thanks in advance.
In case you haven't already done so, consider compiling your program with some flags that will help you in debugging, e.g. with gfortran, you can use -O0 -g -fbounds-check (if that does not help, you might add -fsanitize=address for versions >= 4.8). Other compilers have similar options for debugging.
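For example (assuming the mpif90 wrapper here invokes gfortran):
mpif90 -O0 -g -fbounds-check test.f90 && mpirun -np 2 ./a.out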
Doing that, and running with 2 processes, your program crashes at the MPI_Reduce line. If you look up the specifications (e.g. OpenMPI 1.8) you can see that this subroutine requires one more argument, i.e., you forgot to add the ierror argument at the end.
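For reference, the corrected call simply appends the error argument, as in the other MPI calls in the program:
! MPI_REDUCE in Fortran takes a trailing ierror argument
call MPI_REDUCE(errcnt, toterr, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierror)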
It is a bit tragic that, even though the subprograms from the mpi module are accessible through use association and thus should be checked for argument consistency to avoid such trivial errors, not all subprograms are necessarily in that module. I don't know which MPI implementation you use, but I checked my local MPICH installation and most subroutines are not in the module, so no explicit interfaces exist for them. I suspect you are in a similar situation, and other implementations may suffer the same fate. You could compare it to a C header file missing the function prototype for MPI_Reduce. I guess the reason is that originally there was only a Fortran 77 interface for most implementations.
Some final comments: be careful not to simply copy-paste the C code. The array sections you pass are not contiguous, so a temporary copy is passed to the MPI routines, which is very inefficient (not that it really matters in this case).
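As an illustration of that last point (a sketch only, with the rest of the code assumed to be adjusted accordingly): since Fortran arrays are column-major, swapping the two dimensions turns the ghost layers into whole columns, which are contiguous and can be passed without a temporary copy:
integer, allocatable :: xlocal(:,:)
allocate(xlocal(maxn, 0:lsize+1))   ! ghost layers are now whole columns
! xlocal(:,lsize) is contiguous, so no temporary copy is needed
call MPI_ISEND(xlocal(:,lsize), maxn, MPI_INTEGER, &
               pid+1, 0, MPI_COMM_WORLD, request(nreq), ierror)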