Fortran MPI status error

I am getting the following errors when compiling this code.
Code:
IMPLICIT REAL*8(A-H,O-Z)
include 'common_files.inc'
CHARACTER*100 MNO, MESSAGE
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
starttime = MPI_WTIME()
! ........rest of code.................
Compilation output:
main.f:23.46:
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
1
Error: Variable 'mpi_status_size' cannot appear in the expression at (1)
main.f:23.62:
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
1
Error: The module or main program array 'status' at (1) must have constant shape
The 'common_files.inc' file contains header includes such as include 'mpif.h'. Unfortunately, I am not allowed to post the remaining code.
I am compiling the above using the following command
mpif90 -g main.f
What could be the possible reasons for this error?

You clearly have an issue with the include 'mpif.h' statement:
See for example:
IMPLICIT REAL*8(A-H,O-Z)
c include 'mpif.h'
integer rank, size, ierror, status(MPI_STATUS_SIZE)
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
print *, size, " ", rank
call MPI_Finalize(ierror)
end
gives me:
$ mpif90 foo.f
foo.f:4.46:
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
1
Error: Variable 'mpi_status_size' cannot appear in the expression at (1)
foo.f:4.62:
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
1
Error: The module or main program array 'status' at (1) must have constant shape
Whereas if I uncomment the include 'mpif.h' line, it just compiles and works.
You should double-check your common_files.inc file.
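For reference, here is the same snippet with the include left active; this variant should compile and run cleanly (a minimal sketch, assuming a standard mpif90 wrapper):
      IMPLICIT REAL*8(A-H,O-Z)
      include 'mpif.h'
      integer rank, size, ierror, status(MPI_STATUS_SIZE)
      call MPI_INIT(ierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
      print *, size, " ", rank
      call MPI_Finalize(ierror)
      end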

Related

Declaring variables in function (Fortran)

I am trying to write a very simple function in Fortran (first-time user):
program Main
implicit none
integer function k(n)
integer, intent(in) :: n
k=n
end function k
end program Main
I get a bunch of errors:
integer function k(n)
1
Error: Syntax error in data declaration at (1)
integer, intent(in) :: n
1
Error: Unexpected data declaration statement at (1)
end function k
1
Error: Expecting END PROGRAM statement at (1)
k=n
1
Error: Symbol ‘k’ at (1) has no IMPLICIT type
k=n
1
Error: Symbol ‘n’ at (1) has no IMPLICIT type
What am I doing wrong? I'm using the latest version of gfortran.
Functions and subroutines local to the program block should be placed after a contains statement, for example:
program Main
implicit none
contains
integer function k(n)
integer, intent(in) :: n
k=n
end function k
end program Main
To give an example of a program that uses such a function, you could have:
program Main
implicit none
integer :: myLocalN
myLocalN = 2
print*, "My local N is ", myLocalN
print*, "The value of this squared is", sq(myLocalN)
contains
integer function sq(n)
integer, intent(in) :: n
sq=n*n
end function sq
end program Main
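Compiling and running this second program with gfortran should print something like:
 My local N is            2
 The value of this squared is           4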

Intel MPI_SIZEOF not working for Fortran complex type

Given the following Fortran code:
integer, parameter :: double = kind(1.0d0)
integer :: integerTest
real(double) :: doubleTest
complex(double) :: complexTest
integer :: testSize
integer :: ierr
integerTest = 0
doubleTest = real(0.d0, kind=double)
complexTest = cmplx(0.d0, 0.d0, kind=double)
call MPI_SIZEOF(integerTest, testSize, ierr)
! ...
call MPI_SIZEOF(doubleTest, testSize, ierr)
! ...
call MPI_SIZEOF(complexTest, testSize, ierr)
When compiling with Intel MPI, I get the error:
error #6285: There is no matching specific subroutine for this generic subroutine call. [MPI_SIZEOF]
on the line
call MPI_SIZEOF(complexTest, testSize, ierr)
This code compiles and executes with no issue using OpenMPI. What is the cause of this error? It seems like it's looking for a specific match for the type of complexTest, but isn't the whole point of MPI_SIZEOF to work generically with nearly any type?
This is probably a bug in the MPI library; they might have forgotten to add this specific subroutine to the module. By the way, "nearly any type" is certainly wrong: MPI_SIZEOF is only intended to work for intrinsic types.
As a workaround you can use
testSize = storage_size(complexTest) / character_storage_size
(or simply divide by 8), since storage_size returns the size in bits and character_storage_size (from the iso_fortran_env module) is the number of bits in a character storage unit.
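A minimal self-contained sketch of that workaround (assuming a compiler that supports the Fortran 2008 storage_size intrinsic and the iso_fortran_env module):
program sizeof_workaround
  use iso_fortran_env, only: character_storage_size
  implicit none
  integer, parameter :: double = kind(1.0d0)
  complex(double) :: complexTest
  integer :: testSize

  ! storage_size returns the size in bits; dividing by
  ! character_storage_size (normally 8) converts it to bytes
  testSize = storage_size(complexTest) / character_storage_size
  print *, 'complex(double) occupies', testSize, 'bytes'  ! typically 16
end program sizeof_workaround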

Why does this sample code (f90, MPI, derived types) cause invalid reads/writes (valgrind or dmalloc)?

This is the offending code (it is related to another question I asked here):
program foo
use mpi
implicit none
type double_st
sequence
real(kind(0.d0)) :: x,y,z
integer :: acc
end type double_st
integer, parameter :: n=8
INTEGER :: MPI_CADNA_DST
integer :: nproc, iprank
INTEGER :: IERR, STAT(MPI_STATUS_SIZE)
INTEGER :: MPI_CADNA_DST_TMP
INTEGER ::&
COUNT=4,&
BLOCKLENGTHS(4)=(/1,1,1,1/),&
TYPES(4)=(/MPI_DOUBLE_PRECISION,&
MPI_DOUBLE_PRECISION,&
MPI_DOUBLE_PRECISION,&
MPI_INTEGER/)
INTEGER(KIND=MPI_ADDRESS_KIND) :: DISPLS(4), LB, EXTENT
TYPE(DOUBLE_ST) :: DST
INTEGER :: I
type(double_st), allocatable :: bufs(:), bufr(:)
allocate(bufs(n), bufr(n))
CALL MPI_INIT(IERR)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, IPRANK, IERR)
CALL MPI_GET_ADDRESS(DST%X, DISPLS(1))
CALL MPI_GET_ADDRESS(DST%Y, DISPLS(2))
CALL MPI_GET_ADDRESS(DST%Z, DISPLS(3))
CALL MPI_GET_ADDRESS(DST%ACC, DISPLS(4))
DO I=4,1,-1
DISPLS(I)=DISPLS(I)-DISPLS(1)
ENDDO
CALL MPI_TYPE_CREATE_STRUCT(4,BLOCKLENGTHS,DISPLS,TYPES, MPI_CADNA_DST_TMP,IERR)
CALL MPI_TYPE_COMMIT(MPI_CADNA_DST_TMP,IERR)
CALL MPI_TYPE_GET_EXTENT(MPI_CADNA_DST_TMP, LB, EXTENT, IERR)
CALL MPI_TYPE_CREATE_RESIZED(MPI_CADNA_DST_TMP, LB, EXTENT, MPI_CADNA_DST, IERR)
CALL MPI_TYPE_COMMIT(MPI_CADNA_DST,IERR)
bufs(:)%x=iprank
bufs(:)%y=iprank
bufs(:)%z=iprank
bufs(:)%acc=iprank
call mpi_send(bufs(1), n, mpi_cadna_dst, 1-iprank, 0, mpi_comm_world, ierr)
call mpi_recv(bufr(1), n, mpi_cadna_dst, 1-iprank, 0, mpi_comm_world, stat, ierr)
deallocate(bufs, bufr)
end program foo
Compiled with Intel MPI, version 4.0 or 5.0, I get invalid read/write errors with valgrind or with dmalloc at the send. With Open MPI it is not as clear with this minimal example, but I got similar problems with this communication in the big code from which it is extracted.
Thanks for helping!
It looks like the use of sequence is the culprit here. Since your data are not aligned the same way, forcing linear packing with the sequence keyword generates some unaligned accesses, probably while writing one of the arrays. Removing it does the trick.
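Concretely, the suggestion is simply to drop sequence from the type definition, e.g.:
type double_st
   real(kind(0.d0)) :: x, y, z
   integer :: acc
end type double_st
Because the displacements are obtained with MPI_GET_ADDRESS on the actual components, the derived MPI datatype still matches whatever padding the compiler inserts.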
I think the person who wrote the code used a derived-type definition with SEQUENCE. SEQUENCE causes the components of the derived type to be stored in the same sequence in which they are listed in the type definition. If SEQUENCE is specified, all derived types specified in component definitions must also be sequence types. You should also tell us more about how you compile, and whether you are on Linux or Windows.

MPI-IO: write subarray

I am starting to use MPI-IO and tried to write a very simple example of the things I'd like to do with it; however, even though the code is simple and I took some inspiration from examples I read here and there, I get a segmentation fault I do not understand.
The logic of the piece of code is very simple: each process handles a local array which is part of a global array I want to write. I create a subarray type using MPI_Type_Create_Subarray to do so. Then I open the file, set a view, and try to write the data. I get the segmentation fault during MPI_File_Write_All.
Here is the code:
program test
implicit none
include "mpif.h"
integer :: myrank, nproc, fhandle, ierr
integer :: xpos, ypos
integer, parameter :: loc_x=10, loc_y=10
integer :: loc_dim
integer :: nx=2, ny=2
real(8), dimension(loc_x, loc_y) :: data
integer :: written_arr
integer, dimension(2) :: wa_size, wa_subsize, wa_start
call MPI_Init(ierr)
call MPI_Comm_Rank(MPI_COMM_WORLD, myrank, ierr)
call MPI_Comm_Size(MPI_COMM_WORLD, nproc, ierr)
xpos = mod(myrank, nx)
ypos = mod(myrank/nx, ny)
data = myrank
loc_dim = loc_x*loc_y
wa_size = (/ nx*loc_x, ny*loc_y /)
wa_subsize = (/ loc_x, loc_y /)
wa_start = (/ xpos, ypos /)*wa_subsize
call MPI_Type_Create_Subarray(2, wa_size, wa_subsize, wa_start &
, MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, written_arr, ierr)
call MPI_Type_Commit(written_arr, ierr)
call MPI_File_Open(MPI_COMM_WORLD, "file.dat" &
& , MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fhandle, ierr)
call MPI_File_Set_View(fhandle, 0, MPI_DOUBLE_PRECISION, written_arr &
, "native", MPI_INFO_NULL, ierr)
call MPI_File_Write_All(fhandle, data, loc_dim, MPI_DOUBLE_PRECISION &
, MPI_INFO_NULL, ierr)
call MPI_File_Close(fhandle, ierr)
call MPI_Finalize(ierr)
end program test
Any help would be highly appreciated!
The last argument to MPI_FILE_WRITE_ALL before the error output argument is an MPI status object and not an MPI info object. Making the call with MPI_INFO_NULL is therefore erroneous. If you are not interested in the status of the write operation then you should pass MPI_STATUS_IGNORE instead. Making the call with MPI_INFO_NULL might work in some MPI implementations because of the specifics of how both constants are defined, but then fail in others.
For example, in Open MPI MPI_INFO_NULL is declared as:
parameter (MPI_INFO_NULL=0)
When passed instead of MPI_STATUS_IGNORE, it causes the C implementation of MPI_File_write_all to be called with the status argument pointing to a constant (read-only) memory location that holds the value of MPI_INFO_NULL (that is how Fortran implements passing constants by address). When the C function is about to finish, it tries to fill in the status object, which results in an attempt to write to the constant memory and ultimately leads to the segmentation fault.
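In other words, the write call should pass MPI_STATUS_IGNORE, for example:
call MPI_File_Write_All(fhandle, data, loc_dim, MPI_DOUBLE_PRECISION &
     , MPI_STATUS_IGNORE, ierr)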
When writing new Fortran programs it is advisable not to use the very old mpif.h interface, as it does not provide any error checking. Rather, one should use the mpi module, or even mpi_f08 once more MPI implementations become MPI-3.0 compliant. The beginning of your program should therefore look like:
program test
use mpi
implicit none
...
end program test
Once you use the mpi module instead of mpif.h, the compiler is able to perform parameter type checking for some MPI calls, including MPI_FILE_SET_VIEW, and spot an error:
test.f90(34): error #6285: There is no matching specific subroutine for this generic subroutine call. [MPI_FILE_SET_VIEW]
call MPI_File_Set_View(fhandle, 0, MPI_DOUBLE_PRECISION, written_arr &
-------^
compilation aborted for test.f90 (code 1)
The reason is that the second argument to MPI_FILE_SET_VIEW is of type INTEGER(KIND=MPI_OFFSET_KIND), which is 64-bit on most modern platforms. The constant 0 is simply of type INTEGER and is therefore 32-bit on most platforms. What happens is that with mpif.h the compiler passes a pointer to an INTEGER constant with a value of 0, but the subroutine interprets it as a pointer to a larger integer and treats the neighbouring memory as part of the constant value. Thus the zero that you pass as an offset into the file ends up being a non-zero value.
Replace the 0 in the MPI_FILE_SET_VIEW call with 0_MPI_OFFSET_KIND, or declare a constant of type INTEGER(KIND=MPI_OFFSET_KIND) with a value of zero and pass that:
call MPI_File_Set_View(fhandle, 0_MPI_OFFSET_KIND, MPI_DOUBLE_PRECISION, ...
or
integer(kind=MPI_OFFSET_KIND), parameter :: zero_off = 0
...
call MPI_File_Set_View(fhandle, zero_off, MPI_DOUBLE_PRECISION, ...
Both methods lead to an output file of size 3200 bytes (as expected).
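Putting the two fixes together (the mpi module, a 64-bit zero offset in the view, and MPI_STATUS_IGNORE in the write call), a corrected version of the example would look roughly like this; it still assumes the program is run on exactly nx*ny = 4 processes, as in the original:
program test
  use mpi
  implicit none
  integer :: myrank, nproc, fhandle, ierr
  integer :: xpos, ypos
  integer, parameter :: loc_x = 10, loc_y = 10
  integer :: loc_dim
  integer :: nx = 2, ny = 2
  real(8), dimension(loc_x, loc_y) :: data
  integer :: written_arr
  integer, dimension(2) :: wa_size, wa_subsize, wa_start
  integer(kind=MPI_OFFSET_KIND), parameter :: zero_off = 0

  call MPI_Init(ierr)
  call MPI_Comm_Rank(MPI_COMM_WORLD, myrank, ierr)
  call MPI_Comm_Size(MPI_COMM_WORLD, nproc, ierr)

  xpos = mod(myrank, nx)
  ypos = mod(myrank/nx, ny)
  data = myrank
  loc_dim = loc_x*loc_y

  wa_size    = (/ nx*loc_x, ny*loc_y /)
  wa_subsize = (/ loc_x, loc_y /)
  wa_start   = (/ xpos, ypos /)*wa_subsize

  call MPI_Type_Create_Subarray(2, wa_size, wa_subsize, wa_start, &
       MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, written_arr, ierr)
  call MPI_Type_Commit(written_arr, ierr)

  call MPI_File_Open(MPI_COMM_WORLD, "file.dat", &
       MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fhandle, ierr)

  ! 64-bit offset constant instead of the default-integer literal 0
  call MPI_File_Set_View(fhandle, zero_off, MPI_DOUBLE_PRECISION, written_arr, &
       "native", MPI_INFO_NULL, ierr)

  ! status argument is MPI_STATUS_IGNORE, not MPI_INFO_NULL
  call MPI_File_Write_All(fhandle, data, loc_dim, MPI_DOUBLE_PRECISION, &
       MPI_STATUS_IGNORE, ierr)

  call MPI_File_Close(fhandle, ierr)
  call MPI_Finalize(ierr)
end program test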