Fortran error: rank mismatch in spread()

Here is a simple program that tests the spread intrinsic function:
program test_spread
   implicit none
   integer, allocatable, dimension(:) :: a
   integer, allocatable, dimension(:,:) :: b
   integer, allocatable, dimension(:,:,:) :: c
   integer, allocatable, dimension(:,:,:) :: d
   allocate(a(2), b(2,4), c(2,4,2), d(2,4,2))
   a(:) = [1, 2]
   print*, "a=", a
   b(:,:) = spread((a + 1)**2, 2, size(b,2))
   print*, "b=", b
   c(:,:,:) = spread(b, 3, size(c,3))
   print*, "c=", c
   d(:,:,:) = spread(spread((a + 1)**2, 2, size(d,2)), 3, size(d,3))
   print*, "d=", d
end program
It is compiled with gfortran 8.1.1:
gfortran -g -fcheck=all -Wall -Wtabs -fbacktrace -c test.f90
gfortran -g -fcheck=all -Wall -Wtabs -fbacktrace -o test_spread test.o
I get the following result:
a= 1 2
b= 4 9 4 9 4 9 4 9
c= 4 9 4 9 4 9 4 9 4 9 4 9 4 9 4 9
Fortran runtime error: rank mismatch in spread()
Error termination. Backtrace:
#0 0x55b46b14386e in test_spread
at /*****/test_spread/test.f90:20
#1 0x55b46b143966 in main
at /*****/test_spread/test.f90:22
If I remove the allocatable attribute, the code compiles and gives the correct result. Am I doing something wrong, or is this a compiler bug?
The code compiles and gives the correct result with Intel Fortran 18.
PS: I am using GCC on Arch Linux on an Intel(R) Xeon(R) Silver 4114 CPU.
$ gfortran --version
GNU Fortran (GCC) 8.1.1 20180531
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The libgfortran ChangeLog entry below appears to be the corresponding fix for this bug:

2018-06-01 Steven G. Kargl
PR fortran/85816
PR fortran/85975
* libgfortran.h: Remove the GFC_DTYPE_COPY_SETRANK macro.
* intrinsics/reshape_generic.c: Directly assign rank.
* intrinsics/spread_generic.c: Ditto.
...
* m4/spread.m4: Ditto.
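
Until a compiler containing that fix is available, a minimal sketch of the workaround mentioned in the question (dropping the allocatable attribute in favour of fixed-shape arrays) could look like this; as noted above, this variant compiles and runs correctly with gfortran 8.1.1:

program test_spread_static
   implicit none
   ! Fixed-shape arrays instead of allocatables; the spread expressions are
   ! the same as in the original program
   integer :: a(2), b(2,4), c(2,4,2), d(2,4,2)
   a = [1, 2]
   b = spread((a + 1)**2, 2, size(b,2))
   c = spread(b, 3, size(c,3))
   d = spread(spread((a + 1)**2, 2, size(d,2)), 3, size(d,3))
   print *, "d=", d
end program test_spread_static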

Any way to cycle between different messages with MPI_PROBE

I use MPI_Probe to determine the size of a dynamic array, which I just pass as the tag. But I send two arrays. The basic structure is:
call MPI_Isend(message, destination, MPI_INT, size, COMM, status, error)
call MPI_Isend(message2, destination, MPI_DOUBLE, size*3, COMM, status, error)
...
call MPI_Probe(sender, MPI_ANY_TAG, COMM, status, error)
size1 = status(MPI_TAG)
call MPI_Probe(sender, MPI_ANY_TAG, COMM, status, error)
size2 = status(MPI_TAG)
actual_size = MIN(size1, size2)
call MPI_Recv(size)
call MPI_Recv(size*3)
So this doesn't work, because MPI_Probe will just get the same value twice. Any idea how to cycle through the different probes or something similar?
If this can't work, I plan to change my code around to just have send - recv - send - recv instead of send - send - recv - recv. Just checking whether someone has a better solution.
As stated in the comments, you shouldn't use the tag to send size data from one process to another, as the value the tag can take has an upper bound of MPI_TAG_UB, which in theory could be quite small, potentially not allowing you to communicate a large enough integer. In fact it is bad practice to use the tag to transmit information at all; you should use the message data, as that is what it is for, after all. Victor Eijkhout has the right way: you should inquire of the status argument, using MPI_GET_COUNT, how many things are being transmitted, and use that to allocate the dynamic array. Here is an example, which also uses the correct handles for the datatypes; you are using the C variants rather than the Fortran ones:
ijb#ijb-Latitude-5410:~/work/stack$ cat probe.f90
Program probe

  Use, Intrinsic :: iso_fortran_env, Only : stdout => output_unit
  Use mpi_f08, Only : mpi_status, mpi_comm_world, mpi_integer, &
       mpi_init, mpi_finalize, mpi_comm_size, mpi_comm_rank, &
       mpi_send, mpi_probe, mpi_get_count, mpi_recv

  Implicit None

  Type( mpi_status ) :: status

  Real :: rnd

  Integer, Dimension( : ), Allocatable :: stuff

  Integer :: nprc, rank
  Integer :: n
  Integer :: error
  Integer :: i

  Call mpi_init( error )
  Call mpi_comm_size( mpi_comm_world, nprc, error )
  Call mpi_comm_rank( mpi_comm_world, rank, error )

  If( rank == 0 ) Then
     Write( stdout, * ) 'Running on ', nprc, ' procs'
  End If

  If( rank == 0 ) Then
     ! On rank zero generate a random sized array
     Call Random_number( rnd )
     n = Int( 10.0 * rnd + 1 )
     Write( stdout, * ) 'Allocating ', n, ' elements on rank 0'
     Allocate( stuff( 1:n ) )
     stuff = [ ( i, i = 1, n ) ]
     Write( stdout, '( "Data on proc 0: ", *( i0, 1x ) )' ) stuff
     Call mpi_send( stuff, Size( stuff ), mpi_integer, 1, 10, &
          mpi_comm_world, error )
  Else If( rank == 1 ) Then
     ! On rank 1 probe the message to get the status
     Call mpi_probe( 0, 10, mpi_comm_world, status, error )
     ! From the status find how many things are being sent
     Call mpi_get_count( status, mpi_integer, n, error )
     ! Use that to allocate the array
     Allocate( stuff( 1:n ) )
     ! And recv the data
     Call mpi_recv( stuff, Size( stuff ), mpi_integer, 0, 10, &
          mpi_comm_world, status, error )
     Write( stdout, * ) 'Recvd ', n, ' integers on proc 1'
     Write( stdout, '( "Data on proc 1: ", *( i0, 1x ) )' ) stuff
  Else
     Write( stdout, * ) 'Busy doing nothing ... ', rank
  End If

  Write( stdout, * ) 'done', rank

  Call mpi_finalize( error )

End Program probe
ijb#ijb-Latitude-5410:~/work/stack$ mpif90 --version
GNU Fortran (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
ijb#ijb-Latitude-5410:~/work/stack$ mpif90 -std=f2018 -Wall -Wextra -pedantic -fcheck=all -fbacktrace -Wuse-without-only -Werror -g probe.f90
ijb#ijb-Latitude-5410:~/work/stack$ mpirun -np 2 ./a.out
Running on 2 procs
Allocating 8 elements on rank 0
Data on proc 0: 1 2 3 4 5 6 7 8
done 0
Recvd 8 integers on proc 1
Data on proc 1: 1 2 3 4 5 6 7 8
done 1
ijb#ijb-Latitude-5410:~/work/stack$ mpirun -np 2 ./a.out
Running on 2 procs
Allocating 5 elements on rank 0
Data on proc 0: 1 2 3 4 5
done 0
Recvd 5 integers on proc 1
Data on proc 1: 1 2 3 4 5
done 1
ijb#ijb-Latitude-5410:~/work/stack$ mpirun -np 2 ./a.out
Running on 2 procs
Allocating 2 elements on rank 0
Data on proc 0: 1 2
done 0
Recvd 2 integers on proc 1
Data on proc 1: 1 2
done 1
ijb#ijb-Latitude-5410:~/work/stack$ mpirun -np 2 ./a.out
Recvd 1 integers on proc 1
Data on proc 1: 1
done 1
Running on 2 procs
Allocating 1 elements on rank 0
Data on proc 0: 1
done 0
ijb#ijb-Latitude-5410:~/work/stack$ mpirun -np 2 ./a.out
Running on 2 procs
Allocating 3 elements on rank 0
Data on proc 0: 1 2 3
done 0
Recvd 3 integers on proc 1
Data on proc 1: 1 2 3
done 1
ijb#ijb-Latitude-5410:~/work/stack$
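
As an aside, if you do ever need to know how large a tag can legally be with a given MPI implementation, the MPI_TAG_UB bound mentioned above can be queried as a predefined attribute of a communicator. A minimal sketch of my own, again using the mpi_f08 interface:

Program tag_ub_query
  Use mpi_f08
  Implicit None
  Integer( mpi_address_kind ) :: ub
  Logical :: flag
  Integer :: error
  Call mpi_init( error )
  ! MPI_TAG_UB is a predefined attribute cached on every communicator
  Call mpi_comm_get_attr( mpi_comm_world, mpi_tag_ub, ub, flag, error )
  If( flag ) Write( *, * ) 'Largest allowed tag value: ', ub
  Call mpi_finalize( error )
End Program tag_ub_query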

How to print in a single row in a loop?

Suppose I have a Fortran program which includes the following loop:
do i=1, 10
   print *, i
enddo
The output of this will be like
1
2
...
10
How can I write these values to a single line, like in the following?
1 2 ... 10
There are a number of ways; two that come to mind immediately are shown in the following little program:
$ cat loop.f90
Program loop

  Implicit None

  Integer :: i

  Write( *, * ) 'First way - non-advancing I/O'
  Do i = 1, 10
     Write( *, '( i0, 1x )', Advance = 'No' ) i
  End Do
  Write( *, * ) ! Finish record

  Write( *, * ) 'Second way - implied do loop'
  Write( *, * ) ( i, i = 1, 10 )

End Program loop
$ gfortran -std=f2003 -Wall -Wextra -fcheck=all loop.f90
$ ./a.out
First way - non-advancing I/O
1 2 3 4 5 6 7 8 9 10
Second way - implied do loop
1 2 3 4 5 6 7 8 9 10
$
The first method, non-advancing I/O, suppresses the writing of the end-of-record marker (which is normally a new line), but does require an explicit format. The second, the implied do loop, doesn't require a format, but is less flexible.
BTW, in English they are normally called "loops".
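
For what it's worth, the two can also be combined: an implied do loop with an explicit format gives the same compact spacing as the non-advancing version. A small sketch of my own (the unlimited repeat count *( ... ) is a Fortran 2008 feature):

Program loop_fmt
  Implicit None
  Integer :: i
  ! Implied do loop with an explicit format; *( i0, 1x ) repeats the edit
  ! descriptors for as many items as the output list supplies (F2008)
  Write( *, '( *( i0, 1x ) )' ) ( i, i = 1, 10 )
End Program loop_fmt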

Do loop is stuck at the first subarray in MPI SCATTER and GATHER

I have two arrays: array global has 8 values, and it will be scattered among the local arrays of 2 values each. What I was trying to do is take the big array, split it into small arrays, do some work, then put it back together.
Problem:
Even though I successfully scattered the data, the do loop as written only appears to work on the first subarray local. What I want is that all of the integers in the scattered local arrays are multiplied by 2 and then gathered back into the global array.
Code for the do loop (some work has been done here):
do j = 1,2
   local(j) = j*2
   print *, j
end do
Here's the full code. If you go down below you'll notice the part where I need your help.
MODULE MPI
  IMPLICIT NONE
  INCLUDE 'mpif.h'
  INTEGER :: MYID, TOTPS, IERR, MPISTTS
CONTAINS
  SUBROUTINE MPIINIT
    IMPLICIT NONE
    CALL MPI_INIT( IERR )
    CALL MPI_COMM_RANK(MPI_COMM_WORLD,MYID,IERR)
    CALL MPI_COMM_SIZE(MPI_COMM_WORLD,TOTPS,IERR)
    RETURN
  END SUBROUTINE MPIINIT
END MODULE MPI

PROGRAM SCATTER
  USE MPI
  IMPLICIT NONE
  CALL MPIINIT
  CALL TEST
  CALL MPI_FINALIZE(IERR)
CONTAINS
  SUBROUTINE TEST
    USE MPI
    IMPLICIT NONE
    INTEGER :: I, J
    INTEGER, DIMENSION(8) :: GLOBAL
    INTEGER, DIMENSION(2) :: LOCAL

    if (myid .eq. 0) then
       do i = 1,8
          global(i) = i
       end do
    end if

    call mpi_scatter(global,2,mpi_integer,local,2,mpi_integer,0, &
         mpi_comm_world,ierr)

    print*,"task",myid,":",local

    call mpi_barrier(mpi_comm_world,ierr)

    !!!!!!! do some work here
    do j = 1,2
       local(j) = j*2
       print*,j
    end do
    !!!!!! end work

    call mpi_gather(local,2,mpi_integer,global,2,mpi_integer,0, &
         mpi_comm_world,ierr)

    if(myid .eq. 0) then
       print*,"task",myid,":",global
    end if
  END SUBROUTINE TEST
END PROGRAM SCATTER
Notes:
(1) I've been reading & learning from this thread, but it looks challenging for now.
(2) Build and run the code with mpif90 SCATTER.f90 followed by mpirun -np 4 ./a.out.
Output:
task 0 : 1 2
task 1 : 3 4
task 2 : 5 6
task 3 : 7 8
1
2
1
2
1
2
1
2
task 0 : 2 4 2 4 2 4 2 4
What I want to get is: task 0 : 2 4 6 8 10 12 14 16
You wrote
local(j) = j * 2
print*, j
I don't think that does what you think it does.
You probably meant to write
local(j) = local(j) * 2
print*, local(j)
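
To make that concrete, here is a hedged sketch of the whole thing with that single change applied; it is rewritten to use the mpi module directly rather than the hand-rolled wrapper around mpif.h, but the structure follows the question:

program scatter_fixed
   use mpi
   implicit none
   integer :: myid, totps, ierr, i, j
   integer, dimension(8) :: global
   integer, dimension(2) :: local

   call mpi_init(ierr)
   call mpi_comm_rank(mpi_comm_world, myid, ierr)
   call mpi_comm_size(mpi_comm_world, totps, ierr)

   ! Root fills the global array with 1..8
   if (myid == 0) global = [ (i, i = 1, 8) ]

   ! Hand each rank a 2-element chunk
   call mpi_scatter(global, 2, mpi_integer, local, 2, mpi_integer, 0, &
        mpi_comm_world, ierr)

   ! Work: double the values this rank actually received
   do j = 1, 2
      local(j) = local(j) * 2
   end do

   ! Collect the chunks back on the root
   call mpi_gather(local, 2, mpi_integer, global, 2, mpi_integer, 0, &
        mpi_comm_world, ierr)

   ! With mpirun -np 4 this should print: task 0 : 2 4 6 8 10 12 14 16
   if (myid == 0) print *, "task", myid, ":", global

   call mpi_finalize(ierr)
end program scatter_fixed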

Why are these two MPI-IO programs not working the same way?

I am learning MPI-IO and following a tutorial (PDF download here).
For one exercise, the correct code is:
Program MPI_IOTEST
  Use MPI
  Implicit None

  Integer :: wsize, wrank
  Integer :: ierror
  Integer :: fh, offset

  Call MPI_Init(ierror)
  Call MPI_Comm_rank(MPI_COMM_WORLD,wrank,ierror)
  Call MPI_Comm_size(MPI_COMM_WORLD,wsize,ierror)

  offset=4*wrank; ! because 4 bytes is one signed int

  ! --- open the MPI files using a collective call
  Call MPI_File_Open(MPI_COMM_WORLD,'test.dat',MPI_MODE_RDWR+MPI_MODE_CREATE, &
       MPI_INFO_NULL,fh,ierror);
  Write(*,*)'rank',wrank
  Call MPI_FILE_WRITE_AT(fh, offset, wrank,1,MPI_INTEGER,mpi_status_ignore,ierror);
  Call MPI_File_close(fh,ierror)
  Call MPI_Finalize(ierror)
End Program MPI_IOTEST
Then you just build it and run it as 24 MPI tasks.
Then for validation, simply do
od -i test.dat
You will get exactly the same result as in the tutorial, which is given below.
0000000 0 1 2 3
0000020 4 5 6 7
0000040 8 9 10 11
0000060 12 13 14 15
0000100 16 17 18 19
0000120 20 21 22 23
0000140
But if I change 1 to num:
Call MPI_FILE_WRITE_AT(fh, offset, wrank,1,MPI_INTEGER,mpi_status_ignore,ierror);
into
Call MPI_FILE_WRITE_AT(fh, offset, wrank,num,MPI_INTEGER,mpi_status_ignore,ierror);
and before that define
integer :: num
num=1
After rm test.dat, re-building and re-running it, you will get:
0000000 0 0 0 0
*
Your error is not actually in the specification or use of num but rather in the specification of offset.
If you read the man page of MPI_File_write_at, you will see that you have to specify the offset with kind MPI_OFFSET_KIND.
So if you change your program to use
integer(kind=MPI_OFFSET_KIND) :: offset
it works fine.
Did you not notice the size of the test.dat file generated?
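
For reference, a sketch of the question's program with just that declaration changed, and the count passed through num as in the second variant; everything else is as in the question:

Program MPI_IOTEST_fixed
  Use MPI
  Implicit None
  Integer :: wsize, wrank, ierror, fh, num
  Integer(kind=MPI_OFFSET_KIND) :: offset   ! the offset argument must have MPI_Offset kind

  Call MPI_Init(ierror)
  Call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierror)
  Call MPI_Comm_size(MPI_COMM_WORLD, wsize, ierror)

  num = 1
  offset = 4*wrank   ! 4 bytes per default integer; converted to the offset kind on assignment

  Call MPI_File_open(MPI_COMM_WORLD, 'test.dat', MPI_MODE_RDWR+MPI_MODE_CREATE, &
       MPI_INFO_NULL, fh, ierror)
  Call MPI_File_write_at(fh, offset, wrank, num, MPI_INTEGER, MPI_STATUS_IGNORE, ierror)
  Call MPI_File_close(fh, ierror)
  Call MPI_Finalize(ierror)
End Program MPI_IOTEST_fixed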

Fortran matrix has dimensions 0,0 even though it is initialized with dimensions 5,5

What is the cause of this?
recursive subroutine svd_jacobi(A, m, n, U, S, V)

  implicit none

  real(8) :: A(m,n)
  real(8) :: B(n,n)
  real(8) :: U(m,m)
  real(8) :: S(m,n)
(gdb) p n
$25 = 5
(gdb) whatis B
type = real(kind=8) (0,0)
(gdb) whatis n
type = integer(kind=8)
(gdb)
Most likely your version of gdb doesn't fully understand the Fortran array descriptor. There are some improvements in the gdb Archer branch which, AFAIK, haven't yet been merged into trunk.