I have a problem, and I don't know what it is. I have a test program with MPI_INIT and MPI_FINALIZE in its body. I have a module that contains five subroutines: three of them depend on each other and are independent of the other two. I want to move the MPI code from the test program into this module. I put MPI_INIT in the module where the variables are declared, before the subroutines. I obtain a series of errors with the same error message:
This statement must not appear in the specification part of a module
How does "MPI_INIT and MPI_FINALIZE should be called only once" affect Fortran program, modules, and subroutines? Where should I put MPI functions and variables if there are multiple, independent programs, each calling this module's subroutines multiple number of times?
You need to call MPI subroutines in the subprogram part of the module, i.e. after the contains statement.
Generally I define an init_mpi subroutine that calls MPI_INIT and possibly also MPI_COMM_RANK and MPI_COMM_SIZE. You could also use MPI_INITIALIZED in this init_mpi subroutine to avoid initializing more than once.
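A minimal sketch of such a guard, assuming the standard mpi module (the subroutine and argument names here are only illustrative):

subroutine init_mpi(rank, nprocs)
  use mpi
  implicit none
  integer, intent(out) :: rank, nprocs
  integer :: ierr
  logical :: already_up
  ! MPI_INITIALIZED may be called at any time, even before MPI_INIT
  call MPI_Initialized(already_up, ierr)
  if (.not. already_up) call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
end subroutine init_mpi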
Related
I need an example of an MPI_Init_thread call in Fortran.
I tried it without MPI_Init before it and got the message: "MPI_Init_thread function was called before MPI_Init was invoked."
I called it after MPI_Init and got the message: "Calling MPI_Init or MPI_Init_thread twice is erroneous." although I had called each only once.
I am confused.
Also, are there three or four arguments, and what are they? I believe it is three, and the second one I want is MPI_THREAD_MULTIPLE, but the program aborts.
It would be best if someone who uses it with Fortran could post an example call.
For completeness, I am posting a simple code I am trying to run in hybrid (MPI + OpenMP) mode.
My purpose is to run this job on two nodes, each with 32 threads, by making use of OpenMP within each node.
program hello
Use mpi
integer ierr,np,pid,inull1,inul2,hug
call MPI_Init(ierr)
inull2=3
! call MPI_Init_thread(inull1,inull2,ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, pid,ierr)
call MPI_Comm_size(MPI_COMM_WORLD, np, ierr)
write(6,*) 'inull2,MPI_THREAD_MULTIPLE',MPI_THREAD_MULTIPLE
hug=huge(inull1)
!$OMP PARALLEL SHARED(inull1,inull2,hug,MPI_THREAD_MULTIPLE,np) &
!$OMP PRIVATE(pid)
!$OMP DO SCHEDULE (STATIC)
do i=1,hug
write(6,*) 'program hello_world i,np,pid,inull2,mtm',i,np,pid,inull2,MPI_THREAD_MULTIPLE
enddo
!$OMP END DO
!$OMP END PARALLEL
call MPI_Finalize(ierr)
end program hello
I compile it with
mpif90 -o hello_world_openmp.exe hello_world_openmp.f90
and run it with the following command (for now)
mpirun -hostfile hostfile -pernode -bind-to none hello_world_openmp.exe
Changing the code to
integer ierr,np,pid,inull1,inul2,hug
! call MPI_Init(ierr)
call MPI_Init_thread(MPI_THREAD_FUNNELLED,inull2,ierr)
also gives the following error:
*** The MPI_Init_thread() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
The error message you quote is strange, but it has been reported before (https://www.mail-archive.com/devel@lists.open-mpi.org/msg19978.html, https://github.com/TRIQS/cthyb/issues/122).
It points to some incompatibility somewhere; in your case, to an incorrect call to MPI_Init_thread. The main point is:
Always use IMPLICIT NONE!
This cannot be over-stressed. Even in a very short code you must use it. You declared inul2 instead of inull2, and you used MPI_THREAD_FUNNELLED instead of MPI_THREAD_FUNNELED. After correcting that, your code runs correctly for me.
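A minimal sketch of how implicit none catches exactly this kind of typo at compile time (the variable names are taken from the code above):

program typo_demo
  implicit none
  integer :: inul2   ! the name that was actually declared (note the typo)
  inul2 = 3
  ! inull2 = 3       ! uncommenting this fails: inull2 has no IMPLICIT type
end program typo_demo

Without implicit none, inull2 would silently become a separate, implicitly typed variable.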
One calls MPI_Init_thread instead of MPI_Init, not after it.
You call it like
call MPI_Init_thread(required, provided, ie)
where all arguments are integers; required is an input argument saying which threading level you need, and provided is an output argument saying which threading level you got.
Be aware that for many MPI implementations MPI_THREAD_MULTIPLE is either not supported at all or is very slow. I suggest designing your programs without the need for MPI_THREAD_MULTIPLE.
The usage can be something like
integer :: ie
integer :: required, provided
required = MPI_THREAD_SERIALIZED
call MPI_Init_thread(required, provided, ie)
if (ie /= 0) then
  ! ... handle the error from the call itself
end if
if (provided < required) then
  ! ... handle the different error: the requested level is unavailable
end if
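Putting it together, a minimal self-contained sketch (assuming the mpi module; MPI_THREAD_FUNNELED suffices when only the master thread makes MPI calls):

program init_thread_demo
  use mpi
  implicit none
  integer :: required, provided, ie, rank
  required = MPI_THREAD_FUNNELED
  call MPI_Init_thread(required, provided, ie)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ie)
  if (provided < required .and. rank == 0) then
    write(*,*) 'warning: requested threading level not available'
  end if
  !$omp parallel
  ! OpenMP work goes here; under FUNNELED only the master thread may call MPI
  !$omp end parallel
  call MPI_Finalize(ie)
end program init_thread_demo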
I have a large source code where subroutines in modules are defined outside the module ... end module statement (i.e. not after a contains statement). I've included a simplified module below:
module core
implicit none
type :: disc_status
sequence
real*8 :: alpha1, alpha2, alpha3
end type disc_status
end module core
subroutine tester(input)
use core
type(disc_status), intent(in) :: input
print *, input%alpha1, input%alpha2, input%alpha3
end subroutine tester
Here's an example program using the module and subroutine:
program flyingDiscSimulator
use core
implicit none
type(disc_status) :: disc
disc%alpha1 = 1.1D0
disc%alpha2 = 1.2D0
disc%alpha3 = 1.3D0
call tester(disc)
print *, 'it works'
end program flyingDiscSimulator
Normally, I end up seeing subroutines use the contains statement within a module:
module core
implicit none
type :: disc_status
sequence
real*8 :: alpha1, alpha2, alpha3
end type disc_status
contains
subroutine tester(input)
type(disc_status), intent(in) :: input
print *, input%alpha1, input%alpha2, input%alpha3
end subroutine tester
end module core
However, the program file referenced above doesn't require any changes to use either way of including the subroutine (with gfortran, anyway). Thus there appears to be no difference in the usage of the module or its subroutine between the two solutions. Are there any "under the hood" differences between the two styles?
The versions
module m
contains
subroutine s()
end subroutine s
end module m
and
module m
end module m
subroutine s()
end subroutine s
say completely different things, but in the case of the question the end results are much the same.
The first version here creates a module procedure s with host m; the second version has an external procedure s with no host.
Although the example of the question has the external procedure using the module, there is more generally a difference: the module procedure has access to all entities of the module (except when made inaccessible through an import statement or when obscured by local names); the external procedure using the module has access only to the public entities.
However, come the main program, the effects are different. The external subroutine and its name are global entities. Going to my second version, then
program main
call s
end program
is a valid program which calls the external subroutine s. This subroutine reference is valid because the implicit interface for s in the main program suffices. If the external subroutine s were such that an explicit interface were required, then the main program here would not be allowed. It is acceptable, but not required, to have an external s statement in the main program to reinforce to the reader that the subroutine is an external one (with implicit interface). (implicit none (external) would make external s necessary.)
The example of the question is such that an explicit interface is not required.
A module procedure always has an explicit interface available when accessible. However, a module procedure is not a global entity and its name is not a global identifier.
Under the hood, there are implementation differences (stemming from the above): most notably compilers will often "name mangle" module procedures.
In summary, there are differences between the two approaches of the question, but they won't be noticed by the programmer in this case.
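A small sketch of the visibility difference (the names are made up): a module procedure can be reached only through the module, while an external procedure's name is a global identifier.

module m
  implicit none
contains
  subroutine s()
    print *, 'module procedure s'
  end subroutine s
end module m

program main
  use m            ! without this, the module procedure s is not accessible
  implicit none
  call s
end program main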
The procedure inside the module will have an explicit interface (i.e. the compiler knows the characteristics of the arguments), whereas the procedure outside of the module will have an implicit interface (i.e. the compiler must make assumptions about the arguments).
Explicit interfaces help the compiler to find programming errors and are therefore favourable.
For a more in-depth discussion of the advantages see for example this answer.
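As an illustration of the compile-time checking an explicit interface buys (the module, subroutine, and argument names below are made up):

module checked
  implicit none
contains
  subroutine takes_real(x)
    real, intent(in) :: x
    print *, x
  end subroutine takes_real
end module checked

program demo
  use checked
  implicit none
  call takes_real(1.0)   ! accepted
  ! call takes_real(1)   ! rejected at compile time: integer where real expected
end program demo

Had takes_real been an external procedure with an implicit interface, the bad call would compile and misbehave at run time.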
I am not even sure it is valid Fortran to call tester inside the program when it is defined outside of core.
The line use core should just make the module's public objects known, and the tester procedure should need its own external tester line.
I have an existing Fortran code using MPI for parallel work.
I am interested in adding some of the PETSc solvers (KSP specifically), however when including the relevant .h or .h90 files (petsc, petscsys, petscksp, etc.) I get a problem with variables that share the same names as the MPI ones.
i.e.:
error #6405: The same named entity from different modules and/or program units cannot be referenced. [MPI_DOUBLE_PRECISION]
error #6405: The same named entity from different modules and/or program units cannot be referenced. [MPI_SUM]
error #6405: The same named entity from different modules and/or program units cannot be referenced. [MPI_COMM_WORLD]
and so on.
(using ics/composer_xe_2011_sp1.6.233 and ics/impi/4.0.3.008 and petsc 3.6.0, also tried older petsc version 3.5.4)
All of these are defined equally in both MPI and PETSc - is there a way to resolve this conflict and use both?
I'll point out that I DO NOT WANT to replace MPI calls with PETSc calls, as the code should have an option to run independent of PETSc.
As for minimal code: cleaning up the huge code is apparently an issue, so I've made the following simple example, which includes the relevant parts:
program mpitest
use mpi
implicit none
! Try any of the following:
!!!#include "petsc.h"
!!!#include "petsc.h90"
!!!#include "petscsys.h"
! etc'
integer :: ierr, error
integer :: ni=256, nj=192, nk=256
integer :: i,j,k
real, allocatable :: phi(:,:,:)
integer :: mp_rank, mp_size
real :: sum_phi,max_div,max_div_final,sum_div_final,sum_div
call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world,mp_rank,ierr)
call mpi_comm_size(mpi_comm_world,mp_size,ierr)
allocate(phi(nj,nk,0:ni/mp_size+1))
sum_phi = 0.0
do i=1,ni/mp_size
do k=1,nk
do j=1,nj
sum_phi = sum_phi + phi(j,k,i)
enddo
enddo
enddo
sum_phi = sum_phi / real(ni/mp_size*nk*nj)
call mpi_allreduce(sum_div,sum_div_final,1,mpi_double_precision,mpi_sum, &
mpi_comm_world,ierr)
call mpi_allreduce(max_div,max_div_final,1,mpi_double_precision,mpi_max, &
mpi_comm_world,ierr)
call mpi_finalize(error)
deallocate(phi)
WRITE(*,*) 'Done'
end program mpitest
This happens directly when PETSc headers are included and vanishes when the include is removed.
Alright, so the answer has been found:
PETSc does not favor Fortran much and therefore does not function the same way as it does with C/C++; it uses different definitions.
For C/C++ one would use the headers in /include/petscXXX.h and everything will be fine, moreover the hierarchical structure already includes dependent .h files (i.e. including petscksp.h will include petscsys.h, petscvec.h and so on).
NOT IN FORTRAN.
First and foremost, for Fortran one needs to include the headers in /include/petsc/finclude/petscXXXdef.h (or .h90 if PETSc is compiled with that flag). Note that these files are located in a different include folder and are named petscxxxdef.h.
Then 'use petscXXX' will work along with MPI without conflicting.
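Following that recipe, a hedged sketch of mixing both (based on the PETSc 3.6-era Fortran layout; the paths and module names are assumptions, and the file must go through the preprocessor, e.g. be compiled as .F90):

#include "petsc/finclude/petscsysdef.h"
#include "petsc/finclude/petsckspdef.h"
program petsc_mpi_mix
  use petscksp
  implicit none
  integer :: ierr, rank
  call PetscInitialize(PETSC_NULL_CHARACTER, ierr)
  ! plain MPI calls keep working: PETSc sits on top of the same MPI library
  call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
  call PetscFinalize(ierr)
end program petsc_mpi_mix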
I have a program like this
program
  call one()
contains
  subroutine one()
    ! some vars
  contains
    subroutine two()
      ! use the vars of one
      ! define its own vars
    contains
      subroutine three()
        ! use the vars of both one and two
This doesn't compile, because Fortran allows only one level of nesting: an internal subroutine cannot have its own contains part.
I used this design to avoid passing and retyping all the variables of one() into two() and three().
How can I rewrite the program so that the variable sharing is achieved?
I cannot define all the variables before the call one() statement.
The code would be too hard to manage; I need the property that a parent subroutine cannot access an internal subroutine's variables.
Maybe a solution is to use a procedure pointer,
call one(pointer_to_two)
and then define the routine two() in its own module.
But I find this design complicated with my limited Fortran skills, and I'm worried it will impact performance.
Should I use a common block?
I use the latest dialect of Fortran with the Intel compiler.
Below is a 'compilable' example.
program nested_routines
  call one()
contains
  subroutine one()
    integer :: var_from_one = 10
    print *, 1
    call two()
  contains
    subroutine two()
      integer :: var_from_two = 20
      print *, 2, var_from_one
      call three()
    contains
      subroutine three()
        print *, 3, var_from_one, var_from_two
      end subroutine
    end subroutine
  end subroutine
end program nested_routines
I recommend placing your procedures (subroutines and functions) into a module after a single contains and using that module from your main program. The local variables of each procedure will be hidden from their callers. The way this differs from your goals is that you have to redeclare variables. I dislike the inheritance of all variables in a subroutine contained in another, because it makes it possible to reuse a variable by mistake. If you have a few variables that are shared across many procedures, perhaps the appropriate design choice is to make them global. With Fortran >= 90, a module variable is a better way to hold a global variable than a common block. If you have variables that are communicated between a limited number of procedures, it is generally better to use arguments, because that makes the information flow clearer.
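A minimal sketch of that recommendation (the module, subroutine, and variable names are made up):

module shared_state
  implicit none
  integer :: shared_counter = 0      ! module variable: the controlled "global"
contains
  subroutine one()
    integer :: local_one             ! hidden from callers and from two()
    local_one = 10
    shared_counter = shared_counter + 1
    call two(local_one)              ! explicit argument keeps the flow clear
  end subroutine one
  subroutine two(x)
    integer, intent(in) :: x
    print *, x, shared_counter
  end subroutine two
end module shared_state

program driver
  use shared_state
  implicit none
  call one()
end program driver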
This might be possible if there is a separate module for each function's specific variables and a separate module for each function implementation.
Watch out for the order of module compilation, which must follow the use hierarchy.
Also, I have no idea about the performance effect of doing this.
module m1_var
implicit none
contains
end module m1_var
!========================================================================================================================
module m2_var
implicit none
contains
end module m2_var
!========================================================================================================================
module m3_var
implicit none
contains
end module m3_var
!========================================================================================================================
module m3_sub
implicit none
contains
subroutine s3()
use m2_var !to see variables
!arguments
!local
!allocation
!implementation
end subroutine s3
end module m3_sub
!========================================================================================================================
module m2_sub
implicit none
contains
subroutine s2()
use m3_sub
use m1_var
!arguments
!local
!allocation
!implementation
call s3
end subroutine s2
end module m2_sub
!========================================================================================================================
module m1_sub
use m1_var
implicit none
contains
subroutine s1()
use m2_sub
!arguments
!local
!allocation
!implementation
! call s2
end subroutine s1
end module m1_sub
!========================================================================================================================
program nestmoduse
use m1_sub
implicit none
call s1
end program nestmoduse
Updated: I have a problem, and I don't know what it is. I have a test program with MPI_INIT and MPI_FINALIZE in its body. I have a module that contains five subroutines: three of them depend on each other and are independent of the other two. I want to move the MPI code from the test program into this module. I put MPI_INIT in the module where the variables are declared, before the subroutines. I obtain a series of errors with the same error message:
This statement must not appear in the specification part of a module
How does "MPI_INIT and MPI_FINALIZE should be called only once" affect Fortran program, modules, and subroutines? Where should I put MPI functions and variables if there are multiple, independent programs, each calling this module's subroutines multiple number of times?
~~~~~~~~~
I have a module that contains a series of subroutines, which contain do loops that I wish to parallelize. The subroutines are public and are used by other programs. Should I set up MPI outside the subroutines:
module ...
  call MPI_INIT
  subroutine 1
    ... (MPI code)
  subroutine 2
  subroutine 3
    MPI_GATHERV
  call MPI_FINALIZE
end module
or inside each subroutine?
module ...
  subroutine 1
    call MPI_INIT
    ... (MPI code)
    MPI_GATHERV
    call MPI_FINALIZE
  subroutine 2
    call MPI_INIT
    ... (MPI code)
    MPI_GATHERV
    call MPI_FINALIZE
  subroutine 3
    call MPI_INIT
    ... (MPI code)
    MPI_GATHERV
    call MPI_FINALIZE
end module
I see the advantage of following the coarse-grain principle with solution 1. If a program calls subroutine 1, will it also execute the MPI code outside the subroutine?
You should initialize and finalize MPI exactly once in your program. After calling MPI_Finalize you are not allowed to do further MPI actions. The standard says:
Once MPI_FINALIZE returns, no MPI routine (not even MPI_INIT) may be called, except for MPI_GET_VERSION, MPI_GET_LIBRARY_VERSION, MPI_INITIALIZED, MPI_FINALIZED, and any function with the prefix MPI_T_ (within the constraints for functions with this prefix listed in Section 14.3.4).
(MPI3, p361, l25) MPI3 PDF
Reply to the update:
You are not allowed to put executable statements into the declaration part of your code. The point that there should be just one call to MPI_Init and MPI_Finalize in your execution means exactly that. Your application could read something like this:
program mini
  use mpi
  implicit none
  integer :: iError
  call mpi_init(iError)
  call do_some_stuff()
  call mpi_finalize(iError)
end program mini
If there is various initialization work you want to do at the beginning of the program, you can of course combine it in a module subroutine and call mpi_init in there. If you use a test program for your module, call mpi_init and mpi_finalize there.
An example of calling mpi_init and mpi_finalize in subroutines can be found in the env_module of the treelm library, which we use to set up very general stuff.
Where should I put MPI functions and variables if there are multiple, independent programs, each calling this module's subroutines multiple times?
Could you rephrase that? I don't get it. MPI functions and variables are supposed to be in the mpi module; if you have multiple independent programs calling them, they all have to use the mpi module. It is also fine for each independent program to have its own MPI_Init and MPI_Finalize. Maybe you could post a short code example of what you want to achieve and what your problem is.