NetCDF files created with Fortran

I use Fortran to create NetCDF files. I have this problem: I have no choice but to use a loop to define some of my variables (and assign their attribute values). Then, when I want to provide the values of the variables (i.e. with nf90_put_var), it only remembers the last variable that was defined... I have tried many things to resolve the problem but have not succeeded. Could someone help me?
Here is a small part of my script:
DO IP=1,N(PTS)
  Param_name='var1'
  params(I,IPTS)=INT(I,IPTS,IP)
  ! Define NetCDF variable
  IERREU = nf90_def_var(ncid, Param_name, nf90_real, dimid, ParVarID)
  IF (IERREU.NE.0) THEN
    CALL check_err (IERREU)
    STOP
  ENDIF
ENDDO
! End define mode
IERREU = nf90_enddef(ncid)
IF (IERREU.NE.0) THEN
  CALL check_err (IERREU)
  STOP
ENDIF
! Write the data in netcdf
IERREU = nf90_put_var(ncid, ParVarID, params)
IF (IERREU.NE.0) THEN
  CALL check_err (IERREU)
  STOP
ENDIF

You must store the ParVarID for each variable separately, perhaps in an array. At the moment you overwrite it with each call to nf90_def_var.
integer ParVarIds(N(PTS))
DO IP=1,N(PTS)
  ...
  IERREU = nf90_def_var(ncid, Param_name, nf90_real, dimid, ParVarIds(IP))
  ...
ENDDO
DO IP=1,N(PTS)
  ...
  IERREU = nf90_put_var(ncid, ParVarIds(IP), something)
  ...
ENDDO
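For completeness, here is a minimal self-contained sketch of that pattern (the file name, dimension size, and variable names 'var1', 'var2', ... are made up for illustration, not taken from the original code):
! Hedged sketch (not the original program): define several variables in a
! loop, keeping one varid per variable, then write each one after nf90_enddef.
program def_many_vars
  use netcdf
  implicit none
  integer, parameter :: npts = 5, nvals = 10   ! made-up sizes
  integer :: ncid, dimid, ierr, ip
  integer :: varids(npts)                      ! one id per variable
  real    :: vals(nvals)
  character(len=8) :: name

  ierr = nf90_create('many_vars.nc', NF90_CLOBBER, ncid)
  ierr = nf90_def_dim(ncid, 'n', nvals, dimid)

  ! Define one variable per point, saving each id separately
  do ip = 1, npts
    write(name,'("var",I0)') ip
    ierr = nf90_def_var(ncid, trim(name), NF90_REAL, dimid, varids(ip))
  end do
  ierr = nf90_enddef(ncid)

  ! Write each variable using its own id
  do ip = 1, npts
    vals = real(ip)
    ierr = nf90_put_var(ncid, varids(ip), vals)
  end do
  ierr = nf90_close(ncid)
end program def_many_vars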

MPI_Bcast non-root nodes not receiving all data

So, I'm currently writing a small-scale code that does output using CGNS and adaptive mesh refinement (AMR) with AMReX. This is all being done with Fortran 95, though CGNS is C with Fortran interfaces and AMReX is C++ with Fortran interfaces (those are not in the sample code). I'm using OpenMPI 1.10.7.
This will eventually go into a full CFD code, but I wanted to test it at small scale to work the bugs out before putting it in the larger code. The program below seems to work every time, but it was originally a subroutine that did not.
I'm having an issue where not all of the data from MPI_Bcast is being received by every process... sometimes. I can hit execute on the same code twice in a row, and sometimes it bombs out (segfault from CGNS elsewhere in the code) and sometimes it works. As far as I can tell, the program bombs when not all of the data from MPI_Bcast is received in time to start work elsewhere. Despite MPI_Wait and MPI_Barrier, the writes at the bottom of the subroutine spit out junk on lvl=1 for the last six indices of all the arrays. Printing information to the screen seems to help, but more processors seem to lower the likelihood of the code working.
I currently have it as MPI_Ibcast with an MPI_Wait, but I've also tried MPI_Bcast with MPI_Barriers after. Changing the communicator from one defined by AMReX to MPI_COMM_WORLD doesn't help.
...
program so_bcast
!
!
!
!
  use mpi
  implicit none
  integer :: lvl,i,a,b,c,ier,state(MPI_STATUS_SIZE),d
  integer :: n_elems,req,counter,tag,flavor
  integer :: stat(MPI_STATUS_SIZE)
  integer :: self,nprocs
  type :: box_zones
    integer,allocatable :: lower(:,:),higher(:,:),little_zones(:)
    double precision,allocatable :: lo_corner(:,:),hi_corner(:,:)
    integer :: big_zones
    integer,allocatable :: zone_start(:),zone_end(:)
  end type
  type(box_zones),allocatable :: zone_storage(:)
  call MPI_INIT(ier)
  call MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ier)
  call MPI_COMM_RANK(MPI_COMM_WORLD,self,ier)
  lvl = 1
  ! Allocate everything, this is done elsewhere in the actual code, but done here
  ! for simplification reasons
  allocate(zone_storage(0:lvl))
  zone_storage(0)%big_zones = 4
  zone_storage(1)%big_zones = 20
  do i = 0,lvl
    allocate(zone_storage(i)%lower(3,zone_storage(i)%big_zones))
    allocate(zone_storage(i)%higher(3,zone_storage(i)%big_zones))
    allocate(zone_storage(i)%lo_corner(3,zone_storage(i)%big_zones))
    allocate(zone_storage(i)%hi_corner(3,zone_storage(i)%big_zones))
    zone_storage(i)%lower = self
    zone_storage(i)%higher = self*2+1
    zone_storage(i)%lo_corner = self*1.0D0
    zone_storage(i)%hi_corner = self*1.0D0+1.0D0
    allocate(zone_storage(i)%zone_start(0:nprocs-1))
    allocate(zone_storage(i)%zone_end(0:nprocs-1))
    zone_storage(i)%zone_start(self) = zone_storage(i)%big_zones/nprocs*self+1
    zone_storage(i)%zone_end(self) = zone_storage(i)%zone_start(self)+zone_storage(i)%big_zones/nprocs-1
    if (zone_storage(i)%zone_end(self)>zone_storage(i)%big_zones) zone_storage(i)%zone_end(self) = zone_storage(i)%big_zones
  end do
  do i = 0,lvl
    write(*,*) 'lower check 0',self,'lower',zone_storage(i)%lower
    write(*,*) 'higher check 0',self,'high',zone_storage(i)%higher
    write(*,*) 'lo_corner check 0',self,'lo_corner',zone_storage(i)%lo_corner
    write(*,*) 'hi_corner check 0',self,'hi_corner',zone_storage(i)%hi_corner
    write(*,*) 'big_zones check 0',self,'big_zones',zone_storage(i)%big_zones
    write(*,*) 'zone start/end 0',self,'lvl',i,zone_storage(i)%zone_start,zone_storage(i)%zone_end
  end do
  !
  ! Agglomerate the appropriate data to processor 0 using non-blocking receives
  ! and blocking sends
  !
  do i = 0,lvl
    do a = 0,nprocs-1
      call mpi_bcast(zone_storage(i)%zone_start(a),1,&
           MPI_INT,a,MPI_COMM_WORLD,ier)
      call mpi_bcast(zone_storage(i)%zone_end(a),1,&
           MPI_INT,a,MPI_COMM_WORLD,ier)
    end do
  end do
  call MPI_BARRIER(MPI_COMM_WORLD,ier)
  counter = 0
  do i = 0,lvl
    n_elems = 3*zone_storage(i)%big_zones
    write(*,*) 'number of elements',n_elems
    if (self == 0) then
      do a = 1,nprocs-1
        do c = zone_storage(i)%zone_start(a),zone_storage(i)%zone_end(a)
          tag = c*100000+a*1000+1!+d*10
          call mpi_irecv(zone_storage(i)%lower(1:3,c),3,MPI_INT,a,&
               tag,MPI_COMM_WORLD,req,ier)
          tag = tag + 1
          call mpi_irecv(zone_storage(i)%higher(1:3,c),3,MPI_INT,a,&
               tag,MPI_COMM_WORLD,req,ier)
          tag = tag +1
          call mpi_irecv(zone_storage(i)%lo_corner(1:3,c),3,MPI_DOUBLE_PRECISION,a,&
               tag,MPI_COMM_WORLD,req,ier)
          tag = tag +1
          call mpi_irecv(zone_storage(i)%hi_corner(1:3,c),3,MPI_DOUBLE_PRECISION,a,&
               tag,MPI_COMM_WORLD,req,ier)
        end do
      end do
    else
      do b = zone_storage(i)%zone_start(self),zone_storage(i)%zone_end(self)
        tag = b*100000+self*1000+1!+d*10
        call mpi_send(zone_storage(i)%lower(1:3,b),3,MPI_INT,0,&
             tag,MPI_COMM_WORLD,ier)
        tag = tag + 1
        call mpi_send(zone_storage(i)%higher(1:3,b),3,MPI_INT,0,&
             tag,MPI_COMM_WORLD,ier)
        tag = tag + 1
        call mpi_send(zone_storage(i)%lo_corner(1:3,b),3,MPI_DOUBLE_PRECISION,0,&
             tag,MPI_COMM_WORLD,ier)
        tag = tag +1
        call mpi_send(zone_storage(i)%hi_corner(1:3,b),3,MPI_DOUBLE_PRECISION,0,&
             tag,MPI_COMM_WORLD,ier)
      end do
    end if
  end do
  write(*,*) 'spack'
  !
  call mpi_barrier(MPI_COMM_WORLD,ier)
  do i = 0,lvl
    write(*,*) 'lower check 1',self,'lower',zone_storage(i)%lower
    write(*,*) 'higher check 1',self,'high',zone_storage(i)%higher
    write(*,*) 'lo_corner check 1',self,'lo_corner',zone_storage(i)%lo_corner
    write(*,*) 'hi_corner check 1',self,'hi_corner',zone_storage(i)%hi_corner
    write(*,*) 'big_zones check 1',self,'big_zones',zone_storage(i)%big_zones
    write(*,*) 'zone start/end 1',self,'lvl',i,zone_storage(i)%zone_start,zone_storage(i)%zone_end
  end do
  !
  ! Send all the data out to all the processors
  !
  do i = 0,lvl
    n_elems = 3*zone_storage(i)%big_zones
    req = 1
    call mpi_ibcast(zone_storage(i)%lower,n_elems,MPI_INT,&
         0,MPI_COMM_WORLD,req,ier)
    call mpi_wait(req,stat,ier)
    write(*,*) 'spiffy'
    req = 2
    call mpi_ibcast(zone_storage(i)%higher,n_elems,MPI_INT,&
         0,MPI_COMM_WORLD,req,ier)
    call mpi_wait(req,stat,ier)
    req = 3
    call mpi_ibcast(zone_storage(i)%lo_corner,n_elems,MPI_DOUBLE_PRECISION,&
         0,MPI_COMM_WORLD,req,ier)
    call mpi_wait(req,stat,ier)
    req = 4
    call mpi_ibcast(zone_storage(i)%hi_corner,n_elems,MPI_DOUBLE_PRECISION,&
         0,MPI_COMM_WORLD,req,ier)
    call mpi_wait(req,stat,ier)
    call mpi_barrier(MPI_COMM_WORLD,ier)
  end do
  write(*,*) 'lower check 2',self,'lower',zone_storage(lvl)%lower
  write(*,*) 'higher check 2',self,'high',zone_storage(lvl)%higher
  write(*,*) 'lo_corner check ',self,'lo_corner',zone_storage(lvl)%lo_corner
  write(*,*) 'hi_corner check ',self,'hi_corner',zone_storage(lvl)%hi_corner
  write(*,*) 'big_zones check ',self,'big_zones',zone_storage(lvl)%big_zones
  call MPI_FINALIZE(ier)
end program
...
As I said, this code works, but the larger version does not always work. OpenMPI throws several warnings akin to this:
mca: base: component_find: ess "mca_ess_pmi" uses an MCA interface that is not recognized (component MCA v2.1.0 != supported MCA v2.0.0) -- ignored
mca: base: component_find: grpcomm "mca_grpcomm_direct" uses an MCA interface that is not recognized (component MCA v2.1.0 != supported MCA v2.0.0) -- ignored
mca: base: component_find: rcache "mca_rcache_grdma" uses an MCA interface that is not recognized (component MCA v2.1.0 != supported MCA v2.0.0) -- ignored
etc. etc. But the program can still complete even with those warnings.
- Is there a way to ensure that MPI_Bcast has emptied its buffer into the correct region of memory before moving on? It seems to miss this sometimes.
- Is there a different/better method to distribute the data? The sizes have to be able to vary, unlike in the test program.
Thank you ahead of time.
The most straightforward answer was to use MPI_Allgatherv. Much as I didn't want to mess with displacements, it was the best way to share the information, and it reduced the overall code length.
I believe an MPI_Waitall solution would work too, as the data was not being fully received before being broadcast.
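For reference, here is a minimal sketch of the MPI_Allgatherv approach. This is not the original code: the block decomposition, sizes, and array names are simplified assumptions, and MPI_IN_PLACE is used so each rank's own block stays where it already is.
! Hedged sketch: gather every rank's block of 'lower' onto all ranks with one
! MPI_Allgatherv call instead of per-zone sends/receives plus broadcasts.
program allgatherv_sketch
  use mpi
  implicit none
  integer :: self, nprocs, ier, a
  integer, parameter :: big_zones = 20                 ! made-up size
  integer, allocatable :: lower(:,:), counts(:), displs(:)
  integer, allocatable :: zone_start(:), zone_end(:)

  call MPI_Init(ier)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ier)
  call MPI_Comm_rank(MPI_COMM_WORLD, self, ier)

  allocate(lower(3,big_zones), counts(0:nprocs-1), displs(0:nprocs-1))
  allocate(zone_start(0:nprocs-1), zone_end(0:nprocs-1))

  ! Simple block decomposition of the zones over the ranks (last rank takes the rest)
  do a = 0, nprocs-1
    zone_start(a) = big_zones/nprocs*a + 1
    zone_end(a)   = zone_start(a) + big_zones/nprocs - 1
  end do
  zone_end(nprocs-1) = big_zones

  lower = 0
  lower(:, zone_start(self):zone_end(self)) = self     ! each rank fills only its block

  ! Element counts and displacements (3 integers per zone)
  do a = 0, nprocs-1
    counts(a) = 3*(zone_end(a) - zone_start(a) + 1)
    displs(a) = 3*(zone_start(a) - 1)
  end do

  ! All ranks receive all blocks; MPI_IN_PLACE leaves the local block untouched
  call MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, lower, counts, displs, &
                      MPI_INTEGER, MPI_COMM_WORLD, ier)

  if (self == 0) write(*,*) 'lower after allgatherv:', lower
  call MPI_Finalize(ier)
end program allgatherv_sketch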

How to write a formatted matrix line by line with Fortran 77?

Suppose I have the matrix c(i,j). I want to write it to the screen, in old Fortran 77, with three digits after the decimal point. I write
do i=1,N
  write(*,"(F8.3)") ( c(i,j), j=1,N )
end do
but the output is in the form
c(1,1)
c(1,2)
...
c(1,10) c(2,1)
c(2,2)
...
Finally, I may simply write
do i=1,N
  write(*,*) ( c(i,j), j=1,N )
end do
and then the output is like the matrix, but, of course, it is not formatted.
How to get the correct output in Fortran77?
Edit: it seems that one possible solution is to write
do i=1, N
  do j=1, N
    write(*,'(F9.3,A,$)') c(i,j), ' '
  end do
  write(*,*) ' '
end do
Your format only specifies a single float but you actually want to write N per line.
A fairly general solution for this simple case would be something like
program temp
  implicit none
  integer, parameter :: N=3
  real, dimension(N,N) :: c
  integer :: i,j
  character(len=20) :: exFmt
  c = 1.0
  write(exFmt,'("(",I0,"(F8.3))")') N
  do i=1,N
    write(*,exFmt) (c(i,j), j=1,N)
  end do
end program
This will make exFmt be '(3(F8.3))', which specifies printing three reals per line (note you probably really want '(3(F8.3," "))' to explicitly include some spacing).
Note some compilers will allow exFmt to be just '(*(F8.3))'. This is part of the Fortran 2008 specification, so it may not be provided by all compilers you have access to. See here for a summary of compiler support (see Unlimited format item; thanks to HighPerformanceMark for this).
Finally, an easy bodge is to use a format statement like '(1000(F8.3))', where 1000 is larger than you will ever need.
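As a small sketch, the Fortran 2008 unlimited repeat count mentioned above removes the need to build the format string at run time (this assumes a compiler that supports it):
! Sketch using the unlimited format item (Fortran 2008); compiler support varies.
program unlimited_fmt
  implicit none
  integer, parameter :: N = 3
  real :: c(N,N)
  integer :: i
  c = 1.0
  do i = 1, N
    write(*,'(*(F8.3,1X))') c(i,:)   ! one row per line, space separated
  end do
end program unlimited_fmt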

Writing a scalar variable along an unlimited dimension in NetCDF

I'm trying to write a time variable from a hydrodynamic model into a NetCDF file (as a variable along an unlimited dimension). I've attached a simplified code example in Fortran 90 that highlights my issue.
The subroutine that writes the NetCDF file is called multiple times during a simulation, depending on a user-specified output interval (10 times in this example). I can create the file and add attributes the first time the subroutine is called.
I can't get the start and count variables correct to write the time variable to the file during the subsequent calls of the subroutine. This is the error I receive, at the line writing the model time variable, when trying to compile the code: Error: There is no specific function for the generic 'nf90_put_var'
PROGRAM test_netcdf
  IMPLICIT NONE
  INTEGER :: N
  REAL :: time_step = 2.
  ! Call efdc_netcdf 10 times
  DO N=1,10
    CALL efdc_netcdf(N, time_step)
    time_step=time_step + 1.
  ENDDO
END PROGRAM test_netcdf
!************************************
! Create NetCDF file and write variables
SUBROUTINE efdc_netcdf(N, time_step)
  USE netcdf
  IMPLICIT NONE
  LOGICAL,SAVE::FIRST_NETCDF=.FALSE.
  CHARACTER (len = *), PARAMETER :: FILE_NAME = "efdc_test.nc"
  INTEGER :: ncid, status
  INTEGER :: time_dimid
  INTEGER :: ts_varid, time_varid
  INTEGER :: start(1), count(1)
  INTEGER :: deltat
  INTEGER :: N
  REAL :: time_step
  start=(/N/)
  count=(/1/)
  ! Create file and add attributes during first call of efdc_netcdf
  IF(.NOT.FIRST_NETCDF)THEN
    status=nf90_create(FILE_NAME, NF90_CLOBBER, ncid)
    ! Define global attributes once
    status=nf90_put_att(ncid, NF90_GLOBAL, 'format', 'netCDF-3 64bit offset file')
    status=nf90_put_att(ncid, NF90_GLOBAL, 'os', 'Linux')
    status=nf90_put_att(ncid, NF90_GLOBAL, 'arch', 'x86_64')
    ! Define deltat variable
    status=nf90_def_var(ncid,'deltat',nf90_int,ts_varid)
    ! Define model time dimension
    status=nf90_def_dim(ncid,'efdc_time',nf90_unlimited,time_dimid)
    ! Define model time variable
    status=nf90_def_var(ncid,'efdc_time',nf90_real,time_dimid,time_varid)
    status=nf90_enddef(ncid)
    ! Put deltat during first call
    deltat=7
    status=nf90_put_var(ncid, ts_varid, deltat)
    FIRST_NETCDF=.TRUE.
  ENDIF
  ! Put model time variable
  status=nf90_put_var(ncid, time_varid, time_step, start=start, count=count)
  ! Close file at end of DO loop
  IF(N.EQ.10) THEN
    status=nf90_close(ncid)
  ENDIF
  RETURN
END SUBROUTINE efdc_netcdf
The issue is in the line the compiler flags:
status=nf90_put_var(ncid, time_varid, time_step, start=start, count=count)
You are (correctly) trying to write a scalar variable, time_step, into a specific index (start) along the variable time_varid, which is defined on a 1-d, unlimited-extent dimension. However, in this case the optional argument count isn't meaningful; you are writing a scalar, and count can only ever be 1. As a result, the Fortran bindings for nf90_put_var() taking a single scalar as input don't have the optional count argument defined, and that's why you're getting the "There is no specific function for the generic 'nf90_put_var'" error from the compiler. This is all perfectly reasonable, but neither the error message nor the docs are super helpful in figuring out how to solve the problem.
You can fix your code by putting the time_step data into a real, dimension(1) variable and putting that instead; but the easiest fix is simply to get rid of the count specification, which isn't necessary here anyway:
status=nf90_put_var(ncid, time_varid, time_step, start=start)
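Alternatively, a minimal sketch of the first option mentioned above (time_out is a hypothetical local, not in the original code): put the scalar into a rank-1 array, after which count is accepted:
REAL :: time_out(1)   ! hypothetical local, declared with the others
time_out(1) = time_step
status=nf90_put_var(ncid, time_varid, time_out, start=start, count=count)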

Is there a way to access a field of a derived type using a string?

Is there a way to access a field of a derived type via a string argument in Fortran?
Something like...
subroutine set(car, fieldName, value)
  type(Car_T) :: car
  character*(*) :: fieldName
  character*(*) :: value
  car%[fieldName] = value
end subroutine set
I know you can do stuff like this in JavaScript, C#, etc., but it could really save me from having a ton of duplicate code if Fortran allows it.
No. You will need to write the executable code (perhaps a SELECT CASE construct) that maps the value of the string across to the relevant component.
You only need to write this once for each unique set of component names.
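A minimal sketch of such a SELECT CASE setter (Car_T and its components make and colour are invented here purely for illustration):
module car_mod
  implicit none
  type :: Car_T
    character(len=20) :: make   = ''
    character(len=20) :: colour = ''
  end type
contains
  ! Map a component name given as a string onto the matching component
  subroutine set(car, fieldName, value)
    type(Car_T), intent(inout) :: car
    character(*), intent(in)   :: fieldName, value
    select case (fieldName)
    case ('make')
      car%make = value
    case ('colour')
      car%colour = value
    case default
      print *, 'unknown component: ', fieldName
    end select
  end subroutine set
end module car_mod
! usage: call set(mycar, 'colour', 'red')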
You can do something similar with namelists but it is for items known to the program: not new items.
integer :: inin
real :: rere
namelist /info/ inin, rere
inin = 0    ! default
rere = 20.4 ! default
read (*, nml=info)
print *, 'inin=', inin
print *, 'rere=', rere
stop
end
On the input
&info inin=2 rere=40.0 /
Or if you wish to input one value only
&info rere=3.162 /

How to go to the end of the file?

I have opened a file to write a number. I have to write the number at the end of the file, so how do I go to the last line to write on it?
You should open the file with
open(..., position="append",...)
Alternatively, you can inquire for the size of the file
inquire(...,size=some_integer_variable,...)
then if the file is a direct access file, you can use this size to calculate the record number of the final record. Alternatively, if the access mode is "stream", you can use
write(..., pos=some_integer_variable)
to write starting at the end of the file.
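For example, a minimal sketch of the append approach (the file name and unit handling here are assumptions, not from the question):
! Open an existing file positioned at its end and append one number
program append_number
  implicit none
  integer :: u
  open(newunit=u, file='numbers.txt', position='append', action='write', status='unknown')
  write(u,*) 42
  close(u)
end program append_number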
I've been using the same trick for years and would be interested in a more elegant way, but I can propose the following method. Note that it becomes less and less efficient as the number of lines in the file increases. Note also that this piece of code could end up in an elegant module dedicated to input/output.
Open your file
open(11, file='monfichier')
Compute how many lines there are in your file
nbline = 0 ! should have been declared as an integer
do while(.true.)
  read(11,*,iostat=ios) ! ios should have been declared as an integer
  if( ios > 0 ) then
    stop 'problem somewhere'
  else if( ios < 0 ) then ! end of file is reached
    exit
  else
    nbline = nbline + 1
  end if
end do
close(11)
at this step, you have the total number of lines stored in variable nbline.
If you want to print something at the Nth line before the last line, then
open(11, file='monfichier')
do i = 1, nbline - N ! see my nota bene at the end of my answer; i and N are integers
  read(11,*)
end do
write(11,*) 'hello world'
Et voilà!
N.B.: Please be careful about whether you count nbline-N or nbline-(N-1), depending on exactly what you want.
subroutine to_last_rec (luout)
  ! purpose: position to last record of file
  implicit none
  integer :: luout
  logical :: ende
  ! first executable statement
  ende = .FALSE.
  do while ( .NOT. ende)
    read (luout,*,end=100)
  enddo
100 return
end subroutine to_last_rec
PROGRAM example
  IMPLICIT NONE
  INTEGER :: ierr
  OPEN(UNIT=13,FILE="ex.dat")
  CALL FSEEK(13, 0, 2, ierr)
  ! DO WHATEVER YOU WANT THEN
  CLOSE(13)
END PROGRAM example
The call to FSEEK, used like that, positions the unit at the end of the file (check the usage at http://docs.oracle.com/cd/E19957-01/805-4942/6j4m3r8ti/index.html). Note that FSEEK is a compiler extension rather than standard Fortran.