Error related to EOF command in Fortran code

I am a beginner with Fortran and am trying to compile a fixed-form Fortran code using gfortran. I got a bunch of errors, most of which I could fix. However, there is an error related to "EOF" which I could not solve. Is there any way to fix this problem? (The two "EOF" lines are lines 40 and 121.)
37 OPEN(4,FILE="ABCE.Pn")
38
39 OPEN(5,FILE="../sta.txt")
40 DO WHILE (.not.EOF(5))
41 N=N+1
42 READ(5,*)STA(N)%COD,STA(N)%NAME,STA(N)%LAT,
43 $ STA(N)%LON,STA(N)%H
44 ENDDO
45 NSTA=N
46 CLOSE(5)
......
121 DO WHILE (.not.EOF(1))
122 READ(1,'(A60)',ERR=999) TIT
123 C IF(IYEAR.GE.2008.OR.(IYEAR.EQ.2007.AND.MONTH.GE.11))
124 C $ TIT=TIT(2:60)
125 IF(TIT(1:60).EQ.'')THEN ! NEW EARTHQUAKE
The error:
DO WHILE (.not.EOF(5))
1
Error: Operand of .not. operator at (1) is REAL(4)
ReadP2Pn.for:121.21:
DO WHILE (.not.EOF(1))
1
Error: Operand of .not. operator at (1) is REAL(4)

EOF(5) is non-standard. You should check for EOF in the READ statement instead (which, sadly, looks like a goto):
40 DO WHILE (.true.)
41 N=N+1
42 READ(5,*,end=990)STA(N)%COD,STA(N)%NAME,STA(N)%LAT,
43 $ STA(N)%LON,STA(N)%H
44 ENDDO
45 990 NSTA=N-1 ! the READ that hit EOF has already incremented N
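An alternative that avoids the labelled jump is to test IOSTAT. The following is only a sketch (my illustration, not part of the original program); it assumes, as the question's code does, that N starts at 0 and STA is dimensioned large enough:
      INTEGER IOS
      N = 0
      DO
         READ(5,*,IOSTAT=IOS) STA(N+1)%COD, STA(N+1)%NAME,
     $        STA(N+1)%LAT, STA(N+1)%LON, STA(N+1)%H
         IF (IOS.NE.0) EXIT     ! end of file (or a bad record)
         N = N + 1
      ENDDO
      NSTA = N
The same IOSTAT pattern can replace the second EOF loop at line 121.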

Related

Why are these two MPI-IO codes not working the same way?

I am learning MPI-IO and following a tutorial (PDF download here).
For one exercise, the correct code is:
Program MPI_IOTEST
  Use MPI
  Implicit None
  Integer :: wsize,wrank
  Integer :: ierror
  Integer :: fh,offset
  Call MPI_Init(ierror)
  Call MPI_Comm_rank(MPI_COMM_WORLD,wrank,ierror)
  Call MPI_Comm_size(MPI_COMM_WORLD,wsize,ierror)
  offset=4*wrank; ! because 4 bytes is one signed int
  ! --- open the MPI files using a collective call
  Call MPI_File_Open(MPI_COMM_WORLD,'test.dat',MPI_MODE_RDWR+MPI_MODE_CREATE,MPI_INFO_NULL,fh,ierror);
  Write(*,*)'rank',wrank
  Call MPI_FILE_WRITE_AT(fh, offset, wrank,1,MPI_INTEGER,mpi_status_ignore,ierror);
  Call MPI_File_close(fh,ierror)
  Call MPI_Finalize(ierror)
End Program MPI_IOTEST
Then you just build it and run it with 24 MPI tasks.
Then for validation, simply do
od -i test.dat
You will get exactly the same result as in the tutorial, which is given below.
0000000 0 1 2 3
0000020 4 5 6 7
0000040 8 9 10 11
0000060 12 13 14 15
0000100 16 17 18 19
0000120 20 21 22 23
0000140
But if I change 1 to num:
Call MPI_FILE_WRITE_AT(fh, offset, wrank,1,MPI_INTEGER,mpi_status_ignore,ierror);
into
Call MPI_FILE_WRITE_AT(fh, offset, wrank,num,MPI_INTEGER,mpi_status_ignore,ierror);
and before that define
integer :: num
num=1
After rm test.dat, re-building and re-running gives:
0000000 0 0 0 0
*
Your error is not actually in the specification or use of num but rather in the specification of offset.
If you read the man page of MPI_File_write_at, you will see that the offset has to be of kind MPI_OFFSET_KIND (the Fortran counterpart of MPI_Offset).
So if you change your program to use:
integer(kind=MPI_OFFSET_KIND) :: offset
It works fine.
Did you not notice the size of the test.dat file generated?
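For completeness, here is a sketch of the program with that change applied (my reconstruction of the fix, keeping the poster's num variable; everything else is as above):
Program MPI_IOTEST
  Use MPI
  Implicit None
  Integer :: wsize, wrank, ierror, fh, num
  Integer(kind=MPI_OFFSET_KIND) :: offset   ! offset must use the MPI offset kind

  Call MPI_Init(ierror)
  Call MPI_Comm_rank(MPI_COMM_WORLD, wrank, ierror)
  Call MPI_Comm_size(MPI_COMM_WORLD, wsize, ierror)

  num = 1
  offset = 4_MPI_OFFSET_KIND * wrank        ! 4 bytes per default integer

  Call MPI_File_open(MPI_COMM_WORLD, 'test.dat', MPI_MODE_RDWR + MPI_MODE_CREATE, &
                     MPI_INFO_NULL, fh, ierror)
  Call MPI_File_write_at(fh, offset, wrank, num, MPI_INTEGER, MPI_STATUS_IGNORE, ierror)
  Call MPI_File_close(fh, ierror)
  Call MPI_Finalize(ierror)
End Program MPI_IOTEST
With a default-kind offset, the routine expects an 8-byte MPI_Offset but receives a 4-byte integer, so the writes land at bogus offsets, which is presumably what the hint about the size of the generated test.dat points at.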

MPI partition and gather 2D array in Fortran

I have a 2D array, and each process runs some computation on part of it. Afterwards, I need to gather all the computed columns back to the root process. I'm currently partitioning the columns in a first come, first served manner. In pseudo code, the main loop looks like:
DO i = mpi_rank + 1, num_columns, mpi_size
array(:,i) = do work here
After this is completed, I need to gather these columns into the correct indices back on the root process. What is the best way to do this? It looks like MPI_GATHERV could do what I want if the partitioning scheme were different. However, I'm not sure what the best partitioning would be, since num_columns and mpi_size are not necessarily evenly divisible.
I suggest the following approach:
Cut the 2D array into chunks of "almost equal" size, i.e. with the local number of columns close to num_columns / mpi_size.
Gather the chunks with mpi_gatherv, which can handle chunks of different sizes.
To get an "almost equal" number of columns, set the local number of columns to the integer part of num_columns / mpi_size and increment it by one only for the first mod(num_columns, mpi_size) MPI tasks.
The following table demonstrates the partitioning of a (10,12) matrix across 5 MPI processes:
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
01 02 03 11 12 13 21 22 31 32 41 42
Here the first digit is the id of the process and the second digit is the local column number within that process.
As you can see, processes 0 and 1 got 3 columns each, while all other processes got only 2 columns each.
Below you can find working example code that I wrote.
The trickiest part is the generation of the rcounts and displs arrays for MPI_Gatherv. The table discussed above is the output of this code.
program mpi2d
  implicit none
  include 'mpif.h'
  integer myid, nprocs, ierr
  integer,parameter:: m = 10        ! global number of rows
  integer,parameter:: n = 12        ! global number of columns
  integer nloc                      ! local number of columns
  integer array(m,n)                ! global m-by-n, i.e. m rows and n columns
  integer,allocatable:: loc(:,:)    ! local piece of global 2d array
  integer,allocatable:: rcounts(:)  ! array of nloc's (for mpi_gatherv)
  integer,allocatable:: displs(:)   ! array of displacements (for mpi_gatherv)
  integer i,j

  ! Initialize
  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, myid, ierr)
  call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Partition, i.e. get local number of columns
  nloc = n / nprocs
  if (mod(n,nprocs)>myid) nloc = nloc + 1

  ! Compute partitioned array
  allocate(loc(m,nloc))
  do j=1,nloc
    loc(:,j) = myid*10 + j
  enddo

  ! Build arrays for mpi_gatherv:
  ! rcounts contains all nloc's
  ! displs contains displacements of partitions in terms of columns
  allocate(rcounts(nprocs),displs(nprocs))
  displs(1) = 0
  do j=1,nprocs
    rcounts(j) = n / nprocs
    if(mod(n,nprocs).gt.(j-1)) rcounts(j)=rcounts(j)+1
    if((j-1).ne.0) displs(j) = displs(j-1) + rcounts(j-1)
  enddo

  ! Convert from number of columns to number of integers
  nloc = m * nloc
  rcounts = m * rcounts
  displs = m * displs

  ! Gather array on root
  call mpi_gatherv(loc, nloc, MPI_INTEGER, array, &
                   rcounts, displs, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

  ! Print array on root
  if(myid==0)then
    do i=1,m
      do j=1,n
        write(*,'(I4.2)',advance='no') array(i,j)
      enddo
      write(*,*)
    enddo
  endif

  ! Finish
  call mpi_finalize(ierr)
end
What about gathering in chunks of size mpi_size?
To keep this short, I'll assume that num_columns is a multiple of mpi_size. In your case the gathering should look something like this (lda is the first dimension of array):
DO i = 1, num_columns/mpi_size
IF (rank == 0) THEN
CALL MPI_GATHER(MPI_IN_PLACE, lda, [TYPE], array(1,(i-1)*mpi_size+1), lda, [TYPE], 0, MPI_COMM_WORLD, ierr)
ELSE
CALL MPI_GATHER(array(1, rank + (i-1)*mpi_size + 1), lda, [TYPE], array(1,(i-1)*mpi_size+1), lda, [TYPE], 0, MPI_COMM_WORLD, ierr)
END IF
ENDDO
I'm not so sure about the indices and whether this actually works, but I think you get the point.
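To make this concrete, below is a small self-contained toy program (my own illustration, not the answerer's code) that uses MPI_INTEGER for [TYPE], distributes the columns round-robin exactly as in the question's loop, and gathers them back chunk by chunk. It assumes num_columns is a multiple of the number of ranks, so run it with e.g. 2 or 4 processes.
program chunked_gather
  use mpi
  implicit none
  integer, parameter :: lda = 4            ! number of rows
  integer, parameter :: num_columns = 8    ! assumed to be a multiple of mpi_size
  integer :: array(lda, num_columns)
  integer :: rank, mpi_size, ierr, i, j

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, mpi_size, ierr)

  ! Each rank "computes" its columns, first come first served as in the question
  array = 0
  do i = rank + 1, num_columns, mpi_size
    array(:, i) = i
  end do

  ! Gather one chunk of mpi_size columns per iteration; in chunk i the column
  ! owned by this rank sits at global index (i-1)*mpi_size + rank + 1
  do i = 1, num_columns / mpi_size
    if (rank == 0) then
      call MPI_Gather(MPI_IN_PLACE, lda, MPI_INTEGER, &
                      array(1, (i-1)*mpi_size + 1), lda, MPI_INTEGER, &
                      0, MPI_COMM_WORLD, ierr)
    else
      call MPI_Gather(array(1, rank + (i-1)*mpi_size + 1), lda, MPI_INTEGER, &
                      array(1, (i-1)*mpi_size + 1), lda, MPI_INTEGER, &
                      0, MPI_COMM_WORLD, ierr)
    end if
  end do

  ! On the root every column i should now hold the value i
  if (rank == 0) then
    do j = 1, lda
      write(*, '(*(i3))') array(j, :)
    end do
  end if

  call MPI_Finalize(ierr)
end program chunked_gather
Note that this needs one collective per chunk; for the general case where the per-rank counts differ, the mpi_gatherv approach above is the more natural fit.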

Compilation error in CRC encoding program

I wrote the following code for a program that performs CRC encoding. I modeled it on a C program that we were taught in class. It gives some compilation errors that I can't correct. I have mentioned them after the code.
1 #include<iostream>
2 #include<string.h>
3
4 using namespace std;
5
6 class crc
7 {
8 char message[128],polynomial[18],checksum[256];
9 public:
10 void xor()
11 {
12 for(int i=1;i<strlen(polynomial);i++)
13 checksum[i]=((checksum[i]==polynomial[i])?'0':'1');
14 }
15 crc() //constructor
16 {
17 cout<<"Enter the message:"<<endl;
18 cin>>message;
19 cout<<"Enter the polynomial";
20 cin>polynomial;
21 }
22 void compute()
23 {
24 int e,i;
25 for(e=0;e<strlen(polynomial);e++)
26 checksum[e]=message[e];
27 do
28 {
29 if(checksum[0]=='0')
30 xor();
31 for(i=0;i<(strlen(polynomial)-1);i++)
32 checksum[i]=checksum[i+1];
33 checksum[i]=message[e++];
34 }while(e<=strlen(message)+strlen(checksum)-1);
35 }
36 void gen_proc() //general processing
37 {
38 int mesg_len=strlen(message);
39 for(int i=mesg_len;i<mesg_len+strlen(polynomial)-1;i++)
40 message[i]='0';
41 message[i]='\0'; //necessary?
42 cout<<"After appending zeroes message is:"<<message;
43 compute();
44 cout<<"Checksum is:"<<checksum;
45 for(int i=mesg_len;i<mesg_len+strlen(polynomial);i++)
46 message[i]=checksum[i-mesg_len];
47 cout<<"Final codeword is:"<<message;
48 }
49 };
50 int main()
51 {
52 crc c1;
53 c1.gen_proc();
54 return 0;
55 }
The compilation errors are:
crc.cpp:10: error: expected unqualified-id before ‘^’ token
crc.cpp: In member function ‘void crc::compute()’:
crc.cpp:30: error: expected primary-expression before ‘^’ token
crc.cpp:30: error: expected primary-expression before ‘)’ token
crc.cpp: In member function ‘void crc::gen_proc()’:
crc.cpp:41: warning: name lookup of ‘i’ changed for ISO ‘for’ scoping
crc.cpp:39: warning: using obsolete binding at ‘i’
I have been checking online for these errors and the only thing I have been seeing is errors caused by incorrect array handling. I have double checked my code but I don't seem to be performing any incorrect array access.
xor is a reserved keyword in C++. You should rename the function to something else.
The compiler isn't actually seeing an identifier, but a keyword. If you replace xor with ^ in your code snippet, the syntax error becomes obvious.

generate a sequence array in fortran

Is there an intrinsic in Fortran that generates an array containing a sequence of numbers from a to b, similar to python's range()
>>> range(1,5)
[1, 2, 3, 4]
>>> range(6,10)
[6, 7, 8, 9]
?
No, there isn't.
You can, however, initialize an array with an array constructor that does the same thing:
program arraycons
  implicit none
  integer :: i
  real :: a(10) = (/(i, i=2,20, 2)/)
  print *, a
end program arraycons
If you need to support floats, here is a Fortran subroutine similar to linspace in NumPy and MATLAB.
! Generates evenly spaced numbers from `from` to `to` (inclusive).
!
! Inputs:
! -------
!
! from, to : the lower and upper boundaries of the numbers to generate
!
! Outputs:
! -------
!
! array : Array of evenly spaced numbers
!
subroutine linspace(from, to, array)
  real(dp), intent(in) :: from, to
  real(dp), intent(out) :: array(:)
  real(dp) :: range
  integer :: n, i

  n = size(array)
  range = to - from

  if (n == 0) return
  if (n == 1) then
    array(1) = from
    return
  end if

  do i = 1, n
    array(i) = from + range * (i - 1) / (n - 1)
  end do
end subroutine
Usage:
real(dp) :: array(5)
call linspace(from=0._dp, to=1._dp, array=array)
Outputs the array
[0., 0.25, 0.5, 0.75, 1.]
Here dp is
integer, parameter :: dp = selected_real_kind(p = 15, r = 307) ! Double precision
It is possible to create a function that reproduces precisely the functionality of range in Python:
module mod_python_utils
  implicit none
contains
  pure function range(n1, n2, dn_)
    integer, intent(in) :: n1, n2
    integer, optional, intent(in) :: dn_
    integer, allocatable :: range(:)
    integer :: dn, i
    dn = 1; if (present(dn_)) dn = dn_
    if (dn <= 0) then
      allocate(range(0))
    else
      allocate(range(1 + (n2-n1)/dn))
      range = [(i, i=n1, n2, dn)]
    endif
  end function range
end module mod_python_utils

program testRange
  use mod_python_utils
  implicit none
  integer, allocatable :: v(:)
  v = range(51, 70)
  print "(*(i0,1x))", v
  v = range(-3, 30, 2)
  print "(*(i0,1x))", v
  print "(*(i0,1x))", range(1, 100, 3)
  print "(*(i0,1x))", range(1, 100, -3)
end program testRange
The output of the above is
51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70
-3 -1 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29
1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46 49 52 55 58 61 64 67 70 73 76 79 82 85 88 91 94 97 100
Notice that:
the last line of the output is empty: Fortran handles zero-length arrays gracefully.
allocatable variables are automatically deallocated once they go out of scope.

position.hh:46: error: expected unqualified-id before ‘namespace’

Here's my code:
34
35 /**
36 ** \file position.hh
37 ** Define the example::position class.
38 */
39
40 #ifndef BISON_POSITION_HH
41 #define BISON_POSITION_HH
42
43 #include <iostream>
44 #include <string>
45
46 namespace example
47 {
48 /// Abstract a position.
49 class position
50 {
51 public:
52
53 /// Construct a position.
54 position ()
55 : filename (0), line (1), column (0)
56 {
Thanks, speeder, that's great. Necrolis, thank you as well. Both of you are on the same track regarding the compilation units. Here's the full error report:
In file included from location.hh:45,
from parser.h:64,
from scanner.h:25,
from scanner.ll:8:
position.hh:46: error: expected unqualified-id before ‘namespace’
location.hh looks like this:
35 /**
36 ** \file location.hh
37 ** Define the example::location class.
38 */
39
40 #ifndef BISON_LOCATION_HH
41 # define BISON_LOCATION_HH
42
43 # include <iostream>
44 # include <string>
45 # include "position.hh"
46
47 namespace example
48 {
49
50 /// Abstract a location.
51 class location
52 {
53 public:
I should also add that these files are being generated by Bison. It's when I try to compile the C++ scanner class generated by flex++ that I get to this stage. I get the .cc code by issuing flex --c++ -o scanner.cc scanner.ll.
This happens when a ; or some other closing token is missing before the namespace. Are you sure that the lines before line 34 contain no code? If they do contain code (even if it is just another #include), the error is there.
EDIT: If all 34 lines really contain no code, then the error is in the file that includes this header: most likely there is some code without a closing ; or } or ) or some other terminating character, and right after it (ignoring comments, of course) comes the #include of position.hh.
Or, if there are two includes in a row with one before position.hh, the error is in the last lines of the header included before position.hh, usually a struct or class without a ; after the closing }.
The error might be occurring in a file other than the one it is reported in (due to how the compilation units are assembled), namely at or near the end of that 'other' file (such as a missing '}' or ';' or '#endif', etc.).