Write output to a text file in Fortran

I have a matrix A(3,4) in Fortran, and I want to write it to a text file like this:
A(1,1) A(2,1) A(3,1)
A(1,2) A(2,2) A(3,2)
A(1,3) A(2,3) A(3,3)
A(1,4) A(2,4) A(3,4)
I use the code below. It has two problems: first, the file is overwritten for each i, and second, the matrix is written in rows. I would be grateful for guidance on how to solve this. Thanks
do i=1,4
   open (unit=10, file="out.txt", action="write")
   write (10,*) A(1,i), A(2,i), A(3,i)
   close (10)
end do

As mentioned by Ian, your file is overwritten for each i because your open statement is inside the loop. Fortran is reopening the file fresh for each i. Move the open statement to before the loop so it is only opened once.
Of course it is written in rows, because the first index of a 2-D array is the row index. You can switch the indices if you wish. On the other hand, judging from the layout in your question, it looks like you actually want each column of A written as a row of the file.
You say you need to write just some elements. As long as they are in a contiguous block, you will want to use an implied do loop in the write statement. It is much more concise and you can write large blocks without typing out a lot of variables specifically. It would look like this:
open (unit=10, file="out.txt", action="write")
do i = 1, 4
   write (10,*) (A(j,i), j = 1, 3)
end do
close (10)
Again, this reverses rows and columns, if you want traditional representation, switch the i and j.
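For completeness, here is a minimal, self-contained sketch of that traditional orientation (one file line per row of A). The reshape call only fills in placeholder data so the example compiles and runs on its own:

program write_matrix
  implicit none
  integer :: i, j
  real :: A(3,4)

  A = reshape([(real(i), i = 1, 12)], [3, 4])   ! placeholder data

  open (unit=10, file='out.txt', action='write')
  do i = 1, 3
     write (10,*) (A(i,j), j = 1, 4)   ! row i across all four columns
  end do
  close (10)
end program write_matrix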

Related

Writing a fortran file in columns

I'm trying to write a Fortran file with arrays of 2000 elements each, every 1000 program steps. At first, I tried to write it in the following way:
if (i.eq.1000) then
   open (21, file='eedf.res', status='unknown', position='append')
   write (21,121) (eedf(le), le=1,2000)
   close (21)
   i=0
.
.
.
(eedf is then put equal to zero and the array is rebuilt in the following 1000 steps; we are inside a do-loop).
It works, producing a file with the arrays printed as rows, but the program I use to plot these functions tells me there are too many columns, so the last half of the columns is lost...
So, I want to write eedf in columns: the first column with eedf after the first 1000 program steps, the second with eedf after the following 1000 steps, and so on. How can I do that?
eedf(1) eedf(1)
eedf(2) eedf(2)
.
.
eedf(2000) eedf(2000)
Sorry if I've been verbose; I tried to put it in the clearest way.
Thanks a lot!
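One way to get that layout, assuming the number of snapshots is known (or at least bounded) in advance, is to keep each 2000-element snapshot as a column of a 2-D array and write the whole array row by row once at the end. The names nsnap and snaps and the 10e14.6 format below are made up for illustration:

program eedf_columns
  implicit none
  integer, parameter :: nle = 2000, nsnap = 10
  double precision :: eedf(nle), snaps(nle, nsnap)
  integer :: isnap, le

  do isnap = 1, nsnap
     ! ... the real program rebuilds eedf over 1000 steps here ...
     eedf = dble(isnap)          ! placeholder data for the sketch
     snaps(:, isnap) = eedf      ! keep this snapshot as column isnap
     eedf = 0.d0                 ! reset, as in the original program
  end do

  ! one output line per energy bin, one column per snapshot
  open (21, file='eedf.res', status='unknown')
  do le = 1, nle
     write (21, '(10e14.6)') (snaps(le, isnap), isnap = 1, nsnap)
  end do
  close (21)
end program eedf_columns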

Reading text file and ordering data into two columns in Fortran

I've been trying to solve the problem in the title. Specifically, I have a .txt file with a few hundred real numbers between 0 and 100, and I need to:
Read the file
Separate the numbers in two groups (one for numbers >= 50, other for numbers < 50)
Write two parallel, compact (no white spaces or zeroes) columns so that each one contains one list.
I've been trying to do this with WRITE(*,*) statements and the advance="no" specifier, because my attempts with arrays didn't work. The thing is, I can't get the two columns to line up in parallel. How can that be done? I don't need the code, just a guideline on how to proceed.
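Since a guideline was asked for rather than code, the outline is: read the file once, append each value to one of two arrays depending on the >= 50 test, then write the two arrays side by side and let the shorter column simply end. A rough sketch of that idea, assuming one number per line; the file name and the size limit nmax are made up:

program split_columns
  implicit none
  integer, parameter :: nmax = 1000
  real :: x, high(nmax), low(nmax)
  integer :: nhigh, nlow, i, ios

  ! pass 1: read and separate the values
  nhigh = 0
  nlow = 0
  open (10, file='numbers.txt', status='old')
  do
     read (10, *, iostat=ios) x
     if (ios /= 0) exit
     if (x >= 50.0) then
        nhigh = nhigh + 1
        high(nhigh) = x
     else
        nlow = nlow + 1
        low(nlow) = x
     end if
  end do
  close (10)

  ! pass 2: write the two lists side by side; the shorter column just ends
  do i = 1, max(nhigh, nlow)
     if (i <= nhigh .and. i <= nlow) then
        write (*, '(f8.2, 2x, f8.2)') high(i), low(i)
     else if (i <= nhigh) then
        write (*, '(f8.2)') high(i)
     else
        write (*, '(10x, f8.2)') low(i)
     end if
  end do
end program split_columns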

How do I read data from a file with description and blank lines with Fortran 77?

I am new to Fortran 77. I need to read the data from a given text file into two arrays, but there are some lines that either are blank or contain descriptive information on the data set before the lines containing the data I need to read. How do I skip those lines?
Also, is there a way my code can count the number of lines containing the data I'm interested in in that file? Or do I necessarily have to count them by hand to build my do-loops for reading the data?
I have tried to find examples online and in Schaum's Programming with Fortran 77, but couldn't find anything too specific on that.
Part of the file I need to read data from follows below. I need to build an array with the entries under each column.
Data from fig. 3 in Klapdor et al., MPLA_17(2002)2409
E(keV) counts_in_bin
2031.5 5.4
2032.5 0
2033.5 0
I am assuming this question is very basic, but I've been fighting with this for a while now, so I thought I would ask.
If you know where the lines are that you don't need/want to read, you can advance the IO with a call to read with no input items.
You can use:
read(input-unit,*)
to read a line from your input file, discard its contents and advance IO to the next line.
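As a small illustration of that idiom (the unit number, file name and variable names are assumptions, not from the question):

program skip_headers
  implicit none
  real :: e, counts

  open (1, file='data.txt', status='old')
  read (1, *)              ! skip the description line
  read (1, *)              ! skip the column-header line
  read (1, *) e, counts    ! first numeric line
  print *, e, counts
  close (1)
end program skip_headers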
It has been a long time since I have looked at F77 code, but in general, if the read statement in your DO loop can cope with finding an empty line, or a record containing only blanks, you could write logic to trap that condition and branch to a continue statement. I just don't recall whether read handles that situation gracefully.
Alternatively, if you are using a UNIX shell and coreutils, you can use sed to delete empty lines (/^$/) and blank-only lines (/^ *$/) as a preprocessing step before the file reaches your F77 program.
Something like
$ sed -e '/^$/d;/^ *$/d' infile > outfile
It should look something like this:-
C     Initialise
      integer i
      character*80 t1,t2,t3
      real*8 x,y
      open(unit=1,file='qdata.txt')
C     Read headers
      read(1,100) t1
  100 format(A80)
      write(6,*) t1
      read(1,100) t2
      write(6,*) t2
      read(1,100) t3
      write(6,*) t3
      write(6,*)
C     Read data
      do 10 i=1,10
        read(1,*,end=99) x,y
        write(6,*) x,y
   10 continue
   99 continue
      end
So I've used a classic formatted read to read in the header lines, then free-format to read the numbers. The free-format read with the asterisk skips white space including blank lines so it does what you want, and when there is no more data it will go to statement 99 and finish.
The output looks like this:-
Data from fig. 3 in Klapdor et al., MPLA_17(2002)2409
E(keV) counts_in_bin
2031.5000000000000 5.4000000000000004
2032.5000000000000 0.0000000000000000
2033.5000000000000 0.0000000000000000

Delete duplicate rows in Fortran 77

I have a file which is a table of 119 columns (separated by spaces) and around 50000 rows (lines). I would like to remove the duplicated entries, i.e. those rows which have all identical columns (119). I sketched this code:
      PROGRAM deldup
      IMPLICIT NONE
      DOUBLE PRECISION PAR(119),PAR2(119)
      INTEGER I,J,K,LINE,TREP
      CHARACTER filename*40
c     Get the input file name
      CALL getarg(1,filename)
c     File where the results will be stored
      OPEN(29, FILE="result.dat", STATUS='UNKNOWN')
c     Current line number
      LINE=0
c     Count of repeated points
      TREP=0
  101 LINE=LINE+1
      OPEN(27, FILE=filename, STATUS='OLD')
c     If we are not on the first line, skip the lines already
c     processed, then read the current one
      IF (LINE.NE.1) THEN
        DO K=1,LINE-1
          READ(27,11,ERR=103,END=9999)
        END DO
      ENDIF
      READ(27,11,ERR=103,END=9999) (PAR(I),I=1,119)
c     Compare line by line looking for matches.  If a match is
c     found, close the file and open it again to read the next
c     line.  If the end of file is reached and no equal row was
c     found, write the line to "result.dat"
  102 READ(27,11,END=104,ERR=102) (PAR2(I),I=1,119)
      DO J=1,119
        IF ( PAR(J).NE.PAR2(J) ) THEN
          GOTO 102
        ELSEIF (J.EQ.119) THEN
          TREP=TREP+1
          GOTO 103
        ENDIF
      END DO
  104 WRITE(29,11) (PAR(I),I=1,119)
  103 CLOSE(27)
      GOTO 101
 9999 WRITE(*,*) "DONE!,", TREP, "duplicated points found!"
      CLOSE(27)
      CLOSE(28)
      CLOSE(29)
   11 FORMAT(200E14.6)
      END
which actually works; it is just very slow. Why? Is there any library that I can use? Sorry for my ignorance, I am completely new to Fortran 77.
For each line you open and close the original file, which is very slow! To speed things up, you could just use rewind.
The main issue, though, is the complexity of your algorithm: O(n^2), since you compare each line with every other line. As a start, I would keep a list of unique rows and compare each new row against that list. If the new row is already in the list, discard it; if not, it is a new unique row. This reduces the complexity to O(n*m), with (hopefully) m << n, where m is the number of unique rows. Sorting the rows would speed up the comparison further.
The next remark would be to move from I/O to memory! Read the complete file into an array, or at least keep the list of unique rows in memory. A 50,000 x 119 double precision array needs about 45 MB of RAM, so this should be feasible ;-)
Write the result back in one piece in a final step.
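A rough free-form sketch of that in-memory approach (so not strict Fortran 77); the maximum row count, the use of list-directed input, and the program name are assumptions:

program dedup_mem
  implicit none
  integer, parameter :: ncol = 119, nmax = 50000
  double precision, allocatable :: rows(:,:)
  double precision :: buf(ncol)
  integer :: nuniq, i, j, ios
  logical :: dup
  character(len=40) :: filename

  call get_command_argument(1, filename)
  allocate (rows(ncol, nmax))           ! roughly 45 MB, kept in memory

  nuniq = 0
  open (27, file=filename, status='old')
  do
     read (27, *, iostat=ios) buf       ! one row of 119 values
     if (ios /= 0) exit
     dup = .false.
     do i = 1, nuniq                    ! compare only against unique rows
        if (all(buf == rows(:, i))) then
           dup = .true.
           exit
        end if
     end do
     if (.not. dup) then
        nuniq = nuniq + 1
        rows(:, nuniq) = buf
     end if
  end do
  close (27)

  ! write the unique rows back in one piece
  open (29, file='result.dat', status='unknown')
  do i = 1, nuniq
     write (29, '(200e14.6)') (rows(j, i), j = 1, ncol)
  end do
  close (29)
end program dedup_mem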
First question: Why stick with Fortran 77? Since g95 and gfortran have come along, there is no real reason to use a standard that has been obsolete for more than twenty years.
The canonical way to remove duplicates is to sort the rows, remove adjacent duplicates, and then output them in the original order. If you use a good sorting algorithm such as quicksort or heapsort, this gives O(n log n) performance.
One additional remark: It is also a good idea to put magic numbers such as 119 in your program into PARAMETER statements.

Fortran randomly writing data in a file

How to write a text or dat file in Fortran like a 2D array of integers, entering one value each time: if a row has no value yet, just insert at the start, but if some values exist, insert after the end of the values. This insertion of values can be random, i.e. maybe line number 100 first, then 80, then 101, then 2. The number of entries in each line is also different.
I also need to use this file at the end, but I think that will be easy as I only need the information line by line.
Edit (what he possibly meant): How to write a text file in Fortran, like a 2D array of integers, adding one value at a time? If a row has no values yet, insert the value at the beginning of the row, but if there are already some values in that row, append the new value to the end of the row.
Have no idea what he was getting at with those random values and line numbers.
If you want to make decisions based on the input, read each line into a string. Then examine the contents of the string and decide which kind of input it is. If you have numbers that you want to read, use an "internal read" to read them from the string. This question has a code example: Reading comment lines correctly in an input file using Fortran 90
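A rough illustration of the string-plus-internal-read idea; the file name, line length, and maximum number of values per row are assumptions:

program internal_read_demo
  implicit none
  character(len=200) :: line
  integer :: vals(20), n, ios, k

  open (10, file='table.txt', status='old')
  do
     read (10, '(a)', iostat=ios) line
     if (ios /= 0) exit                 ! end of file
     if (len_trim(line) == 0) then
        print *, 'empty row'
        cycle
     end if
     ! probe with successively longer lists until the internal read
     ! fails, to find how many integers the line actually holds
     n = 0
     do k = 1, size(vals)
        read (line, *, iostat=ios) vals(1:k)
        if (ios /= 0) exit
        n = k
     end do
     print *, 'row has', n, 'values:', vals(1:n)
  end do
  close (10)
end program internal_read_demo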