Fortran using write(6,*) statement

I'm studying Fortran programs.
When I use the statements below, my code builds fine, but it doesn't run as expected, especially READ(6,*).
What could be the problem and how can I fix it? Thanks!
OPEN(UNIT= 5, FILE='inp.dat')
OPEN(UNIT= 10, FILE='apr1400.dat')
OPEN(UNIT= 11, FILE='ulpu2001.dat')
OPEN(UNIT= 12, FILE='aprslice.dat')
OPEN(UNIT= 6, FILE='HEATFX.dat')
OPEN(UNIT= 13, FILE='HEATFX1.dat')
OPEN(UNIT= 14, FILE='HEATFX2.dat')
OPEN(UNIT= 15, FILE='HEATFX3.dat')
OPEN(UNIT= 7, FILE='out.dat')
OPEN(UNIT= 8, FILE='check.dat')
OPEN(UNIT= 9, FILE='checkout.dat')
READ (5, *)IPLANT
IF(IPLANT.EQ.1)IIP=10
IF(IPLANT.EQ.2)IIP=11
IF(IPLANT.EQ.3)IIP=12
READ (IIP, 250) TITLE
250 FORMAT(A20)
READ (IIP, 300) ISLICE
300 FORMAT(I1)
READ (IIP, 400) RADIUS, XLCYL, DIACYL, DEPTH, GAP, AINLET
IF(ISLICE.EQ.0)READ (IIP, 400) POWER
READ (6, *)HEATFX
IF(HEATFX.EQ.1)llk=13
IF(HEATFX.EQ.2)llk=14
IF(HEATFX.EQ.3)llk=15
READ(llk, 400) HEATFX
READ (IIP, 400) PSYS
READ (IIP, 400) DTSUBI
READ (IIP, 400) XKLOSSI, XKLOSSC
READ (IIP, 405) IPARA

Unit numbers 0, 5, and 6 are typically preconnected to standard error, standard input, and standard output, respectively.
Unit 6 is a special one, and it is very likely the problem here: the code OPENs unit 6 on 'HEATFX.dat' and then READs from it, which clashes with its role as standard output.
In general, use larger numbers for file units. I typically use 100, 101, 102, etc.; in Fortran 2008 and later you can also let the compiler pick a free unit with OPEN(NEWUNIT=...).
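For intuition only, here is the same mistake translated into Python (an analogy, not Fortran; the file name and variable names are illustrative): a stream connected for writing, like the preconnected unit 6, cannot be read from.

```python
import io

# Analogy: "out" plays the role of Fortran's unit 6, preconnected for output.
out = open("demo_unit6.txt", "w")
out.write("HEATFX data\n")

failed = False
try:
    out.read()  # reading a write-only stream raises an error
except io.UnsupportedOperation:
    failed = True
out.close()
```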


Why does nulls_last=False not put the nulls first in Django?

I'm finding that while nulls_last=True works, nulls_last=False doesn't. Example below is in a Django shell.
In [10]: [x.date for x in Model.objects.all().order_by(F('date').asc(nulls_last=True))]
Out[10]:
[datetime.datetime(2020, 3, 10, 16, 58, 7, 768401, tzinfo=<UTC>),
datetime.datetime(2020, 3, 10, 17, 4, 51, 601980, tzinfo=<UTC>),
None,
]
In [11]: [x.last_run_created_at for x in Model.objects.all().order_by(F('date').asc(nulls_last=False))]
Out[11]:
[datetime.datetime(2020, 3, 10, 16, 58, 7, 768401, tzinfo=<UTC>),
datetime.datetime(2020, 3, 10, 17, 4, 51, 601980, tzinfo=<UTC>),
None,
]
In [12]:
I've tried this with both desc() and asc().
The mistake is assuming that the opposite of nulls_last=True is nulls_last=False. It isn't.
nulls_last=True does the following to the query:
SELECT ... ORDER BY ... ASC NULLS LAST
Whereas nulls_last=False just means use the DB default:
SELECT ... ORDER BY ... ASC
What you want instead is to use nulls_first=True or nulls_last=True to request the order explicitly.
This is mentioned in the docs, but perhaps not as explicitly as it could be:
Using F() to sort null values
Use F() and the nulls_first or nulls_last keyword argument to Expression.asc() or desc() to control the ordering of a field's null values. By default, the ordering depends on your database.
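Django aside, the underlying point is that None has no inherent sort position; you have to place it explicitly. A plain-Python sketch of the two explicit orderings (pure illustration, no Django involved):

```python
dates = ["2020-03-10T17:04", None, "2020-03-10T16:58"]

# Explicit NULLS LAST: None sorts after every real value.
nulls_last = sorted(dates, key=lambda d: (d is None, d or ""))
# Explicit NULLS FIRST: None sorts before every real value.
nulls_first = sorted(dates, key=lambda d: (d is not None, d or ""))
```

Without one of the explicit keys, where None lands is simply whatever the backend (here, the sort key you happen to use) decides.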

How does the indexing of the box in this gltf file work?

So from my understanding if I want to render a box using indexed triangles, I would need 8 vertices (for the 8 corner points) and 36 indices (the box has 6 sides, with 2 triangles per side and 3 indices per triangle, 6*2*3=36).
So consider the gltf file found here. It is a correct file and I can see the right amount of vertices and indices. However the indices are:
[0, 1, 2, 3, 2, 1, 4, 5, 6, 7, 6, 5, 8, 9, 10, 11, 10, 9, 12, 13, 14, 15, 14, 13, 16, 17, 18, 19, 18, 17, 20, 21, 22, 23, 22, 21]
if I read them correctly. I thought these numbers would never rise above 7 (as there are only 8 vertices to index). Did I read the file incorrectly, or how does this indexing work?
You did read the file correctly. Except the cube doesn't have 8 vertices: it has 24. This is because, apart from storing position data, the vertices also store normals. OpenGL only supports single indexing, that is, positions, normals, tangents etc. cannot be indexed separately. This means that some vertices need to be duplicated so that each combination of attributes can be indexed. This is explained well here.
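The index list above can be generated mechanically: each of the 6 faces owns 4 consecutive vertices (same positions as a neighbouring face, different normals) and contributes two triangles over them. A small sketch (variable names are mine):

```python
# Per face, the two triangles reuse the face's 4 vertices as
# [0, 1, 2] and [3, 2, 1].
face_pattern = [0, 1, 2, 3, 2, 1]
indices = [4 * face + i for face in range(6) for i in face_pattern]
```

The highest index is 23, matching the 24 vertices stored in the file.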

How to fix MathJax line-breaking?

I'm using a double backslash (\\) for line-breaking; the cursor moves to the next line, but a single backslash (\) is appended to my data.
This is the input I am giving:
Find the median of the given data:"\\ "13, 16, 12, 14, 19, 12, 14, 13, 14"
The output is:
Find the median of the given data: \13, 16, 12, 14, 19, 12, 14, 13, 14.
A single backslash is appended to the data.
Try using \\\\. Your content management system may be treating \ as a special character, and that may turn \\ into \ in the resulting HTML. For example, Markdown usually does that.
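A sketch of why this happens (illustrative, not any specific CMS): an escape-processing pass turns each "\x" pair into "x", so a literal \\ collapses to \ before MathJax ever sees it, and \\\\ survives as \\.

```python
def unescape_backslashes(s):
    # Each "\x" pair becomes "x"; a lone trailing backslash is kept as-is.
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])  # keep only the escaped character
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```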

Remove duplicates based on specific column

I do have a big text file in the following format on my Linux CentOS 7.
430004, 331108, 075, 11, 19, Chunsuttiwat Nattika
431272, 331108, 075, 11, 19, Chunsuttiwat Nattika
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
I would like to remove duplicate lines, including the original occurrence, whenever the second column matches.
Expected Output:
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
Update:
sort + uniq + awk pipeline:
sort -k2,2 file | uniq -f1 -c -w7 | awk '$1==1{ sub(/[[:space:]]*[0-9]+[[:space:]]*/,"",$0); print}'
sort -k2,2 file - sort the file by the 2nd field
uniq -f1 -c -w7 - skip the 1st field (-f1), compare only the first 7 characters of what remains, i.e. the 2nd field (-w7), and prefix each line with its occurrence count (-c)
awk '$1==1{...}' - keep only the lines that occur once ($1==1 checks the count from uniq) and strip the count prefix that uniq added
Using awk, storing every line, counting each line with its 1st field removed, and printing only those whose remainder occurs once:
awk '{R[i++]=$0;$1="";N[$0]++}
END{for(i=0;i<NR;i++){
temp=R[i];sub(/^[[:digit:]]*\, /,"",R[i]);
if(N[" "R[i]]==1){print temp}}}' filename
Output:
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
This is all you need: read the file twice, counting the 2nd-field values on the first pass, then print only the lines whose 2nd field was seen exactly once.
$ awk 'NR==FNR{c[$2]++;next} c[$2]==1' file file
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
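For completeness, the same filtering is easy to express in plain Python: count the 2nd-column values in one pass, then keep only the lines whose value occurs once (a sketch using a few of the sample lines):

```python
from collections import Counter

lines = [
    "430004, 331108, 075, 11, 19, Chunsuttiwat Nattika",
    "431272, 331108, 075, 11, 19, Chunsuttiwat Nattika",
    "435979, 335086, 803, 6, 19, ANNI BRENDA",
    "438141, 325426, 145, 11, 19, Teh Joshua",
]

def second_field(line):
    return line.split(",")[1].strip()

counts = Counter(second_field(line) for line in lines)
unique = [line for line in lines if counts[second_field(line)] == 1]
```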

Is there a way to automate the creation of a new text file that contains a dictionary from a different file?

I'm using Python 2.7
Here I create a set of dictionaries:
day0 = 0
day1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25]
month0 = 0
december = [day0, day1]
calendar = [month0, december]
Then what I want to do is this:
file = open("calendarScript.py", "w")
file.write(calendar) ## Trying to create the calendar in a new doc
file.close()
But I get this error:
TypeError: expected a string or other character buffer object
Is there a way to recreate a dictionary in a new document?
Thank you for your help :)
P.s., I just tried this:
import shutil
shutil.copy(calendar, newFolder)
And got back this error:
TypeError: coercing to Unicode: need string or buffer, list found
Trying to find a way to copy a dict to a new file.
The answer to my problem was "dump". What I was trying to do was dump to a text file. Thanks to @KFL for this response (link below):
Writing a dict to txt file and reading it back?
>>> import json
>>> d = {"one":1, "two":2}
>>> json.dump(d, open("text.txt",'w'))
He also answered what was going to be my next problem:
>>> d2 = json.load(open("text.txt"))
>>> print d2
{u'two': 2, u'one': 1}
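Putting it together, the round-trip also works for the nested-list "calendar" from the question, written here with context managers so the file handles are closed (the file name is illustrative):

```python
import json

# The asker's structure: calendar = [month0, [day0, day1]]
calendar = [0, [0, list(range(26))]]

with open("calendar.txt", "w") as f:
    json.dump(calendar, f)

with open("calendar.txt") as f:
    restored = json.load(f)
```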