Problems with a nested loop, recurrence in the second loop - Fortran

The problem I have encountered is this: I have a nested loop that I need to parallelize. Simplified, the code looks like this:
do i = 1, nelectron
   trigger = 0
   do step = 1, nt
      if (step == 1) then
         call maxwellian(vx1_new, vy1_new, vz1_new, vt)
      end if
      vx1_old = vx1_new
      vy1_old = vy1_new
      vz1_old = vz1_new
      x_old = x_new
      y_old = y_new
      call moveparticle(vx1_old, vy1_old, vz1_old, dt, step)
      call boundary(x_new, y_new, step+1)
      if (mod(step-1,100) == 0) then
         ntrap((step-1)/100) = ntrap((step-1)/100) + 1
      end if
      if (trigger == 1) then
         energy_out = energy_out + energy
         go to 10
      end if
      call energy_track(x_new, y_new, vx1_new, vy1_new, vz1_new, step)
      call track_current(x_new, y_new, vx1_new, vy1_new, vz1_new, step)
      call collision(melectron, mh2, vx1_new, vy1_new, vz1_new, step+1)
   end do
10 call datadump(step)
end do
How do I assign i to different threads, while on each thread the inner step loop is executed in order?
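A minimal sketch of one way to do this with OpenMP, assuming the outer i iterations are independent and that ntrap and energy_out are the only accumulators shared between them (all variable names are taken from the code above; whether energy must be private depends on what energy_track does):

!$omp parallel do private(step, trigger, energy, vx1_old, vy1_old, vz1_old, &
!$omp&   vx1_new, vy1_new, vz1_new, x_old, y_old, x_new, y_new) &
!$omp&   reduction(+: energy_out, ntrap)
do i = 1, nelectron
   trigger = 0
   do step = 1, nt
      ! ... the inner body exactly as in the question; each thread runs its
      ! own step loop serially, so the step-to-step recurrence is preserved
   end do
10 call datadump(step)   ! if datadump does file I/O, wrap it in !$omp critical
end do
!$omp end parallel do

Fortran accepts a whole array such as ntrap in a reduction clause; if your compiler objects, protect the ntrap update with !$omp atomic instead. Any module or SAVE state inside the called subroutines must also be thread-safe.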

Related

Compare the speed of the '.eq.' operator between integer, real and logical in Fortran

Intuitively, I would expect the speed of comparing two values to rank logical > integer > real, so I wrote this test program:
program t
   logical :: a = .true.
   real :: b = 1.021341
   integer :: c = 1
   real(kind=16) :: start, finish, test

   call cpu_time(start)
   test = 0.0
   do while (test < 1000000.)
      if (a .eqv. a) then
         test = test + 0.01
      end if
   end do
   call cpu_time(finish)
   print '("Time for compare logical= ",f6.3," seconds.")', finish-start

   call cpu_time(start)
   test = 0.0
   do while (test < 1000000.)
      if (b .eq. b) then
         test = test + 0.01
      end if
   end do
   call cpu_time(finish)
   print '("Time for compare real= ",f6.3," seconds.")', finish-start

   call cpu_time(start)
   test = 0.0
   do while (test < 1000000.)
      if (c .eq. c) then
         test = test + 0.01
      end if
   end do
   call cpu_time(finish)
   print '("Time for compare integer= ",f6.3," seconds.")', finish-start
end program
but the output is
Time for compare logical= 2.228 seconds.
Time for compare real= 2.200 seconds.
Time for compare integer= 2.200 seconds.
There seem to be no significant differences. Why? I thought the logical comparison should be the fastest. I use gfortran 5.4.0 on Ubuntu 16.04.
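Note that each iteration of these loops also performs a real addition and the loop-bound comparison, so the single compare being timed is only a small fraction of the work, and a compiler is free to fold a comparison of a variable with itself away entirely. A sketch of a variant that gives the comparison more weight, assuming the VOLATILE attribute (Fortran 2003) is available to keep it from being optimized out:

program tcmp
   implicit none
   integer, volatile :: c        ! volatile: the compare cannot be folded away
   integer :: i, hits
   real :: start, finish
   c = 1
   hits = 0
   call cpu_time(start)
   do i = 1, 100000000
      if (c == c) hits = hits + 1   ! the comparison under test
   end do
   call cpu_time(finish)
   print '("Time for compare integer= ",f6.3," seconds.")', finish - start
   print *, hits                    ! use the result so the loop is kept
end program tcmp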

NetCDF files created with Fortran

I use Fortran to create NetCDF files. I have this problem: I have no choice but to use a loop to define some of my variables (and assign the attribute values). Then, when I want to provide the values of the variables (i.e., nf90_put_var), it only recalls the last variable that was defined... I have tried many things to resolve the problem but I didn't succeed. Could someone help me?
Here is a small part of my script:
DO IP=1,N(PTS)
   Param_name='var1'
   params(I,IPTS)=INT(I,IPTS,IP)
   ! Define NetCDF variable
   IERREU = nf90_def_var(ncid, Param_name, nf90_real, dimid, ParVarID)
   IF (IERREU.NE.0) THEN
      CALL check_err(IERREU)
      STOP
   ENDIF
ENDDO

! End define mode
IERREU = nf90_enddef(ncid)
IF (IERREU.NE.0) THEN
   CALL check_err(IERREU)
   STOP
ENDIF

! Write the data in netcdf
IERREU = nf90_put_var(ncid, parvarID, params)
IF (IERREU.NE.0) THEN
   CALL check_err(IERREU)
   STOP
ENDIF
You must store the parVarId for each variable separately. Perhaps store it in an array. You now overwrite it with each call to nf90_def_var.
integer :: ParVarIds(N(PTS))

DO IP=1,N(PTS)
   ...
   IERREU = nf90_def_var(ncid, Param_name, nf90_real, dimid, ParVarIds(IP))
   ...
ENDDO

DO IP=1,N(PTS)
   ...
   IERREU = nf90_put_var(ncid, parVarIds(IP), something)
   ...
ENDDO

MPI groups do not assign the correct rank to processors

I have the following MPI/Fortran code to create two groups: one containing the first 2/3 of the total number of processes, and a second containing the remaining 1/3. It compiles without problem, but when I print out the new rank (in the recently created group), only the second group displays the correct ranks; the processes in the first group show negative numbers.
Do you have any comment about this issue?
Thanks.
program test
   implicit none
   include "mpif.h"
   integer, allocatable :: rs_use(:), ks_use(:)
   integer :: numnodes, myid, mpi_err
   integer :: ijk, new_group, old_group, num_used, used_id
   integer :: proc_rs, proc_ks
   integer :: RSPA_COMM_WORLD   ! Real Space communicator
   integer :: KSPA_COMM_WORLD   ! Recip. Space communicator

   ! initialize mpi
   call MPI_INIT(mpi_err)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, numnodes, mpi_err)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myid, mpi_err)

   proc_rs = 2*numnodes/3        ! nr. of processors for Real Space
   proc_ks = numnodes - proc_rs  ! nr. of processors for Recip. Space
   write(6,*) 'processors rs', proc_rs, 'ks', proc_ks

   ! get our old group from MPI_COMM_WORLD
   call MPI_COMM_GROUP(MPI_COMM_WORLD, old_group, mpi_err)

   ! Real Space group that will contain 2*N/3 processors
   allocate(rs_use(0:proc_rs-1))
   do ijk = 0, proc_rs-1
      rs_use(ijk) = ijk
   end do
   call MPI_GROUP_INCL(old_group, proc_rs, rs_use, new_group, mpi_err)
   ! create the new communicator
   call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, RSPA_COMM_WORLD, mpi_err)
   ! test to see if I am part of new_group
   call MPI_GROUP_RANK(new_group, used_id, mpi_err)

   ! Recip. Space group that will contain N/3 processors
   allocate(ks_use(proc_rs:numnodes-1))
   do ijk = proc_rs, numnodes-1
      ks_use(ijk) = ijk
   end do
   call MPI_GROUP_INCL(old_group, proc_ks, ks_use, new_group, mpi_err)
   ! create the new communicator
   call MPI_COMM_CREATE(MPI_COMM_WORLD, new_group, KSPA_COMM_WORLD, mpi_err)
   ! test to see if I am part of new_group
   call MPI_GROUP_RANK(new_group, used_id, mpi_err)

   if (used_id == 0) write(6,*) 'group ', used_id, myid
end program test
The problem is that only processes belonging to a group have a rank in that group; for everyone else, MPI_GROUP_RANK returns MPI_UNDEFINED, which is what shows up as a negative number. What you have to do is set new_group only on the processes that belong to the corresponding group, and check the new rank only after each process has been included in its own group. For example, use a temporary variable tmp_group in the group creation and assign it only on the processes of that group. For the first call to MPI_COMM_CREATE, you do this:
call MPI_GROUP_RANK(tmp_group, used_id, mpi_err)
if (myid < proc_rs) new_group = tmp_group
For the second call to MPI_COMM_CREATE, you do this:
call MPI_GROUP_RANK(tmp_group, used_id, mpi_err)
if (myid >= proc_rs) new_group = tmp_group
After all of this, you can check the new rank for all:
call MPI_GROUP_RANK(new_group, used_id, mpi_err)
If you choose to check the rank in a group right after you create that group, make sure that only processes belonging to it make the call. But this is not a good idea, as you may then fail to save new_group for the other processes.
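Putting the pieces together, a sketch of the corrected sequence, assuming the declarations from the question plus one extra integer tmp_group:

! Real Space group: build it in tmp_group, keep it only on its members
call MPI_GROUP_INCL(old_group, proc_rs, rs_use, tmp_group, mpi_err)
call MPI_COMM_CREATE(MPI_COMM_WORLD, tmp_group, RSPA_COMM_WORLD, mpi_err)
if (myid < proc_rs) new_group = tmp_group

! Recip. Space group: same pattern for the remaining processes
call MPI_GROUP_INCL(old_group, proc_ks, ks_use, tmp_group, mpi_err)
call MPI_COMM_CREATE(MPI_COMM_WORLD, tmp_group, KSPA_COMM_WORLD, mpi_err)
if (myid >= proc_rs) new_group = tmp_group

! every process now queries the rank of the group it actually belongs to
call MPI_GROUP_RANK(new_group, used_id, mpi_err)
write(6,*) 'group rank', used_id, 'world rank', myid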

Returning to a loop when it doesn't meet a certain condition

So I have a while loop that calculates the value of a variable for me.
If that variable does not meet the condition I want, is it possible to return to the while loop with new values, and keep returning to the loop until I get the condition that I want?
balance = 3329
annualInterestRate = 0.2 / 12
month = 0
min_pay = 10
while month < 12:
    new_bal = balance + (balance * annualInterestRate) - min_pay
    balance = new_bal
    month = month + 1
if balance > 0:
    min_pay += 10
So if by the end of the loop balance > 0, then I want to add 10 to min_pay and go through the loop again with the original values, and I want it to keep going until balance <= 0.
Yes, you can use a nested while loop:
min_pay = 10
while True:
    balance = 3329
    annualInterestRate = 0.2 / 12
    month = 0
    while month < 12:
        new_bal = balance + (balance * annualInterestRate) - min_pay
        balance = new_bal
        month = month + 1
    if balance > 0:
        min_pay += 10
    else:
        break

Breaking from nested while loop

I have two nested while loops in my script, like in the code below:
while next_page is not None:
    while 1:
        try:
            #Do Something
            break
        except:
            pass
Now when I use the break statement, it breaks out of both while loops.
I just want to break out of while 1: and keep while next_page is not None: running until next_page becomes None.
Is this possible? If yes, could someone please advise how to do that?
Thank you.
That break statement only exits that inner loop. A concrete example:
while True:
    print "In outer loop"
    i = 0
    while True:
        print "In inner loop"
        if i >= 5: break
        i += 1
    print "Got out of inner loop, still inside outer loop"
    break
That outputs the following:
In outer loop
In inner loop
In inner loop
In inner loop
In inner loop
In inner loop
In inner loop
Got out of inner loop, still inside outer loop
This leads me to believe there is something else causing your execution to leave the outer loop: either next_page was set to None at some point, or perhaps there is another break floating around.