Fortran compiler optimization of do loops in include statements

I have a large Fortran code for HPC where several equations are solved over a computational grid. So in many places in the source code there are nested do loops over the entire computational domain, such as
do k = 1,number_of_point_in_z
  do j = 1,number_of_point_in_y
    do i = 1,number_of_point_in_x
      ... some operations ...
    end do
  end do
end do
I would like to replace these lines with a single statement, and the only solution I found is to use the include statement, as
include 'start_domain_cycle'
... some operations ...
include 'end_domain_cycle'
with two files named start_domain_cycle and end_domain_cycle defined as follows
block
  integer :: i, j, k
  do k = 1,number_of_point_in_z
    do j = 1,number_of_point_in_y
      do i = 1,number_of_point_in_x
and
      end do
    end do
  end do
end block
My questions are:
Will the compiler be able to optimize this code when compiling with optimizations (for example -O3 with gfortran)?
Are there better solutions that avoid hard-coding the do loops every time?

The source code in the include files is first inserted where the include statements are, and then the complete code is compiled. So from an optimisation point of view it makes no difference.
A matter of personal preference... I find it more readable to have the whole code together, but this is just my opinion. You could also put the three do statements on a single line (with shorter variable names for readability):
do k = 1,nz ; do j = 1,ny ; do i = 1,nx
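The fully compact version would then look something like this (nz, ny, nx standing in for the longer extent names above; the matching end do statements can share a line in the same way):
do k = 1,nz ; do j = 1,ny ; do i = 1,nx
   ... some operations ...
end do ; end do ; end do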

Related

Parallel simulation gives different results after some time steps when compared with serial and additional parallel runs

I am trying to run a code on vortex simulations in parallel using OpenMP. These are similar to particle simulations where at each time step, the position of a vortex at the next time step has to be computed from its velocity which is determined by the positions of all the other vortices at the current time step. The vortices are deleted once they leave the domain. I compare the number of vortices at each time step for the parallel version of code with the serial version of code, and run each version multiple times.
For the serial versions, vortex counts match exactly at every time step. For the parallel case, all the runs match the serial case for a few tens of time steps, after which each parallel run shows a difference but remains within a 7-10% error bound of the serial case (as can be seen in the result link below). I know that this may be because of round-off errors in the parallel case, owing to the difference in the order of computational steps due to distribution among the different threads, but should the error really be as high as 10%?
I have only used the reduction clause in a parallel do construct. The only parallel region in the whole code is within a function vblob(), which is inside a module that I call from the main code. The functions ixi() and fxi() called within vblob() are outside this module.
function vblob(blobs,xj,gj)
  complex(8), intent(in) :: blobs(:,:), xj
  complex(8) :: delxi, delxic, di, gvic, xi
  real(8), intent(in) :: gj
  real(8) :: vblob(2)
  integer :: p
  gvic = 0.0; delxi = 0.0; delxic = 0.0; di = 0.0; xi = 0.0
  !$omp parallel do private(xi,delxic,delxi,di) shared(xj) reduction(+:gvic)
  do p = 1, size(blobs,1)
    xi = ixi(blobs(p,1))
    delxic = xj-conjg(xi)
    delxi = xj-xi
    di = del*fxi(xi)
    gvic = gvic + real(blobs(p,2))*1/delxic
    if (abs(delxi) .gt. 1.E-4) then
      gvic = gvic + (-1)*real(blobs(p,2))*1/delxi
    end if
  end do
  !$omp end parallel do
  gvic = j*gvic*fxi(xj)/(2*pi)
  vblob(1) = real(gvic)
  vblob(2) = -imag(gvic)
end function vblob
If the way I have constructed the parallel code is wrong, then errors should show up from the first few time steps itself, right?
(As can be seen in this result, the 'blobs' and 'sheets' are just types of vortex elements and the blue line is the total number of elements. P and S stand for Parallel and Serial respectively and R stands for runs. The solid plot markers are the serial code and the hollow ones are the three runs of the parallel code.)
EDIT: When I change the numerical precision of my variables to real(4) instead, the divergence in results happens at an earlier time step than in the real(8) case above. So it is clearly a round-off error issue.
TL;DR: Has anyone else seen such a result over a range of time steps, where the parallel code matches the serial one for the first few time steps and then diverges?
Your code essentially sums up a lot of terms in gvic. Floating-point arithmetic is not associative, that is, (a+b)+c is not the same as a+(b+c) due to rounding. Also, depending on the values and the signs on the terms, there may be a serious loss of precision in each operation. See here for a really mandatory read on the subject.
While the sequential loop computes (given no clever compiler optimisations):
gvic = (...((((g_1 + g_2) + g_3) + g_4) + g_5) + ...)
where g_i is the value added to gvic by iteration i, the parallel version computes:
gvic = t_0 + t_1 + t_2 + ... t_(#threads-1)
where t_i is the accumulated private value of gvic in thread i (threads in OpenMP are 0-numbered even in Fortran). The order in which the different t_is are reduced is unspecified. The OpenMP implementation is free to choose whatever it deems fine. Even if all t_is are summed in order, the result will still differ from the one computed by the sequential loop. Unstable numerical algorithms are exceptionally prone to producing different results when parallelised.
This is something you can hardly avoid completely, but instead learn to control or simply live with its consequences. In many cases, the numerical solution to a problem is an approximation anyway. You should focus on conserved or statistical properties. For example, an ergodic molecular dynamics simulation may produce a completely different phase trajectory in parallel, but values such as the total energy or the thermodynamic averages will be pretty close (unless there is some serious algorithmic error or really bad numerical instability).
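One standard way to "control" the accumulation error, independently of OpenMP, is compensated (Kahan) summation; the answer above does not mention it, so take this as a minimal sketch only, with a generic contribution array g(:) whose name is purely illustrative:
real(8) function kahan_sum(g) result(s)
  real(8), intent(in) :: g(:)
  real(8) :: comp, y, t
  integer :: p
  s = 0.0d0
  comp = 0.0d0              ! running compensation for lost low-order bits
  do p = 1, size(g)
     y = g(p) - comp
     t = s + y
     comp = (t - s) - y     ! recovers the part of y that rounding discarded
     s = t
  end do
end function kahan_sum
Note that aggressive optimisation flags that allow reassociation (e.g. -ffast-math) may simplify the compensation away, so such code is usually compiled with strict floating-point semantics.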
A side note - you are actually lucky to enter this field now, when most CPUs use standard 32- and 64-bit floating-point arithmetic. Years ago, when x87 was a thing, floating-point operations were done with 80-bit internal precision and the end result would depend on how many times a value leaves and re-enters the FPU registers.

Trying to use netlib code (QUADPACK). What is xerror?

I'm trying to figure out how to use quadpack.
In a single folder, I placed the contents of "qag.f plus dependencies" and the code below as qag_test.f:
(maybe this code itself is not very important. This is in fact just a snippet from the quadpack document)
      REAL A,ABSERR,B,EPSABS,EPSREL,F,RESULT,WORK
      INTEGER IER,IWORK,KEY,LAST,LENW,LIMIT,NEVAL
      DIMENSION IWORK(100),WORK(400)
      EXTERNAL F
      A = 0.0E0
      B = 1.0E0
      EPSABS = 0.0E0
      EPSREL = 1.0E-3
      KEY = 6
      LIMIT = 100
      LENW = LIMIT*4
      CALL QAG(F,A,B,EPSABS,EPSREL,KEY,RESULT,ABSERR,NEVAL,
     *         IER,LIMIT,LENW,LAST,IWORK,WORK)
C     INCLUDE WRITE STATEMENTS
      STOP
      END
C
      REAL FUNCTION F(X)
      REAL X
      F = 2.0E0/(2.0E0+SIN(31.41592653589793E0*X))
      RETURN
      END
Using gfortran *.f (installed as MinGW 64bit), I got:
C:\Users\username\AppData\Local\Temp\ccIQwFEt.o:qag.f:(.text+0x1e0): undefined reference to `xerror_'
C:\Users\username\AppData\Local\Temp\cc6XR3D0.o:qage.f:(.text+0x83): undefined reference to `r1mach_'
(and a lot more of the same r1mach_ error)
It seems r1mach is a part of BLAS (and I don't know why it's not packaged in here but obtained as "auxiliary"), but what is xerror?
How do I properly compile this snippet in my environment, Win7 64bit (hopefully without Cygwin)?
Your help is very much appreciated.
xerror is an error reporting routine. Looking at the way it is called, it appears to use Hollerith constants (the ones where "foo" is written as 3hfoo).
      if(ier.ne.0) call xerror(26habnormal return from qag ,
     *                         26,ier,lvl)
xerror in turn calls xerrwv, passing along the arguments (plus a few more).
This was definitely written before Fortran 77 became widespread.
Your best bet is to use a compiler which still supports Hollerith constants, pull in all the dependencies (xerrwv has a few more; I don't know why you didn't get them from netlib) and run everything through the compiler of your choice. Most compilers, including gfortran, support Hollerith constants; just ignore the warnings :-)
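Concretely, once the missing routines (xerror, xerrwv, xerprt, r1mach and whatever they in turn call) have all been downloaded into the same folder as qag_test.f, the command already used in the question should then link, for example:
gfortran *.f -o qag_test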
You will possibly need to modify one routine, that is xerprt. With gfortran, you could write this one as
subroutine xerprt(c,n)
  integer, intent(in) :: n
  character(len=1), dimension(n), intent(in) :: c
  write (*,'(500A)') c
end subroutine xerprt
and put this one into a separate file so that the compiler doesn't catch the rank violation (I know, I know...)

Using OpenMP critical and ordered

I'm quite new to Fortran and OpenMP, but I'm trying to get my bearings. I have a piece of code for calculating variograms which I'm attempting to parallelize. However, I seem to be getting race conditions, as some of the results are off by a thousandth or so.
The problem seems to be the reductions. Using OpenMP reductions works and gives the correct results, but they are not desirable, because the reductions actually happen in another subroutine (I copied the relevant lines into the OpenMP loop for this test). Therefore I put the reductions inside a CRITICAL section, but without success. Interestingly, the problem only occurs for reals, not integers. I have thought about whether or not the order of the additions makes any difference, but it should not produce errors this big.
Just to check, I put everything in the parallel do in an ORDERED block, which (of course) gave the correct results (albeit without any speedup). I also tried putting everything inside a CRITICAL section, but for some reason that did not give the correct results. My understanding is that OpenMP will flush the shared variables upon entering/exiting CRITICAL sections, so there shouldn't be any cache problems.
So my question is: why doesn't a critical section work in this case?
My code is below. All shared variables except np, tm, hm, gam are read-only.
EDIT: I tried to simulate the randomness induced by multiple threads by replacing the do loops with random integers in the same range (i.e. generate a pair i,j in the range of the loops; if they have already been "visited", generate new ones) and to my surprise the results matched. However, upon further inspection it was revealed that I had forgotten to seed the RNG, and the results were correct by coincidence. How embarrassing!
TL;DR: The discrepancies in the results were caused by the ordering of the floating-point additions. Using double precision instead helps.
!$OMP PARALLEL DEFAULT(none) SHARED(nd, x, y, z, nzlag, nylag, nxlag, &
!$OMP& dzlag, dylag, dxlag, nvarg, ivhead, ivtail, ivtype, vr, tmin, tmax, np, tm, hm, gam) num_threads(512)
!$OMP DO PRIVATE(i,j,zdis,ydis,xdis,izl,iyl,ixl,indx,vrh,vrt,vrhpr,vrtpr,variogram_type) !reduction(+:np, tm, hm, gam)
DO i=1,nd
!$OMP CRITICAL (main)
  ! Second loop over the data:
  DO j=1,nd
    ! The lag:
    zdis = z(j) - z(i)
    IF(zdis >= 0.0) THEN
      izl =  INT( zdis/dzlag+0.5)
    ELSE
      izl = -INT(-zdis/dzlag+0.5)
    END IF
    ! ---- SNIP ----
    ! Loop over all variograms for this lag:
    DO cur_variogram=1,nvarg
      variogram_type = ivtype(cur_variogram)
      ! Get the head and tail values:
      indx = i+(ivhead(cur_variogram)-1)*maxdim
      vrh = vr(indx)
      indx = j+(ivtail(cur_variogram)-1)*maxdim
      vrt = vr(indx)
      IF(vrh < tmin.OR.vrh >= tmax.OR. vrt < tmin.OR.vrt >= tmax) CYCLE
      ! ----- PROBLEM AREA -------
      np(ixl,iyl,izl,1) = np(ixl,iyl,izl,1) + 1.   ! <-- This never fails
      tm(ixl,iyl,izl,1) = tm(ixl,iyl,izl,1) + vrt
      hm(ixl,iyl,izl,1) = hm(ixl,iyl,izl,1) + vrh
      gam(ixl,iyl,izl,1) = gam(ixl,iyl,izl,1) + ((vrh-vrt)*(vrh-vrt))
      ! ----- END OF PROBLEM AREA -----
      !CALL updtvarg(ixl,iyl,izl,cur_variogram,variogram_type,vrt,vrh,vrtpr,vrhpr)
    END DO
  END DO
!$OMP END CRITICAL (main)
END DO
!$OMP END DO
!$OMP END PARALLEL
Thanks very much in advance!
If you are using 32-bit floating-point numbers and arithmetic, the difference between 84.26539 and 84.26538, that is a difference of 1 in the least significant digit, is entirely explicable by the non-determinism of parallel floating-point arithmetic. Bear in mind that a 32-bit f-p number only has about 7 decimal digits to play with.
Ordinary floating-point arithmetic is not strictly associative. For real (in the mathematical not Fortran sense) numbers (a+b)+c==a+(b+c) but there is no such rule for floating-point numbers. This is nicely explained in the Wikipedia article on floating-point arithmetic.
The non-determinism arises because, in using OpenMP you surrender control over the ordering of operations to the run-time. A summation of values across threads (such as a reduction on +) leaves the bracketing of the global sum expression to the run-time. It is not even necessarily true that 2 executions of the same OpenMP program will produce the same-to-the-last-bit results.
I suspect that even running an OpenMP program on one thread may produce different results from the equivalent non-OpenMP program. Since knowledge of the number of threads available to an OpenMP executable may be deferred until run-time the compiler will have to create a parallelised executable whether it is eventually run in parallel or not.
High Performance Mark makes an interesting point about floating point and associativity. This can easily be tested (in C).
#include <stdio.h>
int main(void) {
    float a = -1.0E8f, b = 1.0E8f, c = 1.23456f;
    printf("sum %f\n", a+b+c);   //output 1.234560
    printf("sum %f\n", a+(b+c)); //output 0.000000
}
But I would like to point out it is possible to preserve order in OpenMP. I discussed this here C++ OpenMP: Split for loop in even chunks static and join data at the end
Edit:
Actually, I confused commutativity and associativity. If you have an operator which is associative but not commutative, then it is possible to preserve the order with OpenMP as I did in the post above. However, IEEE floating point is commutative but NOT associative, so the only thing that can be done is to break IEEE and treat it as associative.
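For what it is worth, here is a minimal Fortran sketch of that idea (the array x, its size and the 64-slot partial array are illustrative assumptions, not taken from the question): each thread accumulates its own partial sum over a static, contiguous chunk, and the partials are then combined in a fixed thread order, so the result is reproducible for a given thread count.
program ordered_sum
  use omp_lib
  implicit none
  integer, parameter :: n = 1000000
  real(8), allocatable :: x(:)
  real(8) :: partial(0:63), total
  integer :: i, tid, nthreads

  allocate(x(n))
  call random_number(x)
  partial = 0.0d0
  nthreads = 1

!$omp parallel private(tid)
  tid = omp_get_thread_num()
!$omp single
  nthreads = omp_get_num_threads()
!$omp end single
!$omp do schedule(static)
  do i = 1, n
     partial(tid) = partial(tid) + x(i)   ! each thread updates only its own slot
  end do
!$omp end do
!$omp end parallel

  total = 0.0d0
  do tid = 0, nthreads-1                  ! combine partials in a fixed, deterministic order
     total = total + partial(tid)
  end do
  print *, total
end program ordered_sum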

Replace do loop with array notation in fortran

I would like to replace the following do loop with Fortran's intrinsic functions and array notation.
do i=2, n
  do j=2, n
    a=b(j)-b(j-1)
    c(i,j)=a*c(i-1,j)+d(i,j)
  end do
end do
However, as c(i,j) depends on c(i-1,j), none of the following attempts worked, because they do not update c(i,j):
!FORALL(i = 2:n , j = 2:n ) c(i,j)=c(i-1,j)*(b(j)-b(j-1))+d(i,j)
!FORALL(i = 2:n) c(i,2:n)=c(i-1,2:n)*(b(2:n)-b(1:n-1))+d(i,2:n)
!c(2:n,2:n)=RESHAPE( (/(c(i-1,2:n)*(b(2:n)-b(1:n-1))+d(i,2:n),i=2,n)/), (/n-1, n-1/))
!c(2:n,2:n)=RESHAPE((/(((b(j)-b(j-1)) *c(i-1,j)+d(i,j) ,j=2,n),i=2,n)/), (/n-1, n-1/))
!c(2:n,2:n)=spread(b(2:n)-b(1:n-1),ncopies = n-1,dim=1) * c(1:n-1,2:n) +d(2:n,2:n)
This is the best I can get, but it still has a do loop:
do i=2, n
  c(i,2:n)=c(i-1,2:n)*(b(2:n)-b(1:n-1))+d(i,2:n)
end do
Can all do loops be replaced by intrinsic functions and array notation? Or can this one be replaced somehow?
In my experience, nothing beats the traditional do loop. All the array intrinsics create memory and CPU overhead by copying data to temporary space (usually on the stack), reshaping and the like. If you're manipulating large arrays, you may run into out-of-memory issues with the intrinsic functions.
Your best option is to stick to a 2-d loop that has the indexes correctly laid out:
do i=2, n
  e = c(1:n,i-1)
  do j=2, n
    a=b(j)-b(j-1)
    c(j,i)=a*e(j)+d(j,i)
  end do
end do
By swapping the indexes (and making sure your dimension declarations follow), you save on memory traffic: the c(j,i) and d(j,i) references now travel column-wise through memory, whereas the original c(i-1,j) reference cut across columns (and produced paging overhead). So we copy that column into a temporary e array.
I think this will be the fastest....
Starting with
do i=2, n
  do j=2, n
    a=b(j)-b(j-1)
    c(i,j)=a*c(i-1,j)+d(i,j)
  end do
end do
we can quickly eliminate the do loops with some nifty use of SIZE, SPREAD and EOSHIFT:
res = SPREAD(b - EOSHIFT(b,-1),2,SIZE(c,2))*EOSHIFT(c,-1) + d
Aha, turns out that the error I was receiving (V1) was due to my using RESHAPE rather than SPREAD. I fixed this in the current version (V2) and it compiles & works with both ifort and gfortran.

High Performance Fortran (HPF) without directives?

In High Performance Fortran (HPF), I could specify the distribution of arrays involved in a parallel calculation using the DISTRIBUTE directive. For example, the following minimal subroutine will sum two arrays in parallel:
subroutine mysum(x,y,z)
  integer, intent(in)  :: y(10000), z(10000)
  integer, intent(out) :: x(10000)
!HPF$ DISTRIBUTE x(BLOCK), y(BLOCK), z(BLOCK)
  x = y + z
end subroutine mysum
My question is, is the DISTRIBUTE directive necessary? I know in practise this is of little interest, but I'm curious as to whether an unadorned, directive-free, Fortran program could also be a valid HPF program?
I do not believe the DISTRIBUTE directive is necessary, and I never used it.
You can achieve this implicitly by using FORALL statements instead of DO loops where applicable. Originally, DO loops would give explicit order of operation on array elements, whereas FORALL would allow the processor to determine an optimal order at runtime. I do not think this makes much difference nowadays, because modern compilers are able to optimize/vectorize/parallelize DO loops where possible. I cannot tell for sure for other compilers, but I remember using Intel Fortran Compiler to compile and run a program on 2 and 4 processors in parallel without using DISTRIBUTE.
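As a quick illustration of the FORALL suggestion, the subroutine from the question could be written as follows (a sketch only; whether it actually runs in parallel depends entirely on the compiler):
subroutine mysum(x,y,z)
  integer, intent(in)  :: y(10000), z(10000)
  integer, intent(out) :: x(10000)
  integer :: i
  forall (i = 1:10000) x(i) = y(i) + z(i)
end subroutine mysum
Note that FORALL has since been declared obsolescent in favour of DO CONCURRENT, but that postdates HPF.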
However, depending on the processor architecture and compiler, it is best to try out what you have and see what gives you optimal results or efficiency.