OpenACC loop private clause and race condition - Fortran

I'm trying to use a worker-private array with OpenACC, but I keep getting wrong results. I suspect there is some kind of race condition going on, but I can't find where.
I'm using the PGI compiler (18.10, OpenPower) and compile with:
pgf90 -gopt -O3 -Minfo=all -Mcuda=ptxinfo -acc -ta=tesla:cc35 main.F90
Here is a minimal example of what I'm trying to achieve:
#define lx 7000
#define ly 500
program test
  implicit none
  integer :: tmp(ly,1), a(lx,ly), b(lx,ly)
  integer :: x,y,i
  do x=1,lx
    do y=1,ly
      a(x,y) = x+y
    end do
  end do
  !$acc parallel num_gangs(1)
  !$acc loop worker private(tmp)
  do x=1,lx
    !$acc loop vector
    do y=1,ly
      tmp(y,1) = -a(x,y)
    end do
    !$acc loop vector
    do y=1,ly
      b(x,y) = -tmp(y,1)
    end do
  end do
  !$acc end parallel
  print *, "check"
  do x=1,lx
    do y=1,ly
      if(b(x,y) /= x+y) print *, x, y, b(x,y), x+y
    end do
  end do
  print*, "end"
end program
What I expected was to get b == a, but that's not the case.
Please note that I defined tmp(ly,1) deliberately, because I get the expected result when I define tmp(ly) as a 1D array. Even though it works with a 1D array, I'm not sure that fully respects the OpenACC standard.
Am I missing something here?
EDIT: The last loop checks whether a == b and prints the values that are wrong. The expected output (which I get with OpenACC disabled) is:
check
end
What I get with OpenACC enabled is something like this (it changes between runs):
check
1 1 5 2
1 2 6 3
1 3 7 4
[...]
end

Looks like a compiler issue where "tmp" is being shared by the workers instead of each worker getting a private copy. This in turn causes a race condition in your code.
I've filed a problem report with PGI (TPR#27025) and sent it to our engineers for further investigation.
The workaround is to use "gang" instead of "worker" on the outer loop or, as you note, to make "tmp" a single-dimension array.
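For example, the gang version of the compute region would look something like this (a sketch; the num_gangs(1) clause is dropped so the outer loop can be spread across gangs, and only the loop directives change):
!$acc parallel
!$acc loop gang private(tmp)
do x=1,lx
  !$acc loop vector
  do y=1,ly
    tmp(y,1) = -a(x,y)
  end do
  !$acc loop vector
  do y=1,ly
    b(x,y) = -tmp(y,1)
  end do
end do
!$acc end parallel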
Update: TPR #27025 was fixed in the PGI 19.7 release.

These two acc loops
!$acc loop vector
do y=1,ly
tmp(y,1) = -a(x,y)
end do
!$acc loop vector
do y=1,ly
b(x,y) = -tmp(y,1)
end do
will be executed on the GPU at the same time; that is, they are executed in parallel. To ensure that tmp is assigned the correct values in the first loop before it is used in the second loop, they have to be placed in different acc parallel constructs.
The correct code will look like:
do x=1,lx
  !$acc parallel loop
  do y=1,ly
    tmp(y,1) = -a(x,y)
  end do
  !$acc parallel loop
  do y=1,ly
    b(x,y) = -tmp(y,1)
  end do
end do

Related

A problem in calling several GPU subroutines sequentially: OpenACC - Fortran

I have the following problem. I have a main subroutine, let us call it main_function (for 3D BSplines). It takes as input several tensors.
This function contains only IF-conditions. If a condition is satisfied, other functions are called; let us call these functions function_a, function_b, and function_c, all of which are parallelizable.
The structure is as follows
subroutine main_function(paras)
  if (1) then
    call function_a
  else if (2) then
    call function_b
  else if (3) then
    call function_c
  end if
end subroutine main_function
with
subroutine function_a(paras)
  !$acc parallel loop present(....)
  do
    heavy parallel calcs
  end do
  output: eta
end subroutine function_a

subroutine function_b(paras)
  !$acc parallel loop present(....)
  do
    heavy parallel calcs
  end do
  output: eta
end subroutine function_b

subroutine function_c(paras)
  !$acc parallel loop present(....)
  do
    heavy parallel calcs
  end do
  output: eta
end subroutine function_c
The subroutines function_a, function_b, and function_c produce a B-spline tensor (eta) as output, calculated on the GPU. I don't want to move this tensor to the host, since it is not needed there. However, after calculating eta on the GPU via main_function, an interpolation subroutine interpolate3D is called to interpolate the function. The definition of interpolate3D is something like
subroutine interpolate3D(eta, x, y, z, fAtxyz)
!$acc routine seq
interpolate ...
end subroutine interpolate3D
To summarize, the pseudo-code is something like
call main_function(paras)
!$acc parallel loop present(x, y, eta, fAtxyz)
do i = 1, N
call interpolate3D(eta, x(i), y(i), z(i), fAtxyz(i))
end do
My problems and questions are:
1) When I don't use '!$acc update self(eta)' before the loop, the results are completely wrong. Does this mean that the present clause doesn't correctly find eta, calculated by main_function, on the GPU, so that one needs to update the host and then recopy it back to the GPU?
2) How can I ensure that interpolate3D runs on the GPU? For example, if I don't have the above loop, does adding '!$acc routine seq' alone ensure that it runs on the GPU and looks up the various quantities there?
3) In fact, when there is no loop, adding '!$acc update self(eta)' is required to get correct results. Does this mean that in this case the subroutine is executed on the CPU?
4) To summarize: if I have two subroutines, where the first chooses between different subroutines based on if-conditions to calculate a vector or tensor and keep it on the GPU (I don't want to update the host), while the second uses this vector to perform some calculations on the GPU, how do I do this correctly with OpenACC?
Sorry for the long post, and thank you very much for your help.
In fact, I have tried different strategies. However, all of them require copying eta to the host before interpolating, even though it is only calculated on the device. There is something I don't understand, since I'm also new to OpenACC.
Cross-posted on NVIDIA's Forum: https://forums.developer.nvidia.com/t/b-splines-on-gpus-openacc-fortran/233053
The issue was an error in the user's code: a "parallel loop" directive was missing, hence the loop was being run on the host rather than offloaded to the GPU.
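For reference, a minimal sketch of the corrected pattern (using the names from the question; the data clauses assume eta, x, y, z and fAtxyz are already present on the device, e.g. via data directives elsewhere):
call main_function(paras)   ! fills eta on the device
!$acc parallel loop present(eta, x, y, z, fAtxyz)
do i = 1, N
   call interpolate3D(eta, x(i), y(i), z(i), fAtxyz(i))   ! acc routine seq, runs on the GPU
end do
! no "!$acc update self(eta)" is needed, since eta never has to leave the device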

Can the compiler vectorize a loop in which an array is indexed by another array?

I would like to vectorize the code below (just an example); assume that, for whatever reason, I have to index one array with another.
PROGRAM TEST
  IMPLICIT NONE
  REAL, DIMENSION(2000):: A,B,C !100000
  INTEGER, DIMENSION(2000):: E
  REAL(KIND=8):: TIME1,TIME2
  INTEGER::I
  DO I=1, 2000              !Actually only this loop could be vectorized
    B(I)=100.00             !by the compiler
    C(I)=200.00
    E(I)=I
  END DO
  !Computing computer's running time (start)
  CALL CPU_TIME (TIME1)
  DO I=1, 2000              !This is the problem, somehow I should put
    A(E(I))=B(E(I))*C(E(I)) !an integer array E(I) inside an array
  END DO                    !I would like to vectorize this loop also, but it didn't work
  PRINT *, 'Results =', A(2000)
  PRINT *, ' '
  !Computing computer's running time (finish)
  CALL CPU_TIME (TIME2)
  PRINT *, 'Elapsed real time = ', TIME2-TIME1, 'second(s)'
END PROGRAM TEST
At first I thought the compiler would understand what I want and somehow vectorize it like this:
DO I=1, 2000, 4 !Unrolled 4 times
  A(E(I))=B(E(I))*C(E(I))
  A(E(I+1))=B(E(I+1))*C(E(I+1))
  A(E(I+2))=B(E(I+2))*C(E(I+2))
  A(E(I+3))=B(E(I+3))*C(E(I+3))
END DO
but I was wrong. I used gfortran -Ofast -o -fopt-info-optimized Tes.F95 and got the report that only the first loop was successfully vectorized.
Do you have any idea how I could vectorize it? Or can't it be vectorized at all?
If E has equal values for different I, then you would be manipulating the same elements of A multiple times, in which case the order could matter. (Though not in your case.) Also, if you have multiple index arrays, like E1, E2 and E3, and
DO I=1, 2000
A(E3(I))=B(E1(I))*C(E2(I))
END DO
the order could matter too. So I think this kind of indexing is not in general allowed in parallel loops.
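To make the duplicate-index point concrete, here is a tiny hypothetical example (not taken from the question):
PROGRAM DUP_DEMO
  IMPLICIT NONE
  INTEGER:: A(5), E(2), I
  A = 0
  E = (/ 5, 5 /)       ! both iterations target the same element of A
  DO I = 1, 2
     A(E(I)) = I       ! executed in order, A(5) ends up as 2;
  END DO               ! executed out of order, it could end up as 1
  PRINT *, A(5)
END PROGRAM DUP_DEMO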
With ifort one can use !DIR$ IVDEP, which means "ignore vector dependence". It only works when E(I) is linear, as in the example...
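For illustration only, this is where the directive would go (a sketch; it merely asserts to ifort that the apparent dependence may be ignored, it does not make the pattern safe in general):
!DIR$ IVDEP
DO I=1, 2000
   A(E(I))=B(E(I))*C(E(I))
END DO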
Assuming one wants to touch all the indices anyway, just replace (E(I)) with (I) and work out the obvious E(I) ordering later...

Results of parallel program with nested loops differ from serial program

I would like to use OpenMP for this single thread code:
PROGRAM SINGLE
  INTEGER, DIMENSION(30000)::SUMGRM
  INTEGER, DIMENSION(90000)::GRI,H
  REAL*8::HSTEP1X,HSTEP2X
  REAL*8::TIME1,TIME2
  !Just initial value
  DO I=1, 30000
    SUMGRM(I)=I*3
  END DO
  DO I=1, 90000
    GRI(I)=I
    H(I)=0.5*I/10000
  END DO
  !Computing computer's running time (start) : for serial programming
  CALL CPU_TIME(TIME1)
  DO K=1, 50000
    DO I=2, 30000
      HSTEP1X=0.0
      DO J=SUMGRM(I-1)+1, SUMGRM(I)-1
        HSTEP2X=H(GRI(J))/0.99
        HSTEP1X=HSTEP1X+HSTEP2X
      END DO
      HSTEP2X=H(GRI(SUMGRM(I)))/0.99
      HSTEP1X=HSTEP1X+HSTEP2X
    END DO
  END DO
  PRINT *, 'Results =', HSTEP1X
  PRINT *, ' '
  !Computing computer's running time (finish) : for serial programming
  CALL CPU_TIME(TIME2)
  PRINT *, 'Elapsed real time = ', TIME2-TIME1, 'second(s)'
END PROGRAM SINGLE
As you can see, the main difficulty is the innermost loop (over J), whose bounds depend on the outer loop (over I). I've tried to parallelize this program like this:
PROGRAM PARALLEL
  INTEGER, DIMENSION(30000)::SUMGRM
  INTEGER, DIMENSION(90000)::GRI,H
  REAL*8::HSTEP1X,HSTEP2X
  REAL*8::TIME1,TIME2,OMP_GET_WTIME
  INTEGER::Q2,P2
  !Just initial value
  DO I=1, 30000
    SUMGRM(I)=I*3
  END DO
  DO I=1, 90000
    GRI(I)=I
    H(I)=0.5*I/10000
  END DO
  !Computing computer's running time (start) : for parallel programming
  TIME1= OMP_GET_WTIME()
  DO K=1, 50000
    !$OMP PARALLEL DO PRIVATE (HSTEP1X,Q2,P2)
    DO I=2, 30000
      HSTEP1X=0.0
      Q2=SUMGRM(I-1)+1
      P2=SUMGRM(I)-1
      DO J=Q2, P2
        HSTEP2X=H(GRI(J))/0.99
        HSTEP1X=HSTEP1X+HSTEP2X
      END DO
      HSTEP2X=H(GRI(SUMGRM(I)))/0.99
      HSTEP1X=HSTEP1X+HSTEP2X
    END DO
    !$OMP END PARALLEL DO
  END DO
  PRINT *, 'Results =', HSTEP1X
  PRINT *, ' '
  !Computing computer's running time (finish) : for parallel programming
  TIME2= OMP_GET_WTIME()
  PRINT *, 'Elapsed real time = ', TIME2-TIME1, 'second(s)'
END PROGRAM PARALLEL
I'm compiling with gfortran -O3 -fopenmp and then export OMP_NUM_THREADS=... The parallel program runs faster, but its result differs from the single-thread code: the serial program gives 12.1212 (which is the correct value), while the parallel one gives 0.000 (so something must be wrong).
What did I do wrong?
Firstly, note that by default you're likely to find that both j and hstep2x are shared between threads. I don't think that is really what you want, as it will lead to some very odd behaviour where multiple threads use the same iteration index while trying to loop over different ranges.
Next, note that your serial code actually just prints the result of the i=30000 iteration, since the value of hstep1x is reset to 0 at the start of each iteration. As such, to get the "correct" answer in the OpenMP code we could just focus on reproducing that final iteration -- which, I think, completely negates the point of using OpenMP here. I'm guessing this is just a simple case meant to represent your real problem, and that some of the real problem got lost in producing it.
Nevertheless, the code below produces the "correct" answer on my machine. I'm not sure how flexible it is, but it works here.
PROGRAM PARALLEL
  INTEGER, DIMENSION(30000)::SUMGRM
  INTEGER, DIMENSION(90000)::GRI,H
  REAL*8::HSTEP1X,HSTEP2X
  REAL*8::TIME1,TIME2,OMP_GET_WTIME
  INTEGER::Q2,P2
  !Just initial value
  DO I=1, 30000
    SUMGRM(I)=I*3
  END DO
  DO I=1, 90000
    GRI(I)=I
    H(I)=0.5*I/10000
  END DO
  !Computing computer's running time (start) : for parallel programming
  TIME1= OMP_GET_WTIME()
  DO K=1, 50000
    !$OMP PARALLEL DO PRIVATE (Q2,P2,J,HSTEP2X) DEFAULT(SHARED) LASTPRIVATE(HSTEP1X)
    DO I=2, 30000
      HSTEP1X=0.0
      Q2= SUMGRM(I-1)+1
      P2= SUMGRM(I)-1
      DO J=Q2,P2
        HSTEP2X=H(GRI(J))/0.99
        HSTEP1X=HSTEP1X+HSTEP2X
      END DO
      HSTEP2X=H(GRI(SUMGRM(I)))/0.99
      HSTEP1X=HSTEP1X+HSTEP2X
    END DO
    !$OMP END PARALLEL DO
  END DO
  PRINT *, 'Results =', HSTEP1X
  PRINT *, ' '
  !Computing computer's running time (finish) : for parallel programming
  TIME2= OMP_GET_WTIME()
  PRINT *, 'Elapsed real time = ', TIME2-TIME1, 'second(s)'
END PROGRAM PARALLEL
I've done three things here:
Made sure j and hstep2x are private to each thread.
Explicitly declared the default behaviour to be shared (not needed here, but never mind).
Specified that hstep1x is lastprivate, which means that after exiting the parallel region the value of hstep1x is the one from the thread that executed the sequentially last iteration.
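To see the lastprivate behaviour in isolation, here is a tiny standalone toy (hypothetical, not part of the question's program):
program lastpriv_demo
  implicit none
  integer :: i, v
  !$omp parallel do lastprivate(v)
  do i = 1, 8
     v = i*i            ! each thread updates its own private copy of v
  end do
  !$omp end parallel do
  print *, v            ! v now holds the value from the sequentially last iteration, i.e. 64
end program lastpriv_demo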
Have you tried using
!$OMP PARALLEL DO DEFAULT(PRIVATE) REDUCTION(+:HSTEP1X)

Two OpenMP ordered blocks with no resulting parallelization

I am writing a Fortran program that needs to have reproducible results (for publication). My understanding of the following code is that it should be reproducible.
program main
  implicit none
  real(8) :: ybest,xbest,x,y
  integer :: i
  ybest = huge(0d0)
  !$omp parallel do ordered private(x,y) shared(ybest,xbest) schedule(static,1)
  do i = 1,10
     !$omp ordered
     !$omp critical
     call random_number(x)
     !$omp end critical
     !$omp end ordered
     ! Do a lot of work
     call sleep(1)
     y = -1d0
     !$omp ordered
     !$omp critical
     if (y<ybest) then
        ybest = y
        xbest = x
     end if
     !$omp end critical
     !$omp end ordered
  end do
  !$omp end parallel do
end program
In my case, there is a function in place of "sleep" that takes a long time to compute, and I want it done in parallel. According to the OpenMP standard, should "sleep" in this example execute in parallel? I thought it should (based on this question: How does the omp ordered clause work?), but with gfortran 5.2.0 (macOS) and gfortran 5.1.0 (Linux) it is not executing in parallel (at least, there is no speedup from it). The timing results are below.
Also, my guess is the critical statements are not necessary, but I wasn't completely sure.
Thanks.
-Edit-
In response to Vladmir's comments, I added a full working program with timing results.
#!/bin/bash
mpif90 main.f90
time ./a.out
mpif90 main.f90 -fopenmp
time ./a.out
The code runs as
real 0m10.047s
user 0m0.003s
sys 0m0.003s
real 0m10.037s
user 0m0.003s
sys 0m0.004s
BUT, if you comment out the ordered blocks, it runs with the following times:
real 0m10.044s
user 0m0.002s
sys 0m0.003s
real 0m3.021s
user 0m0.002s
sys 0m0.004s
Edit -
In response to innoSPG, here are the results for a non-trivial function in place of sleep:
real(8) function f(x)
  implicit none
  real(8), intent(in) :: x
  ! local
  real(8) :: tmp
  integer :: i
  tmp = 0d0
  do i = 1,10000000
     tmp = tmp + cos(sin(x))/real(i,8)
  end do
  f = tmp
end function
real 0m2.229s --- no openmp
real 0m2.251s --- with openmp and ordered
real 0m0.773s --- with openmp but ordered commented out
This program is non-conforming to the OpenMP standard. Specifically, the problem is that you have more than one ordered region and every iteration of your loop will execute both of them. The OpenMP 4.0 standard has this to say (2.12.8, Restrictions, line 16, p 139):
During execution of an iteration of a loop or a loop nest within a loop region, a thread must not execute more than one ordered region that binds to the same loop region.
If you have more than one ordered region, you must have conditional code paths such that only one of them can be executed for any loop iteration.
It is also worth noting that the position of your ordered region seems to have performance implications. Testing with gfortran 5.2, it appears that everything after the ordered region is executed in order for each loop iteration, so having the ordered block at the beginning of the loop leads to serial performance, while having it at the end of the loop does not, since the code before the block is parallelized. Testing with ifort 15 is not as dramatic, but I would still recommend structuring your code so that the ordered block occurs after any code that needs parallelization in a loop iteration, rather than before.
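One way to restructure along those lines (a sketch building on the program above; the array xs is something I introduce for illustration) is to draw the random numbers up front, so that only a single ordered region remains, placed after the expensive work:
program main
  implicit none
  real(8) :: ybest, xbest, xs(10), y
  integer :: i
  ybest = huge(0d0)
  call random_number(xs)                 ! draw all inputs serially and reproducibly
  !$omp parallel do ordered private(y) shared(ybest,xbest,xs) schedule(static,1)
  do i = 1,10
     call sleep(1)                       ! the expensive work, now actually parallel
     y = -1d0
     !$omp ordered                       ! the single ordered region, after the work
     if (y < ybest) then
        ybest = y
        xbest = xs(i)
     end if
     !$omp end ordered
  end do
  !$omp end parallel do
end program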

Thread issues when writing to files with OpenMP in Fortran

The number of files that get written is always less than the number of threads. Logically, when I have 4 threads and the CPU is working at 400%, I expected 4 files (one corresponding to each thread). I don't know whether there is a problem with my code or whether this is how it is supposed to work. The code is as follows:
!!!!!!!! module
module common
  use iso_fortran_env
  implicit none
  integer,parameter:: dp=real64
  real(dp):: aa,bb
contains
  subroutine evolve(y,yevl)
    implicit none
    integer(dp),parameter:: id=2
    real(dp),intent(in):: y(id)
    real(dp),intent(out):: yevl(id)
    yevl(1)=y(2)+1.d0-aa*y(1)**2
    yevl(2)=bb*y(1)
  end subroutine evolve
end module common

use common
implicit none
integer(dp):: iii,iter,i
integer(dp),parameter:: id=2
real(dp),allocatable:: y(:),yt(:)
integer(dp):: OMP_GET_THREAD_NUM, IXD
allocate(y(id)); allocate(yt(id)); y=0.d0; yt=0.d0; bb=0.3d0
!$OMP PARALLEL PRIVATE(iii,iter,y,i,yt) SHARED(bb)
IXD=OMP_GET_THREAD_NUM()
!$OMP DO
do iii=1,20000; print*,iii !! EXPECTED THREADS TO BE OF 5000 ITERATIONS EACH
   aa=1.d0+dfloat(iii-1)*0.4d0/80000.d0
   loop1: do iter=1,10 !! THE INITIAL CONDITION LOOP
      call random_number(y) !! RANDOM INITIALIZATION OF THE VARIABLE
      loop2: do i=1,70000 !! ITERATION OF THE SYSTEM
         call evolve(y,yt)
         y=yt
      enddo loop2 !! END OF SYSTEM ITERATION
      write(IXD+1,*)aa,yt !!! WRITING FILE CORRESPONDING TO EACH THREAD
   enddo loop1 !! INITIAL CONDITION ITERATION DONE
enddo
!$OMP ENDDO
!$OMP END PARALLEL
end
Is this behavior the result of some race condition in the code? The code compiles and executes just fine, without any warnings or errors, with ifort 13.1.0 on Ubuntu. Thanks a bunch for any comments or suggestions.
The variable IXD should be explicitly declared as private to make sure every thread has its own copy of it. Changing the line(s)
!$OMP PARALLEL PRIVATE(iii,iter,y,i,yt) SHARED(bb)
IXD=OMP_GET_THREAD_NUM()
to
!$OMP PARALLEL PRIVATE(iii,iter,y,i,yt,ixd) SHARED(bb)
IXD=OMP_GET_THREAD_NUM()
solves the problem.