OpenMP: how to protect an array from race condition - fortran

This is a follow-up to questions 36182486, 41421437, and several others. I want to speed up the assembly of the skewness and mass matrices for a FEM calculation by using multiple processors to deal with individual elements in parallel. This little MWE shows the guts of the operation.
!! compile with gfortran -fopenmp -o FEMassembly FEMassembly.f90
Program FEMassembly
  use, intrinsic :: iso_c_binding
  implicit none
  real (c_double) :: arrayM(3,3)=reshape((/2.d0,1.d0,1.d0,1.d0,&
       &2.d0,1.d0,1.d0,1.d0,2.d0/),(/3,3/)) ! contrib from one element
  integer (c_int) :: ke,ne=4,kx,nx=6,nodes(3)
  real (c_double) :: L(6,6)
  integer (c_int) :: t(4,3)=reshape((/1,2,5,6,2,3,4,5,4,5,2,3/),(/4,3/))

  !! first, no OMP
  do ke=1,ne ! for each triangular element
     nodes=t(ke,:)
     L(nodes,nodes)=L(nodes,nodes)+arrayM
  end do

  print *,'L no OMP'
  write(*,fmt="(6(1x,f3.0))")(L(kx,1:6),kx=1,nx)

  L=0

  !$omp parallel do private (nodes)
  do ke=1,ne ! for each triangular element
     nodes=t(ke,:)
!! !$omp atomic
     L(nodes,nodes)=L(nodes,nodes)+arrayM
!! !$omp end atomic
  end do
  !$omp end parallel do

  print *,'L with OMP and race'
  write(*,fmt="(6(1x,f3.0))")(L(kx,1:6),kx=1,nx)
End Program FEMassembly
With the atomic directives commented out, the array L contains several wrong values, presumably because of the race condition I was trying to avoid with the atomic directives. The results are:
L no OMP
2. 1. 0. 1. 0. 0.
1. 6. 1. 2. 2. 0.
0. 1. 4. 0. 2. 1.
1. 2. 0. 4. 1. 0.
0. 2. 2. 1. 6. 1.
0. 0. 1. -0. 1. 2.
L with OMP and race
2. 1. 0. 1. 0. 0.
1. 6. 1. 2. 2. 0.
0. 1. 2. 0. 2. 1.
1. 2. 0. 4. 1. 0.
0. 2. 2. 1. 6. 1.
0. 0. 1. 0. 1. 2.
If the "atomic" directives are uncommented, the compiler return the error:
Error: !$OMP ATOMIC statement must set a scalar variable of intrinsic type at (1)
where (1) points to arrayM in the line L(nodes,nodes).....
What I am hoping to achieve is to have the time-consuming contributions from each element (here the trivial arrayM) computed in parallel, but since several threads address the same matrix elements, something has to be done so that the sums occur in an orderly fashion. Can anyone suggest a way to do this?

In Fortran the simplest way is to use a reduction, because OpenMP for Fortran supports reductions on arrays. Below is what I think you are trying to do, but take it with a pinch of salt, because:
You don't provide the correct output, so it is difficult to test.
With such a small array, race conditions are sometimes difficult to find.
!! compile with gfortran -fopenmp -o FEMassembly FEMassembly.f90
Program FEMassembly
  use, intrinsic :: iso_c_binding
  Use omp_lib, Only : omp_get_num_threads
  implicit none
  real (c_double) :: arrayM(3,3)=reshape((/2.d0,1.d0,1.d0,1.d0,&
       &2.d0,1.d0,1.d0,1.d0,2.d0/),(/3,3/)) ! contrib from one element
  integer (c_int) :: ke,ne=4,nodes(3)
  real (c_double) :: L(6,6)
  integer (c_int) :: t(4,3)=reshape((/1,2,5,6,2,3,4,5,4,5,2,3/),(/4,3/))
  ! Not declared in original program
  Integer :: nx, kx

  ! Not set in original program
  nx = Size( L, Dim = 1 )

  !$omp parallel default( none ) private ( ke, nodes ) shared( ne, t, L, arrayM )
  !$omp single
  Write( *, * ) 'Working on ', omp_get_num_threads(), ' threads'
  !$omp end single
  !$omp do reduction( +:L )
  do ke=1,ne ! for each triangular element
     nodes=t(ke,:)
     L(nodes,nodes)=L(nodes,nodes)+arrayM
  end do
  !$omp end do
  !$omp end parallel

  write(*,fmt="(6(1x,f3.0))")(L(kx,1:6),kx=1,nx)
End Program FEMassembly
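If you do want to stay with atomic instead, keep in mind that the directive applies only to the update of a single scalar, so you would have to update one entry of L at a time. A sketch of that variant (i and j are extra integer loop indices you would need to declare; a reduction is usually faster, but this also removes the race without per-thread copies of L):
  !$omp parallel do private(nodes, i, j)
  do ke = 1, ne                      ! for each triangular element
     nodes = t(ke,:)
     do j = 1, 3
        do i = 1, 3
           !$omp atomic
           L(nodes(i), nodes(j)) = L(nodes(i), nodes(j)) + arrayM(i, j)
        end do
     end do
  end do
  !$omp end parallel do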

Related

OpenMP parallelization of two do-loops with condition on indexes

I have tried to parallelize a code that contains such a double do-loop. It is certainly not efficient, but that is not a big problem right now.
The first problem is that the output tauv is NaN.
The second problem is that the Intel compiler gives a fatal error when the number of threads is less than the maximum number of threads (8 on my machine).
How can I deal with these problems?
!$omp parallel do private(i,j, ro11,ro21,ro12,ro22, &
!$omp&                    u11,u21,u12,u22, &
!$omp&                    v11,v21,v12,v22, &
!$omp&                    es11,es21,es12,es22, &
!$omp&                    p11,p21,p12,p22, &
!$omp&                    te11,te21,te12,te22, &
!$omp&                    emu11,emu21,emu12,emu22) &
!$omp& shared(i1l, i2l, j1l, j2l, emumax, tauv, tauvij, ro, u, v, es)
do i=i1l+2,i2l-2,2
   do j=j1l+2,j2l-2,2
      if (i.le.niii.and.i.ge.0.and.j.ge.0.and.j.le.nj.or.&
          i.le.ni.and.i.ge.niik.and.j.gt.njjv.and.j.le.nj.or.&
          i.le.ni.and.i.ge.niik.and.j.ge.0.and.j.lt.njjn&
          .or.i.gt.niii.and.i.lt.niik.and.j.gt.njj0+i-niii&
          .or.i.gt.niii.and.i.lt.niik.and.j.lt.njj0-i+niii) then

         ro11=ro(i-1,j-1)
         ro21=ro(i+1,j-1)
         ro12=ro(i-1,j+1)
         ro22=ro(i+1,j+1)

         u11=u(i-1,j-1)
         u21=u(i+1,j-1)
         u12=u(i-1,j+1)
         u22=u(i+1,j+1)

         v11=v(i-1,j-1)
         v21=v(i+1,j-1)
         v12=v(i-1,j+1)
         v22=v(i+1,j+1)

         es11=es(i-1,j-1)
         es21=es(i+1,j-1)
         es12=es(i-1,j+1)
         es22=es(i+1,j+1)

         p11=(es11-0.5*ro11*(u11*u11+v11*v11))*ga1
         p21=(es21-0.5*ro21*(u21*u21+v21*v21))*ga1
         p12=(es12-0.5*ro12*(u12*u12+v12*v12))*ga1
         p22=(es22-0.5*ro22*(u22*u22+v22*v22))*ga1

         te11=p11/ro11
         te21=p21/ro21
         te12=p12/ro12
         te22=p22/ro22

         emu11=te11**1.5*(1.0+s1)/(te11+s1)
         emu21=te21**1.5*(1.0+s1)/(te21+s1)
         emu12=te12**1.5*(1.0+s1)/(te12+s1)
         emu22=te22**1.5*(1.0+s1)/(te22+s1)

         emumax=emu11
         if (emu21.gt.emumax) then
            emumax=emu21
         end if
         if (emu12.gt.emumax) then
            emumax=emu12
         end if
         if (emu22.gt.emumax) then
            emumax=emu22
         end if

         tauvij=re*flkv*hx*hx/emumax
         if (tauvij .le. tauv) then
            tauv=tauvij
         endif

      endif
   enddo
enddo
!$omp end parallel do
The thing is, it executes without error, but the OpenMP do-loop runs more slowly than the sequential one...
From your reproducible example:
1.) Your code is using only 1 thread in the OpenMP region:
! Set number of threads
nthreads = 1
call omp_set_num_threads(nthreads)
print *, 'The number of threads are used is ', omp_get_max_threads ( )
I would avoid the call to omp_set_num_threads(). Instead, specify the number of threads with the environment variable OMP_NUM_THREADS. On a Unix machine: export OMP_NUM_THREADS=<number of threads>
2.) In your "reproducible" example, the parallelized loop (line 312) is missing the private/shared declarations. Based on what you wrote above, fix it to:
!$omp parallel do default(private) shared(i1l, i2l, j1l, j2l, emumax, tauv, tauvij, ro, u, v, es)
With all of the above, the result I get on my machine (4c/4t) using the GNU Fortran compiler is:
...
Executed time in SEQ code is 60.2720146
...
Executed time in OMP code is 27.1342430
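As an aside, since tauv is just a running minimum over the loop, another option is OpenMP's min reduction (available since OpenMP 3.1), which avoids having tauv shared at all. A toy sketch of the pattern only (illustrative names, not a drop-in replacement for the loop above):
tauv = huge(tauv)
!$omp parallel do private(tauvij) reduction(min:tauv)
do i = 1, n
   tauvij = work(i)          ! stand-in for the per-cell estimate re*flkv*hx*hx/emumax
   tauv = min(tauv, tauvij)
end do
!$omp end parallel do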

Thread Segmentation fault when calling function in loop with OpenMP

I'm trying to use OpenMP in Fortran 90 to parallelize a do loop with a function call inside it. The code listed first runs fine. The code listed second does not: I receive a segmentation fault.
First program: $ gfortran -O3 -o output -fopenmp OMP10.f90
program OMP10
!$ use omp_lib
  IMPLICIT NONE
  integer, parameter :: n = 100000
  integer :: i
  real(kind = 8) :: sum,h,x(0:n),f(0:n),ZBQLU01

!$ call OMP_set_num_threads(4)
  h = 2.d0/dble(n)

!$OMP PARALLEL DO PRIVATE(i)
  do i = 0,n
     x(i) = -1.d0+dble(i)*h
     f(i) = 2.d0*x(i)
  end do
!$OMP END PARALLEL DO

  sum = 0.d0
!$OMP PARALLEL DO PRIVATE(i) REDUCTION(+:SUM)
  do i = 0,n-1
     sum = sum + h*f(i)
  end do
!$OMP END PARALLEL DO

  write(*,*) "The integral is ", sum
end program OMP10
Second program: $ gfortran -O3 -o output -fopenmp randgen.f OMP10.f90
program OMP10
!$ use omp_lib
  IMPLICIT NONE
  integer, parameter :: n = 100000
  integer :: i
  real(kind = 8) :: sum,h,x(0:n),f(0:n),ZBQLU01

!$ call OMP_set_num_threads(4)
  h = 2.d0/dble(n)

!$OMP PARALLEL DO PRIVATE(i)
  do i = 0,n
     x(i) = ZBQLU01(0.d0)
  end do
!$OMP END PARALLEL DO

  sum = 0.d0
!$OMP PARALLEL DO PRIVATE(i) REDUCTION(+:SUM)
  do i = 0,n-1
     sum = sum + h*f(i)
  end do
!$OMP END PARALLEL DO

  write(*,*) "The integral is ", sum
end program OMP10
In the above command, randgen.f is a library that contains the function ZBQLU01.
You cannot just call any function from a parallel region. The function must be thread safe. See What is meant by "thread-safe" code? and https://en.wikipedia.org/wiki/Thread_safety.
Your function is quite the opposite of thread safe, as is typical for random number generators. Just notice the SAVE statements in its source code, for many local variables and for a common block.
The solution is to use a good parallel random number generator. This site is not for software recommendations, but as a pointer, just search the web for "parallel PRNG" or "parallel random number generator". I personally use a library I already pointed to in https://stackoverflow.com/a/38263032/721644. A simple web search reveals another simple possibility at https://jblevins.org/log/openmp. And then there are many larger and more complex libraries.
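For example, here is a minimal sketch of the per-thread idea using the intrinsic generator in place of ZBQLU01. It assumes a compiler whose RANDOM_NUMBER keeps per-thread generator state (recent gfortran documents this); the seeding scheme is purely illustrative:
program OMP10_rng
!$ use omp_lib
  IMPLICIT NONE
  integer, parameter :: n = 100000
  integer :: i, nseed, tid
  integer, allocatable :: seed(:)
  real(kind = 8) :: x(0:n)

!$OMP PARALLEL PRIVATE(i, tid, nseed, seed)
  tid = 0
!$ tid = omp_get_thread_num()
  call random_seed(size = nseed)
  allocate(seed(nseed))
  seed = 12345 + 17*tid            ! a distinct, reproducible seed per thread
  call random_seed(put = seed)
!$OMP DO
  do i = 0, n
     call random_number(x(i))      ! uniform value in [0,1) from this thread's stream
  end do
!$OMP END DO
  deallocate(seed)
!$OMP END PARALLEL

  write(*,*) "first and last values: ", x(0), x(n)
end program OMP10_rng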

What is the best way to reduce an array of arrays using OpenMP?

I am using OpenMP with Fortran. I have boiled my use case down to a very simple example: I have an array of objects of a custom derived type, and each object contains an array of a different size. I want to make sure that, whatever happens inside the loop, a reduction is applied to all of the values array components of the vector objects:
program main
  implicit none

  integer :: i

  type vector
     real,allocatable :: values(:)
  end type vector

  type(vector) :: vectors(3)

  allocate(vectors(1)%values(3))
  vectors(1)%values = 0
  allocate(vectors(2)%values(6))
  vectors(2)%values = 0
  allocate(vectors(3)%values(9))
  vectors(3)%values = 0

  !$OMP PARALLEL REDUCTION(+:vectors%values)
  !$OMP DO
  do i=1,1000
     vectors(1)%values = vectors(1)%values + 1
     vectors(2)%values = vectors(2)%values + 2
     vectors(3)%values = vectors(3)%values + 3
  end do
  !$OMP END DO
  !$OMP END PARALLEL

  print*,sum(vectors(1)%values)
  print*,sum(vectors(2)%values)
  print*,sum(vectors(3)%values)
end program main
In this case, REDUCTION(+:vectors%values) doesn't work because I get the following errors:
test2.f90(22): error #6159: A component cannot be an array if the encompassing structure is an array. [VALUES]
!$OMP PARALLEL REDUCTION(+:vectors%values)
-------------------------------------^
test2.f90(22): error #7656: Subobjects are not allowed in this OpenMP* clause; a named variable must be specified. [VECTORS]
!$OMP PARALLEL REDUCTION(+:vectors%values)
-----------------------------^
compilation aborted for test2.f90 (code 1)
I tried overloading the meaning of + for the vector type and then specifying REDUCTION(+:vectors), but then I still get:
test.f90(43): error #7621: The data type of the variable is not defined for the operator or intrinsic specified on the OpenMP* REDUCTION clause. [VECTORS]
!$OMP PARALLEL REDUCTION(+:vectors)
-----------------------------^
What is the recommended way to deal with derived types such as these and get the reduction to work?
Just for reference, the correct output when compiling without OpenMP is
3000.000
12000.00
27000.00
This is not just an OpenMP problem: you cannot reference vectors%values as one entity when values is an allocatable array component, because the rules of Fortran 2003 forbid it. Such an array would not have any regular stride in memory; the allocatable components are stored at unrelated addresses.
If the number of elements of the encompassing array is small you can do
!$OMP PARALLEL REDUCTION(+:vectors(1)%values,vectors(2)%values,vectors(3)%values)
!$OMP DO
do i=1,1000
   vectors(1)%values = vectors(1)%values + 1
   vectors(2)%values = vectors(2)%values + 2
   vectors(3)%values = vectors(3)%values + 3
end do
!$OMP END DO
!$OMP END PARALLEL
otherwise you must make another loop, say over j, and reduce just vectors(j)%values.
If the compiler does not accept structure components in the reduction clause (one would have to check the latest standard to see whether that restriction has been relaxed), you can use a workaround:
!$OMP PARALLEL
do j = 1, size(vectors)
   call aux(vectors(j)%values)
end do
!$OMP END PARALLEL

contains

  subroutine aux(v)
    real :: v(:)

    !$OMP DO REDUCTION(+:v)
    do i=1,1000
       v = v + j
    end do
    !$OMP END DO
  end subroutine
Associate or pointers would be simpler, but they are not allowed either.
As an alternative to Vladimir's answer, you can always implement your own reduction using a temporary array and a critical section:
program main
  implicit none

  integer :: i

  type vector
     real,allocatable :: values(:)
  end type vector

  type(vector) :: vectors(3)
  type(vector),allocatable :: tmp(:)

  allocate(vectors(1)%values(3))
  vectors(1)%values = 0
  allocate(vectors(2)%values(6))
  vectors(2)%values = 0
  allocate(vectors(3)%values(9))
  vectors(3)%values = 0

  !$OMP PARALLEL PRIVATE(TMP)
  ! Use a temporary array to hold the local sum
  allocate( tmp(size(vectors)) )
  do i=1,size(tmp)
     allocate( tmp(i)%values( size(vectors(i)%values )) )
     tmp(i)%values = vectors(i)%values
  enddo ! i

  !$OMP DO
  do i=1,1000
     tmp(1)%values = tmp(1)%values + 1
     tmp(2)%values = tmp(2)%values + 2
     tmp(3)%values = tmp(3)%values + 3
  end do
  !$OMP END DO

  ! Get the global sum one thread at a time
  !$OMP CRITICAL
  vectors(1)%values = vectors(1)%values + tmp(1)%values
  vectors(2)%values = vectors(2)%values + tmp(2)%values
  vectors(3)%values = vectors(3)%values + tmp(3)%values
  !$OMP END CRITICAL

  deallocate(tmp)
  !$OMP END PARALLEL

  print*,sum(vectors(1)%values)
  print*,sum(vectors(2)%values)
  print*,sum(vectors(3)%values)
end program main
This snippet could be arranged more efficiently as a loop over all the elements of vectors; then tmp could be a scalar.
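A sketch of that arrangement, with the per-thread temporary reduced to a single allocatable array tmpv (which replaces the tmp array above; j is a new integer index and must be private):
real, allocatable :: tmpv(:)   ! declare this instead of tmp(:); holds one thread's local sum

!$OMP PARALLEL PRIVATE(tmpv, j)
do j = 1, size(vectors)                 ! every thread runs this loop
   allocate( tmpv(size(vectors(j)%values)) )
   tmpv = 0
   !$OMP DO
   do i = 1, 1000
      tmpv = tmpv + j                   ! element j gains + j per iteration, as in the original loop
   end do
   !$OMP END DO
   ! Combine the local sums one thread at a time
   !$OMP CRITICAL
   vectors(j)%values = vectors(j)%values + tmpv
   !$OMP END CRITICAL
   deallocate(tmpv)
end do
!$OMP END PARALLEL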

Understanding the correct use of !$omp parallel do reduction(...)

I am trying to write a program that counts the number of primes between 1 and some number n in Fortran 90, using OpenMP. The nested loop just counts the numbers that are not prime. I want to use an omp parallel do to speed this up. As far as I understand, since I am just counting numbers that are not prime, it is appropriate to use something like !$omp parallel do reduction(+:not_primes). When I run the code below in serial, without the !$omp lines, I get the following output:
Primes: 5134
OpenMP time elapsed 0.49368596076965332
but when I include the !$omp lines I get
Primes: -1606400834
OpenMP time elapsed 0.37933206558227539
Have I used the parallel do correctly here? (apparently not, but why?) Thanks!
program prime_counter
  integer n, not_primes, i, j
  real*8 :: ostart,oend, omp_get_wtime

  ostart = omp_get_wtime()

  n=50000

  !$omp parallel do reduction(+:not_primes)
  do i=2,n
     do j=2,i-1
        if(mod(i,j)==0) then
           not_primes= not_primes+1
           exit
        end if
     end do
  end do
  !$omp end parallel do

  print*, 'Primes:', n-not_primes

  oend = omp_get_wtime()
  write(*,*) 'OpenMP time elapsed', oend-ostart
end program
You do not initialize not_primes anywhere, so it is undefined. The usage of the OpenMP reduction itself is OK. In Fortran the inner loop index j is private automatically; I normally mark all indices as private explicitly anyway, but it is not strictly necessary.
not_primes = 0
!$omp parallel do reduction(+:not_primes) private(i,j)
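Putting it together, a sketch of the corrected program (the only substantive changes are the initialization and the private clause; pulling omp_get_wtime from omp_lib instead of declaring it by hand is just a tidy-up):
program prime_counter
  use omp_lib, only: omp_get_wtime
  implicit none
  integer :: n, not_primes, i, j
  real*8 :: ostart, oend

  ostart = omp_get_wtime()
  n = 50000
  not_primes = 0                 ! reduction needs a defined starting value

  !$omp parallel do reduction(+:not_primes) private(i,j)
  do i = 2, n
     do j = 2, i-1
        if (mod(i,j) == 0) then
           not_primes = not_primes + 1
           exit
        end if
     end do
  end do
  !$omp end parallel do

  print*, 'Primes:', n - not_primes
  oend = omp_get_wtime()
  write(*,*) 'OpenMP time elapsed', oend - ostart
end program prime_counter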

Loop in fortran with openmp and allocatable arrays

I would like to do this:
program main
  implicit none

  integer l
  integer, allocatable, dimension(:) :: array

  allocate(array(10))
  array = 0

  !$omp parallel do private(array)
  do l = 1, 10
     array(l) = l
  enddo
  !$omp end parallel do

  print *, array
  deallocate(array)
end
But I am running into error messages:
*** glibc detected *** ./a.out: munmap_chunk(): invalid pointer: 0x00007fff25d05a40 ***
This seems to be a bug in ifort according to some discussions on the Intel forums, but it should be resolved in the version I am using (11.1.073, Linux). This is a massively downscaled version of my code! Unfortunately I cannot use static arrays as a workaround.
If I put the print into the loop, I get other errors:
*** glibc detected *** ./a.out: double free or corruption (out): 0x00002b22a0c016f0 ***
I didn't get the errors you're getting, but you have an issue with privatizing array in your OpenMP call.
[mjswartz@666-lgn testfiles]$ vi array.f90
[mjswartz@666-lgn testfiles]$ ifort -o array array.f90 -openmp
[mjswartz@666-lgn testfiles]$ ./array
0 0 0 0 0 0
0 0 0 0
[mjswartz@666-lgn testfiles]$ vi array.f90
[mjswartz@666-lgn testfiles]$ ifort -o array array.f90 -openmp
[mjswartz@666-lgn testfiles]$ ./array
1 2 3 4 5 6
7 8 9 10
First run is with private array, second is without.
program main
  implicit none

  integer l
  integer, allocatable, dimension(:) :: array

  allocate(array(10))

  !$omp parallel do
  do l = 1, 10
     array(l) = l
  enddo

  print*, array
  deallocate(array)
end program main
I just ran your code with ifort and openmp and it spewed 0d0's. I had to manually quit the execution. What is your expected output? I'm not a big fan of unnecessarily dynamically allocating arrays. You know what you're going to allocate your matrices as, so just make parameters and statically do it. I'll mess with some stuff and edit this response in a few.
OK, so here are my edits:
program main
  implicit none

  integer :: l, j
  integer, parameter :: lmax = 15e3
  integer, parameter :: jmax = 25
  integer, parameter :: nk = 300
  complex*16, dimension(9*nk) :: x0, xin, xout
  complex*16, dimension(lmax) :: e_pump, e_probe
  complex*16 :: e_pumphlp, e_probehlp
  character*25 :: problemtype
  real*8 :: m

  ! OpenMP variables
  integer :: myid, nthreads, omp_get_num_threads, omp_get_thread_num

  x0 = 0.0d0

  problemtype = 'type1'
  if (problemtype .ne. 'type1') then
     write(*,*) 'Problem type not specified. Quitting'
     stop
  else
     ! Spawn a parallel region explicitly scoping all variables
     !$omp parallel
     myid = omp_get_thread_num()
     if (myid .eq. 0) then
        nthreads = omp_get_num_threads()
        write(*,*) 'Starting program with', nthreads, 'threads'
     endif

     !$omp do private(j,l,m,e_pumphlp,e_probehlp,e_pump,e_probe)
     do j = 1, jmax - 1
        do l = 1, lmax

           call electricfield(0.0d0, 0.0d0, e_pumphlp, &
                              e_probehlp, 0.0d0)
           ! print *, e_pumphlp, e_probehlp

           e_pump(l) = e_pumphlp
           e_probe(l) = e_probehlp
           print *, e_pump(l), e_probe(l)

        end do
     end do
     !$omp end parallel

  end if

end program main
Notice I removed your use of a module since it was unnecessary. You have an external module containing a subroutine, so just make it an external subroutine. Also, I changed your matrices to be statically allocated. Case statements are a fancy and expensive version of if statements. You were casing 15e3*25 times rather than once (expensive), so I moved those outside. I changed the OpenMP calls, but only semantically. I gave you some output so that you know what OpenMP is actually doing.
Here is the new subroutine:
subroutine electricfield(t, tdelay, e_pump, e_probe, phase)
implicit none
real*8, intent(in) :: t, tdelay
complex*16, intent(out) :: e_pump, e_probe
real*8, optional, intent (in) :: phase
e_pump = 0.0d0
e_probe = 0.0d0
return
end subroutine electricfield
I just removed the module shell around it and changed some of your variable names. Fortran is not case sensitive, so don't torture yourself by doing caps and having to repeat it throughout.
I compiled this with
ifort -o diffeq diffeq.f90 electricfield.f90 -openmp
and ran with
./diffeq > output
to catch the program vomiting 0's and to see how many threads I was using:
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
Starting program with 32 threads
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
(0.000000000000000E+000,0.000000000000000E+000)
Hope this helps!
It would appear that you are running into a compiler bug associated with the implementation of OpenMP 3.0.
If you can't update your compiler, then you will need to change your approach. There are a few options - for example, you could make the allocatable arrays shared, increase their rank by one, and have one thread allocate them such that the extent of the additional dimension is the number of workers in the team. All subsequent references to those arrays then need to have the subscript for that additional rank be the OpenMP thread number (+ 1, depending on what you've used for the lower bound).
Explicit allocation of the private allocatable arrays inside the parallel construct (only) may also be an option.
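A sketch of the first option (a shared array with an extra dimension indexed by the thread number; the combination step at the end assumes each element is written by exactly one thread, as in the example above):
program main
  use omp_lib
  implicit none
  integer :: l, tid, nth
  integer, allocatable :: array(:,:)

  !$omp parallel private(l, tid) shared(array, nth)
  !$omp single
  nth = omp_get_num_threads()
  allocate(array(10, nth))          ! one column per thread
  array = 0
  !$omp end single                  ! implicit barrier: allocation finishes before use
  tid = omp_get_thread_num() + 1    ! + 1 because the extra dimension starts at 1
  !$omp do
  do l = 1, 10
     array(l, tid) = l              ! each thread writes only its own column
  end do
  !$omp end do
  !$omp end parallel

  print *, sum(array, dim=2)        ! combine the per-thread columns
  deallocate(array)
end program main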