I am trying to write a Fortran 90 program that counts the primes between 1 and some number n, using OpenMP. The nested loop just counts the numbers that are not prime, and I want to use an omp parallel do to speed this up. As far as I understand, since I am only counting numbers that are not prime, it should be appropriate to use something like !$omp parallel do reduction(+:not_primes). When I run the code below in serial, without the !$omp lines, I get the following output:
Primes: 5134
OpenMP time elapsed 0.49368596076965332
but when I include the !$omp lines I get
Primes: -1606400834
OpenMP time elapsed 0.37933206558227539
Have I used the parallel do correctly here? (apparently not, but why?) Thanks!
program prime_counter
   integer n, not_primes, i, j
   real*8 :: ostart, oend, omp_get_wtime
   ostart = omp_get_wtime()
   n = 50000
   !$omp parallel do reduction(+:not_primes)
   do i = 2, n
      do j = 2, i-1
         if (mod(i,j) == 0) then
            not_primes = not_primes + 1
            exit
         end if
      end do
   end do
   !$omp end parallel do
   print*, 'Primes:', n - not_primes
   oend = omp_get_wtime()
   write(*,*) 'OpenMP time elapsed', oend - ostart
end program
You do not initialize not_primes anywhere, so its value is undefined. The usage of the OpenMP reduction is OK. The inner index j should be marked as private; I normally mark all loop indexes as private, although in Fortran that is not strictly necessary (loop iteration variables inside a parallel construct are predetermined private).
not_primes = 0
!$omp parallel do reduction(+:not_primes) private(i,j)
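For completeness, here is the program with both fixes applied (the initialization and the explicit private clause); everything else is unchanged:

program prime_counter
   integer n, not_primes, i, j
   real*8 :: ostart, oend, omp_get_wtime
   ostart = omp_get_wtime()
   n = 50000
   not_primes = 0  ! the reduction variable must have a defined initial value
   !$omp parallel do reduction(+:not_primes) private(i,j)
   do i = 2, n
      do j = 2, i-1
         if (mod(i,j) == 0) then
            not_primes = not_primes + 1
            exit
         end if
      end do
   end do
   !$omp end parallel do
   print*, 'Primes:', n - not_primes
   oend = omp_get_wtime()
   write(*,*) 'OpenMP time elapsed', oend - ostart
end program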
Related
I've tried to parallelize a code that contains such a double do-loop. It's certainly not efficient, but that's not a big problem right now.
The output tauv is NaN. That is the first problem.
The second problem is that the Intel compiler gives a fatal error when the number of threads is less than the maximum number of threads (8 on my machine).
How can I treat these problems?
!$omp parallel do private(i,j, ro11,ro21,ro12,ro22, &
!$omp&                    u11,u21,u12,u22, &
!$omp&                    v11,v21,v12,v22, &
!$omp&                    es11,es21,es12,es22, &
!$omp&                    p11,p21,p12,p22, &
!$omp&                    te11,te21,te12,te22, &
!$omp&                    emu11,emu21,emu12,emu22) &
!$omp&            shared(i1l, i2l, j1l, j2l, emumax, tauv, tauvij, ro, u, v, es)
do i=i1l+2,i2l-2,2
   do j=j1l+2,j2l-2,2
      if (i.le.niii.and.i.ge.0.and.j.ge.0.and.j.le.nj.or.&
          i.le.ni.and.i.ge.niik.and.j.gt.njjv.and.j.le.nj.or.&
          i.le.ni.and.i.ge.niik.and.j.ge.0.and.j.lt.njjn&
          .or.i.gt.niii.and.i.lt.niik.and.j.gt.njj0+i-niii&
          .or.i.gt.niii.and.i.lt.niik.and.j.lt.njj0-i+niii) then
         ro11=ro(i-1,j-1)
         ro21=ro(i+1,j-1)
         ro12=ro(i-1,j+1)
         ro22=ro(i+1,j+1)
         u11=u(i-1,j-1)
         u21=u(i+1,j-1)
         u12=u(i-1,j+1)
         u22=u(i+1,j+1)
         v11=v(i-1,j-1)
         v21=v(i+1,j-1)
         v12=v(i-1,j+1)
         v22=v(i+1,j+1)
         es11=es(i-1,j-1)
         es21=es(i+1,j-1)
         es12=es(i-1,j+1)
         es22=es(i+1,j+1)
         p11=(es11-0.5*ro11*(u11*u11+v11*v11))*ga1
         p21=(es21-0.5*ro21*(u21*u21+v21*v21))*ga1
         p12=(es12-0.5*ro12*(u12*u12+v12*v12))*ga1
         p22=(es22-0.5*ro22*(u22*u22+v22*v22))*ga1
         te11=p11/ro11
         te21=p21/ro21
         te12=p12/ro12
         te22=p22/ro22
         emu11=te11**1.5*(1.0+s1)/(te11+s1)
         emu21=te21**1.5*(1.0+s1)/(te21+s1)
         emu12=te12**1.5*(1.0+s1)/(te12+s1)
         emu22=te22**1.5*(1.0+s1)/(te22+s1)
         emumax=emu11
         if (emu21.gt.emumax) then
            emumax=emu21
         end if
         if (emu12.gt.emumax) then
            emumax=emu12
         end if
         if (emu22.gt.emumax) then
            emumax=emu22
         end if
         tauvij=re*flkv*hx*hx/emumax
         if (tauvij .le. tauv) then
            tauv=tauvij
         endif
      endif
   enddo
enddo
!$omp end parallel do
The thing is that it executes without error, but the OpenMP do-loop runs more slowly than the sequential one...
From your reproducible example:
1.) Your code is only using 1 thread (?) in the OpenMP region:
! Set number of threads
nthreads = 1
call omp_set_num_threads(nthreads)
print *, 'The number of threads are used is ', omp_get_max_threads ( )
I would avoid the call to omp_set_num_threads(). Instead, specify the number of threads with the environment variable OMP_NUM_THREADS. On a Unix machine: export OMP_NUM_THREADS=<number of threads>
2.) In your "reproducible" example, the parallelized loop (line 312) is missing the private/shared declarations. From what you wrote above, fix it to:
!$omp parallel do default(private) shared(i1l, i2l, j1l, j2l, emumax, tauv, tauvij, ro, u, v, es)
With all of the above, the result I get on my machine (4c/4t) using the GNU Fortran compiler is:
...
Executed time in SEQ code is 60.2720146
...
Executed time in OMP code is 27.1342430
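One caveat to add to the directive above (a hedged note, based only on the loop shown): with default(private), the read-only values used inside the loop body (ga1, s1, re, flkv, hx, ni, nj, niii, niik, njj0, njjn, njjv) also become private and therefore undefined inside the region, and tauv is a running minimum updated by several threads at once. A variant that accounts for both, assuming those are all the names the loop reads, would be:

!$omp parallel do default(private) reduction(min:tauv) &
!$omp&   shared(i1l, i2l, j1l, j2l, ro, u, v, es, &
!$omp&          ga1, s1, re, flkv, hx, ni, nj, niii, niik, njj0, njjn, njjv)

emumax and tauvij are per-iteration temporaries, so leaving them private is what you want.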
I'm trying to use OpenMP in Fortran 90 to parallelize a do loop with a function call inside. The first code listed below runs fine. The second does not: I receive a segmentation fault.
First program: $ gfortran -O3 -o output -fopenmp OMP10.f90
program OMP10
   !$ use omp_lib
   IMPLICIT NONE
   integer, parameter :: n = 100000
   integer :: i
   real(kind = 8) :: sum, h, x(0:n), f(0:n), ZBQLU01
   !$ call OMP_set_num_threads(4)
   h = 2.d0/dble(n)
   !$OMP PARALLEL DO PRIVATE(i)
   do i = 0, n
      x(i) = -1.d0 + dble(i)*h
      f(i) = 2.d0*x(i)
   end do
   !$OMP END PARALLEL DO
   sum = 0.d0
   !$OMP PARALLEL DO PRIVATE(i) REDUCTION(+:SUM)
   do i = 0, n-1
      sum = sum + h*f(i)
   end do
   !$OMP END PARALLEL DO
   write(*,*) "The integral is ", sum
end program OMP10
Second program: $ gfortran -O3 -o output -fopenmp randgen.f OMP10.f90
program OMP10
   !$ use omp_lib
   IMPLICIT NONE
   integer, parameter :: n = 100000
   integer :: i
   real(kind = 8) :: sum, h, x(0:n), f(0:n), ZBQLU01
   !$ call OMP_set_num_threads(4)
   h = 2.d0/dble(n)
   !$OMP PARALLEL DO PRIVATE(i)
   do i = 0, n
      x(i) = ZBQLU01(0.d0)
   end do
   !$OMP END PARALLEL DO
   sum = 0.d0
   !$OMP PARALLEL DO PRIVATE(i) REDUCTION(+:SUM)
   do i = 0, n-1
      sum = sum + h*f(i)
   end do
   !$OMP END PARALLEL DO
   write(*,*) "The integral is ", sum
end program OMP10
In the above command, randgen.f is a library that contains the function ZBQLU01.
You cannot just call any function from a parallel region. The function must be thread safe. See What is meant by "thread-safe" code? and https://en.wikipedia.org/wiki/Thread_safety .
Your function is quite the opposite of thread safe, as is typical for random number generators. Just notice the SAVE statements in its source code, applied to many local variables and to a common block.
The solution is to use a good parallel random number generator. This site is not for software recommendations, but as a pointer, just search the web for "parallel prng" or "parallel random number generator". I personally use a library which I already pointed to in https://stackoverflow.com/a/38263032/721644 . A simple web search reveals another simple possibility at https://jblevins.org/log/openmp . And then there are many larger and more complex libraries.
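To make the per-thread-state idea concrete, here is a minimal sketch (not ZBQLU01, and not a statistically serious generator): each thread seeds and advances its own simple 64-bit LCG, so no state is shared between threads. The seed constant and the offset 7919 are arbitrary choices for illustration.

program parallel_rng_sketch
   !$ use omp_lib
   implicit none
   integer, parameter :: n = 100000
   integer :: i
   real(kind=8) :: x(0:n)
   integer(kind=8) :: state

   !$omp parallel private(state)
   ! Give each thread its own seed so the streams differ.
   state = 88172645463325252_8
   !$ state = state + 7919_8 * omp_get_thread_num()
   !$omp do
   do i = 0, n
      x(i) = lcg(state)
   end do
   !$omp end do
   !$omp end parallel

   write(*,*) 'first values: ', x(0), x(1), x(2)

contains

   ! Toy 64-bit LCG relying on integer wrap-around (as gfortran
   ! provides); use a proper parallel PRNG library for real work.
   function lcg(s) result(r)
      integer(kind=8), intent(inout) :: s
      real(kind=8) :: r
      s = s * 6364136223846793005_8 + 1442695040888963407_8
      r = 0.5d0 + real(s, kind=8) / 2.0d0**64
   end function lcg
end program parallel_rng_sketch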
This is a follow-up to questions 36182486, 41421437, and several others. I want to speed up the assembly of stiffness and mass matrices for a FEM calculation by using multiple processors to deal with individual elements in parallel. This little MWE shows the guts of the operation.
!! compile with gfortran -fopenmp -o FEMassembly FEMassembly.f90
Program FEMassembly
   use, intrinsic :: iso_c_binding
   implicit none
   real (c_double) :: arrayM(3,3)=reshape((/2.d0,1.d0,1.d0,1.d0,&
        &2.d0,1.d0,1.d0,1.d0,2.d0/),(/3,3/)) ! contrib from one element
   integer (c_int) :: ke,ne=4,kx,nx=6,nodes(3)
   real (c_double) :: L(6,6)
   integer (c_int) :: t(4,3)=reshape((/1,2,5,6,2,3,4,5,4,5,2,3/),(/4,3/))
   !! first, no OMP
   do ke=1,ne ! for each triangular element
      nodes=t(ke,:)
      L(nodes,nodes)=L(nodes,nodes)+arrayM
   end do
   print *,'L no OMP'
   write(*,fmt="(6(1x,f3.0))")(L(kx,1:6),kx=1,nx)
   L=0
   !$omp parallel do private (nodes)
   do ke=1,ne ! for each triangular element
      nodes=t(ke,:)
      !! !$omp atomic
      L(nodes,nodes)=L(nodes,nodes)+arrayM
      !! !$omp end atomic
   end do
   !$omp end parallel do
   print *,'L with OMP and race'
   write(*,fmt="(6(1x,f3.0))")(L(kx,1:6),kx=1,nx)
End Program FEMassembly
With the atomic directives commented out, the array L contains several wrong values, presumably because of the race condition I was trying to avoid with the atomic directives. The results are:
L no OMP
2. 1. 0. 1. 0. 0.
1. 6. 1. 2. 2. 0.
0. 1. 4. 0. 2. 1.
1. 2. 0. 4. 1. 0.
0. 2. 2. 1. 6. 1.
0. 0. 1. -0. 1. 2.
L with OMP and race
2. 1. 0. 1. 0. 0.
1. 6. 1. 2. 2. 0.
0. 1. 2. 0. 2. 1.
1. 2. 0. 4. 1. 0.
0. 2. 2. 1. 6. 1.
0. 0. 1. 0. 1. 2.
If the "atomic" directives are uncommented, the compiler return the error:
Error: !$OMP ATOMIC statement must set a scalar variable of intrinsic type at (1)
where (1) points to arrayM in the line L(nodes,nodes).....
What I am hoping to achieve is to have the time-consuming contributions from each element (here the trivial arrayM) computed in parallel, but since several threads address the same matrix element, something has to be done to have the sum occur in an orderly fashion. Can anyone suggest a way to do this?
In Fortran the simplest way is to use a reduction, because OpenMP for Fortran supports reductions on arrays. Below is what I think you are trying to do, but take it with a pinch of salt, because:
You don't provide the expected output, so it's difficult to test
With such a small array, race conditions can be difficult to trigger
!! compile with gfortran -fopenmp -o FEMassembly FEMassembly.f90
Program FEMassembly
   use, intrinsic :: iso_c_binding
   Use omp_lib, Only : omp_get_num_threads
   implicit none
   real (c_double) :: arrayM(3,3)=reshape((/2.d0,1.d0,1.d0,1.d0,&
        &2.d0,1.d0,1.d0,1.d0,2.d0/),(/3,3/)) ! contrib from one element
   integer (c_int) :: ke,ne=4,nodes(3)
   real (c_double) :: L(6,6)
   integer (c_int) :: t(4,3)=reshape((/1,2,5,6,2,3,4,5,4,5,2,3/),(/4,3/))
   ! Not declared in original program
   Integer :: nx, kx
   ! Not set in original program
   nx = Size( L, Dim = 1 )
   ! L must have a defined value before the reduction accumulates into it
   L = 0.0_c_double
   !$omp parallel default( none ) private ( ke, nodes ) shared( ne, t, L, arrayM )
   !$omp single
   Write( *, * ) 'Working on ', omp_get_num_threads(), ' threads'
   !$omp end single
   !$omp do reduction( +:L )
   do ke=1,ne ! for each triangular element
      nodes=t(ke,:)
      L(nodes,nodes)=L(nodes,nodes)+arrayM
   end do
   !$omp end do
   !$omp end parallel
   write(*,fmt="(6(1x,f3.0))")(L(kx,1:6),kx=1,nx)
End Program FEMassembly
In the code I am attempting to port to OpenMP, I have a parallelized loop nested in an outer loop. Depending on the iteration of the outer loop, I would like a particular array to be either shared or reduction(+). Is there a way to do this in Fortran?
Here's a mockup of what I want:
do i = 1, 2
   !$omp if(i.eq.1) parallel do reduction(+:foo)
   !$omp if(i.eq.2) parallel do shared(foo)
   do j = 1,j_max
      work on foo
   enddo
   !$omp end parallel
enddo
The discussion in openMP conditional pragma "if else" suggests that scheduling cannot be modified during execution. Is that also the case for shared/private/reduction/etc.?
One obvious course of action is to create foo_1 (reduction:+) and foo_2 (shared), copy foo_1 to foo_2 after the first iteration on i, and then have if statements within the loop over j to refer to the proper array. But that's not terribly elegant. I'm hoping there's a better/cleverer/cleaner way to do this.
Edit: for the unimaginative, here's the pseudocode version of my alternative
do i = 1, 2
   !$omp parallel do reduction(+:foo_1), shared(foo_2)
   do j = 1,j_max
      if( i .eq. 1 ) then
         work on foo_1
      else
         work on foo_2
      endif
   enddo
   !$omp end parallel
   foo_2 = foo_1
enddo
As you don't mind having two parallel regions, you could use orphaned directives; I find these great for organising the overall structure of large OpenMP codes. I mean something like:
i = 1
!$omp parallel shared( i, foo, ... )
Call do_the_work( i, foo, ... )
!$omp end parallel
i = 2
!$omp parallel shared( i, ... ) reduction( +:foo )
Call do_the_work( i, foo, ... )
!$omp end parallel
...
Subroutine do_the_work( i, foo, ... )
   !$omp do
   do j = 1,j_max
      work on foo
   enddo
End Subroutine do_the_work
If the parallel region is as big as you say it probably wants to be in one or more routines by itself anyway.
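To show the shape of this more concretely, here is a self-contained sketch along those lines. The names (do_the_work, foo, j_max) follow the pseudocode above, and the loop body is a placeholder sum invented purely so the program compiles and runs; in the shared pass each iteration touches a distinct element of foo, while in the reduction pass each thread accumulates into its own private copy:

program orphaned_sketch
   implicit none
   integer, parameter :: j_max = 8
   real :: foo(j_max)
   integer :: i

   foo = 0.0
   i = 1
   !$omp parallel shared( i, foo )
   call do_the_work( i, foo )
   !$omp end parallel

   i = 2
   !$omp parallel shared( i ) reduction( +:foo )
   call do_the_work( i, foo )
   !$omp end parallel

   print *, foo

contains

   subroutine do_the_work( i, foo )
      integer, intent(in)    :: i
      real,    intent(inout) :: foo(:)
      integer :: j
      ! Orphaned directive: binds to whichever parallel region
      ! the caller is currently inside.
      !$omp do
      do j = 1, size(foo)
         foo(j) = foo(j) + real(i*j)  ! placeholder "work on foo"
      end do
      !$omp end do
   end subroutine do_the_work
end program orphaned_sketch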
Problem:
I have some code that I and a few others have been writing; I took it and made it use MPI and OpenMP, with great results (it helps that I am running it on a Blue Gene/Q).
One thing I am not a fan of is that now I cannot compile the code without the -openmp flag, because to get the speedup I needed I used reduction variables.
Example:
!$OMP parallel do schedule(DYNAMIC, 4) reduction(min:min_val)
....
min_val = some_expression(i)
....
!$OMP end parallel do
result = sqrt(min_val)
I am looking for something like:
!$OMP if OMP:
!$OMP min_val = some_expression(i)
!$OMP else:
if ( min_val .gt. some_expression(i) ) min_val = some_expression(i)
!$OMP end else
Anybody know of something like this? Notice that without -openmp the !$OMP lines are ignored and the code runs normally with the correct, er same, answer.
Thanks,
(Yes, it is FORTRAN code, but it's almost identical to C and C++.)
To your exact question:
!$ whatever_statement
will use that statement only when compiled with OpenMP.
Otherwise, in your specific case, can't you just use:
!$OMP parallel do schedule(DYNAMIC, 4) reduction(min:min_val)
....
min_val = min(min_val, some_expression(i))
....
!$OMP end parallel do
result = sqrt(min_val)
?
I use this pattern quite often, with and without -openmp.
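To spell that out, here is a minimal self-contained version of the pattern (with a hypothetical some_expression inlined as a simple quadratic, just so it runs); it gives the same result whether or not it is compiled with OpenMP:

program min_reduction_demo
   implicit none
   integer :: i
   real(kind=8) :: min_val, result
   min_val = huge(min_val)  ! defined initial value for the reduction
   !$OMP parallel do schedule(DYNAMIC, 4) reduction(min:min_val)
   do i = 1, 1000
      ! stands in for some_expression(i)
      min_val = min(min_val, dble(i*i - 40*i + 500))
   end do
   !$OMP end parallel do
   result = sqrt(min_val)
   print *, 'result = ', result
end program min_reduction_demo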
If you are willing to use pre-processed Fortran source files, you can always rely on the macro _OPENMP, which is defined when compiling with OpenMP. The simplest example is:
program pippo
#ifdef _OPENMP
   print *, "OpenMP program"
#else
   print *, "Non-OpenMP program"
#endif
end program pippo
Compiled with:
gfortran -fopenmp main.F90
the program will give the following output:
OpenMP program
If you are unwilling to use pre-processed source files, then you can set a variable using the Fortran conditional compilation sentinel:
program pippo
   implicit none
   logical :: use_openmp = .false.
   !$ use_openmp = .true.
   !$ print *, "OpenMP program"
   if( .not. use_openmp) then
      print *, "Non-OpenMP program"
   end if
end program pippo
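Compiled without -fopenmp, the !$ lines are treated as ordinary comments, so this version prints:
Non-OpenMP program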