MATMUL result not equal with explicit calculation for double precision? - fortran

Sorry for a seemingly stupid question. I was testing the computational efficiency of replacing explicit loop operations on matrices with intrinsic functions. When I checked the matrix product results of the two methods, it confused me that the two outputs were not the same. Here is the simplified code I used:
program matmultest
    integer,parameter::nx=64,ny=32,nz=16
    real*8::mat1(nx,ny),mat2(ny,nz)
    real*8::result1(nx,nz),result2(nx,nz),diff(nx,nz)
    real*8::localsum
    integer::i,j,m

    do i=1,ny
        do j=1,nx
            mat1(j,i)=dble(j)/7d0+2.65d0*dble(i)
        enddo
    enddo

    do i=1,nz
        do j=1,ny
            mat2(j,i)=5d0*dble(j)-dble(i)*0.45d0
        enddo
    enddo

    do j=1,nz
        do i=1,nx
            localsum=0d0
            do m=1,ny
                localsum=localsum+mat1(i,m)*mat2(m,j)
            enddo
            result1(i,j)=localsum
        enddo
    enddo

    result2=matmul(mat1,mat2)
    diff=result2-result1
    print*,sum(abs(diff)),maxval(diff)
end program matmultest
And the output is
1.6705598682165146E-008 5.8207660913467407E-011
The difference is non-zero for real*8, but it was zero when I later tested with integers. I wonder whether this is due to some fault in my code, or whether the numerical accuracy of MATMUL() is only single precision?
And the compiler I am using is GNU Fortran (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Thanks!

francescalus explained that reordering of operations causes these differences. Let's try to find out how it actually happens.
A few words about matrix products
Consider matrices A(n,p), B(p,q), C(n,q) and C = A*B.
The naive approach, a variant of which you used, involves the following nested loops:
c = 0
do i = 1, n
    do j = 1, q
        do k = 1, p
            c(i, j) = c(i, j) + a(i, k) * b(k, j)
        end do
    end do
end do
These loops can be executed in any of 6 orders, depending on the variable that you choose at each level. In the example above, the loop is named "ijk", and the other variants "ikj", "jik", etc. are all correct.
There is a speed difference, due to the memory cache: the loop is faster when the innermost loop runs across contiguous memory elements. That is the case for the jki and kji orders.
Indeed, since Fortran matrices are stored in column-major order, if the innermost loop runs on i, then in the instruction c(i, j) = c(i, j) + a(i, k) * b(k, j) the value b(k, j) is constant, and the operation is equivalent to v(i) = v(i) + x * u(i), where the elements of the vectors v and u are contiguous.
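For illustration, here is a minimal sketch of the jki variant, with the same A(n,p), B(p,q), C(n,q) shapes as above:
! jki order: the innermost loop runs down a column, so c(:,j) and a(:,k)
! are traversed contiguously (column-major storage)
c = 0
do j = 1, q
    do k = 1, p
        do i = 1, n
            c(i, j) = c(i, j) + a(i, k) * b(k, j)
        end do
    end do
end do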
However, regarding the order of operations, there shouldn't be a difference between these variants: you can check for yourself that all elements of C are computed in the same order. At least at the "high level": the compiler might optimize things differently, and that's where it becomes really interesting.
What about MATMUL? I believe it's usually a naive matrix product, based on the nested loops above, say a jki loop.
There are other ways to multiply matrices: the Strassen algorithm improves the algorithmic complexity, and blocking (i.e. computing products of submatrices) improves cache use. Other techniques that could change the result are OpenMP (i.e. multithreading) and FMA instructions. But here we are not going to delve into these methods; it's really only about the nested loops. If you are interested, there are many resources online.
A few words about optimization
Three remarks first:
On a processor without SIMD instructions, you would get the same result as MATMUL (i.e. you would print zero in the end).
If you had implemented the loops as above, you would also get the same result. There is a tiny but significant difference in your code.
If you had implemented the loops as a subroutine, you would also get the same result. Here I suspect the compiler optimizer is doing some reordering, as I can't reproduce your "accumulator" code with a subroutine, at least with Intel Fortran.
Here is your implementation:
do i = 1, n
    do j = 1, q
        s = 0
        do k = 1, p
            s = s + a(i, k) * b(k, j)
        end do
        c(i, j) = s
    end do
end do
It's also correct of course. Here, you are using an accumulator, and at the end of the innermost loop, the value of the accumulator is written in the matrix C.
Optimization is mainly relevant for the innermost loop. For our purpose, two "basic" instructions in the innermost loop are relevant, once we get rid of all other details:
v(i) = v(i) + x*u(i) (the jki loop)
s = s + x(k)*y(k) (the accumulator loop where y is contiguous in memory, but not x)
The first is usually called a "daxpy" (from the name of a BLAS routine), for "A times X Plus Y", the "D" meaning double precision. The second one is just an accumulator.
On an old sequential processor, there is not much to be done to optimize this. On a modern processor with SIMD, registers can hold several values, and computations can be done on all of them at once, in parallel. For instance, on x86, an XMM register (from the SSE instruction set) can hold two double-precision floating-point numbers. A YMM register (from AVX) can hold four, and a ZMM register (AVX-512, found on Xeon) can hold eight.
For instance, with YMM registers the innermost loop will be "unrolled" to deal with four vector elements at a time (or even more if several registers are used).
Here is what the basic loop block is then roughly doing:
daxpy case:
Read 4 numbers from u into register YMM1
Read 4 numbers from v into register YMM2
x is constant and is kept in another register
Multiply in parallel x with YMM1, add in parallel to YMM2, put the result in YMM2
Write back the result to corresponding elements of v
The read/write part is faster if the elements are contiguous in memory, but even if they are not, it is still worth doing this in parallel.
Note that here, we haven't changed the execution order of additions of the high level Fortran loop.
accumulator case
For the parallelism to be useful, there will be a trick: accumulate four values in parallel in a YMM register, and then add the four accumulated values.
The basic loop block is thus doing this:
The accumulator is kept in YMM3 (four numbers)
Read 4 numbers from X into register YMM1
Read 4 numbers from Y into register YMM2
Multiply in parallel YMM1 with YMM2, add in parallel to YMM3
At the end of the innermost loop, add the four components of the accumulator, and write this back as the matrix element.
It's as if we had computed:
s1 = x(1)*y(1) + x(5)*y(5) + ... + x(29)*y(29)
s2 = x(2)*y(2) + x(6)*y(6) + ... + x(30)*y(30)
s3 = x(3)*y(3) + x(7)*y(7) + ... + x(31)*y(31)
s4 = x(4)*y(4) + x(8)*y(8) + ... + x(32)*y(32)
And then the matrix element written is c(i,j) = s1+s2+s3+s4.
Here the order of additions has changed! And then, since the order is different, the result is very likely different.
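As a plain-Fortran illustration of that effect (this is not what the compiler literally emits, just a sketch that reproduces the two summation orders for a dot product of length 32, as in the question):
program lanes
    implicit none
    integer, parameter :: ny = 32
    real*8 :: x(ny), y(ny), s_seq, lane(4)
    integer :: k
    call random_number(x)
    call random_number(y)
    ! sequential order, as in the scalar accumulator loop
    s_seq = 0d0
    do k = 1, ny
        s_seq = s_seq + x(k)*y(k)
    end do
    ! four partial sums over strided elements, combined only at the end,
    ! mimicking the four-lane SIMD accumulation described above
    lane = 0d0
    do k = 1, ny, 4
        lane = lane + x(k:k+3)*y(k:k+3)
    end do
    ! the difference is typically tiny but often non-zero
    print *, s_seq - (lane(1)+lane(2)+lane(3)+lane(4))
end program lanes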

I can replicate the results when using fast math (I have Intel Fortran): when I compile with the default /fp:fast I get the following max error and speed:
! Error Loops Matmul
! 0.58208E-10 107526.9 140056.0 FAST
The error is just maxval(abs(diff)); the speed is measured in number of matrix products per second.
But when I compile with /fp:strict I get no error, but a slowdown in the loops:
! Error Loops Matmul
! 0.0000 43140.6 141844.0 STRICT
I see a roughly 60% slowdown in the loops with strict floating-point handling, but surprisingly no slowdown with the matmul() function.
Source Code for completeness
program Console1
    use iso_fortran_env
    implicit none
    integer,parameter :: nr = 100000
    integer,parameter :: nx=64,ny=32,nz=16
    real(real64) :: mat1(nx,ny),mat2(ny,nz)
    real(real64) :: result1(nx,nz),result2(nx,nz),diff(nx,nz)
    real(real64) :: localsum
    integer :: i,j,r
    integer(int64) :: tic, toc, rate
    real(real64) :: dt1, dt2

    do i=1,ny
        do j=1,nx
            mat1(j,i)=dble(j)/7d0+2.65d0*dble(i)
        enddo
    enddo
    do i=1,nz
        do j=1,ny
            mat2(j,i)=5d0*dble(j)-dble(i)*0.45d0
        enddo
    enddo

    call SYSTEM_CLOCK(tic,rate)
    do r=1, nr
        result1=mymatmul(mat1,mat2)
    end do
    call SYSTEM_CLOCK(toc,rate)
    dt1 = dble(toc-tic)/rate

    call SYSTEM_CLOCK(tic,rate)
    do r=1, nr
        result2=matmul(mat1,mat2)
    end do
    call SYSTEM_CLOCK(toc,rate)
    dt2 = dble(toc-tic)/rate

    diff=result2-result1
    print ('(1x,a16,1x,a16,1x,a16)'), "Error", "Loops", "Matmul"
    print ('(1x,g16.5,1x,f16.1,1x,f16.1)'), maxval(abs(diff)), nr/dt1, nr/dt2
    ! Error Loops Matmul
    ! 0.58208E-10 107526.9 140056.0 FAST
    ! 0.0000 43140.6 141844.0 STRICT
    !
contains

    pure function mymatmul(a,b) result(c)
        real(real64), intent(in) :: a(:,:), b(:,:)
        real(real64) :: c(size(a,1), size(b,2))
        integer :: i,j,k
        real(real64) :: sum
        do j=1, size(c,2)
            do i=1, size(c,1)
                sum = 0d0
                do k=1, size(a,2)
                    sum = sum + a(i,k)*b(k,j)
                end do
                c(i,j) = sum
            end do
        end do
    end function

end program Console1
Always compiled as Release-x64 and not Debug.

Related

Is there anyway to check if n number of terms in a row of a matrix are equal in fortran?

I was wondering if there was a quick way to have Fortran look through a matrix's rows and determine whether n terms in a row are equal.
I wasn’t able to find a question similar to mine and can’t find any help online.
Assuming we consider an N×N matrix of integers, this comes at O(N³) cost: for each of the N rows, you need to compare each element to every other element in that row, which is O(N²) per row. You probably need to write that yourself, but it's no big deal: loop over the rows and check, for each row separately, whether some element appears n times.
integer :: M(nd, nd)   ! matrix to check (nd is its dimension, N in the text)
integer :: n           ! number of repetitions we are looking for
integer :: i, j, k, counter
logical :: appears     ! flag if some value appears n times in a row
appears = .false.
do i = 1, nd           ! loop over the rows
    do j = 1, nd       ! loop over the entries of the row
        counter = 1
        do k = j + 1, nd
            ! check if the elements are the same, if yes, increase the counter
            ! (exact comparison depends on the type of M)
            if(M(i, k) == M(i, j)) counter = counter + 1
        end do
        ! check after the scan, so "more than n times" does not count as n times
        if(counter == n) appears = .true.
    end do
end do
You can adapt that to your needs, but this is how you could do it.
Here's a pragmatic alternative to the solutions @RodrigoRodrigues has already provided. In the absence of any good evidence (the question is seriously underspecified) that we need to be concerned about asymptotic complexity and all that good stuff, here's a simple, straightforward function which took me about 5 minutes to design, code, and test.
This function accepts a rank-1 array of integers and spits back a rank-1 array of integers, each element corresponding to the count of that element in the input array. If that description confuses you, bear with me and read the code which is fairly simple:
FUNCTION get_counts(arr) RESULT(rslt)
    INTEGER, DIMENSION(:), INTENT(in) :: arr
    INTEGER, DIMENSION(SIZE(arr)) :: rslt
    INTEGER :: ix
    DO ix = 1, SIZE(arr)
        rslt(ix) = COUNT(arr(ix)==arr)
    END DO
END FUNCTION get_counts
For the input array [1,1,2,3,4,1,5] it returns [3,3,1,1,1,3,1]. If OP wants to use this as the basis of a function to see if there is any value which occurs n times then OP could write
any(get_counts(rank_1_integer_array)==n)
If OP is concerned to know what elements occur n times then it is fairly straightforward to use the result of get_counts to refer back to the original array to extract that element.
This solution is pragmatic in the sense that it is parsimonious with my time rather than with the computer's time. My solution is somewhat wasteful of space, which may be an issue for very large input arrays. Any of Rodrigo's solutions may outperform mine, in both time and space, in the limit.
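For reference, here is a minimal sketch of a driver, with the function simply kept in the program's CONTAINS section (the program name is just for illustration):
PROGRAM demo_counts
    IMPLICIT NONE
    INTEGER :: a(7) = [1, 1, 2, 3, 4, 1, 5]
    PRINT *, get_counts(a)              ! prints 3 3 1 1 1 3 1
    PRINT *, ANY(get_counts(a) == 3)    ! T, since the value 1 occurs 3 times
CONTAINS
    FUNCTION get_counts(arr) RESULT(rslt)
        INTEGER, DIMENSION(:), INTENT(in) :: arr
        INTEGER, DIMENSION(SIZE(arr)) :: rslt
        INTEGER :: ix
        DO ix = 1, SIZE(arr)
            rslt(ix) = COUNT(arr(ix) == arr)
        END DO
    END FUNCTION get_counts
END PROGRAM demo_counts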
I was wondering if there was a quick way to have fortran look throughout a matrix's rows and determine if n number of terms are equal.
As far as I understood your problem, this is what you want:
a function with the signature: (integer(:), integer) -> logical
this function receives the 1-D array line and checks if there is any value that appears at least n times in the array
the function is not supposed to indicate what or how many were those values, their positions or the exact number of repetitions
There are many ways to achieve this. Which is the most efficient? That will depend on the specific conditions of your data, system, compiler, etc. To illustrate this, I came up with 3 different solutions. All of them give the correct answer, of course. You are advised to test each of them (or any other you come up with) against samples of your real data.
Naive solution #1 - good 'ol do loops
This is the default algorithm. It traverses line and stores each value into the aggregator list packed, which holds each distinct value found so far, along with how many times it appeared. The moment any value reaches n repetitions, the function returns .true.. If no value reached n repetitions and there is no longer any chance of meeting the condition, it returns .false..
I say default because it is the minimal linear algorithm (that I could figure out) based on good ol' do loops. This would probably be the best choice in the general case, if you have zero information about the nature of the data, the system, or even the programming-language specifics. The aggregator is there to terminate the function as soon as the condition is met, but at the cost of an additional traversal of the list (over its length). If there are many different values in the data and n is large, the aggregator gets long and the look-up can become an expensive operation. Also, there is almost no room for parallelism, vectorization and other optimizations.
! generic approach, with loops and aggregator
pure logical function has_at_least_n_repeated(line, n)
    integer, intent(in) :: line(:), n
    integer :: i, j, max_repetitions, qty_distincts
    ! packed(1,:) -> the distinct integers found so far
    ! packed(2,:) -> number of repetitions of each distinct integer so far
    integer :: packed(2, size(line) - n + 2)

    if(n < 1 .or. size(line) == 0) then
        has_at_least_n_repeated = .false.
    else if(n == 1) then
        has_at_least_n_repeated = .true.
    else
        packed(:, 1) = [line(1), 1]
        qty_distincts = 1
        max_repetitions = 1
        i = 1
        ! iterate until there aren't enough elements left to reach n repetitions
        outer: do, while(i - max_repetitions <= size(line) - n)
            i = i + 1
            ! test for a match on packed
            do j = 1, qty_distincts
                if(packed(1, j) == line(i)) then
                    packed(2, j) = packed(2, j) + 1
                    if(packed(2, j) == n) then
                        has_at_least_n_repeated = .true.
                        return
                    end if
                    max_repetitions = max(max_repetitions, packed(2, j))
                    cycle outer
                end if
            end do
            ! add to packed
            qty_distincts = qty_distincts + 1
            packed(:, qty_distincts) = [line(i), 1]
        end do outer
        has_at_least_n_repeated = .false.
    end if
end
Naive solution #2 - trying for some vectorization
This approach tries to take advantage of the array-ish nature of Fortran and the fast implementations of the intrinsic functions. Instead of an internal do loop, there is a call to the intrinsic count with an array argument, allowing the compiler to do some vectorization. Also, if you have any tool for parallelism, or if you know how to work with coarrays (and your compiler supports them), you could use this approach to apply them.
The disadvantage here is that the function scans for every element, even ones that appeared before. So this is more suitable when there are many different possible values in your data, with few repetitions. It would also be easy to add a cached list of past values and use the intrinsic any, passing the cache as a whole array (a sketch of such a cached variant is given after the code below).
! alternative approach, intrinsic functions without cache
pure logical function has_at_least_n_repeated(line, n)
    integer, intent(in) :: line(:), n
    integer :: i

    if(n < 1 .or. size(line) == 0) then
        has_at_least_n_repeated = .false.
    else if(n == 1) then
        has_at_least_n_repeated = .true.
    else
        ! iterate until there aren't enough elements left to reach n repetitions
        do i = 1, size(line) - n + 1
            if(count(line(i + 1:) == line(i)) + 1 >= n) then
                has_at_least_n_repeated = .true.
                return
            end if
        end do
        has_at_least_n_repeated = .false.
    end if
end
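For completeness, here is a rough sketch of that cached variant (same interface as above; the name has_at_least_n_repeated_cached is just for illustration):
! variant of approach #2 with a cache of values already checked
pure logical function has_at_least_n_repeated_cached(line, n)
    integer, intent(in) :: line(:), n
    integer :: i, qty_cached
    integer :: cache(size(line))

    has_at_least_n_repeated_cached = .false.
    if(n < 1 .or. size(line) == 0) return
    if(n == 1) then
        has_at_least_n_repeated_cached = .true.
        return
    end if
    qty_cached = 0
    do i = 1, size(line) - n + 1
        ! a value already counted from an earlier position cannot reach n later
        if(any(cache(1:qty_cached) == line(i))) cycle
        if(count(line(i + 1:) == line(i)) + 1 >= n) then
            has_at_least_n_repeated_cached = .true.
            return
        end if
        qty_cached = qty_cached + 1
        cache(qty_cached) = line(i)
    end do
end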
Naive solution #3 - functional style
This is my favorite (a purely personal preference). I like functional languages and I enjoy borrowing some of their aspects into imperative languages. This approach delegates the calculation to an internal auxiliary recursive function. There are no do loops here. On each call, only a section of line is passed as argument: a shorter array with only the values not checked so far. No need for a cache either.
To be honest, Fortran's support for recursion is far from great: there is no tail-call optimization, compilers usually impose a low call-stack limit, and many automatic optimizations are prevented by recursion. Even so, the algorithm is neat, I love how it looks, and I wouldn't discard it before doing some tests and comparisons.
Note: an internal procedure (one defined in the contains part of a main program) cannot itself contain another internal procedure. For this to work as presented, you'd need to put the function in a module or submodule, or make it an external function. Another option would be to extract the nested function aux and make it a separate procedure in the same scope.
! functional approach, auxiliary recursive function and no loops
pure logical function has_at_least_n_repeated(line, n)
    integer, intent(in) :: line(:), n

    if(n < 1 .or. size(line) == 0) then
        has_at_least_n_repeated = .false.
    else if(n == 1) then
        has_at_least_n_repeated = .true.
    else
        has_at_least_n_repeated = aux(line)
    end if
contains
    ! on each call removes all entries of one element from the array
    pure recursive function aux(section) result(out)
        integer, intent(in) :: section(:)
        logical :: out, mask(size(section))
        integer :: left

        mask = section /= section(1)
        left = count(mask)
        if(size(section) - left >= n) then
            out = .true.
        else if(n > left) then
            out = .false.
        else
            out = aux(pack(section, mask))
        end if
    end
end
Conclusion
Do the tests before choosing a path to follow! I talked a little here about my personal feelings on each approach and its implications, but it would be really nice if some of the Fortran gurus on this site joined the discussion and provided accurate information and criticism.
I took the question to mean that it was to be determined whether any value in a row is repeated at least n times. To figure this out I chose to sort a copy of each row using qsort from the C standard library; then it's easy to find the length of each run of equal values.
module sortrow
    use ISO_C_BINDING
    implicit none
    interface
        subroutine qsort(base, num, size, compar) bind(C,name='qsort')
            import
            implicit none
            integer base(*)
            integer(C_SIZE_T), value :: num, size
            procedure(icompar) compar
        end subroutine qsort
    end interface
contains
    function icompar(p1, p2) bind(C)
        integer(C_INT) icompar
        integer p1, p2
        select case(p1-p2)
        case(:-1)
            icompar = -1
        case(0)
            icompar = 0
        case(1:)
            icompar = 1
        end select
    end function icompar
end module sortrow

program main
    use sortrow
    implicit none
    integer, parameter :: M = 3, N = 10
    integer i, j
    integer array(M,N)
    real harvest
    integer, allocatable :: row(:)
    integer current, maxMatch

    call random_seed
    do i = 1, M
        do j = 1, N
            call random_number(harvest)
            array(i,j) = harvest*3
        end do
    end do

    do i = 1, M
        row = array(i,:)
        call qsort(row, int(N,C_SIZE_T), C_SIZEOF(array(1,1)), icompar)
        maxMatch = 0
        current = 1
        do j = 2, N
            if(row(j) == row(j-1)) then
                current = current+1
            else
                current = 1
            end if
            maxMatch = max(maxMatch,current)
        end do
        write(*,'(*(g0:1x))') array(i,:),'maxMatch =',maxMatch
    end do
end program main
Sample run:
0 0 0 2 0 2 1 1 1 0 maxMatch = 5
2 1 2 1 0 1 2 1 2 0 maxMatch = 4
0 0 2 2 2 2 2 0 1 1 maxMatch = 5

Changing OMP_SCHEDULE leads to incorrect results

I wrote a Smoothed Particle Hydrodynamics code that produces correct results when using a static schedule for OpenMP. When I say correct results, I mean that I have validated them against analytical solutions.
However, I wanted to switch to a dynamic schedule, since the work to be done in each iteration is not constant. As far as I understand, in that case a dynamic schedule leads to a faster computation. Indeed, it runs faster, but the results are now incorrect, which looks to me like a race condition.
A simplified version of the loops I applied OpenMP to is as follows:
array1 = 0.0       ! Array of size N
array2 = 0.0       ! Array of size N
array3 = something ! Array of size N, will not change through the loop

!$OMP PARALLEL DO SHARED(array1, array2, array3) FIRSTPRIVATE(N) PRIVATE(i, j, N2, temp) SCHEDULE(runtime) DEFAULT(none)
do i=1,N
    N2 = function1(i) ! Reading from array3 and some math
    do j=1,N2
        temp = 0.0
        temp = function2(i,j) ! No writing here, just reading stuff from array3, doing math and storing the result in temp
        array1(i) = array1(i) + temp
    end do
    array2(i) = function3(array1(i)) ! Some math on array1(i), just reading
end do
!$OMP END PARALLEL DO
I looked into the "false sharing" problem, but it seems to be only a performance issue, meaning that it does not affect the results (if I am correct).
Has somebody already faced a problem like this? Am I missing a race condition?

Write elements of matrix into vector using OpenMP

I am rather new to OpenMP. I want to write all elements of a big matrix into a vector, using OpenMP threading to speed things up.
In my serial code I am simply doing the following:
m=1
DO k=1,n_lorentz
    DO i=1,n_channels
        DO p=1,n_lorentz
            DO j=1,n_channels
                vector(m) = Omega(j,p,i,k)
                m=m+1
            END DO
        END DO
    END DO
END DO
Now I'd like to use an OMP loop to write the elements of Omega into vector in a parallel fashion:
!$OMP PARALLEL DO PRIVATE(k,i,p,j)
! bla bla
!$OMP END PARALLEL DO
The question is how to keep track of the current vector index, since in this case the m counter from the serial code would be incremented by different threads, resulting in a total mess.
One answer is: you don't need to keep track of m. Instead, analyzing the loop, we find that:
Every time j increases by one, m increases by one;
Every time p increases by one, m increases by n_channels;
Every time i increases by one, m increases by n_channels*n_lorentz;
Every time k increases by one, m increases by n_channels*n_lorentz*n_channels.
From these observations, you can write an explicit expression for m:
m = j + n_channels*((p-1) + n_lorentz*((i-1) + n_channels*(k-1)))
Being able to explicitly calculate the index should solve your problem :).
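A sketch of the resulting parallel loop, using the same names as in the question (untested, just to illustrate the idea):
!$OMP PARALLEL DO PRIVATE(k,i,p,j,m) SHARED(Omega,vector) COLLAPSE(2)
DO k=1,n_lorentz
    DO i=1,n_channels
        DO p=1,n_lorentz
            DO j=1,n_channels
                ! the index is computed from the loop variables,
                ! so no shared counter needs to be incremented
                m = j + n_channels*((p-1) + n_lorentz*((i-1) + n_channels*(k-1)))
                vector(m) = Omega(j,p,i,k)
            END DO
        END DO
    END DO
END DO
!$OMP END PARALLEL DO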

Parallelizing nested loop with OpenMP

I have a relatively simple loop where I'm calculating the net acceleration of a system of particles using a brute-force method.
I have a working OpenMP loop which loops over each particle and compares it to every other particle, for O(n^2) complexity:
!$omp parallel do private(i) shared(bodyArray, n) default(none)
do i = 1, n
    ! acc is real(real64), dimension(3)
    bodyArray(i)%acc = bodyArray(i)%calcNetAcc(i, bodyArray)
end do
which works just fine.
What I'm trying to do now is to reduce the calculation time by computing the force on each pair of bodies only once, using the fact that F(a->b) = -F(b->a), which halves the number of interactions to calculate (n^2 / 2). I do that in this loop:
call clearAcceleration(bodyArray) ! zero out acceleration

!$omp parallel do private(i, j) shared(bodyArray, n) default(none)
do i = 1, n
    do j = i, n
        if ( i /= j .and. j > i) then
            bodyArray(i)%acc = bodyArray(i)%acc + bodyArray(i)%accTo(bodyArray(j))
            bodyArray(j)%acc = bodyArray(j)%acc - bodyArray(i)%acc
        end if
    end do
end do
But I'm having a lot of difficulty parallelizing this loop; I keep getting junk results. I think it has to do with this line:
bodyArray(j)%acc = bodyArray(j)%acc - bodyArray(i)%acc
and that the forces are not being added up properly with all the different 'j' writing to it.
I've tried using the atomic statement, but that's not allowed on array variables. So then I tried critical, but that increases the time it takes by about 20, and still doesn't give correct results. I also tried adding an ordered statement, but then I just get NaN for all my results.
Is there an easy fix to get this loop working with OpenMP?
Working code; it gives a slight speed improvement, but not the ~2x I was looking for.
!$omp parallel do private(i, j) shared(bodyArray, forces, n) default(none) schedule(guided)
do i = 1, n
    do j = 1, i-1
        forces(j, i)%vec = bodyArray(i)%accTo(bodyArray(j))
        forces(i, j)%vec = -forces(j, i)%vec
    end do
end do

!$omp parallel do private(i, j) shared(bodyArray, n, forces) schedule(static)
do i = 1, n
    do j = 1, n
        bodyArray(i)%acc = bodyArray(i)%acc + forces(j, i)%vec
    end do
end do
With your current approach and data structures you're going to struggle to get good speedup with OpenMP. Consider the loop nest
!$omp parallel do private(i, j) shared(bodyArray, n) default(none)
do i = 1, n
    do j = i, n
        if ( i /= j .and. j > i) then
            bodyArray(i)%acc = bodyArray(i)%acc + bodyArray(i)%accTo(bodyArray(j))
            bodyArray(j)%acc = bodyArray(j)%acc - bodyArray(i)%acc
        end if
    end do
end do
[Actually, before you consider it, revise it as follows ...
!$omp parallel do private(i, j) shared(bodyArray, n) default(none)
do i = 1, n
    do j = i+1, n
        bodyArray(i)%acc = bodyArray(i)%acc + bodyArray(i)%accTo(bodyArray(j))
        bodyArray(j)%acc = bodyArray(j)%acc - bodyArray(i)%acc
    end do
end do
..., now back to the issues]
There are two problems here:
As you've already twigged, you've got a data race updating bodyArray(j)%acc; multiple threads will try to update the same element and there is no coordination of those updates. Junk results. Using critical sections or ordering the statements serialises the code; when you get it right you also get it as slow as it was before you started with OpenMP.
The pattern of access to elements of bodyArray is cache-unfriendly. It wouldn't surprise me to find that, even if you address the data race without serialising the computation, the impact of the cache-unfriendliness is to produce code slower than the original. Modern CPUs are crazy-fast in computation but the memory systems struggle to feed the beasts so cache effects can be massive. Trying to run two loops over the same rank-1 array simultaneously, which is in essence what your code does, is never (?) going to shift data through cache at maximum speed.
Personally I would try the following. I'm not going to guarantee that this will be faster, but it will be easier (I think) than fixing your current approach and fit OpenMP like a glove. I do have a nagging doubt that this is overcomplicating matters, but I haven't had a better idea yet.
First, create a 2D array of reals, call it forces, where element forces(i,j) is the force that particle i exerts on particle j. Then, some code like this (untested, that's your responsibility if you care to follow this line):
forces = 0.0 ! Parallelise this if you want to

!$omp parallel do private(i, j) shared(bodyArray, forces, n) default(none)
do i = 1, n
    do j = 1, i-1
        forces(i,j) = bodyArray(i)%accTo(bodyArray(j)) ! if I understand correctly
    end do
end do
then sum the forces on each particle (and get the following right, I haven't checked carefully)
!$omp parallel do private(i) shared(bodyArray, forces, n) default(none)
do i = 1, n
    bodyArray(i)%acc = sum(forces(:,i))
end do
As I wrote above, computation is extremely fast and if you have the memory to spare it's often well worth trading some space for time.
Now what you have is, probably, a problem with load balancing in the loop nest over forces. Most OpenMP implementations will, by default, perform a static distribution of work (this is not required by the standard but seems to be most common, check your documentation). So thread 1 will get the first n/num_threads rows to deal with, but these are the itty-bitty little rows at the top of the triangle you're computing. Thread 2 will get more work, thread 3 still more, and so forth. You might get away with simply adding a schedule(dynamic) clause to the parallel directive, you might have to work a bit harder to balance the load.
You may also want to review my code snippets wrt cache-friendliness and adjust as appropriate. And you may well find, if you do as I suggest, that you were better off with your original code, that halving the amount of computation doesn't actually save much time.
Another approach would be to pack the lower (or upper) triangle of forces into a rank-1 array and use some fancy indexing arithmetic to transform 2D (i,j) indices into a 1D index into that array. This would save storage space, and might be easier to make cache-friendly.
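For instance, a possible 1-based mapping for the strictly lower triangle (pairs with j < i) is sketched below; pair_index is just an illustrative name:
! map a pair (i,j) with 1 <= j < i onto a 1D index into the packed lower
! triangle; the index runs from 1 to n*(n-1)/2
integer function pair_index(i, j) result(idx)
    integer, intent(in) :: i, j
    idx = (i - 1)*(i - 2)/2 + j
end function pair_index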

openmp issues when three do-loops are involved (fortran)

I am very confused about this problem regarding OpenMP in Fortran. Specifically, when I write the program like this:
PROGRAM TEST
    IMPLICIT NONE
    INTEGER :: i,j,l
    INTEGER :: M(2,2)
    i=2
    j=2
    l=41
    !$OMP PARALLEL SHARED(M),PRIVATE(l,i,j)
    !$OMP DO
    DO i=1,2
        DO j=1,2
            DO l=0,41
                M(i,j)=M(i,j)+1
            ENDDO
        ENDDO
    ENDDO
    !$OMP END DO
    !$OMP END PARALLEL
END PROGRAM TEST
After compiling with ifort -openmp test.f90, it works well, and the result for M(1,1) is 42 as expected.
However, when I only swap the order of the sum over l and {i,j}, like the following:
PROGRAM TEST
    IMPLICIT NONE
    INTEGER :: i,j,l
    INTEGER :: M(2,2)
    i=2
    j=2
    l=41
    !$OMP PARALLEL SHARED(M),PRIVATE(l,i,j)
    !$OMP DO
    DO l=0,41
        DO i=1,2
            DO j=1,2
                M(i,j)=M(i,j)+1
            ENDDO
        ENDDO
    ENDDO
    !$OMP END DO
    !$OMP END PARALLEL
END PROGRAM TEST
After compiling with ifort -openmp test.f90, it doesn't work well. In fact, when you run a.out several times, the result for M(1,1) seems to be random. Does anyone know what the problem is? Also, if I want to obtain the right results with the summation order
DO l=0,41
    DO i=1,2
        DO j=1,2
what part of this code should I modify?
Many thanks for any help.
You have a race condition. Threads with different l are trying to update the same element M(i,j). You can use tools like Intel Inspector or Oracle Thread Analyzer to find it (I checked with Intel). The best thing to do is to use your original loop order. You can also use reduction, but be careful with larger arrays:
PROGRAM TEST
    IMPLICIT NONE
    INTEGER :: i,j,l
    INTEGER :: M(2,2)
    M = 0
    !$OMP PARALLEL DO PRIVATE(l,i,j),reduction(+:M)
    DO l = 0, 41
        DO i = 1, 2
            DO j = 1, 2
                M(i,j) = M(i,j) + 1
            END DO
        END DO
    END DO
    !$OMP END PARALLEL DO
    print *, M
END PROGRAM
There are many problems with your approach. First of all, the missing initialization of your array M. Inside your loop, you issue
M(i,j) = M(i,j) + 1
without having given any initial value to M(i,j). So the algorithm is non-deterministic even in the serial case, and it is just a matter of luck that you obtain the right result with any specific compiler or any specific summation order.
Additionally, if you parallelize the loop over l, like
!$OMP PARALLEL DO SHARED(M),PRIVATE(l,i,j)
DO l = 0, 41
    DO i = 1, 2
        DO j = 1, 2
            M(i,j) = M(i,j) + 1
        END DO
    END DO
END DO
every thread will have its own nested loop construct over i and j covering all matrix elements. Consequently, different threads will access the same elements of the matrix at the same time, and the result is again non-deterministic. You could, of course, try to solve the issue by ensuring via OpenMP constructs that the threads wait for each other before accessing a certain matrix element. However, that would make the algorithm far too slow. The best you can do in this case, in my opinion, is to parallelize over the matrix elements (the loops over i and j), as sketched below.
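A sketch of that element-wise parallelization (the l loop is kept innermost; each thread then owns distinct elements M(i,j), so no synchronization is needed, and COLLAPSE exposes all 4 elements as parallel work):
M = 0
!$OMP PARALLEL DO SHARED(M) PRIVATE(i,j,l) COLLAPSE(2)
DO i = 1, 2
    DO j = 1, 2
        DO l = 0, 41
            M(i,j) = M(i,j) + 1
        END DO
    END DO
END DO
!$OMP END PARALLEL DO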
By the way, the lines
i=2
j=2
l=41
in your code are superfluous, since you immediately use them as loop variables, so they will be overwritten anyway.