Why is my C code slower using OpenMP? - c++

I'm trying to do multi-threaded programming on the CPU using OpenMP. I have lots of for loops which are good candidates to be made parallel. I attached a part of my code here. When I use the first #pragma omp parallel for reduction, my code is faster, but when I try to use the same directive to parallelize other loops it gets slower. Does anyone have any idea why?
.
.
.
omp_set_dynamic(0);
omp_set_num_threads(4);
float *h1=new float[nvi];
float *h2=new float[npi];
while(tol>0.001)
{
    std::fill_n(h2, npi, 0);
    int k,i;
    float h222=0;
    #pragma omp parallel for private(i,k) reduction (+: h222)
    for (i=0;i<npi;++i)
    {
        int p1=ppi[i];
        int m = frombus[p1];
        for (k=0;k<N;++k)
        {
            h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                    + B[m-1][k]*sin(del[m-1]-del[k]));
        }
        h2[i]=h222;
    }
    //*********** h3 *****************
    std::fill_n(h3, nqi, 0);
    float h333=0;
    #pragma omp parallel for private(i,k) reduction (+: h333)
    for (int i=0;i<nqi;++i)
    {
        int q1=qi[i];
        int m = frombus[q1];
        for (int k=0;k<N;++k)
        {
            h333 += v[m-1]*v[k]*(G[m-1][k]*sin(del[m-1]-del[k])
                    - B[m-1][k]*cos(del[m-1]-del[k]));
        }
        h3[i]=h333;
    }
.
.
.
}

I don't think your OpenMP code gives the same result as the code without OpenMP. Let's just concentrate on the h2[i] part of the code (since the h3[i] part has the same logic). There is a dependency of h2[i] on the index i: because h222 is never reset inside the outer loop, the serial code computes h2[i] = h2[i-1] + (the inner sum over k for iteration i). The OpenMP reduction you're doing won't give the correct result. If you want to do the reduction with OpenMP you need to do it on the inner loop like this:
float h222 = 0;
for (int i=0; i<npi; ++i) {
    int p1 = ppi[i];
    int m = frombus[p1];
    #pragma omp parallel for reduction(+:h222)
    for (int k=0; k<N; ++k) {
        h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
However, I don't know if that will be very efficient. An alternative method is to fill h2[i] in parallel in the outer loop without a reduction and then take care of the dependency serially. Even though the serial loop is not parallelized, it should have only a small effect on the computation time since it does not contain the inner loop over k. This gives the same result with and without OpenMP and is still fast.
#pragma omp parallel for
for (int i=0; i<npi; ++i) {
    int p1 = ppi[i];
    int m = frombus[p1];
    float h222 = 0;
    for (int k=0; k<N; ++k) {
        h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
// take care of the dependency serially
for (int i=1; i<npi; i++) {
    h2[i] += h2[i-1];
}

Keep in mind that creating and destroying threads is a time-consuming process; clock the execution time of the process and see for yourself. You only use the parallel reduction twice, which may be faster than a serial reduction, but the initial cost of creating the threads may still be higher. Try parallelizing the outermost loop (if possible) to see if you can obtain a speedup.
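As a rough way to clock it (a minimal, self-contained sketch, not your code), omp_get_wtime() can be wrapped around the region being timed:
#include <cstdio>
#include <omp.h>

int main() {
    const int N = 1000000;
    static double a[N];

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        a[i] = i * 0.5;                 // stand-in for the real work
    double t1 = omp_get_wtime();

    std::printf("parallel region took %f s\n", t1 - t0);
    return 0;
}
Comparing this time against the serial version of the same loop shows whether the threading overhead pays off for that particular loop.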

Related

Aspects that affect the efficiency of OpenMP parallelism

I would like to parallelize a big loop using OpenMP to improve its efficiency. Here is the main part of the toy code:
vector<int> config;
config.resize(indices.size());
omp_set_num_threads(2);
#pragma omp parallel for schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) {               // the outer loop that I would like to parallelize
    #pragma omp simd
    for (int j = 0; j < indices.size(); ++j) {  // pick some columns from a big ref_table
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config);               // do simple computations on the picked values to get the index
    #pragma omp atomic
    result[index]++;
}
Then I found I cannot get improvements in efficiency if I use 2, 4, or 8 threads: the execution time of the parallel versions is generally greater than that of the sequential version. The outer loop has 10000 iterations and they are independent, so I want multiple threads to execute those iterations in parallel.
I guess the reasons for the performance decrease may include: private copies of config? Or random access of ref_table? Or the expensive atomic operation? So what are the exact reasons for the performance decrease? More importantly, how can I get a shorter execution time?
Private copies of config and random access of ref_table are not problematic; I think the workload is very small. There are 2 potential issues which prevent efficient parallelization:
the atomic operation is too expensive;
the overheads are bigger than the workload (which simply means that it is not worth parallelizing with OpenMP).
I do not know which one is more significant in your case, so it is worth trying to get rid of the atomic operation. There are 2 cases:
a) If the result array is zero-initialized, you have to use #pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config), where N is the size of the result array, and delete #pragma omp atomic. Note that this works on OpenMP 4.5 or later. It is also worth removing #pragma omp simd for a loop of 2-10 iterations. So your code should look like this:
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) {               // the outer loop that I would like to parallelize
    for (int j = 0; j < indices.size(); ++j) {  // pick some columns from a big ref_table
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config);               // do simple computations on the picked values to get the index
    result[index]++;
}
b) If the result array is not zero-initialized the solution is very similar, but use a temporary zero-initialized array in the loop and afterwards add it to the result array.
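A minimal sketch of variant (b), reusing the names from the question and assuming result holds int counts of size N, could look like this:
std::vector<int> partial(N, 0);   // zero-initialized temporary
int *p = partial.data();          // the array-section reduction needs an array or pointer base

#pragma omp parallel for reduction(+:p[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) {
    for (int j = 0; j < indices.size(); ++j) {
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config);
    p[index]++;
}

for (int k = 0; k < N; ++k)       // merge into the existing result array serially
    result[k] += partial[k];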
If the speed still does not increase, then your code is not worth parallelizing with OpenMP on your hardware.

OpenMP: Is array reduction always needed for updating an array in parallel?

I am quite new to OpenMP. I have the following simple loop that I want to run in parallel with OpenMP:
double rij[3];
double r;
#ifdef _OPENMP
#pragma omp parallel for private(rij,r)
#endif
for (int i=0; i<n; ++i)
{
    for (int j=0; j<n; ++j)
    {
        if (i != j)
        {
            distance(X,rij,r,i,j);
            V[i] += ke * Q[j] / r;
            for (int k=0; k<3; ++k)
            {
                F[3*i+k] += ke * Q[j] * rij[k] / pow(r,3);
            }
        }
    }
}
From what I understood, variables are shared by default, which is why I only declared private(rij,r). But according to these questions (first, second, third), I should do an array reduction in this case.
It's clear to me that if many threads need to sum to the same variable, this has to be done with #pragma omp parallel for reduction(+:A[:n]) for summing to array A of size n. This is what I do in another part of my code, and it works as expected.
However, in this case workers never have to sum to the same variable: every worker performs the sum for its own index i. Is it correct to do as I do in this case, i.e. not doing any array reduction and not using any critical section?
If my implementation is correct, I believe it would avoid the overhead of the critical section while being simpler code. Feel free to give your advice on how this could be better optimized.
Thank you
You don't need a reduction. Reductions are a feature to avoid writing the same boilerplate code over and over again, because they are a recurring problem (try to think of how you would implement a sum-reduction without OpenMP).
What you do right now works on parallel data (V[i]) which does not overlap between iterations (as you state in the question), because each iteration writes only to its own index i. Furthermore, the writes to F[...] shouldn't overlap either, because they depend only on i and k.
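For contrast, here is a minimal, self-contained hypothetical sketch (not your code) of the opposite situation, where every thread writes into the same elements of an array and an array reduction therefore is required (OpenMP 4.5+):
#include <cstdio>

int main() {
    const int n = 8;
    double V[n] = {0.0};

    // Indexed by the inner variable j: all threads touch all elements of V,
    // so each thread needs a private copy that is summed at the end.
    #pragma omp parallel for reduction(+:V[:n])
    for (int i = 0; i < 100; ++i)
        for (int j = 0; j < n; ++j)
            V[j] += 1.0;

    for (int j = 0; j < n; ++j)
        std::printf("V[%d] = %g\n", j, V[j]);   // every element ends up as 100
    return 0;
}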

OpenMP parallelize multiple sequential loops

I want to parallelize the following function with OpenMP:
void calculateAll() {
    int k;
    int nodeId1, minCost1, lowerLimit1, upperLimit8;
    for (k = mostUpperLevel; k > 0; k--) {
        int myStart = borderNodesArrayStartGlobal[k - 1];
        int size = myStart + borderNodesArraySizeGlobal[k - 1];
        /* this loop may be parallel */
        for (nodeId1 = myStart; nodeId1 < size; nodeId1++) {
            if (getNodeScanned(nodeId1)) {
                setNodeScannedFalse(nodeId1);
            } else {
                minCost1 = myMax;
                lowerLimit1 = getNode3LevelsDownAll(nodeId1);
                upperLimit8 = getUpperLimit3LevelsDownAll(nodeId1);
                changeNodeValue(nodeId1, lowerLimit1, upperLimit8, minCost1, minCost1);
            }
        }
    }
    int myStart = restNodesArrayStartGlobal;
    int size = myStart + restNodesArraySizeGlobal;
    /* this loop may also be parallel */
    for (nodeId1 = myStart; nodeId1 < size; nodeId1++) {
        if (getNodeScanned(nodeId1)) {
            setNodeScannedFalse(nodeId1);
        } else {
            minCost1 = myMax;
            lowerLimit1 = getNode3LevelsDownAll(nodeId1);
            upperLimit8 = getUpperLimit3LevelsDownAll(nodeId1);
            changeNodeValue(nodeId1, lowerLimit1, upperLimit8, minCost1, minCost1);
        }
    }
}
Although I can use #pragma omp parallel for on the 2 inner loops, the code is too slow due to the constant overhead of creating new threads. Is there a way to separate #pragma omp parallel so that at the beginning of the function I create the necessary threads, and then use #pragma omp for to get the best possible results? I am using gcc 4.6.
Thanks in advance
The creation of the threads is normally not the bottleneck in OpenMP programs; it is the distribution of the tasks to the threads. The threads are actually created at the first parallel region (you can verify that with a profiler like VTune). At each loop the work is assigned to the threads, and this assignment is often the problem, as it is a costly operation.
You should also try to play around with the schedulers, as this might have a big impact on the performance. E.g. play with schedule(dynamic,chunksize) vs schedule(static,chunksize), and also try different chunk sizes.
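For completeness, here is a rough sketch of what the question describes: a single parallel region hoisted around both loops, with #pragma omp for inside. It reuses the identifiers from the question and assumes the helper functions are thread-safe; the implicit barrier at the end of each worksharing loop keeps the levels processed in order.
void calculateAll() {
    #pragma omp parallel
    {
        for (int k = mostUpperLevel; k > 0; k--) {
            // every thread computes its own copies of these cheap bounds
            int myStart = borderNodesArrayStartGlobal[k - 1];
            int size = myStart + borderNodesArraySizeGlobal[k - 1];
            #pragma omp for
            for (int nodeId1 = myStart; nodeId1 < size; nodeId1++) {
                if (getNodeScanned(nodeId1)) {
                    setNodeScannedFalse(nodeId1);
                } else {
                    int minCost1 = myMax;
                    int lowerLimit1 = getNode3LevelsDownAll(nodeId1);
                    int upperLimit8 = getUpperLimit3LevelsDownAll(nodeId1);
                    changeNodeValue(nodeId1, lowerLimit1, upperLimit8, minCost1, minCost1);
                }
            } // implicit barrier: level k finishes before level k-1 starts
        }
        int myStart = restNodesArrayStartGlobal;
        int size = myStart + restNodesArraySizeGlobal;
        #pragma omp for
        for (int nodeId1 = myStart; nodeId1 < size; nodeId1++) {
            if (getNodeScanned(nodeId1)) {
                setNodeScannedFalse(nodeId1);
            } else {
                int minCost1 = myMax;
                int lowerLimit1 = getNode3LevelsDownAll(nodeId1);
                int upperLimit8 = getUpperLimit3LevelsDownAll(nodeId1);
                changeNodeValue(nodeId1, lowerLimit1, upperLimit8, minCost1, minCost1);
            }
        }
    }
}
Whether this helps depends on whether the worksharing overhead, rather than thread creation, was the real cost, as noted above.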

Parallel Processing Collision Pairs

I am writing some code for parallel processing of collisions. The expected result would be an acceleration with each thread added, but I'm not getting any speed-up on the data processing, because I have a critical section inside parallel_reduce() and I believe it serializes access to the objects too much. This is how the code looks:
do {
    totalVel = 0.;
    #pragma omp parallel for
    for (unsigned long i = 0; i < bodyContact.size(); i++) {
        totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
        totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
    }
} while (totalVel >= 0.00001);
Is there any way to gain more speed by making it parallel, or is the serialization of access too much?
Observations:
bodyA() and bodyB() are objects that repeat themselves a lot inside the bodyContact container.
For now parallel_reduce() only does one multiplication (the critical section), but it will get more complex.
double parallel_reduce(){
    #pragma omp critical
    this->vel_ *= 0.99;
    return vel_.length();
}
Actual timings:
serial, 25.635
parallel, 123.559
There is always a cost to using OpenMP constructs, so avoid putting a parallel region inside a loop; depending on the implementation, it could launch new threads each time instead of re-waking the previously launched threads.
In fact, if bodyContact.size() is small, the number of do {} while steps is big, and parallel_reduce() is very quick, it is very hard to get scalability with just a few OpenMP pragmas.
#pragma omp parallel shared(totalVel) shared(bodyContact)
{
    do {
        totalVel = 0.;
        #pragma omp for reduce(+:totalVel)
        for (unsigned long i = 0; i < bodyContact.size(); i++) {
            totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
            totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
        }
    } while (totalVel >= 0.00001);
}
The above is likely not only slower, but very likely wrong; all the threads are trying to update the same totalVel. There are tonnes of race conditions, but also contention, cache invalidation, etc.
Assuming the parallel_reduce() stuff is ok, you'd like something more like
do {
    totalVel = 0.;
    #pragma omp parallel for default(none) shared(bodyContact) reduction(+:totalVel)
    for (unsigned long i = 0; i < bodyContact.size(); i++) {
        totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
        totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
    }
} while (totalVel >= 0.00001);
which will do the reduction on totalVel correctly.

Parallel for loop in openmp

I'm trying to parallelize a very simple for-loop, but this is my first attempt at using openMP in a long time. I'm getting baffled by the run times. Here is my code:
#include <vector>
#include <algorithm>
#include <cmath>
#include <iostream>
using namespace std;

int main ()
{
    int n=400000, m=1000;
    double x=0, y=0;
    double s=0;
    vector<double> shifts(n,0);

    #pragma omp parallel for
    for (int j=0; j<n; j++) {
        double r=0.0;
        for (int i=0; i < m; i++){
            double rand_g1 = cos(i/double(m));
            double rand_g2 = sin(i/double(m));
            x += rand_g1;
            y += rand_g2;
            r += sqrt(rand_g1*rand_g1 + rand_g2*rand_g2);
        }
        shifts[j] = r / m;
    }
    cout << *std::max_element( shifts.begin(), shifts.end() ) << endl;
}
I compile it with
g++ -O3 testMP.cc -o testMP -I /opt/boost_1_48_0/include
that is, no "-fopenmp", and I get these timings:
real 0m18.417s
user 0m18.357s
sys 0m0.004s
when I do use "-fopenmp",
g++ -O3 -fopenmp testMP.cc -o testMP -I /opt/boost_1_48_0/include
I get these numbers for the times:
real 0m6.853s
user 0m52.007s
sys 0m0.008s
which doesn't make sense to me. How can using eight cores result in only a 3-fold performance increase? Am I coding the loop correctly?
You should make use of the OpenMP reduction clause for x and y:
#pragma omp parallel for reduction(+:x,y)
for (int j=0; j<n; j++) {
    double r=0.0;
    for (int i=0; i < m; i++){
        double rand_g1 = cos(i/double(m));
        double rand_g2 = sin(i/double(m));
        x += rand_g1;
        y += rand_g2;
        r += sqrt(rand_g1*rand_g1 + rand_g2*rand_g2);
    }
    shifts[j] = r / m;
}
With reduction, each thread accumulates its own partial sum in x and y, and in the end all partial values are summed together in order to obtain the final values.
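For intuition, a rough sketch of what the reduction does behind the scenes (illustrative only; the actual implementation is compiler-specific) could look like this:
#pragma omp parallel
{
    double x_local = 0.0, y_local = 0.0;   // per-thread partial sums

    #pragma omp for
    for (int j = 0; j < n; j++) {
        double r = 0.0;
        for (int i = 0; i < m; i++) {
            double rand_g1 = cos(i/double(m));
            double rand_g2 = sin(i/double(m));
            x_local += rand_g1;
            y_local += rand_g2;
            r += sqrt(rand_g1*rand_g1 + rand_g2*rand_g2);
        }
        shifts[j] = r / m;
    }

    #pragma omp atomic
    x += x_local;                           // combine the partials once per thread
    #pragma omp atomic
    y += y_local;
}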
Serial version:
25.05s user 0.01s system 99% cpu 25.059 total
OpenMP version w/ OMP_NUM_THREADS=16:
24.76s user 0.02s system 1590% cpu 1.559 total
See - superlinear speed-up :)
Let's try to understand how to parallelize a simple for loop using OpenMP:
#pragma omp parallel
#pragma omp for
for(i = 1; i < 13; i++)
{
    c[i] = a[i] + b[i];
}
Assume that we have 3 available threads. This is what will happen: first, threads are assigned independent sets of iterations, and finally, threads must wait at the end of the work-sharing construct.
Because this question is highly viewed, I decided to add a bit of OpenMP background to help those visiting it.
The #pragma omp parallel creates a parallel region with a team of threads, where each thread executes the entire block of code that the parallel region encloses.
From the OpenMP 5.1 standard one can read a more formal description:
When a thread encounters a parallel construct, a team of threads is
created to execute the parallel region (..). The
thread that encountered the parallel construct becomes the primary
thread of the new team, with a thread number of zero for the duration
of the new parallel region. All threads in the new team, including the
primary thread, execute the region. Once the team is created, the
number of threads in the team remains constant for the duration of
that parallel region.
The #pragma omp parallel for creates a parallel region (as described before), and the iterations of the loop that it encloses will be assigned to the threads of that region, using the default chunk size and the default schedule, which is typically static. Bear in mind, however, that the default schedule might differ among different concrete implementations of the OpenMP standard.
From the OpenMP 5.1 standard you can read a more formal description:
The worksharing-loop construct specifies that the iterations of one or
more associated loops will be executed in parallel by threads in the
team in the context of their implicit tasks. The iterations are
distributed across threads that already exist in the team that is
executing the parallel region to which the worksharing-loop region
binds.
Moreover,
The parallel loop construct is a shortcut for specifying a parallel
construct containing a loop construct with one or more associated
loops and no other statements.
Or informally, #pragma omp parallel for is a combination of the construct #pragma omp parallel with #pragma omp for. In your case, this would mean that:
#pragma omp parallel for
for (int j=0; j<n; j++) {
    double r=0.0;
    for (int i=0; i < m; i++){
        double rand_g1 = cos(i/double(m));
        double rand_g2 = sin(i/double(m));
        x += rand_g1;
        y += rand_g2;
        r += sqrt(rand_g1*rand_g1 + rand_g2*rand_g2);
    }
    shifts[j] = r / m;
}
A team of threads will be created, and to each of those threads will be assigned chunks of the iterations of the outermost loop.
To make it more illustrative, with 4 threads the #pragma omp parallel for with a chunk_size=1 and a static schedule would distribute the iterations among the threads in a round-robin fashion. Code-wise, the loop would be transformed to something logically similar to:
for(int i=omp_get_thread_num(); i < n; i+=omp_get_num_threads())
{
    c[i]=a[i]+b[i];
}
where omp_get_thread_num()
The omp_get_thread_num routine returns the thread number, within the
current team, of the calling thread.
and omp_get_num_threads()
Returns the number of threads in the current team. In a sequential
section of the program omp_get_num_threads returns 1.
or in other words, for(int i = THREAD_ID; i < n; i += TOTAL_THREADS). With THREAD_ID ranging from 0 to TOTAL_THREADS - 1, and TOTAL_THREADS representing the total number of threads of the team created on the parallel region.
Armed with this knowledge, and looking at your code, one can see that you have a race condition on the updates of the variables x and y. Those variables are shared among threads and updated inside the parallel region, namely:
x += rand_g1;
y += rand_g2;
To solve this race condition you can use OpenMP's reduction clause:
Specifies that one or more variables that are private to each thread
are the subject of a reduction operation at the end of the parallel
region.
Informally, the reduction clause will create for each thread a private copy of the variables x and y, and at the end of the parallel region it will perform the summation of all those private x and y copies into the original x and y variables of the initial thread.
#pragma omp parallel for reduction(+:x,y)
for (int j=0; j<n; j++) {
    double r=0.0;
    for (int i=0; i < m; i++){
        double rand_g1 = cos(i/double(m));
        double rand_g2 = sin(i/double(m));
        x += rand_g1;
        y += rand_g2;
        r += sqrt(rand_g1*rand_g1 + rand_g2*rand_g2);
    }
    shifts[j] = r / m;
}
What you can achieve at most(!) is a linear speedup.
Now, I don't remember which of the reported times is which on Linux, but I'd suggest you use time.h or (in C++11) <chrono> and measure the runtime directly from the program. It's best to pack the entire code into a loop, run it 10 times, and average to get an approximate runtime for the program.
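A minimal sketch of that measurement with <chrono>, where work() is a hypothetical placeholder for the loop being benchmarked, might look like:
#include <chrono>
#include <cstdio>

void work() { /* ... the code to benchmark ... */ }

int main() {
    using clock = std::chrono::steady_clock;
    const int runs = 10;

    auto t0 = clock::now();
    for (int r = 0; r < runs; ++r)
        work();                     // run the workload several times
    auto t1 = clock::now();

    double avg = std::chrono::duration<double>(t1 - t0).count() / runs;
    std::printf("average runtime: %f s\n", avg);
    return 0;
}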
Furthermore, in my opinion you have a problem with x and y, which do not adhere to the data-locality paradigm of parallel programming.