OpenMP latency for nested for loops - C++

I have a piece of code that I want to parallelize, but the OpenMP version is much slower than the serial version. What is wrong with my implementation? This is the code of the program:
#include <iostream>
#include <gsl/gsl_math.h>
#include "Chain.h"
using namespace std;
int main(){
int const N=1000;
int timeSteps=100;
double delta=0.0001;
double qq[N];
Chain ch(N);
ch.initCond();
for (int t=0; t<timeSteps; t++){
ch.changeQ(delta*t);
ch.calMag_i();
ch.calForce001();
}
ch.printSomething();
}
The Chain.h is
class Chain{
public:
int N;
double *q;
double *mx;
double *my;
double *force;
Chain(int const Np);
void initCond();
void changeQ(double delta);
void calMag_i();
void calForce001();
};
And the Chain.cpp is
Chain::Chain(int const Np){
this->N = Np;
this->q = new double[Np];
this->mx = new double[Np];
this->my = new double[Np];
this->force = new double[Np];
}
void Chain::initCond(){
for (int i=0; i<N; i++){
q[i] = 0.0;
force[i] = 0.0;
}
}
void Chain::changeQ(double delta){
int i=0;
#pragma omp parallel
{
#pragma omp for
for (int i=0; i<N; i++){
q[i] = q[i] + delta*i + 1.0*i/N;
}
}
}
void Chain::calMag_i(){
int i =0;
#pragma omp parallel
{
#pragma omp for
for (i=0; i<N; i++){
mx[i] = cos(q[i]);
my[i] = sin(q[i]);
}
}
}
void Chain::calForce001(){
int i;
int j;
double fij =0.0;
double start_time = omp_get_wtime();
#pragma omp parallel
{
#pragma omp for private(j, fij)
for (i=0; i<N; i++){
force[i] = 0.0;
for (j=0; j<i; j++){
fij = my[i]*mx[j] - mx[i]*my[j];
#pragma omp critical
{
force[i] += fij;
force[j] += -fij;
}
}
}
}
double time = omp_get_wtime() - start_time;
cout <<"time = " << time <<endl;
}
So the methods changeQ() and calMag_i() are in fact faster than the serial code, but my problem is calForce001(). The execution times are:
with OpenMP: 3.939 s
without OpenMP: 0.217 s
Clearly I'm doing something wrong, or the code can't be parallelized. Any help would be useful.
Thanks in advance.
Carlos
Edit:
To clarify the question, I added calls to omp_get_wtime() to measure the execution time of calForce001(); the times for one execution are
with omp: 0.0376656
without omp: 0.00196766
So the OpenMP version of the method is about 20 times slower.
I also measured the time for the calMag_i() method:
with omp: 3.3845e-05
without omp: 9.9516e-05
For this method OpenMP is about 3 times faster.
I hope this confirms that the latency problem is in the calForce001() method.

There are three reasons why you don't benefit from any speedup.
1. You have #pragma omp parallel all over your code. What this pragma does is start a "team of threads". At the end of the block, this team is disbanded, which is quite costly. Removing those and using #pragma omp parallel for instead of #pragma omp for will start the team upon first encounter and put it to sleep after each block. This made the application 4x faster for me.
2. You use #pragma omp critical. On most platforms, this forces the use of a mutex, which is heavily contended because all threads want to write to that variable at the same time. So don't use a critical section here. You could use atomic updates, but in this case that won't make much of a difference - see the third item. Just removing the critical section improved the speed by another 3x (a sketch combining these fixes follows at the end of this answer).
3. Parallelism only makes sense when you have an actual workload. All of your code is too small to benefit from parallelism. There's simply too little work to win back the time lost on starting/waking/destroying the threads. If your workload were ten times this size, some of the parallel for statements would make sense. But especially Chain::calForce001() will never be worth it if you have to do atomic updates.
With respect to programming style: you're programming in C++. Please use local-scope variables wherever you can - in e.g. Chain::calForce001(), use a local double fij inside the inner loop. That saves you from having to write private clauses. Compilers are smart enough to optimize that, and correct scoping allows for better optimizations.
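For illustration, here is a minimal sketch of calForce001() with those fixes applied (my rewrite, not necessarily the fastest variant). Because the pair force is antisymmetric, the j < i and j > i contributions to force[i] fold into a single sum over all j, so each iteration writes only force[i] and no critical section is needed:
void Chain::calForce001(){
    double start_time = omp_get_wtime();
    // One combined parallel for; fij is a local, and each iteration writes only
    // force[i].  Summing over all j reproduces the original +fij / -fij
    // bookkeeping (up to floating-point rounding order).
    #pragma omp parallel for
    for (int i = 0; i < N; i++){
        double fi = 0.0;
        for (int j = 0; j < N; j++){
            double fij = my[i]*mx[j] - mx[i]*my[j];   // zero when j == i
            fi += fij;
        }
        force[i] = fi;
    }
    double time = omp_get_wtime() - start_time;
    cout << "time = " << time << endl;
}
(Going further, the sums of mx and my could be hoisted out of the i loop entirely, making the routine O(N), at which point parallelism is hardly worth it at all.)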

Related

OpenMP nested loop task parallelism, counter not giving correct result

I am pretty new to OpenMP. I am trying to parallelize the nested loop using tasking, but it doesn't give me the correct counter output. The sequential output is "Total pixel = 100000000". Can anyone help me with that?
Note: I have done this using #pragma omp parallel for reduction (+:pixels_inside) private(i,j), which works fine. Now I want to use tasking.
What I have tried so far:
#include<iostream>
#include<omp.h>
using namespace std;
int main(){
int total_steps = 10000;
int i,j;
int pixels_inside=0;
omp_set_num_threads(4);
//#pragma omp parallel for reduction (+:pixels_inside) private(i,j)
#pragma omp parallel
#pragma omp single private(i)
for(i = 0; i < total_steps; i++){
#pragma omp task private(j)
for(j = 0; j < total_steps; j++){
pixels_inside++;
}
}
cout<<"Total pixel = "<<pixels_inside<<endl;
return 0;
}
First of all you need to declare for OpenMP which variables you are using and what protection they have. Generally speaking your code has default(shared), since you didn't specify otherwise. This makes all variables accessible, at the same memory location, to all threads.
You should use something like this:
#pragma omp parallel default(none) shared(total_steps, pixels_inside)
[...]
#pragma omp task private(j) default(none) shared(total_steps, pixels_inside)
Now, only what is necessary will be used by threads.
Secondly, the main problem is that you don't have critical section protection. This means that while the threads are running they may want to use a shared variable, and a race condition happens. For example, you have threads A and B with a variable x accessible to both (a.k.a. a shared-memory variable). Now let's say A adds 2 and B adds 3 to the variable. Since the threads aren't the same speed, this may happen: A reads x=0, B reads x=0, A computes 0+2, B computes 0+3, B writes x=3 back to memory, A writes x=2 back to memory. In the end x=2. The same happens with pixels_inside: a thread reads the variable, adds 1, and writes it back to where it got it from. To overcome this you use critical section protection:
#pragma omp critical
{
//Code with shared memory
pixels_inside++;
}
You didn't need critical section protection in the reduction version, because variables listed in a reduction clause have this protection.
Now your code should look like this:
#include <iostream>
#include <omp.h>
using namespace std;
int main() {
int total_steps = 10000;
int i,j;
int pixels_inside=0;
omp_set_num_threads(4);
//#pragma omp parallel for reduction (+:pixels_inside) private(i,j)
#pragma omp parallel default(none) shared(total_steps, pixels_inside)
#pragma omp single private(i)
for(i = 0; i < total_steps; i++){
#pragma omp task private(j) default(none) shared(total_steps, pixels_inside)
for(j = 0; j < total_steps; j++){
#pragma omp critical
{
pixels_inside++;
}
}
}
cout<<"Total pixel = "<<pixels_inside<<endl;
return 0;
}
However, I would suggest using reduction, as it has better performance and methods to optimize that kind of calculation.
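For reference, here is a minimal sketch of the reduction variant (essentially the commented-out line from the question): each thread accumulates into a private copy of pixels_inside, and OpenMP adds the partial sums together at the end, so no critical section is required.
#include <iostream>
#include <omp.h>
using namespace std;
int main() {
    int total_steps = 10000;
    int pixels_inside = 0;
    omp_set_num_threads(4);
    // Each thread gets its own pixels_inside; the partial counts are combined
    // automatically when the parallel loop finishes.
    #pragma omp parallel for reduction(+:pixels_inside)
    for (int i = 0; i < total_steps; i++) {
        for (int j = 0; j < total_steps; j++) {
            pixels_inside++;
        }
    }
    cout << "Total pixel = " << pixels_inside << endl;
    return 0;
}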
As #tartarus already explained, you have a race condition in your code and it is much better to avoid it by using reduction. If you want to do the same as #pragma omp parallel for reduction (+:pixels_inside) private(i,j) does, but using tasks, you have to use the following:
#pragma omp parallel
#pragma omp single
#pragma omp taskloop reduction (+:pixels_inside) private(i,j)
for(i = 0; i < total_steps; i++){
for(j = 0; j < total_steps; j++){
pixels_inside++;
}
}
In this version fewer tasks are created and reduction is used instead of a critical section, therefore the performance will be much better (similar to what you can obtain by using #pragma omp parallel for).
UPDATE (comment on performance): I guess this is just a simplified example, not the real code you want to parallelize. If the performance gain is not good enough, it most probably means that the parallel overhead is bigger than the work to do. In that case try to parallelize a bigger part of your code. Note that parallel overheads are typically bigger with tasks (compared to #pragma omp parallel for).

Why does gcc's implementation of openMP fail to parallelise a recursive function inside another recursive function

I am trying to parallelise these recursive functions with OpenMP tasks. When I compile with gcc, it runs on only 1 thread; when I compile it with clang, it runs on multiple threads.
The second function calls the first one, which doesn't generate new tasks, to stop wasting time.
gcc does work when there is only one function that calls itself.
Why is this? Am I doing something wrong in the code? Then why does it work with clang?
I am using gcc 9.3 on Windows with MSYS2. The code was compiled with -O3 -fopenmp.
//the program compiled by gcc only runs on one thread
#include<vector>
#include<omp.h>
#include<iostream>
#include<ctime>
using namespace std;
vector<int> vec;
thread_local double steps;
void excalibur(int current_node, int current_depth) {
#pragma omp simd
for( int i = 0 ; i < current_node; i++){
++steps;
excalibur(i, current_depth);
}
if(current_depth > 0){
int new_depth = current_depth - 1;
#pragma omp simd
for(int i = current_node;i <= vec[current_node];i++){
++steps;
excalibur(i + 1,new_depth);
}
}
}
void mario( int current_node, int current_depth) {
#pragma omp task firstprivate(current_node,current_depth)
{
if(current_depth > 0){
int new_depth = current_depth - 1;
for(int i = current_node;i <= vec[current_node];i++){
++steps;
mario(i + 1,new_depth);
}
}
}
#pragma omp simd
for( int i = 0 ; i < current_node; i++){
++steps;
excalibur(i, current_depth);
}
}
int main() {
double total = 0;
clock_t tim = clock();
omp_set_dynamic(0);
int nodes = 10;
int timesteps = 3;
omp_set_num_threads(4);
vec.assign( nodes, nodes - 2 );
#pragma omp parallel
{
steps = 0;
#pragma omp single
{
mario(nodes - 1, timesteps - 1);
}
#pragma omp atomic
total += steps;
}
double time_taken = (double)(tim) / CLOCKS_PER_SEC;
cout <<fixed<<total<<" steps, "<< fixed << time_taken << " seconds"<<endl;
return 0;
}
while this works with gcc
#include<vector>
#include<omp.h>
#include<iostream>
#include<ctime>
using namespace std;
vector<int> vec;
thread_local double steps;
void mario( int current_node, int current_depth) {
#pragma omp task firstprivate(current_node,current_depth)
{
if(current_depth > 0){
int new_depth = current_depth - 1;
for(int i = current_node;i <= vec[current_node];i++){
++steps;
mario(i + 1,new_depth);
}
}
}
#pragma omp simd
for( int i = 0 ; i < current_node; i++){
++steps;
mario(i, current_depth);
}
}
int main() {
double total = 0;
clock_t tim = clock();
omp_set_dynamic(0);
int nodes = 10;
int timesteps = 3;
omp_set_num_threads(4);
vec.assign( nodes, nodes - 2 );
#pragma omp parallel
{
steps = 0;
#pragma omp single
{
mario(nodes - 1, timesteps - 1);
}
#pragma omp atomic
total += steps;
}
double time_taken = (double)(tim) / CLOCKS_PER_SEC;
cout <<fixed<<total<<" steps, "<< fixed << time_taken << " seconds"<<endl;
return 0;
}
Your program doesn't run in parallel because there is simply nothing to run in parallel. Upon first entry in mario, current_node is 9 and vec is all 8s, so this loop in the first and only task never executes:
for(int i = current_node;i <= vec[current_node];i++){
++steps;
mario(i + 1,new_depth);
}
Hence, no recursive creation of new tasks. How and what runs in parallel when you compile it with Clang is well beyond me, since when I compile it with Clang 9, the executable behaves exactly the same as the one produced by GCC.
The second code runs in parallel because of the recursive call in the loop after the task region. But it also isn't a correct OpenMP program - the specification forbids nesting task regions inside a simd construct (see under Restrictions here):
The only OpenMP constructs that can be encountered during execution of a simd region are the atomic construct, the loop construct, the simd construct and the ordered construct with the simd clause.
Neither of the two compilers catches that problem when the nesting is in the dynamic rather than the lexical scope of the simd construct, though.
Edit: I actually looked into it a bit closer and I have a suspicion about what might have caused your confusion. I guess you determine whether your program runs in parallel by looking at the CPU utilisation while it runs. This often leads to confusion. The Intel OpenMP runtime that Clang uses has a very aggressive waiting policy. When the parallel region in the main() function spawns a team of four threads, one of them goes off executing mario() and the other three hit the implicit barrier at the end of the region. There they spin, waiting for new tasks to eventually be assigned to them. They never get any, but keep on spinning anyway, and that's what you see in the CPU utilisation. If you want to replicate the same with GCC, set OMP_WAIT_POLICY to ACTIVE and you'll see the CPU usage soar while the program runs. Still, if you profile the program's execution, you'll see that CPU time is spent inside your code in one thread only.
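If the goal is simply to make the second version conforming, one option (a sketch, assuming vectorizing the recursive loop is not essential) is to drop the simd pragma from the loop that creates tasks, since a task region must not be encountered inside a simd region:
void mario(int current_node, int current_depth) {
    #pragma omp task firstprivate(current_node, current_depth)
    {
        if (current_depth > 0) {
            int new_depth = current_depth - 1;
            for (int i = current_node; i <= vec[current_node]; i++) {
                ++steps;
                mario(i + 1, new_depth);
            }
        }
    }
    // Plain loop, no simd: each recursive call may create a task, and task
    // regions may not be nested inside a simd region.
    for (int i = 0; i < current_node; i++) {
        ++steps;
        mario(i, current_depth);
    }
}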

Parallelizing many nested for loops in openMP c++

Hi, I am new to C++ and I wrote a code which runs, but it is slow because of many nested for loops. I want to speed it up with OpenMP; can anyone guide me? I tried to use #pragma omp parallel before the ip loop, and inside this loop I used #pragma omp parallel for before the it loop, but it does not work.
#pragma omp parallel
for(int ip=0; ip !=nparticle; ip++){
inf14>>r>>xp>>yp>>zp;
zp/=sqrt(gamma2);
counter++;
double para[7]={0,0,Vz,x0-xp,y0-yp,z0-zp,0};
if(ip>=0 && ip<=43){
#pragma omp parallel for
for(int it=0;it<NT;it++){
para[6]=PosT[it];
for(int ix=0;ix<NumX;ix++){
para[3]=PosX[ix]-xp;
for(int iy=0;iy<NumY;iy++){
para[4]=PosY[iy]-yp;
for(int iz=0;iz<NumZ;iz++){
para[5]=PosZ[iz]-zp;
int position=it*NumX*NumY*NumZ+ix*NumY*NumZ+iy*NumZ+iz;
rotation(para,&Field[3*position]);
MagX[position] +=chg*Field[3*position];
MagY[position] +=chg*Field[3*position+1];
MagZ[position] +=chg*Field[3*position+2];
}
}
}
}
}
}
My rotation function also contains an open-ended integration loop, as given below:
for(int i=1;;i++){
gsl_integration_qag(&F, 10*i, 10*i+10, 1.0e-8, 1.0e-8, 100, 2, w, &temp, &error);
result+=temp;
if(abs(temp/result)<ACCURACY){
break;
}
}
I am using the GSL libraries as well. So how can I speed this up, or how should I apply OpenMP?
If you don't have inter-loop dependences, you can use the collapse clause to parallelize multiple loops altogether. Example:
void scale( int N, int M, float A[N][M], float B[N][M], float alpha ) {
#pragma omp for collapse(2)
for( int i = 0; i < N; i++ ) {
for( int j = 0; j < M; j++ ) {
A[i][j] = alpha * B[i][j];
}
}
}
I suggest you check out the OpenMP C/C++ cheat sheet (PDF), which contains all the specifications for loop parallelization.
Do not put parallel pragmas inside another parallel pragma. You might overload the machine by creating more threads than it can handle. I would establish the parallelization in the outer loop (if it is big enough):
#pragma omp parallel for
for(int ip=0; ip !=nparticle; ip++)
Also make sure you do not have any race conditions between threads (e.g. RAW hazards), as illustrated below.
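To make that concrete for this code: if the ip loop is parallelized, several particles can add into the same MagX/MagY/MagZ entry, so those shared updates need protection. A minimal, hypothetical sketch (simplified types and index computation) of one option, an atomic update:
#include <vector>

// Hypothetical reduced version of the accumulation from the question: several
// particles (outer iterations) can contribute to the same grid cell, so the
// shared += must be protected, here with an atomic update.
void accumulate(std::vector<double>& MagX, int nparticle, int ncells, double chg)
{
    #pragma omp parallel for
    for (int ip = 0; ip < nparticle; ip++) {
        int position = ip % ncells;      // stand-in for the real index computation
        #pragma omp atomic
        MagX[position] += chg;           // protected shared update
    }
}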
Advice: if you do not get a great speed-up, a good practice is to iterate over chunks rather than single increments. For instance:
int num_threads = 1;
#pragma omp parallel
{
#pragma omp single
{
num_threads = omp_get_num_threads();
}
}
int chunkSize = 20; //Define your own chunk here
for (int position = 0; position < total; position+=(chunkSize*num_threads)) {
int endOfChunk = position + (chunkSize*num_threads) < total ? position + (chunkSize*num_threads) : total; // clamp so the last block does not run past 'total'
#pragma omp parallel for
for(int ip = position; ip < endOfChunk ; ip += chunkSize) {
//Code
}
}

Influence on the static scheduling overhead in OpenMP

I thought about which factors would influence the static scheduling overhead in OpenMP.
In my opinion it is influenced by:
CPU performance
specific implementation of the OpenMP run-time library
the number of threads
But am I missing further factors? Maybe the size of the tasks, ...?
And furthermore: is the overhead linearly dependent on the number of iterations?
In that case I would expect that, with static scheduling and 4 cores, the overhead increases linearly with 4*i iterations. Correct so far?
EDIT:
I am only interested in the static (!) scheduling overhead itself. I am not talking about thread start-up overhead and time spent in synchronisation and critical section overhead.
You need to separate the overhead for OpenMP to create a team/pool of threads from the overhead for each thread to work on its own set of iterations of a for loop.
Static scheduling is easy to implement by hand (which is sometimes very useful). Let's consider what I regard as the two most important static schedules, schedule(static) and schedule(static,1); then we can compare them to schedule(dynamic,chunk).
#pragma omp parallel for schedule(static)
for(int i=0; i<N; i++) foo(i);
is equivalent to (but not necessarily equal to)
#pragma omp parallel
{
int start = omp_get_thread_num()*N/omp_get_num_threads();
int finish = (omp_get_thread_num()+1)*N/omp_get_num_threads();
for(int i=start; i<finish; i++) foo(i);
}
and
#pragma omp parallel for schedule(static,1)
for(int i=0; i<N; i++) foo(i);
is equivalent to
#pragma omp parallel
{
int ithread = omp_get_thread_num();
int nthreads = omp_get_num_threads();
for(int i=ithread; i<N; i+=nthreads) foo(i);
}
From this you can see that it's quite trivial to implement static scheduling and so the overhead is negligible.
On the other hand if you want to implement schedule(dynamic) (which is the same as schedule(dynamic,1)) by hand it's more complicated:
int cnt = 0;
#pragma omp parallel
for(int i=0;;) {
#pragma omp atomic capture
i = cnt++;
if(i>=N) break;
foo(i);
}
This requires OpenMP >=3.1. If you wanted to do this with OpenMP 2.0 (for MSVC) you would need to use critical like this
int cnt = 0;
#pragma omp parallel
for(int i=0;;) {
#pragma omp critical
i = cnt++;
if(i>=N) break;
foo(i);
}
Here is an equivalent to schedule(dynamic,chunk) (I have not optimized this using atomic accesses):
int cnt = 0;
int chunk = 5;
#pragma omp parallel
{
int start, finish;
do {
#pragma omp critical
{
start = cnt;
finish = cnt+chunk < N ? cnt+chunk : N;
cnt += chunk;
}
for(int i=start; i<finish; i++) foo(i);
} while(finish<N);
}
Clearly, using atomic or critical accesses is going to cause more overhead. This also shows why using larger chunks for schedule(dynamic,chunk) can reduce the overhead.
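For completeness, here is a sketch of the same schedule(dynamic,chunk) emulation with the critical section replaced by an atomic capture (the optimization hinted at above). It assumes the same N and foo() as the snippets above, and requires OpenMP >= 3.1:
int cnt = 0;
int chunk = 5;
#pragma omp parallel
{
    int start, finish;
    do {
        #pragma omp atomic capture
        { start = cnt; cnt += chunk; }   // grab the next chunk start atomically
        if (start >= N) break;
        finish = start + chunk < N ? start + chunk : N;
        for (int i = start; i < finish; i++) foo(i);
    } while (finish < N);
}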

OpenMP C++ - How to parallelize this function?

I'd like to parallelize this function, but I'm new to OpenMP and I'd be grateful if someone could help me:
void my_function(float** A,int nbNeurons,int nbOutput, float* p, float* amp){
float t=0;
for(int r=0;r<nbNeurons;r++){
t+=p[r];
}
for(int i=0;i<nbOutput;i++){
float coef=0;
for(int r=0;r<nbNeurons;r++){
coef+=p[r]*A[r][i];
}
amp[i]=coef/t;
}
}
I don't know how to parallelize it properly because of the nested for loops. For the moment, I have only thought about doing:
#pragma omp parallel for reduction(+:t)
But I don't think that is the best way to speed up the computation with OpenMP.
Thanks in advance.
First of all: we need to know the context. Where does your profiler tell you the most time is spent?
In general, coarse-grained parallelization works best, so as #Alex said: parallelize the outer for loop.
void my_function(float** A,int nbNeurons,int nbOutput, float* p, float* amp)
{
float t=0;
for(int r=0;r<nbNeurons;r++)
t+=p[r];
#pragma omp parallel for
for(int i=0;i<nbOutput;i++){
float coef=0;
for(int r=0;r<nbNeurons;r++){
coef+=p[r]*A[r][i];
}
amp[i]=coef/t;
}
}
Depending on the actual volumes, it may be interesting to calculate t in the background, and move the division out of the parallel loop:
void my_function(float** A,int nbNeurons,int nbOutput, float* p, float* amp)
{
float t=0;
#pragma omp parallel shared(amp)
{
#pragma omp single nowait // only a single thread executes this
{
for(int r=0;r<nbNeurons;r++)
t+=p[r];
}
#pragma omp for
for(int i=0;i<nbOutput;i++){
float coef=0;
for(int r=0;r<nbNeurons;r++){
coef+=p[r]*A[r][i];
}
amp[i]=coef;
}
#pragma omp barrier
#pragma omp master // only a single thread executes this
{
for(int i=0; i<nbOutput; i++){
amp[i] /= t;
}
}
}
}
Note: untested code. OpenMP has tricky semantics sometimes, so I might have missed a 'shared' declaration there. Nothing a profiler won't quickly notify you about, though.
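For comparison, here is a minimal straightforward sketch (mine, not part of the answer above) that combines the reduction the asker mentioned with a plain parallel outer loop; for most sizes it is probably the simplest reasonable starting point:
void my_function(float** A, int nbNeurons, int nbOutput, float* p, float* amp)
{
    float t = 0;
    // Sum p[] with a reduction so each thread accumulates privately.
    #pragma omp parallel for reduction(+:t)
    for (int r = 0; r < nbNeurons; r++)
        t += p[r];

    // Each iteration writes only amp[i], so the outer loop parallelizes cleanly.
    #pragma omp parallel for
    for (int i = 0; i < nbOutput; i++) {
        float coef = 0;
        for (int r = 0; r < nbNeurons; r++)
            coef += p[r] * A[r][i];
        amp[i] = coef / t;
    }
}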