I am dealing with huge point cloud data and I am trying to use OpenMP, but as a beginner I have found it very hard to optimize the code. For example, when I want to compute the histogram of the point cloud (each point carries extra information beyond x, y, z), I write the code below:
#pragma omp parallel num_threads(N_THREAD) shared(hist, partHist)
{
    int tId = omp_get_thread_num();
    int index = tId * partCount;

    #pragma omp for nowait
    for (int i = 0; i < partCount; ++i)
    {
        if (index + i < size)
            #pragma omp atomic
            partHist[tId][(int)floor((array[index + i] - minValue) / stride)]++;
    }

    #pragma omp critical
    {
        for (int i = 0; i < binCount; ++i)
            hist[i] += partHist[tId][i];
    }
}
The code is run on Linux, on an i7-9700K, compiled with g++ and using OpenMP 4.0.

I have two questions:

1. The data set has at least 10^8 points and I use 128 threads, but it's slower than serial. How can I optimize the code?
2. Are there rules I can follow to optimize the code when other problems like this come up?
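For reference, the same partial-histogram idea can also be written so that the omp for alone partitions the elements and each thread fills a private histogram, merging once at the end. This is only a sketch reusing the variable names above (it needs <vector> and <cmath>, and assumes size fits in a long long); it has not been benchmarked:

#pragma omp parallel num_threads(N_THREAD)
{
    std::vector<long long> localHist(binCount, 0);     // private per-thread histogram

    #pragma omp for nowait                             // OpenMP partitions [0, size)
    for (long long i = 0; i < size; ++i)
        ++localHist[(int)std::floor((array[i] - minValue) / stride)];

    #pragma omp critical                               // one merge per thread
    for (int b = 0; b < binCount; ++b)
        hist[b] += localHist[b];
}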
I am a newbie to multithreading. I am trying to design a program that solves a sparse matrix. In my code I call a vector-vector dot product and a matrix-vector product as subroutines many times to arrive at the final solution, and I am trying to parallelise the code using OpenMP (especially those two subroutines).
I also have sequential code in between which I do not intend to parallelise.
My question is: how do I handle the threads created when a subroutine is called? Should I put a barrier at the end of every subroutine call?
Also, where should I set the number of threads?
Mat_Vec_Mult(MAT, x0, rm);

#pragma omp parallel for schedule(static)
for (int i = 0; i < numcols; i++)
    rm[i] = b[i] - rm[i];

#pragma omp barrier

#pragma omp parallel for schedule(static)
for (int i = 0; i < numcols; i++)
    xm[i] = x0[i];

#pragma omp barrier

double* pm = (double*) malloc(numcols * sizeof(double));

#pragma omp parallel for schedule(static)
for (int i = 0; i < numcols; i++)
    pm[i] = rm[i];

#pragma omp barrier

scalarProd(rm, rm, numcols);
Thanks
EDIT:
For the scalar dot product, I am using the following piece of code:
double scalarProd(double* vec1, double* vec2, int n) {
    double prod = 0.0;
    int chunk = 10;
    int i;
    //double* c = (double*) malloc(n*sizeof(double));

    omp_set_num_threads(4);

    // #pragma omp parallel shared(vec1,vec2,c,prod) private(i)
    #pragma omp parallel
    {
        double pprod = 0.0;

        #pragma omp for
        for (i = 0; i < n; i++) {
            pprod += vec1[i]*vec2[i];
        }

        //#pragma omp for reduction (+:prod)
        #pragma omp critical
        for (i = 0; i < n; i++) {
            prod += pprod;
        }
    }

    return prod;
}
I have now added the time calculation code in my ConjugateGradient function as below:
start_dotprod = omp_get_wtime();
rm_rm_old = scalarProd(rm,rm,MAT->ncols);
run_dotprod = omp_get_wtime() - start_dotprod;
fprintf(timing,"Time taken by rm_rm dot product : %lf \n",run_dotprod);
Observed results for the dot product: sequential version 0.000007 s, parallel version 0.002110 s.
I am doing a simple compile using the gcc -fopenmp command on Linux, on my Intel i7 laptop.
I am currently using a matrix of size n = 5000.
I am seeing a huge overall slowdown, since the same dot product gets called many times until convergence is achieved (around 80k times).
Please suggest some improvements. Any help is much appreciated!
Honestly, I would suggest parallelizing at a higher level. By this I mean trying to minimize the number of #pragma omp parallel regions you are using. Every time you split up the work among your threads, there is OpenMP overhead. Avoid this whenever possible.
So in your case at the very least I would try:
Mat_Vec_Mult(MAT, x0, rm);

double* pm = (double*) malloc(numcols*sizeof(double)); // must be performed once outside of parallel region

// all threads forked and created once here
#pragma omp parallel for schedule(static)
for (int i = 0; i < numcols; i++) {
    rm[i] = b[i] - rm[i]; // (1)
    xm[i] = x0[i];        // (2) does not require (1)
    pm[i] = rm[i];        // (3) requires (1) at this i, not (2)
}
// implicit barrier at the end of omp for
// implicit join of all threads at the end of omp parallel

scalarProd(rm, rm, numcols);
Notice that no barriers are actually necessary between your loops anyway.
If the majority of your time were being spent in this computation stage, you would surely see a considerable improvement. However, I'm reasonably confident that most of your time is spent in Mat_Vec_Mult() and perhaps also scalarProd(), so the amount of time you'll save here is probably minimal.
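For what it's worth, if Mat_Vec_Mult() does dominate, a sparse matrix-vector product parallelizes naturally over rows. The sketch below assumes a CSR storage layout with illustrative array names (row_ptr, col_idx, val); it may not match how MAT is actually stored:

// Sketch: row-parallel CSR matrix-vector product y = A*x.
// row_ptr has nrows+1 entries; col_idx/val hold the nonzeros.
void csr_mat_vec(const int *row_ptr, const int *col_idx, const double *val,
                 const double *x, double *y, int nrows)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (int jj = row_ptr[i]; jj < row_ptr[i + 1]; jj++)
            sum += val[jj] * x[col_idx[jj]];
        y[i] = sum;
    }
}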
** EDIT **
As per your edit, I see a few problems. (1) Always compile with -O3 when you are testing the performance of your algorithm. (2) You won't be able to improve the runtime of something that takes 0.000007 sec to complete; that's nearly instantaneous. This goes back to what I said previously: try to parallelize at a higher level. The CG method is an inherently sequential algorithm, but there are certainly research papers detailing parallel CG. (3) Your implementation of the scalar product is not optimal; indeed, I suspect your matrix-vector product is not either. I would personally do the following:
double scalarProd(double* vec1, double* vec2, int n) {
    double prod = 0.0;
    int i;

    // omp_set_num_threads(4);  // this should be done once during initialization somewhere previously in your program

    #pragma omp parallel for private(i) reduction(+:prod)
    for (i = 0; i < n; ++i) {
        prod += vec1[i]*vec2[i];
    }

    return prod;
}
(4) There are entire libraries (LAPACK, BLAS, etc.) with highly optimized matrix-vector and vector-vector operations; any serious linear algebra code is built on top of them. So I'd suggest using one of those libraries for your two operations before reinventing the wheel and implementing your own.
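To make point (4) concrete, here is a minimal sketch of what replacing the hand-written dot product with a CBLAS call looks like (assuming a CBLAS implementation such as OpenBLAS is installed; link with e.g. -lopenblas). Many optimized BLAS libraries are threaded internally, so you get the parallelism for free:

#include <cblas.h>
#include <stdio.h>

int main(void)
{
    double rm[5] = {1, 2, 3, 4, 5};

    /* prod = rm . rm, replacing the hand-written scalarProd() */
    double prod = cblas_ddot(5, rm, 1, rm, 1);

    printf("prod = %f\n", prod);   /* 55.0 for this toy vector */
    return 0;
}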
I am trying to do multithreaded programming on the CPU using OpenMP. I have lots of for loops which are good candidates for parallelization. I attached a part of my code here. When I use the first #pragma omp parallel for reduction, my code is faster, but when I try to use the same directive to parallelize the other loops it gets slower. Does anyone have any idea why?
.
.
.
omp_set_dynamic(0);
omp_set_num_threads(4);

float *h1 = new float[nvi];
float *h2 = new float[npi];

while (tol > 0.001)
{
    std::fill_n(h2, npi, 0);
    int k, i;
    float h222 = 0;

    #pragma omp parallel for private(i,k) reduction (+: h222)
    for (i = 0; i < npi; ++i)
    {
        int p1 = ppi[i];
        int m = frombus[p1];
        for (k = 0; k < N; ++k)
        {
            h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                  + B[m-1][k]*sin(del[m-1]-del[k]));
        }
        h2[i] = h222;
    }

    //*********** h3 *****************
    std::fill_n(h3, nqi, 0);
    float h333 = 0;

    #pragma omp parallel for private(i,k) reduction (+: h333)
    for (int i = 0; i < nqi; ++i)
    {
        int q1 = qi[i];
        int m = frombus[q1];
        for (int k = 0; k < N; ++k)
        {
            h333 += v[m-1]*v[k]*(G[m-1][k]*sin(del[m-1]-del[k])
                  - B[m-1][k]*cos(del[m-1]-del[k]));
        }
        h3[i] = h333;
    }
    .
    .
    .
}
I don't think your OpenMP code gives the same result as the code without OpenMP. Let's concentrate on the h2[i] part of the code (the h3[i] part has the same logic). There is a dependency of h2[i] on the index i (i.e. h2[1] = h2[1] + h2[0]), so the OpenMP reduction you're doing won't give the correct result. If you want to do the reduction with OpenMP you need to do it on the inner loop, like this:
float h222 = 0;
for (int i = 0; i < npi; ++i) {
    int p1 = ppi[i];
    int m = frombus[p1];

    #pragma omp parallel for reduction(+:h222)
    for (int k = 0; k < N; ++k) {
        h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
              + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
However, I don't know if that will be very efficient. An alternative method is to fill h2[i] in parallel over the outer loop without a reduction and then take care of the dependency serially. Even though the serial loop is not parallelized, it should have only a small effect on the computation time since it does not contain the inner loop over k. This gives the same result with and without OpenMP and is still fast.
#pragma omp parallel for
for (int i = 0; i < npi; ++i) {
    int p1 = ppi[i];
    int m = frombus[p1];
    float h222 = 0;
    for (int k = 0; k < N; ++k) {
        h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
              + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
// take care of the dependency serially
for (int i = 1; i < npi; i++) {
    h2[i] += h2[i-1];
}
Keep in mind that creating and destroying threads is a time-consuming process; clock the execution time of the process and see for yourself. You only use the parallel reduction twice, which may be faster than a serial reduction, but the initial cost of creating the threads may still be higher. Try parallelizing the outermost loop (if possible) to see whether you can obtain a speedup.
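To see that cost for yourself, a tiny stand-alone test like the sketch below (not part of the original code) measures the average overhead of entering and leaving an empty parallel region; it is typically on the order of microseconds, which matters whenever the region's real work is comparably small:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int reps = 10000;

    double t0 = omp_get_wtime();
    for (int r = 0; r < reps; ++r) {
        #pragma omp parallel
        {
            // empty region: measures only the fork/join (or thread wake-up) cost
        }
    }
    double t = omp_get_wtime() - t0;

    printf("average parallel-region overhead: %g s\n", t / reps);
    return 0;
}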
I'm trying to make orphaned worksharing directives work and to reduce the overhead by reducing the number of calls to #pragma omp parallel.
What I'm trying is something like:
#pragma omp parallel default(none) shared(mat,mat2,f,max_iter,tol,N,conv) private(diff,k)
{
    #pragma omp master // I'm not against using #pragma omp single or whatever will work
    {
        while (diff > tol) {
            do_work(mat, mat2, f, N);
            swap(mat, mat2);

            if (!(k % 100)) // Only test stop criteria every 100 iterations
                diff = conv[k] = do_more_work(mat, mat2);

            k++;
        } // end while
    } // end master
} // end parallel
do_work() depends on the previous iteration, so the while loop has to run sequentially. But I would like to be able to run do_work() itself in parallel, so it would look something like:
void do_work(double *mat, double *mat2, double *f, int N)
{
    int i, j;
    double scale = 1/4.0;

    #pragma omp for schedule(runtime) // Just so I can test different settings without having to recompile
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            mat[i*N+j] = scale*(mat2[(i+1)*N+j] + mat2[(i-1)*N+j] + ... + f[i*N+j]);
}
I hope this can be accomplished somehow; I'm just not sure how. So any help I can get is greatly appreciated (also if you're telling me this isn't possible). By the way, I'm working with OpenMP 3.0, GCC, and the Sun Studio compiler.
The outer parallel region in your original code contains only a serial piece (#pragma omp master), which makes no sense and effectively results in purely serial execution (no parallelism). Since do_work() depends on the previous iteration, but you want to run it in parallel, you must use synchronisation. The OpenMP tool for that is an (explicit or implicit) synchronisation barrier.
For example (code similar to yours):
#pragma omp parallel
for (int j = 0; diff > tol; ++j)    // must be the same condition for each thread!
    #pragma omp for                 // note: implicit synchronisation after for loop
    for (int i = 0; i < N; ++i)
        work(j, i);
Note that the implicit synchronisation ensures that no thread enters the next j if any thread is still working on the current j.
The alternative
for (int j = 0; diff > tol; ++j)
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        work(j, i);
should be less efficient, as it creates a new team of threads at each iteration, instead of merely synchronising.
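Applied to the code from the question, the same single-parallel-region idea can keep the orphaned #pragma omp for inside do_work() and let one thread do the serial bookkeeping. This is only a sketch under a few assumptions: diff and k are shared and initialised before the region, conv is large enough, and do_more_work() is safe to call from one thread:

#pragma omp parallel default(shared)
while (diff > tol)                  // diff is shared, so all threads agree on the condition
{
    do_work(mat, mat2, f, N);       // orphaned omp for inside, implicit barrier at its end

    #pragma omp single
    {
        swap(mat, mat2);            // pointer swap, done by one thread only
        if (!(k % 100))
            diff = conv[k] = do_more_work(mat, mat2);
        k++;
    }                               // implicit barrier after single
}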
I am writing some code for parallel processing of collisions. The expected result would be a speedup for each added thread, but I'm not getting any speedup on the data processing, because I have a critical section inside parallel_reduce() and I believe it's serializing access to the objects too much. This is how the code looks:
do {
    totalVel = 0.;

    #pragma omp parallel for
    for (unsigned long i = 0; i < bodyContact.size(); i++) {
        totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
        totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
    }
} while (totalVel >= 0.00001);
Is there any way to gain more speed by making it parallel, or is the serialization of access too much?
Observations:
bodyA() and bodyB() return objects that repeat themselves a lot inside the bodyContact container.
For now parallel_reduce() only does one multiplication (the critical section), but it will get more complex.
double parallel_reduce() {
    #pragma omp critical
    this->vel_ *= 0.99;

    return vel_.length();
}
Actual timings:
serial, 25.635
parallel, 123.559
There is always a cost to using OpenMP constructs, so avoid putting a parallel region inside a loop: depending on the implementation, it may launch new threads each time instead of re-waking the previously launched threads.
In fact, if bodyContact.size() is small, the number of do {} while steps is large, and parallel_reduce() is very quick, it is very hard to get scalability with just a few OpenMP pragmas.
#pragma omp parallel shared(totalVel) shared(bodyContact)
{
    do {
        totalVel = 0.;

        #pragma omp for reduce(+:totalVel)
        for (unsigned long i = 0; i < bodyContact.size(); i++) {
            totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
            totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
        }
    } while (totalVel >= 0.00001);
}
The above is likely not only slower, but very likely wrong; all the threads are trying to update the same totalVel. There are tonnes of race conditions, but also contention, cache invalidation, etc.
Assuming the parallel_reduce() stuff is OK, you'd like something more like:
do {
    totalVel = 0.;

    #pragma omp parallel for default(none) shared(bodyContact) reduction(+:totalVel)
    for (unsigned long i = 0; i < bodyContact.size(); i++) {
        totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
        totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
    }
} while (totalVel >= 0.00001);
which will do the reduction on totalVel correctly.
I have this for loop that finds the minimum and maximum length. As you can see, I have two values to reduce here, but looking at OpenMP I can only see that it provides a reduction clause for a single value.
for (size_t i = 0; i < m_patterns.size(); ++i)
{ // start for loop
    if (m_patterns[i].size() < m_lmin)
        m_lmin = m_patterns[i].size();
    else if (m_patterns[i].size() > m_lmax)
        m_lmax = m_patterns[i].size();
} // end for loop
Can I do the following?
#pragma omp parallel for reduction (min:m_lmin,max:m_lmax)
Or should I rewrite the for loop as two for loops, one for the minimum and one for the maximum?
Another question: can I use TBB containers like concurrent_vector with OpenMP?
OpenMP 3.1 introduced support for min and max reduction operations. OpenMP 3.1 is available from GCC 4.7. You can refer to this link for further details on the min/max reduction.
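As a sketch of what that looks like for the loop in the question (assuming OpenMP 3.1 or later; local copies are used because, if m_lmin and m_lmax are class members, they generally cannot be listed directly in a reduction clause):

size_t lmin = m_lmin, lmax = m_lmax;   // local copies for the reduction (assuming size_t lengths)

#pragma omp parallel for reduction(min:lmin) reduction(max:lmax)
for (size_t i = 0; i < m_patterns.size(); ++i)
{
    if (m_patterns[i].size() < lmin)
        lmin = m_patterns[i].size();
    if (m_patterns[i].size() > lmax)
        lmax = m_patterns[i].size();
}

m_lmin = lmin;
m_lmax = lmax;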
You can roll your own concurrent vector, as well as min and max reductions, by filling private versions of the variables in parallel and then merging them in a critical section. This works in MSVC, which only supports OpenMP 2.5 (and therefore no min and max reductions). But irrespective of whether your version of OpenMP supports min and max reductions, this is a useful technique to learn.
This method is efficient as long as the number of items you loop over is much larger than the number of threads (or the time spent on the items is large compared to the merging).
#pragma omp parallel
{
    int m_lmin_private = m_lmin;
    int m_lmax_private = m_lmax;

    #pragma omp for nowait
    for (size_t i = 0; i < m_patterns.size(); ++i) {
        if (m_patterns[i].size() < m_lmin_private)
            m_lmin_private = m_patterns[i].size();
        else if (m_patterns[i].size() > m_lmax_private)
            m_lmax_private = m_patterns[i].size();
    }

    #pragma omp critical
    {
        if (m_lmin_private < m_lmin)
            m_lmin = m_lmin_private;
        if (m_lmax_private > m_lmax)
            m_lmax = m_lmax_private;
    }
}
For a concurrent vector, use the same method:
std::vector<int> vec;
#pragma omp parallel
{
    std::vector<int> vec_private;

    #pragma omp for nowait // fill vec_private in parallel
    for (int i = 0; i < n; i++) {
        vec_private.push_back(i);
    }

    #pragma omp critical
    vec.insert(vec.end(), vec_private.begin(), vec_private.end());
}
As far as OpenMP is concerned, an official specification is available (www.openmp.org), but in the end your compiler is doing all the work, so the answer to your question may be compiler-related.
However, Microsoft's documentation (http://msdn.microsoft.com/de-de/library/2etkydkz(v=vs.80).aspx) suggests:
#pragma omp parallel for reduction(min:m_lmin) reduction(max:m_lmax)