New here, hoping you can help. I am attempting to explicitly vectorize both of the for loops in the member function below, since they are the main runtime bottleneck and auto-vectorization does not kick in due to dependencies. For the life of me, however, I cannot find a safe clause/reduction.
PSvector& RTMap::Apply(PSvector& X) const
{
    PSvector Y(0);
    double k = 0;

    // linear map
    #pragma omp simd
    for(RMap::const_itor r = rterms.begin(); r != rterms.end(); r++)
    {
        Y[r->i] += r->val * X[r->j];
    }

    // non-linear map
    #pragma omp simd
    for(const_itor t = tterms.begin(); t != tterms.end(); t++)
    {
        Y[t->i] += t->val * X[t->j] * X[t->k];
    }

    Y.location() = X.location();
    Y.type() = X.type();
    Y.id() = X.id();
    Y.sd() = X.sd();

    return X = Y;
}
Note that the #pragmas as written don't work because of race conditions: different iterations can write to the same element of Y. Is there a way of declaring a reduction which could work? I tried something like:
#pragma omp declare reduction(+:PSvector:(*omp_out.getvec())+=(*omp_in.getvec()))
which compiles (icpc) but seems to produce nonsense.
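Possibly relevant: a user-defined reduction normally also wants an initializer clause, otherwise each private copy starts out default-constructed rather than zeroed. A hedged sketch of the full declaration (assuming PSvector(0) really builds a zero vector, as the code above suggests):

// Hedged sketch: same combiner as above, plus an initializer so every
// private copy starts from a zero vector instead of whatever the default
// constructor produces. Whether this is the actual cause of the "nonsense"
// depends on what PSvector's default constructor does.
#pragma omp declare reduction(+ : PSvector :                       \
        (*omp_out.getvec()) += (*omp_in.getvec()))                 \
    initializer(omp_priv = PSvector(0))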
/cheers
Related
I am generating class objects and putting them into std::vector. Before adding a new one, I need to check whether it intersects with the already generated objects. As I plan to have millions of them, I need to parallelize this function because it takes a lot of time (the function must check each new object against all previously generated ones).
Unfortunately, the speed increase is not significant. The profiler also shows very low efficiency (almost all overhead). Any advice would be appreciated.
bool
Generator::_check_cube (std::vector<Cube> &cubes, const Cube &cube)
{
    auto ptr_cube = &cube;
    auto npol = cubes.size();
    auto ptr_cubes = cubes.data();
    const auto nthreads = omp_get_max_threads();

    bool check = false;

    #pragma omp parallel shared (ptr_cube, ptr_cubes, npol, check)
    {
        #pragma omp single nowait
        {
            const auto batch_size = npol / nthreads;
            for (int32_t i = 0; i < nthreads; i++)
            {
                const auto bstart = batch_size * i;
                const auto bend = ((bstart + batch_size) > npol) ? npol : bstart + batch_size;

                #pragma omp task firstprivate(i, bstart, bend) shared (check)
                {
                    struct bd bd1{}, bd2{};
                    bd1 = allocate_bd();
                    bd2 = allocate_bd();

                    for (auto j = bstart; j < bend; j++)
                    {
                        bool loc_check;
                        #pragma omp atomic read
                        loc_check = check;
                        if (loc_check) break;

                        if (ptr_cube->cube_intersecting(ptr_cubes[j], &bd1, &bd2))
                        {
                            #pragma omp atomic write
                            check = true;
                            break;
                        }
                    }
                    free_bd(&bd1);
                    free_bd(&bd2);
                }
            }
        }
    }
    return check;
}
UPDATE: The Cube is actually made of smaller Cuboid objects, each of which has a size (L, W, H), position coordinates, and a rotation. The intersect function:
bool
Cube::cube_intersecting(Cube &other, struct bd *bd1, struct bd *bd2) const
{
    const auto nom = number_of_cuboids();
    const auto onom = other.number_of_cuboids();

    for (int32_t i = 0; i < nom; i++)
    {
        get_mcoord(i, bd1);
        for (int32_t j = 0; j < onom; j++)
        {
            other.get_mcoord(j, bd2);
            if (check_gjk_intersection(bd1, bd2))
            {
                return true;
            }
        }
    }
    return false;
}

// get_mcoord calculates the vertices of the cuboids
void
Cube::get_mcoord(int32_t index, struct bd *bd) const
{
    for (int32_t i = 0; i < 8; i++)
    {
        for (int32_t j = 0; j < 3; j++)
        {
            bd->coord[i][j] = _cuboids[index].get_coord(i)[j];
        }
    }
}

inline struct bd
allocate_bd()
{
    struct bd bd{};
    bd.numpoints = 8;

    bd.coord = (double **) malloc(8 * sizeof(double *));
    for (int32_t i = 0; i < 8; i++)
    {
        bd.coord[i] = (double *) malloc(3 * sizeof(double));
    }
    return bd;
}
Typical values: npol > 1 million, 32 threads, and each of the npol Cubes consists of 1-3 smaller cuboids which are checked directly against each other for intersection.
The problem with your search is that OpenMP really likes static loops, where the number of iterations is predetermined. Thus one task may break early, but all the others will still go through their full search.
With recent versions of OpenMP (5, I think) there is a solution for that:
(Not sure about this one:) make your tasks much more fine-grained, for instance one for each intersection test;
spawn your tasks in a taskloop;
once you find your intersection (or any condition that causes you to break), do a cancel taskgroup (the cancel construct has no taskloop flavour; the tasks generated by a taskloop belong to an implicit taskgroup), as in the sketch below.
Small problem: cancellation is disabled by default. Set the environment variable OMP_CANCELLATION to true.
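A minimal sketch of the taskloop + cancellation pattern, not the poster's exact code: intersects(j) is a hypothetical stand-in for the question's ptr_cube->cube_intersecting(ptr_cubes[j], &bd1, &bd2) test, and a runtime with task cancellation plus OMP_CANCELLATION=true is assumed.

#include <cstddef>

// Sketch only: "intersects" is a hypothetical per-element predicate.
template <class Pred>
bool any_intersection(std::size_t npol, Pred intersects)
{
    bool found = false;

    #pragma omp parallel
    #pragma omp single
    {
        // taskloop creates an implicit taskgroup, which is what gets cancelled
        #pragma omp taskloop shared(found)
        for (std::size_t j = 0; j < npol; ++j)
        {
            #pragma omp cancellation point taskgroup   // lets sibling tasks stop early

            if (intersects(j))                         // hypothetical per-element test
            {
                #pragma omp atomic write
                found = true;
                #pragma omp cancel taskgroup           // request early termination
            }
        }
    }
    return found;
}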
Do you have more intersections coming out true or more coming out false? If most are true, you are flooding your hardware with requests to write to a shared resource, and what you are doing is essentially sequential. One way to address this is to avoid the shared resource altogether, so there is no mutex: let all threads run and take the decision at the end from their individual results. This will likely run faster, but the benefit also depends on arbitrary choices such as a few metrics (e.g., nthreads, ncuboids). A sketch of this variant follows.
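A hedged sketch of that idea, reusing the names from the question's _check_cube (cubes, cube, allocate_bd, free_bd): each thread scans its share of the vector to completion and the per-thread results are combined with a ||-reduction, so there is no atomic or mutex on the hot path. The price is that nobody exits early.

const std::size_t npol = cubes.size();
bool check = false;

#pragma omp parallel reduction(||: check)
{
    // per-thread scratch buffers, allocated once per thread
    struct bd bd1 = allocate_bd();
    struct bd bd2 = allocate_bd();

    #pragma omp for nowait
    for (std::size_t j = 0; j < npol; ++j)
    {
        check = check || cube.cube_intersecting(cubes[j], &bd1, &bd2);
    }

    free_bd(&bd1);
    free_bd(&bd2);
}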
It is possible that on another architecture (e.g., a GPU) your algorithm works well as it is. It may be worth benchmarking it on a GPU to see whether you would benefit from that migration, given the production sizes (millions of cuboids, 24 dimensions).
You also have a complexity problem: for every new cuboid you compare against up to the whole set of existing cuboids. One way to address this is to gather all the cuboid extents (ranges) by dimension, keep them ordered, and insert each new cuboid's ranges in order. If there is an intersection in one dimension, you test the next one, and so on; you can also run the dimensions in parallel. Before walking through the ranges, test whether you even hit the global range; if not, it is useless to test the local intersections. A sketch of the per-dimension test follows.
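A minimal sketch of the per-dimension rejection at the heart of this idea (the ordered-range bookkeeping is left out): keep an axis-aligned bounding box per Cube and reject a pair as soon as one dimension shows no overlap, before ever running the expensive GJK test. Box is a hypothetical helper type, not part of the question's code.

struct Box { double lo[3], hi[3]; };   // hypothetical per-Cube bounding box

inline bool boxes_overlap(const Box &a, const Box &b)
{
    for (int d = 0; d < 3; ++d)
    {
        if (a.hi[d] < b.lo[d] || b.hi[d] < a.lo[d])
            return false;              // separated along dimension d
    }
    return true;                       // overlap in every dimension: run the exact test
}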
Here, and in general, you want to parallelize with a minimum of dependencies (shared resources, mutexes), so try to find a point of view from which that is possible. Parallelizing over dimensions of ordered ranges (segments) might be better than parallelizing over cuboids.
Algorithms and the benefits of parallelism also depend on the values of your objects. This does not mean that complexity predictions are irrelevant, but one may find a smarter approach given those values.
I think your code is memory bound, so its bottleneck is memory reads/writes, not calculations. This may be the main reason for the poor speed increase. As already mentioned by @Soleil, different hardware (a GPU) could be beneficial here.
You mentioned in the comments that Generator::_check_cube is called many times. To reduce OpenMP overheads, my suggestion is to move the parallel region out of this function; you can even put it in your main function:
int main()
{
    #pragma omp parallel
    #pragma omp single nowait
    {
        // your code
    }
}
In this case you have to use #pragma omp taskwait to wait for the tasks to complete.
for (int32_t i = 0; i < nthreads; i++)
{
    #pragma omp task default(none) firstprivate(...) shared (..)
    {
        // your code comes here
    }
}
#pragma omp taskwait
I also suggest using the default(none) clause in the #pragma omp task directive, so you have to explicitly specify the sharing attributes of all your variables.
Do you really need the function get_mcoord? It seems like a redundant memory copy to me. I think it may be better to write a check_gjk_intersection function which takes _cuboids (or their indices) as parameters. In that case you also get rid of the many memory allocations/deallocations of bd1 and bd2, which can be time consuming, as @Victor pointed out.
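On the allocation point specifically, a hedged sketch (assuming nothing else relies on coord being a double **): giving the scratch buffer inline storage lets bd1/bd2 live on the stack inside each task, so allocate_bd()/free_bd() disappear entirely.

#include <cstdint>

// Hypothetical stack-friendly variant of struct bd: the 8x3 coordinate block
// lives inline instead of behind two levels of malloc.
struct bd_stack
{
    int32_t numpoints = 8;
    double  coord[8][3];   // fixed 8 vertices x 3 coordinates, no heap allocation
};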
I have a few nested loops and I put the first one in parallel mode. apar and mpar are structs whose values are modified in the loop; then the function breakLogic is called, which generates a struct that I store in a pre-created vector of those structs.
one, two, ... have been declared earlier in the function.
I have tried to include ordered and critical to ensure accuracy, but I am still getting incorrect results.
#pragma omp parallel for ordered private(appFlip, atur, apar, mpar, i, j, k, l, m, n) shared(rawFlip)
for(i=0; i<oneL; i++)
{
    // initialize mpar
    #pragma omp critical
    apar.one = one[i];
    for(j=0; j<twoL; j++)
    {
        apar.two = two[j];
        for(k=0; k<threeL; k++)
        {
            apar.three = floor(three[k]*apar.two);
            appFlip = applyParamSin(rawFlip, apar);
            for(l=0; l<fourL; l++)
            {
                mpar.four = four[l];
                for(m=0; m<fiveL; m++)
                {
                    mpar.five = five[m];
                    for(n=0; n<sixL; n++)
                    {
                        mpar.six = add[n];
                        atur = breakLogic(appFlip, mpar, dt);
                        #pragma omp ordered
                        {
                            sinResVec[itr] = atur;
                            itr++;
                        }
                    }
                }
            }
            r0(appFlip);
        }
    }
}
Or is this code not conducive to parallelism? Are there any tools for g++ which can profile code for parallel processing and indicate potential issues?
This modified code works but gives no performance improvement.
Your original code can be parallelized with a few modifications:
set apar and mpar as firstprivate. apar and mpar should be thread-local variables and be initialized when entering the parallel for region;
remove all critical and ordered clauses, including the one in the parallel for directive; they are not working as you expected;
calculate itr from i, j, k, l, m, n to remove the dependency (a combined sketch follows the snippet below).
itr = ((((i*twoL + j)*threeL + k)*fourL + l)*fiveL + m)*sixL + n;
sinResVec[itr] = atur;
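A hedged sketch of the three changes combined, using the names from the question (the types and declarations of appFlip, atur, apar, mpar, rawFlip, dt, the arrays one..add and sinResVec are assumed to be as in the original function):

#pragma omp parallel for firstprivate(apar, mpar) private(appFlip, atur)
for (int i = 0; i < oneL; i++)
{
    // initialize mpar here, so each thread works on its own copy
    apar.one = one[i];
    for (int j = 0; j < twoL; j++)
    {
        apar.two = two[j];
        for (int k = 0; k < threeL; k++)
        {
            apar.three = floor(three[k]*apar.two);
            appFlip = applyParamSin(rawFlip, apar);
            for (int l = 0; l < fourL; l++)
            {
                mpar.four = four[l];
                for (int m = 0; m < fiveL; m++)
                {
                    mpar.five = five[m];
                    for (int n = 0; n < sixL; n++)
                    {
                        mpar.six = add[n];
                        atur = breakLogic(appFlip, mpar, dt);
                        // index derived from the loop counters:
                        // no shared itr, so no ordered/critical needed
                        const long itr = (((((long)i*twoL + j)*threeL + k)*fourL + l)*fiveL + m)*sixL + n;
                        sinResVec[itr] = atur;
                    }
                }
            }
            r0(appFlip);
        }
    }
}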
Update: see here for more details on OpenMP, especially the differences between private and firstprivate:
http://msdn.microsoft.com/en-us/library/tt15eb9t.aspx
I'm trying to do multi-threaded programming on the CPU using OpenMP. I have lots of for loops which are good candidates to be parallelized. I attached a part of my code here. When I use the first #pragma omp parallel for reduction, my code is faster, but when I try to use the same directive to parallelize other loops, it gets slower. Does anyone have any idea why?
.
.
.
omp_set_dynamic(0);
omp_set_num_threads(4);

float *h1 = new float[nvi];
float *h2 = new float[npi];

while (tol > 0.001)
{
    std::fill_n(h2, npi, 0);
    int k, i;
    float h222 = 0;

    #pragma omp parallel for private(i,k) reduction (+: h222)
    for (i = 0; i < npi; ++i)
    {
        int p1 = ppi[i];
        int m = frombus[p1];
        for (k = 0; k < N; ++k)
        {
            h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                    + B[m-1][k]*sin(del[m-1]-del[k]));
        }
        h2[i] = h222;
    }

    //*********** h3 *****************
    std::fill_n(h3, nqi, 0);
    float h333 = 0;

    #pragma omp parallel for private(i,k) reduction (+: h333)
    for (int i = 0; i < nqi; ++i)
    {
        int q1 = qi[i];
        int m = frombus[q1];
        for (int k = 0; k < N; ++k)
        {
            h333 += v[m-1]*v[k]*(G[m-1][k]*sin(del[m-1]-del[k])
                    - B[m-1][k]*cos(del[m-1]-del[k]));
        }
        h3[i] = h333;
    }
    .
    .
    .
}
I don't think your OpenMP code gives the same result as the code without OpenMP. Let's concentrate on the h2[i] part (the h3[i] part has the same logic). There is a dependency of h2[i] on the index i (i.e. h2[1] = h2[1] + h2[0]), because h222 is never reset inside the i loop. The OpenMP reduction you're doing won't give the correct result. If you want to do the reduction with OpenMP you need to do it on the inner loop, like this:
float h222 = 0;
for (int i=0; i<npi; ++i) {
    int p1 = ppi[i];
    int m = frombus[p1];
    #pragma omp parallel for reduction(+:h222)
    for (int k=0; k<N; ++k) {
        h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
However, I don't know if that will be very efficient. An alternative method is to fill h2[i] in parallel over the outer loop without a reduction and then take care of the dependency serially. Even though the serial loop is not parallelized, it should have only a small effect on the computation time since it does not contain the inner loop over k. This should give the same result with and without OpenMP and still be fast.
#pragma omp parallel for
for (int i=0; i<npi; ++i) {
    int p1 = ppi[i];
    int m = frombus[p1];
    float h222 = 0;
    for (int k=0; k<N; ++k) {
        h222 += v[m-1]*v[k]*(G[m-1][k]*cos(del[m-1]-del[k])
                + B[m-1][k]*sin(del[m-1]-del[k]));
    }
    h2[i] = h222;
}
// take care of the dependency serially
for (int i=1; i<npi; i++) {
    h2[i] += h2[i-1];
}
Keep in mind that creating and destroying threads is a time-consuming process; clock the execution time and see for yourself. You only use the parallel reduction twice, which may be faster than a serial reduction, but the initial cost of creating the threads may still be higher. Try parallelizing the outermost loop (if possible) to see if you can obtain a speedup; a sketch of reusing one thread team across iterations follows.
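A hedged sketch of that idea: open one parallel region around the while loop instead of one per pass. The elided parts of the original loop body, including whatever updates tol, would have to go inside single blocks so that every thread sees the same tol before the next test.

#pragma omp parallel
{
    while (tol > 0.001)              // every thread tests the same shared tol
    {
        #pragma omp single
        std::fill_n(h2, npi, 0);     // serial housekeeping, one thread only

        #pragma omp for
        for (int i = 0; i < npi; ++i)
        {
            // ... per-i work as in the second snippet above ...
        }                            // implicit barrier keeps the team in step

        #pragma omp single
        {
            // ... update tol here (elided in the question) ...
        }                            // implicit barrier before re-testing tol
    }
}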
I am writing some code for parallel processing of collisions. The expected result would be an acceleration with each additional thread, but I'm not getting any acceleration on the data processing, because I have a critical section inside parallel_reduce() and I believe it's serializing access to the objects too much. This is how the code looks:
do {
    totalVel = 0.;
    #pragma omp parallel for
    for (unsigned long i = 0; i < bodyContact.size(); i++) {
        totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
        totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
    }
} while (totalVel >= 0.00001);
Is there any way to gain more speed by making it parallel, or is the serialization of access too much?
Observations:
bodyA() and bodyB() are objects that repeat themselves a lot inside the bodyContact container.
For now parallel_reduce() only does one multiplication (the critical section), but it will get more complex:
double parallel_reduce() {
    #pragma omp critical
    this->vel_ *= 0.99;
    return vel_.length();
}
Actual timings:
serial, 25.635
parallel, 123.559
There is always a cost to using OpenMP constructs, so avoid opening a parallel region inside a loop: depending on the implementation, it may launch new threads each time instead of re-waking the previously launched ones.
In fact, if bodyContact.size() is small, the number of do {} while steps is large, and parallel_reduce() is very quick, it is very hard to get scalability with just a few OpenMP pragmas.
#pragma omp parallel shared(totalVel) shared(bodyContact)
{
    do {
        totalVel = 0.;
        #pragma omp for reduce(+:totalVel)
        for (unsigned long i = 0; i < bodyContact.size(); i++) {
            totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
            totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
        }
    } while (totalVel >= 0.00001);
}
The above is likely not only slower, but very likely wrong; all the threads are trying to update the same totalVel. Tonnes of race conditions, but also contention, cache invalidation, etc.
Assuming the parallel_reduce() stuff is ok, you'd like something more like
do {
    totalVel = 0.;
    #pragma omp parallel for default(none) shared(bodyContact) reduction(+:totalVel)
    for (unsigned long i = 0; i < bodyContact.size(); i++) {
        totalVel += bodyContact.at(i).bodyA()->parallel_reduce();
        totalVel += bodyContact.at(i).bodyB()->parallel_reduce();
    }
} while (totalVel >= 0.00001);
which will do the reduction on totalVel correctly.
Below is a portion of code parallelized via OpenMP. The arrays ap[] and sc[] are subject to addition assignment (+=), so I decided to make them shared and put the updates in a critical section, since the reduction clause does not accept arrays. But it gives a different result than its serial counterpart. Where is the problem?
Vector PN, Pf, Nf;              // Vector is a user-defined structure
Vector NNp, PPp;
Vector gradFu, gradFv, gradFw;
float dynVis_eff, SGSf;
float Xf_U, Xf_H;
float mf_P, mf_N;
float an_diff, an_conv_P, an_conv_N, an_trans;
float sc_cd, sc_pres, sc_trans, sc_SGS, sc_conv_P, sc_conv_N;
float ap_trans;

#pragma omp parallel for
for (int e=0; e<nElm; ++e)
{
    ap[e] = 0.f;
    sc[e] = 0.f;
}

#pragma omp parallel for shared(ap,sc)
for (int f=0; f<nFaces; ++f)
{
    PN = cntE[face_N[f]] - cntE[face_P[f]];
    Pf = cntF[f] - cntE[face_P[f]];
    Nf = cntF[f] - cntE[face_N[f]];

    PPp = Pf - (Pf|norm(PN))*norm(PN);
    NNp = Nf - (Nf|norm(PN))*norm(PN);

    mf_P = mf[f];
    mf_N = -mf[f];

    SGSf = (1.f-ifac[f]) * SGSvis[face_P[f]]
         + ifac[f] * SGSvis[face_N[f]];

    dynVis_eff = dynVis + SGSf;

    an_diff = dynVis_eff * Ad[f] / mag(PN);
    an_conv_P = -neg(mf_P);
    an_conv_N = -neg(mf_N);

    an_P[f] = an_diff + an_conv_P;
    an_N[f] = an_diff + an_conv_N;

    // cross-diffusion
    sc_cd = an_diff * ( (gradVel[face_N[f]]|NNp) - (gradVel[face_P[f]]|PPp) );

    #pragma omp critical
    {
        ap[face_P[f]] += an_N[f];
        ap[face_N[f]] += an_P[f];

        sc[face_P[f]] += sc_cd + sc_conv_P;
        sc[face_N[f]] += -sc_cd + sc_conv_N;
    }
}
You have not declared whether all the other variables in your parallel region should be shared or not. You can do this generically with the default clause. If no default is specified, the variables are all shared, which is causing the problems in your code.
In your case, I'm guessing you should go for
#pragma omp parallel for default(none), shared(ap,sc,face_N,face_P,cntF,cntE,mf,ifac,Ad,an_P,an_N,SGSvis,dynVis,gradVel), private(PN,Pf,Nf,PPp,NNp,mf_P,mf_N,SGSf,dynVis_eff,an_diff,an_conv_P,an_conv_N,sc_cd)
I strongly recommend always using default(none) so that the compiler complains every time you don't declare a variable explicitly and forces you to think about it explicitly.
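As a side note on the remark that the reduction clause does not accept arrays: since OpenMP 4.5, C/C++ array sections are allowed in reduction clauses, so the critical section could in principle be dropped entirely. A hedged sketch (each thread then carries private, zero-initialized copies of ap[0:nElm] and sc[0:nElm], which costs memory when nElm is large):

// Sketch only: reduce over array sections instead of serializing with critical.
#pragma omp parallel for default(none) \
        shared(nFaces, nElm, face_N, face_P, cntF, cntE, mf, ifac, Ad, an_P, an_N, SGSvis, dynVis, gradVel) \
        private(PN, Pf, Nf, PPp, NNp, mf_P, mf_N, SGSf, dynVis_eff, an_diff, an_conv_P, an_conv_N, sc_cd) \
        reduction(+ : ap[0:nElm], sc[0:nElm])
for (int f = 0; f < nFaces; ++f)
{
    // ... same loop body as in the question; the four "+=" updates stay,
    //     but the "#pragma omp critical" around them is no longer needed ...
}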