I'm pretty new to OpenMP, so I'm fine if I have this wrong. Also I wasn't successful in finding information about this but I'm sure I missed something obvious.
I have some nested loops, I would like to parallelize a certain way.
This is a sequential version. Note that f(i) returns an integer roughly between 100 and 100,000.
for (int a = 0; a < 10; a++)
{
for (int b = 0; b < 10; b++)
{
for (int c = 0; c < f(a); c++)
{
for (int d = 0; d < f(b); d++)
{
if (comp(c, d))
{
result[a][b]++;
}
}
}
}
}
Naively, I came up with this method of parallelizing the code.
#pragma omp parallel
{
// Create a result_local array to avoid critical sections in the loop
#pragma omp for collapse(2) schedule(guided) nowait
for (int a = 0; a < 10; a++)
{
for (int b = 0; b < 10; b++)
{
for (int c = 0; c < f(a); c++)
{
for (int d = 0; d < f(b); d++)
{
if (comp(c, d))
{
result_local[a][b]++;
}
}
}
}
}
// Add the result_local to result
}
This is the part I'm not so sure about. If my understanding is correct, OpenMP will not parallelize the c and d loops, meaning each thread will execute a c loop in its entirety. Since f(i) can return relatively low numbers like 100 or relatively high numbers like 100,000, some threads might get stuck with a lot more work than others, which is not ideal.
So the question is: how can I parallelize the inner loops to share the work better? I can't change collapse(2) to collapse(4) because the c and d loops iterate up to limits that are functions of the a and b variables.
I saw something in my research that may be helpful.
#pragma omp parallel
{
// Create a result_local array to avoid critical sections in the loop
for (int a = 0; a < 10; a++)
{
for (int b = 0; b < 10; b++)
{
#pragma omp parallel for collapse(2) schedule(guided)
for (int c = 0; c < f(a); c++)
{
for (int d = 0; d < f(b); d++)
{
if (comp(c, d))
{
result_local[a][b]++;
}
}
}
}
}
// Add the result_local to result
}
Admittedly, I don't know enough to tell whether this is helpful at all. What I saw indicates this might be parallelizing the c and d loops but leaving the a and b loops serial?
Any help is appreciated.
omp will not parallelize the c and d loops meaning each thread will execute a c loop in its entirety.
This is correct.
some of the threads might get stuck with a lot more work than other threads
You are right: the work imbalance between threads is a performance issue in the first code. A schedule(dynamic) helps a bit with this, but there is not much more you can do in this version.
I don't know enough to know if this helpful at all. What I saw indicates this might be parallelizing the c and d loops but leaving the a and b loops serial?
Technically, the a and b loops are executed in parallel too (since they are inside a parallel region), but every thread executes all of their iterations in lockstep (because the omp parallel for contains an implicit synchronization). You should not use a second omp parallel: depending on the runtime, this can create new threads 100 times, and even when no new threads are created, it results in inefficient code (for example because of bad default thread pinning). Moreover, schedule(guided) is not needed here and should be less efficient than schedule(static). Thus, use omp for collapse(2) schedule(static).
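In other words, something along these lines (a sketch only, reusing the thread-private result_local from your code):
#pragma omp parallel
{
    // result_local is assumed to be a thread-private 10x10 array, as in the question
    for (int a = 0; a < 10; a++)
    {
        for (int b = 0; b < 10; b++)
        {
            // Work-sharing construct inside the existing team: no nested parallel
            // region is created, the threads simply split the c/d iteration space.
            // nowait is safe because each thread only touches its own result_local.
            #pragma omp for collapse(2) schedule(static) nowait
            for (int c = 0; c < f(a); c++)
            {
                for (int d = 0; d < f(b); d++)
                {
                    if (comp(c, d))
                        result_local[a][b]++;
                }
            }
        }
    }
    // Add result_local to result
}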
how can I parallelize the inner loops to share the work better.
The last code is not so bad in terms of work balancing, although it introduces some unwanted overheads:
The implicit synchronization of the omp for can be skipped using nowait since all threads are working on thread-private data.
The access to result_local[a][b] can be replaced by a fast thread-private variable access.
The conditional increment can be replaced by a branch-less boolean increment.
f(a) and f(b) can be pre-computed, although optimizing compilers should already do this.
When f(a) * f(b) is very small, it may be better not to execute the loop in parallel (because of the cost of communicating between cores). However, this depends heavily on whether comp is expensive or not.
When f(a) is big, there is no need to use a costly collapse(2) as there will be enough work for all threads (collapse(2) usually slows down the execution since compilers often generate a slow modulus instruction to recover the values of the loop iterators at runtime).
Here is the resulting code, taking most of these fixes into account:
#pragma omp parallel
{
// Create a result_local array to avoid critical sections in the loop
// Arbitrary threshold (this may not be optimal)
const int threshold = 4 * omp_get_num_threads();
for (int a = 0; a < 10; a++)
{
const int c_lim = f(a);
for (int b = 0; b < 10; b++)
{
const int d_lim = f(b);
int64_t local_sum = 0;
if(c_lim < threshold)
{
#pragma omp for collapse(2) schedule(static) nowait
for (int c = 0; c < c_lim; c++)
for (int d = 0; d < d_lim; d++)
local_sum += comp(c, d);
}
else
{
#pragma omp for schedule(static) nowait
for (int c = 0; c < c_lim; c++)
for (int d = 0; d < d_lim; d++)
local_sum += comp(c, d);
}
result_local[a][b] += local_sum;
}
}
// Add the result_local to result
}
Another more efficient strategy is to redesign the sequential algorithm to significantly reduce the amount of work.
Redesigning the algorithm
One can note that comp(c, d) is recomputed with the same values several times (up to 100 times), and the same goes for result_local[a][b]++ and even f(b) (up to 1,000,000 times). In such cases, the generic solution is to memoize the results so that expensive parts of the algorithm are not recomputed over and over.
Note that you cannot pre-compute all the needed comp(c, d) values: that would be too expensive in terms of memory usage (up to 10 GiB). Thus, the trick is to split the 2D space into tiles. Here is how the algorithm works:
compute all the f(a) and f(b) values sequentially (only 20 calls);
split the iteration space into tiles of reasonable size (e.g. 100x100) and pre-compute all the tiles that will be needed in full (possibly in parallel, although this is tedious);
compute the sum of comp(c, d) for each tile (i.e. for c in [c_tile_begin; c_tile_end[ and d in [d_tile_begin; d_tile_end[) in parallel (each thread should work on several tiles) and write the sums into a shared array;
compute the final result from the tile sums in parallel (partial tiles are computed on the fly in this last step).
This algorithm is definitely more complex, but it should be up to 100 times faster than the above one since most operations are computed only once.
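Here is a simplified sketch of the tiling idea (only the full tiles are memoized; the remaining border strips are computed directly; the tile size and the schedules are arbitrary starting points):
#include <algorithm>
#include <cstdint>
#include <vector>

int f(int);          // from the question: returns roughly 100 to 100,000
bool comp(int, int); // from the question

void tiled_count(int64_t result[10][10])
{
    const int T = 100; // tile size (tunable)

    // 1) Pre-compute f once per index instead of re-evaluating it in every loop test.
    int fa[10], fb[10];
    for (int i = 0; i < 10; i++) { fa[i] = f(i); fb[i] = f(i); }

    const int c_max = *std::max_element(fa, fa + 10);
    const int d_max = *std::max_element(fb, fb + 10);
    const int nct = c_max / T, ndt = d_max / T; // number of full tiles per dimension

    // 2) Pre-compute the sum of comp over every full tile, in parallel.
    std::vector<int64_t> tile(static_cast<size_t>(nct) * ndt, 0);
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int tc = 0; tc < nct; tc++)
        for (int td = 0; td < ndt; td++)
        {
            int64_t s = 0;
            for (int c = tc * T; c < (tc + 1) * T; c++)
                for (int d = td * T; d < (td + 1) * T; d++)
                    s += comp(c, d);
            tile[static_cast<size_t>(tc) * ndt + td] = s;
        }

    // 3) Assemble each result[a][b]: full tiles come from the table,
    //    the remaining L-shaped border is computed on the fly.
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int a = 0; a < 10; a++)
        for (int b = 0; b < 10; b++)
        {
            const int nc = fa[a] / T, nd = fb[b] / T; // full tiles inside [0,f(a)) x [0,f(b))
            int64_t s = 0;
            for (int tc = 0; tc < nc; tc++)
                for (int td = 0; td < nd; td++)
                    s += tile[static_cast<size_t>(tc) * ndt + td];
            for (int c = nc * T; c < fa[a]; c++)      // right strip: c >= nc*T, all d
                for (int d = 0; d < fb[b]; d++)
                    s += comp(c, d);
            for (int c = 0; c < nc * T; c++)          // bottom strip: c < nc*T, d >= nd*T
                for (int d = nd * T; d < fb[b]; d++)
                    s += comp(c, d);
            result[a][b] = s;
        }
}
With this scheme, each comp value inside a full tile is evaluated once instead of up to 100 times.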
For your attempt at parallelizing the inner loops to have a chance to work, you need to do something about the data race on result_local:
If you have enough memory for every thread to have its own private version of result_local, you might be able to specify reduction(+: result_local[:10][:10]) in the pragma, but I haven't used it with multidimensional arrays yet. You might have to use a linear array and flattened indexing (idx = a * 10 + b). If result_local is dynamically allocated (on the heap), this might be the better way of dealing with it anyway (better than some std::vector<std::vector<int>>, due to cache locality).
If comp is computationally intensive enough, you might be better off putting #pragma omp atomic update in front of result_local[a][b]++. This takes less memory. In your example, with only 10 * 10 = 100 result cells, memory is probably not an issue anyway.
Since branching inside the innermost loop can be bad for performance, you might want to test whether result_local[a][b] += comp(c, d); is faster, as the addition is quite cheap.
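To make the first option concrete, here is a rough sketch with a flattened array and an array-section reduction (OpenMP 4.5+); only the function names are taken from the question:
int f(int);          // from the question
bool comp(int, int); // from the question

void count_inner_parallel(long long result_flat[100]) // result_flat[a * 10 + b]
{
    for (int a = 0; a < 10; a++)
        for (int b = 0; b < 10; b++)
        {
            const int c_lim = f(a), d_lim = f(b);
            // The c/d loops are the parallel ones, so several threads update the same
            // cell; the reduction gives each thread a private copy of the whole array.
            #pragma omp parallel for collapse(2) reduction(+: result_flat[:100])
            for (int c = 0; c < c_lim; c++)
                for (int d = 0; d < d_lim; d++)
                    result_flat[a * 10 + b] += comp(c, d); // branch-less increment
        }
}
Since only one cell is touched per (a, b) pair here, a scalar reduction into a local sum would work just as well; the array form becomes necessary when a single parallel loop spans several cells.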
Related
I would like to parallelize a big loop using OpenMP to improve its efficiency. Here is the main part of the toy code:
vector<int> config;
config.resize(indices.size());
omp_set_num_threads(2);
#pragma omp parallel for schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) { // the outer loop that I would like to parallelize
#pragma omp simd
for (int j = 0; j < indices.size(); ++j) { // pick some columns from a big ref_table
config[j] = ref_table[i][indices[j]];
}
int index = GetIndex(config); // do simple computations on the picked values to get the index
#pragma omp atomic
result[index]++;
}
Then I found I cannot get improvements in efficiency if I use 2, 4, or 8 threads. The execution time of the parallel versions is generally greater than that of the sequential version. The outer loop has 10000 iterations and they are independent so I want multiple threads to execute those iterations in parallel.
I guess the reasons for the performance decrease may include: private copies of config? Random access to ref_table? The expensive atomic operation? What are the exact reasons for the slowdown, and more importantly, how can I get a shorter execution time?
Private copies of config and random access to ref_table are not problematic. I think the workload is very small; there are 2 potential issues which prevent efficient parallelization:
the atomic operation is too expensive;
the overheads are bigger than the workload (which simply means it is not worth parallelizing with OpenMP).
I do not know which one is more significant in your case, so it is worth trying to get rid of the atomic operation. There are 2 cases:
a) If the result array is zero-initialized, you have to use:
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config), where N is the size of the result array, and delete #pragma omp atomic. Note that this requires OpenMP 4.5 or later. It is also worth removing #pragma omp simd for a loop of only 2-10 iterations. So your code should look like this:
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) { // the outer loop that I would like to parallelize
for (int j = 0; j < indices.size(); ++j) { // pick some columns from a big ref_table
config[j] = ref_table[i][indices[j]];
}
int index = GetIndex(config); // do simple computations on the picked values to get the index
result[index]++;
}
b) If the result array is not zero-initialized, the solution is very similar, but use a temporary zero-initialized array in the loop and afterwards add it to the result array.
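A sketch of case b), reusing the names from your snippet (N, config, indices, ref_table, GetIndex and result as in the question):
// Zero-initialized temporary; the pointer alias is needed because
// array-section reductions want an array or pointer variable.
std::vector<int> tmp(N, 0);
int* tmp_p = tmp.data();
#pragma omp parallel for reduction(+: tmp_p[:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) {
    for (int j = 0; j < (int)indices.size(); ++j) {
        config[j] = ref_table[i][indices[j]];
    }
    tmp_p[GetIndex(config)]++;
}
for (int k = 0; k < N; ++k) // merge once, outside the parallel region
    result[k] += tmp[k];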
If the speed still does not increase, then your code is not worth parallelizing with OpenMP on your hardware.
I am quite new to OpenMP. I have the following simple loop that I want to run in parallel with OpenMP:
double rij[3];
double r;
#ifdef _OPENMP
#pragma omp parallel for private(rij,r)
#endif
for (int i=0; i<n; ++i)
{
for (int j=0; j<n; ++j)
{
if (i != j)
{
distance(X,rij,r,i,j);
V[i] += ke * Q[j] / r;
for (int k=0; k<3; ++k)
{
F[3*i+k] += ke * Q[j] * rij[k] / pow(r,3);
}
}
}
}
From what I understood, variables are shared by default, which is why I only declared private(rij,r). But according to these questions (first, second, third), I should do an array reduction in this case.
It's clear to me that if many threads need to sum to the same variable, this has to be done with #pragma omp parallel for reduction(+:A[:n]) for summing to array A of size n. This is what I do in another part of my code, and it works as expected.
However, in this case the workers never have to sum into the same variable: every worker performs the sum for its own index i. Is it correct to do what I do here, i.e. not use any array reduction and not use any critical section?
If my implementation is correct, I believe it would avoid the overhead of the critical section while being simpler code. Feel free to give your advice on how this could be better optimized.
Thank you
You don't need a reduction. Reductions are a feature that saves you from writing the same boilerplate over and over for a recurring problem (try to think of how you would implement a sum reduction without OpenMP).
What you do right now is work on disjoint data (V[i]): the writes cannot overlap between iterations (as you state in the question), because the work is divided over i itself. Furthermore, the writes to F[...] shouldn't overlap either, because the index only depends on i and k.
I am looking for a better way to cancel my threads.
In my approach, I use a shared variable and, if this variable is set, I just issue a continue. This finishes my threads quickly, but the threads theoretically keep spawning and ending, which does not seem elegant.
So, is there a better way to solve the issue (break is not supported by my OpenMP)?
I have to work with Visual Studio, so my OpenMP version is outdated and there is no way around that. Consequently, I think #pragma omp cancel will not work.
int progress_state = RunExport;
#pragma omp parallel
{
#pragma omp for
for (int k = 0; k < foo.z; k++)
for (int j = 0; j < foo.y; j++)
for (int i = 0; i < foo.x; i++) {
if (progress_state == StopExport) {
continue;
}
// do some fancy shit
// yeah here is a condition for speed due to the critical
#pragma omp critical
if (condition) {
progress_state = StopExport;
}
}
}
You should do it the simple way of "just continue in all remaining iterations if cancellation is requested". That can just be the first check in the outermost loop (and given that you have several nested loops, that will probably not have any measurable overhead).
std::atomic<int> progress_state = RunExport;
// You could just write #pragma omp parallel for instead of these two nested blocks.
#pragma omp parallel
{
#pragma omp for
for (int k = 0; k < foo.z; k++)
{
if (progress_state == StopExport)
continue;
for (int j = 0; j < foo.y; j++)
{
// You can add break statements in these inner loops.
// OMP only parallelizes the outermost loop (at least given the way you wrote this)
// so it won't care here.
for (int i = 0; i < foo.x; i++)
{
// ...
if (condition) {
progress_state = StopExport;
}
}
}
}
}
Generally speaking, OMP will not suddenly spawn new threads or end existing ones, especially not within one parallel region. This means there is little overhead associated with running a few more tiny iterations. This is even more true given that the default scheduling in your case is most likely static, meaning that each thread knows its start and end index right away. Other scheduling modes would have to call into the OMP runtime every iteration (or every few iterations) to request more work, but that won't happen here. The compiler will basically see this code for the threaded work:
// Not real omp functions.
int myStart = __omp_static_for_my_start();
int myEnd = __omp_static_for_my_end();
for (int k = myStart; k < myEnd; ++k)
{
if (progress_state == StopExport)
continue;
// etc.
}
You might try a non-atomic thread-local "should I cancel?" flag that starts as false and can only be changed to true (which the compiler may understand and fold into the loop condition). But I doubt you will see significant overhead either way, at least on x86 where int is atomic anyway.
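A rough sketch of that idea (RunExport, StopExport, foo and condition as in the question):
std::atomic<int> progress_state{ RunExport };

#pragma omp parallel for
for (int k = 0; k < foo.z; k++)
{
    // One shared read per k; afterwards only the plain local flag is checked,
    // which the compiler can keep in a register and fold into the loop conditions.
    bool cancelled = (progress_state == StopExport);
    if (cancelled)
        continue;
    for (int j = 0; j < foo.y && !cancelled; j++)
        for (int i = 0; i < foo.x && !cancelled; i++)
        {
            // ... do the work ...
            if (condition)
            {
                progress_state = StopExport; // signal the other threads
                cancelled = true;            // and stop our own inner loops
            }
        }
}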
which seems not elegant
OMP 2.0 does not exactly shine with respect to elegance. I mean, iterating over a std::vector requires at least one static_cast to silence signed -> unsigned conversion warnings. So unless you have specific evidence of this pattern causing a performance problem, there is little reason not to use it.
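For example, something as trivial as this already needs a cast under OpenMP 2.0, because the loop variable must be a signed integer (v is just an illustrative vector):
std::vector<double> v(1000, 1.0);
#pragma omp parallel for
for (int i = 0; i < static_cast<int>(v.size()); ++i)
    v[i] *= 2.0;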
I have a C++ code that performs a time evolution of four variables that live on a 2D spatial grid. To save some time, I tried to parallelise my code with OpenMP but I just cannot get it to work: No matter how many cores I use, the runtime stays basically the same or increases. (My code does use 24 cores or however many I specify, so the compilation is not a problem.)
I have the feeling that the runtime for one individual time-step is too short and the overhead of producing threads kills the potential speed-up.
The layout of my code is:
for (int t = 0; t < max_time_steps; t++) {
// do some book-keeping
...
// perform time step
// (1) calculate righthand-side of ODE:
for (int i = 0; i < nr; i++) {
for (int j = 0; j < ntheta; j++) {
rhs[0][i][j] = A0[i][j] + B0[i][j] + ...;
rhs[1][i][j] = A1[i][j] + B1[i][j] + ...;
rhs[2][i][j] = A2[i][j] + B2[i][j] + ...;
rhs[3][i][j] = A3[i][j] + B3[i][j] + ...;
}
}
// (2) perform Euler step (or Runge-Kutta, ...)
for (int d = 0; d < 4; d++) {
for (int i = 0; i < nr; i++) {
for (int j = 0; j < ntheta; j++) {
next[d][i][j] = current[d][i][j] + time_step * rhs[d][i][j];
}
}
}
}
I thought this code should be fairly easy to parallelise... I put "#pragma omp parallel for" in front of the (1) and (2) loops, and I also specified the number of cores (e.g. 4 cores for loop (2) since there are four variables), but there is simply no speed-up whatsoever.
I have found that OpenMP is fairly smart about when to create/destroy threads, i.e. it realises that threads will be required again soon and only puts them to sleep to save overhead.
I think one "problem" is that my time step is coded in a subroutine (I'm using RK4 instead of Euler) and the computation of the righthand-side is again in another subroutine that is called by the time_step() function. So, I believe that due to this, OpenMP cannot see that the threads should be kept open for longer and hence the threads are created and destroyed at every time step.
Would it be helpful to put a "#pragma omp parallel" in front of the time-loop so that the threads are created at the very beginning? And then do the actual parallelisation for the righthand-side (1) and the Euler step (2)? But how do I do that?
I have found numerous examples of how to parallelise nested for loops, but none of them were concerned with the setup where the inner loops have been sourced out to separate modules. Would this be an obstacle for parallelising?
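To make my question concrete, the structure I have in mind is something like this (just a sketch, I am not sure it is correct; the collapse clauses need OpenMP 3.0):
#pragma omp parallel   // threads are created once, before the time loop
for (int t = 0; t < max_time_steps; t++) {
    #pragma omp single // book-keeping done by one thread only
    {
        // ... book-keeping ...
    }                  // implicit barrier after the single

    // (1) right-hand side: omp for splits the i/j iterations across the team
    #pragma omp for collapse(2)
    for (int i = 0; i < nr; i++)
        for (int j = 0; j < ntheta; j++) {
            rhs[0][i][j] = A0[i][j] + B0[i][j] /* + ... */;
            rhs[1][i][j] = A1[i][j] + B1[i][j] /* + ... */;
            rhs[2][i][j] = A2[i][j] + B2[i][j] /* + ... */;
            rhs[3][i][j] = A3[i][j] + B3[i][j] /* + ... */;
        }              // implicit barrier: rhs is complete before step (2)

    // (2) Euler step
    #pragma omp for collapse(2)
    for (int d = 0; d < 4; d++)
        for (int i = 0; i < nr; i++)
            for (int j = 0; j < ntheta; j++)
                next[d][i][j] = current[d][i][j] + time_step * rhs[d][i][j];
}                      // threads are joined only once, after the last time step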
I have now removed the d loops (by making the indices explicit) and collapsed the i and j loops (by running over the entire 2D array with one variable only).
The code looks like:
for (int t = 0; t < max_time_steps; t++) {
// do some book-keeping
...
// perform time step
// (1) calculate righthand-side of ODE:
#pragma omp parallel for
for (int i = 0; i < nr*ntheta; i++) {
rhs[0][0][i] = A0[0][i] + B0[0][i] + ...;
rhs[1][0][i] = A1[0][i] + B1[0][i] + ...;
rhs[2][0][i] = A2[0][i] + B2[0][i] + ...;
rhs[3][0][i] = A3[0][i] + B3[0][i] + ...;
}
// (2) perform Euler step (or Runge-Kutta, ...)
#pragma omp parallel for
for (int i = 0; i < nr*ntheta; i++) {
next[0][0][i] = current[0][0][i] + time_step * rhs[0][0][i];
next[1][0][i] = current[1][0][i] + time_step * rhs[1][0][i];
next[2][0][i] = current[2][0][i] + time_step * rhs[2][0][i];
next[3][0][i] = current[3][0][i] + time_step * rhs[3][0][i];
}
}
The size of nr*ntheta is 400*40 = 16,000 and I take max_time_steps=1000 time steps. Still, the parallelisation does not result in a speed-up:
Runtime without OpenMP (result of time on the command line):
real 0m23.597s
user 0m23.496s
sys 0m0.076s
Runtime with OpenMP (24 cores)
real 0m23.162s
user 7m47.026s
sys 0m0.905s
I do not understand what's happening here.
One peculiarity that I don't show in my code snippet above is that my variables are not actually doubles but a self-defined struct of two doubles, which represent the real and imaginary parts. But I think this should not make a difference.
Just wanted to report some success after I left the parallelisation alone for a while. The code evolved for a year and now I went back to parallelisation. This time, I can say that OpenMP does its job and reduces the required walltime.
While the code evolved overall, this particular loop that I've shown above did not really change; merely two things changed: a) the resolution is higher, so it covers about 10 times as many points, and b) the number of calculations per loop iteration is also about 10-fold higher (maybe even more).
My only explanation for why it works now and didn't work a little over a year ago is that, when I tried to parallelise the code last time, it wasn't computationally expensive enough and the speed-up was killed by the OpenMP overhead. One single loop now requires about 200-300 ms, whereas the time required back then must have been in the single-digit ms range.
I can see such an effect when comparing gcc and the Intel compiler (which do a very different job when vectorizing):
a) Using gcc, one loop needs about 300 ms without OpenMP, and on two cores only 52% of that time is required --> near-perfect scaling.
b) Using icpc, one loop needs about 160 ms without OpenMP, and on two cores it needs 60% of that time --> good scaling, but about 20% less effective.
When going for more than two cores, the speed-up is not large enough to make it worthwhile.
Currently, somewhere deep in my code, I am working with a nested for-loop (N1 ≈ 10000, N2 ≈ 500, x, y = 10-50). I used #pragma omp to have OpenMP distribute my calculation across several cores.
#pragma omp parallel for
for (int i = 0; i < N1; ++i)
{
for (int j = 0; j < N2; ++j)
{
for (int k = x; k <= y; ++k)
{
// calculation
}
}
}
Now my two inner loops become conditional:
#pragma omp parallel for
for (int i = 0; i < N1; ++i)
{
if (toExecute[i])
{
for (int j = 0; j < N2; ++j)
{
for (int k = x; k <= y; ++k)
{
// calculation
}
}
}
}
The inner nested loops either take a long time or are done immediately. Of course, I could avoid the if-statement by replacing the outer loop and the if-statement with a shorter loop over only the active indices and a lookup table for the indexing.
My question is: Is OpenMP smart enough to handle the if-statement within my outer loop, or do I have to do something manually?
I am currently using C++ in Visual Studio 2017 if that matters (I think the OpenMP version is a bit behind).
Ideally, you should let OpenMP handle that for you. But as always when you're working on performance, you have to experiment to see what is best for you. Indeed, you can gain a great speedup by doing things manually. OpenMP is not omniscient; it does not know all the details of your calculation.
If your calculation involves the same amount of work for every iteration, then your condition is likely to lead to an uneven workload across the outer loop. So, theoretically, dynamic scheduling should be a better fit:
#pragma omp parallel for schedule(dynamic)
You could also try static or guided scheduling, which might fit your calculation (I don't know its details so I cannot say), and play with the chunk size.
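For example, with an explicit chunk size (the value 16 is only a starting point to tune):
#pragma omp parallel for schedule(dynamic, 16)
for (int i = 0; i < N1; ++i)
{
    if (toExecute[i])
    {
        for (int j = 0; j < N2; ++j)
            for (int k = x; k <= y; ++k)
            {
                // calculation
            }
    }
}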
Another test to do, if you can afford it (i.e. if it is parallelizable), is to try moving the parallelization to the inner loops.
You can even nest the parallelization; it sometimes gives a nice speedup. Try and tune step by step, and take time to see what gives you the best result. Just remember that these tweaks are often not generic across different architectures, so aim for a good tradeoff between performance and code reusability.