How to stop a parallel region in OpenMP 2.0 - C++

I am looking for a better way to cancel my threads.
In my approach, I use a shared variable, and if this variable is set, I just issue a continue. This finishes my threads quickly, but the threads theoretically keep spawning and ending, which seems inelegant.
So, is there a better way to solve the issue (break is not supported by my OpenMP)?
I have to work with Visual Studio, so my OpenMP library is outdated (OpenMP 2.0) and there is no way around that. Consequently, I think #pragma omp cancel will not work.
int progress_state = RunExport;
#pragma omp parallel
{
    #pragma omp for
    for (int k = 0; k < foo.z; k++)
        for (int j = 0; j < foo.y; j++)
            for (int i = 0; i < foo.x; i++) {
                if (progress_state == StopExport) {
                    continue;
                }
                // do some fancy shit
                // yeah here is a condition for speed due to the critical
                #pragma omp critical
                if (condition) {
                    progress_state = StopExport;
                }
            }
}

You should do it the simple way of "just continue in all remaining iterations if cancellation is requested". That can just be the first check in the outermost loop (and given that you have several nested loops, that will probably not have any measurable overhead).
std::atomic<int> progress_state{RunExport};
// You could just write #pragma omp parallel for instead of these two nested blocks.
#pragma omp parallel
{
    #pragma omp for
    for (int k = 0; k < foo.z; k++)
    {
        if (progress_state == StopExport)
            continue;
        for (int j = 0; j < foo.y; j++)
        {
            // You can add break statements in these inner loops.
            // OMP only parallelizes the outermost loop (at least given the way you wrote this)
            // so it won't care here.
            for (int i = 0; i < foo.x; i++)
            {
                // ...
                if (condition) {
                    progress_state = StopExport;
                }
            }
        }
    }
}
Generally speaking, OMP will not suddenly spawn new threads or end existing ones, especially not within one parallel region. This means there is little overhead associated with running a few more tiny iterations. This is even more true given that the default scheduling in your case is most likely static, meaning that each thread knows its start and end index right away. Other scheduling modes would have to call into the OMP runtime every iteration (or every few iterations) to request more work, but that won't happen here. The compiler will basically see this code for the threaded work:
// Not real omp functions.
int myStart = __omp_static_for_my_start();
int myEnd = __omp_static_for_my_end();
for (int k = myStart; k < myEnd; ++k)
{
    if (progress_state == StopExport)
        continue;
    // etc.
}
You might try a non-atomic thread-local "should I cancel?" flag that starts as false and can only be changed to true (which the compiler may understand and fold into the loop condition). But I doubt you will see significant overhead either way, at least on x86, where aligned loads and stores of int are atomic anyway.
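One possible shape of that idea, as a minimal untested sketch: local_stop is a hypothetical per-thread flag that only ever flips from false to true, so once a thread has seen the stop request it never reads the shared variable again.
#pragma omp parallel
{
    bool local_stop = false;              // thread-local, non-atomic
    #pragma omp for
    for (int k = 0; k < foo.z; k++)
    {
        if (local_stop || progress_state == StopExport)
        {
            local_stop = true;            // latch: only goes from false to true
            continue;
        }
        // ... the j and i loops as before ...
    }
}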
which seems not elegant
OMP 2.0 does not exactly shine with respect to elegance. I mean, iterating over a std::vector requires at least one static_cast to silence signed -> unsigned conversion warnings. So unless you have specific evidence of this pattern causing a performance problem, there is little reason not to use it.
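For instance, a loop over a std::vector (v is just a placeholder name here) has to use a signed index under OpenMP 2.0, so the comparison needs a cast:
std::vector<double> v(1000, 1.0);
#pragma omp parallel for
for (int i = 0; i < static_cast<int>(v.size()); ++i)
{
    v[i] *= 2.0;
}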

Using consistent RNG with OpenMP

So I have a function, let's call it dostuff(), where for my application it is sometimes beneficial to parallelize within it, and sometimes to call it multiple times and parallelize over the whole set of calls. The function itself does not change between the two use cases.
Note: object is large enough that it cannot viably be stored in a list, and so it must be discarded with each iteration.
So, let's say our code looks like this:
bool parallelize_within = argv[1];
if (parallelize_within) {
    // here we assume parallelization is handled within the function dostuff()
    for (int i = 0; i < 100; ++i) {
        object randomized = function_that_accesses_rand();
        dostuff(i, randomized, parallelize_within);
    }
} else {
    #pragma omp parallel for
    for (int i = 0; i < 100; ++i) {
        object randomized = function_that_accesses_rand();
        dostuff(i, randomized, parallelize_within);
    }
}
Obviously, we run into the issue that dostuff() will have threads access the random object at different times in different iterations of the same program. This is not the case when parallelize_within == true, but when we run dostuff() in parallel individually per thread, is there a way to guarantee that the random object is accessed in order based on the iteration? I know that I could do:
#pragma omp parallel for schedule(dynamic)
which will guarantee that eventually, as iterations are assigned to threads at runtime dynamically, the objects will access rand in order with the iteration number, but for the first set of iterations it will be totally random. Any suggestions on how to avoid this?
First of all, you have to make sure that both function_that_accesses_rand and dostuff are thread-safe.
You do not have to duplicate your code if you use the if clause:
#pragma omp parallel for if(!parallelize_within)
To make sure that the i passed to dostuff(i, randomized, ...) reflects the order in which the randomized objects were created, you have to do something like this:
int j = 0;
#pragma omp parallel for if(!parallelize_within)
for (int i = 0; i < 100; ++i) {
    int k;
    object randomized;
    #pragma omp critical
    {
        k = j++;
        randomized = function_that_accesses_rand();
    }
    dostuff(k, randomized, parallelize_within);
}
You may eliminate the use of the critical section if your function_that_accesses_rand makes it possible, but I cannot be more specific without knowing your function. One solution is to have this function return a value representing the order. Do not forget that this function has to be thread-safe!
#pragma omp parallel for if(!parallelize_within)
for (int i = 0; i < 100; ++i) {
    int k;
    object randomized = function_that_accesses_rand(k);
    dostuff(k, randomized, parallelize_within);
}

... function_that_accesses_rand(int& k){
    ...
    #pragma omp atomic capture
    k = some_internal_counter++;
    ...
}
You could pre-generate the random objects and store them in a list, then index that list with the loop variable inside the omp loop.
// generate the random objects sequentially, so their order is deterministic
std::vector<object> rand_obj(100);
for (int i = 0; i < 100; ++i)
    rand_obj[i] = function_that_accesses_rand();

#pragma omp parallel for
for (int i = 0; i < 100; ++i)
    dostuff(i, rand_obj[i], parallelize_within);

How to optimize omp parallelization when batching

I am generating class objects and putting them into std::vector. Before adding one, I need to check if it intersects with the already generated objects. As I plan to have millions of them, I need to parallelize this function as it takes a lot of time (the function must check each new object against all previously generated ones).
Unfortunately, the speed increase is not significant. The profiler also shows very low efficiency (all overhead). Any advice would be appreciated.
bool
Generator::_check_cube (std::vector<Cube> &cubes, const Cube &cube)
{
    auto ptr_cube = &cube;
    auto npol = cubes.size();
    auto ptr_cubes = cubes.data();
    const auto nthreads = omp_get_max_threads();
    bool check = false;
    #pragma omp parallel shared (ptr_cube, ptr_cubes, npol, check)
    {
        #pragma omp single nowait
        {
            const auto batch_size = npol / nthreads;
            for (int32_t i = 0; i < nthreads; i++)
            {
                const auto bstart = batch_size * i;
                const auto bend = ((bstart + batch_size) > npol) ? npol : bstart + batch_size;
                #pragma omp task firstprivate(i, bstart, bend) shared (check)
                {
                    struct bd bd1{}, bd2{};
                    bd1 = allocate_bd();
                    bd2 = allocate_bd();
                    for (auto j = bstart; j < bend; j++)
                    {
                        bool loc_check;
                        #pragma omp atomic read
                        loc_check = check;
                        if (loc_check) break;
                        if (ptr_cube->cube_intersecting(ptr_cubes[j], &bd1, &bd2))
                        {
                            #pragma omp atomic write
                            check = true;
                            break;
                        }
                    }
                    free_bd(&bd1);
                    free_bd(&bd2);
                }
            }
        }
    }
    return check;
}
UPDATE: The Cube is actually made of smaller objects, Cuboids; each of them has a size (L, W, H), position coordinates and a rotation. The intersection function:
bool
Cube::cube_intersecting(Cube &other, struct bd *bd1, struct bd *bd2) const
{
    const auto nom = number_of_cuboids();
    const auto onom = other.number_of_cuboids();
    for (int32_t i = 0; i < nom; i++)
    {
        get_mcoord(i, bd1);
        for (int32_t j = 0; j < onom; j++)
        {
            other.get_mcoord(j, bd2);
            if (check_gjk_intersection(bd1, bd2))
            {
                return true;
            }
        }
    }
    return false;
}

// get_mcoord calculates vertices of the cuboids
void
Cube::get_mcoord(int32_t index, struct bd *bd) const
{
    for (int32_t i = 0; i < 8; i++)
    {
        for (int32_t j = 0; j < 3; j++)
        {
            bd->coord[i][j] = _cuboids[index].get_coord(i)[j];
        }
    }
}

inline struct bd
allocate_bd()
{
    struct bd bd{};
    bd.numpoints = 8;
    bd.coord = (double **) malloc(8 * sizeof(double *));
    for (int32_t i = 0; i < 8; i++)
    {
        bd.coord[i] = (double *) malloc(3 * sizeof(double));
    }
    return bd;
}
Typical values: npol > 1 million, 32 threads, and each Cube consists of 1-3 smaller cuboids which are checked directly against each other for intersection.
The problem with your search is that OpenMP really likes static loops, where the number of iterations is predetermined. Thus, maybe one task will break early, but all the others will go through their full search.
With more recent versions of OpenMP (cancellation was added in 4.0 and taskloop in 4.5) there is a solution for that.
(Not sure about this one: Make your tasks much more fine-grained, for instance one for each intersection test);
Spawn your tasks in a taskloop;
Once you find your intersection (or any condition that causes you to break), issue a cancel taskgroup; the taskloop has an implicit taskgroup, so this cancels its remaining tasks.
Small problem: cancelling is disabled by default. Set the environment variable OMP_CANCELLATION to true.
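Putting those steps together, a hedged, untested sketch reusing the names from the question; it needs a runtime with taskloop and cancellation support (OpenMP 4.5 or later), grainsize(1024) is an arbitrary choice, and OMP_CANCELLATION=true must be set in the environment before the program starts:
bool check = false;
#pragma omp parallel shared(check)
#pragma omp single
#pragma omp taskloop grainsize(1024) shared(check)
for (int64_t j = 0; j < (int64_t) npol; j++)
{
    #pragma omp cancellation point taskgroup      // lets already-running tasks stop early
    struct bd bd1 = allocate_bd();
    struct bd bd2 = allocate_bd();
    const bool hit = ptr_cube->cube_intersecting(ptr_cubes[j], &bd1, &bd2);
    free_bd(&bd1);
    free_bd(&bd2);
    if (hit)
    {
        #pragma omp atomic write
        check = true;
        #pragma omp cancel taskgroup              // cancels the remaining taskloop tasks
    }
}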
Do you have more intersections being true or more being false? If most are true, you're flooding your hardware with requests to write to a shared resource, and what you are doing is essentially sequential. One way to address this is to avoid using a shared resource, so there is no mutex: you let all threads run and at the end you take a decision given the results. This will likely run faster, but the benefit also depends on parameters such as the number of threads and the number of cuboids.
It is possible that on another architecture (e.g., a GPU) your algorithm works well as it is. It may be worth benchmarking it on a GPU to see if you would benefit from that migration, given the production sizes (millions of cuboids, 24 dimensions).
You also have a complexity problem: every new cuboid is compared against up to the whole set of existing cuboids. One way to address this is to gather the cuboid extents (ranges) per dimension, keep them ordered, and insert each new cuboid's ranges in order. If there is an intersection in one dimension, you test the next one, and so on. You can also run these tests in parallel. Before running through the ranges, test whether you are even inside the global range; if not, it is useless to test the intersections locally.
Here, and in general, you want to parallelize with a minimum of dependencies (shared resources, mutexes). So try to find a point of view where this happens. Parallelizing over dimensions of ordered ranges (segments) might be better than parallelizing over cuboids.
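For instance, a minimal sketch of that decide-at-the-end approach using an OpenMP logical-OR reduction (each thread works on a private copy of check and the runtime combines them when the region ends; there is no cross-thread early exit, which is the trade-off described above):
bool check = false;
#pragma omp parallel reduction(||: check)
{
    struct bd bd1 = allocate_bd();   // one scratch buffer pair per thread, as in the original
    struct bd bd2 = allocate_bd();
    #pragma omp for
    for (int64_t j = 0; j < (int64_t) npol; j++)
        check = check || ptr_cube->cube_intersecting(ptr_cubes[j], &bd1, &bd2);
    free_bd(&bd1);
    free_bd(&bd2);
}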
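As an illustration of the "test the global range first" idea, a hedged sketch of a per-dimension bounding-range pre-check; min_ and max_ are assumed to be pre-computed per Cube, which the current class does not store:
inline bool ranges_overlap(const Cube &a, const Cube &b)
{
    for (int dim = 0; dim < 3; ++dim)
    {
        if (a.max_[dim] < b.min_[dim] || b.max_[dim] < a.min_[dim])
            return false;   // separated along this axis, no intersection possible
    }
    return true;            // only now run the expensive GJK test
}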
Algorithms and benefits of parallelism also depend on the values of your objects. This does not mean that complexity predictions are not relevant, but that one may find a smarter approach given those values.
I think your code is memory-bound, so its bottleneck is memory reads/writes, not calculations. This can be the main reason for the poor speed increase. As already mentioned by @Soleil, different hardware (a GPU) can be beneficial here.
You mentioned in the comments that Generator::_check_cube is called many times. To reduce OpenMP overheads, my suggestion is to move the parallel region out of this function; you can even put it in your main function:
int main()
{
    #pragma omp parallel
    #pragma omp single nowait
    {
        // your code
    }
}
In this case you have to use #pragma omp taskwait to wait for the tasks to complete.
for (int32_t i = 0; i < nthreads; i++)
{
    #pragma omp task default(none) firstprivate(...) shared (..)
    {
        // your code comes here
    }
}
#pragma omp taskwait
I also suggest using the default(none) clause in the #pragma omp task directive, so you have to explicitly specify the sharing attributes of all your variables.
Do you really need the function get_mcoord? It seems a redundant memory copy to me. I think it may be better to write a check_gjk_intersection function which takes _cuboids or their indices as parameters. In this case you get rid of many memory allocations/deallocations of bd1 and bd2, which can also be time-consuming, as @Victor pointed out.
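For instance, a hedged sketch of what a malloc-free bd could look like (bd_fixed is a hypothetical name; get_mcoord and check_gjk_intersection would have to be adapted to accept it):
struct bd_fixed
{
    int32_t numpoints = 8;
    double coord[8][3];   // fixed-size storage on the stack, so no allocate_bd()/free_bd() calls
};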

Aspects that affect the efficiency of OpenMP parallelism

I would like to parallelize a big loop using OpenMP to improve its efficiency. Here is the main part of the toy code:
vector<int> config;
config.resize(indices.size());
omp_set_num_threads(2);
#pragma omp parallel for schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) { // the outer loop that I would like to parallelize
    #pragma omp simd
    for (int j = 0; j < indices.size(); ++j) { // pick some columns from a big ref_table
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config); // do simple computations on the picked values to get the index
    #pragma omp atomic
    result[index]++;
}
Then I found that I cannot get any improvement in efficiency with 2, 4, or 8 threads. The execution time of the parallel versions is generally greater than that of the sequential version. The outer loop has 10000 iterations and they are independent, so I want multiple threads to execute those iterations in parallel.
I guess the reasons for the performance decrease may include: private copies of config? Random accesses to ref_table? The expensive atomic operation? So what are the exact reasons for the performance decrease? More importantly, how can I get a shorter execution time?
Private copies of config and random accesses to ref_table are not problematic; I think the workload is very small. There are 2 potential issues which prevent efficient parallelization:
the atomic operation is too expensive;
the overheads are bigger than the workload (which simply means that it is not worth parallelizing with OpenMP).
I do not know which one is more significant in your case, so it is worth trying to get rid of the atomic operation. There are 2 cases:
a) If the result array is zero-initialized, you have to use:
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config)
where N is the size of the result array, and delete #pragma omp atomic. Note that this works with OpenMP 4.5 or later. It is also worth removing #pragma omp simd for a loop of only 2-10 iterations. So your code should look like this:
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) { // the outer loop that I would like to parallelize
    for (int j = 0; j < indices.size(); ++j) { // pick some columns from a big ref_table
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config); // do simple computations on the picked values to get the index
    result[index]++;
}
b) If the result array is not zero-initialized, the solution is very similar, but use a temporary zero-initialized array in the loop and afterwards add it to the result array.
If the speed still does not increase, then your code is not worth parallelizing with OpenMP on your hardware.
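A minimal sketch of that variant, reusing the names from the question (result is assumed to have N elements that may already hold values):
std::vector<int> tmp(N, 0);    // zero-initialized temporary
int *tmp_p = tmp.data();       // the array-section reduction needs a pointer or array
#pragma omp parallel for reduction(+:tmp_p[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) {
    for (int j = 0; j < indices.size(); ++j) {
        config[j] = ref_table[i][indices[j]];
    }
    tmp_p[GetIndex(config)]++;
}
for (int n = 0; n < N; ++n) {  // merge into result once at the end
    result[n] += tmp_p[n];
}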

Parallelize inner loops

I'm pretty new to OpenMP, so I'm fine if I have this wrong. Also I wasn't successful in finding information about this but I'm sure I missed something obvious.
I have some nested loops that I would like to parallelize in a certain way.
This is a sequential version. Notice f(i) is a large integer, roughly between 100 and 100,000.
for (int a = 0; a < 10; a++)
{
    for (int b = 0; b < 10; b++)
    {
        for (int c = 0; c < f(a); c++)
        {
            for (int d = 0; d < f(b); d++)
            {
                if (comp(c, d))
                {
                    result[a][b]++;
                }
            }
        }
    }
}
Naively, I came up with this method of parallelizing the code.
#pragma omp parallel
{
    // Create a result_local array to avoid critical sections in the loop
    #pragma omp for collapse(2) schedule(guided) nowait
    for (int a = 0; a < 10; a++)
    {
        for (int b = 0; b < 10; b++)
        {
            for (int c = 0; c < f(a); c++)
            {
                for (int d = 0; d < f(b); d++)
                {
                    if (comp(c, d))
                    {
                        result_local[a][b]++;
                    }
                }
            }
        }
    }
    // Add the result_local to result
}
This is the part where I'm not so sure. If my understanding is correct, OpenMP will not parallelize the c and d loops meaning each thread will execute a c loop in its entirety. Given f(i) can return relatively low numbers like 100 or relatively high numbers like 100,000, this means some of the threads might get stuck with a lot more work than other threads which is not ideal.
So then the question is how can I parallelize the inner loops to share the work better. I can't change collapse(2) to collapse(4) because the c and d loops iterate up to a number that is a function of the a and b variables.
I saw something in my research that may be helpful.
#pragma omp parallel
{
    // Create a result_local array to avoid critical sections in the loop
    for (int a = 0; a < 10; a++)
    {
        for (int b = 0; b < 10; b++)
        {
            #pragma omp parallel for collapse(2) schedule(guided)
            for (int c = 0; c < f(a); c++)
            {
                for (int d = 0; d < f(b); d++)
                {
                    if (comp(c, d))
                    {
                        result_local[a][b]++;
                    }
                }
            }
        }
    }
    // Add the result_local to result
}
Admittedly, I don't know enough to know if this is helpful at all. What I saw indicates this might be parallelizing the c and d loops but leaving the a and b loops serial?
Any help is appreciated.
omp will not parallelize the c and d loops meaning each thread will execute a c loop in its entirety.
This is correct.
some of the threads might get stuck with a lot more work than other threads
You are right: the work imbalance between threads is a performance issue in the first code. A schedule(dynamic) helps a bit to fix this, but there is not much more you can do with this version.
I don't know enough to know if this helpful at all. What I saw indicates this might be parallelizing the c and d loops but leaving the a and b loops serial?
Technically, the a and b loops are executed in parallel too (since they are in a parallel section), but all the threads will completely execute all the iterations in lockstep (because the omp parallel for contains an implicit synchronization). You should not use a second omp parallel: depending on the runtime, this can create new threads 100 times, and even when no new threads are created, it results in inefficient code (for example because of bad default thread pinning). Moreover, schedule(guided) is not needed here and should be less efficient than schedule(static). Thus, use omp for collapse(2) schedule(static).
how can I parallelize the inner loops to share the work better.
The last code is not so bad in terms of work balancing, although it introduces some unwanted overheads:
The implicit synchronization of the omp for can be skipped using nowait since all threads are working on thread-private data.
The access to result_local[a][b] can be replaced by a fast thread-private variable access.
The conditional increment can be replaced by a branch-less boolean increment.
f(a) and f(b) can be pre-computed, although optimizing compilers should already do this.
When f(a) * f(b) is very small, it could be better not to execute the loop in parallel (because of the expensive cost of communicating between cores). However, this is highly dependent on whether comp is expensive or not.
When f(a) is big, there is no need to use a costly collapse(2) as there will be enough work for all threads (collapse(2) usually slows down the execution since compilers often generate a slow modulus instruction to find the values of the loop iterators at runtime).
Here is the resulting code, taking into account most of these fixes:
#pragma omp parallel
{
    // Create a result_local array to avoid critical sections in the loop
    // Arbitrary threshold (this may not be optimal)
    const int threshold = 4 * omp_get_num_threads();
    for (int a = 0; a < 10; a++)
    {
        const int c_lim = f(a);
        for (int b = 0; b < 10; b++)
        {
            const int d_lim = f(b);
            int64_t local_sum = 0;
            if (c_lim < threshold)
            {
                #pragma omp for collapse(2) schedule(static) nowait
                for (int c = 0; c < c_lim; c++)
                    for (int d = 0; d < d_lim; d++)
                        local_sum += comp(c, d);
            }
            else
            {
                #pragma omp for schedule(static) nowait
                for (int c = 0; c < c_lim; c++)
                    for (int d = 0; d < d_lim; d++)
                        local_sum += comp(c, d);
            }
            result_local[a][b] += local_sum;
        }
    }
    // Add the result_local to result
}
Another more efficient strategy is to redesign the sequential algorithm to significantly reduce the amount of work.
Redesigning the algorithm
One can note that comp(c, d) is recomputed with the same values several times (up to 100 times), and the same goes for result_local[a][b]++ or even f(b) (up to 1,000,000 times). In such cases, the generic solution is to memoize the results (see here for more information) to avoid recomputing expensive parts of the algorithm over and over.
Note that you cannot pre-compute all the needed comp(c, d) values: this solution would be too expensive in terms of memory usage (up to 10 GiB needed). Thus, the trick is to split the 2D space into tiles. Here is how the algorithm works:
compute all the f(a) and f(b) sequentially (100 values);
split the iteration space into tiles of reasonable size (e.g. 100x100) and pre-compute all the tiles that need to be completely computed (possibly in parallel, although this is tedious);
compute the sum of all comp(c, d) for each tile (i.e. for c in [c_tile_begin, c_tile_end) and d in [d_tile_begin, d_tile_end)) in parallel (each thread should work on several tiles) and write the sums in a shared array;
compute the final result using the tile sums (partial tiles are computed on the fly in this last step) in parallel.
This algorithm is definitely much more complex, but it should be up to 100 times faster than the above one since most operations are computed only once.
For your attempt at parallelizing the inner loops to have a chance to work, you need to do something about the data race on result_local:
If you have enough memory for every thread to have its own private version of result_local, you might be able to specify reduction(+: result_local[:10][:10]) in the pragma, but I haven't used it with multidimensional arrays yet. You might have to use a linear array and "lexic indexing" (idx = a * 10 + b); see the sketch after this list. If result_local is dynamically allocated (on the heap), this might be the better way of dealing with it anyway (better than some std::vector<std::vector<int>>, due to cache locality).
If comp is computationally intensive enough you might be better off by putting #pragma omp atomic update in front of result_local[a][b]++. This takes less memory. In your example with a * b == 100 memory is probably not an issue.
As branching inside the innermost loop can be bad for performance, you might want to try out whether result_local[a][b] += comp(c, d); gives better performance, as addition is quite cheap.
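Combining the first and last points, a minimal sketch (it assumes result_local can be a plain int[100] indexed as a * 10 + b; the reduction gives each thread a private copy of the array that OpenMP sums back when the inner loops finish, so the increments no longer race):
int result_local[100] = {0};
for (int a = 0; a < 10; a++)
{
    for (int b = 0; b < 10; b++)
    {
        #pragma omp parallel for collapse(2) reduction(+: result_local[0:100])
        for (int c = 0; c < f(a); c++)
            for (int d = 0; d < f(b); d++)
                result_local[a * 10 + b] += comp(c, d);   // branch-less increment
    }
}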

Is the OpenMP scheduling still efficient with a conditional inner loop?

Currently, somewhere deep in my code, I am working with a nested for-loop (N1 = ~10000, N2 = ~500, x, y = 10-50). I used #pragma omp parallel for to have OpenMP distribute my calculation over several cores.
#pragma omp parallel for
for (int i = 0; i < N1; ++i)
{
    for (int j = 0; j < N2; ++j)
    {
        for (int k = x; k <= y; ++k)
        {
            // calculation
        }
    }
}
Now, my two inner loops become conditional:
#pragma omp parallel for
for (int i = 0; i < N1; ++i)
{
    if (toExecute[i])
    {
        for (int j = 0; j < N2; ++j)
        {
            for (int k = x; k <= y; ++k)
            {
                // calculation
            }
        }
    }
}
The inner nested loops either take a long time or are done immediately. Of course I can omit the if-statement by replacing the outer loop and if-statement with a shorter loop and a lookup table for the indexing.
My question is: Is OpenMP smart enough to handle the if-statement within my outer loop, or do I have to do something manually?
I am currently using C++ in Visual Studio 2017 if that matters (I think the OpenMP version is a bit behind).
Ideally, you should let OpenMP handle that for you. But as always when you're doing performance work, you have to experiment to see what is best for you. Indeed, you can gain a great speedup by doing things manually. OpenMP is not omniscient; it does not know all the details of your calculation.
If your calculation involves the same amount of work for every iteration, then your condition is likely to lead to an uneven workload across the outermost loop. So, theoretically, dynamic scheduling should be a better fit:
#pragma omp parallel for schedule(dynamic)
You could also try static or guided scheduling, which might fit your calculation (I don't know its details, so I cannot say), and play with the chunk size (the granularity block).
Another test to do, if you can afford it (i.e. is it parallelizable?): try moving the parallelization into the inner loops.
You can even nest the parallelization; it sometimes gives a nice speedup. Try and tune step by step, and take time to see what gives you the best output. Just remember that these tweaks are often not portable across different architectures, so aim for a good tradeoff between performance and code reusability.
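For example, a guided schedule with a chunk ("granularity block") of 8 outer iterations; the value 8 is arbitrary and only for illustration:
#pragma omp parallel for schedule(guided, 8)
for (int i = 0; i < N1; ++i)
{
    // ...
}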
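A hedged sketch of what nested parallelism could look like here; nesting must be enabled explicitly (omp_set_nested() on older runtimes, omp_set_max_active_levels(2) on newer ones), and whether the inner region actually runs in parallel, and whether it helps, depends on the runtime and the machine:
omp_set_nested(1);                      // allow the inner parallel region to spawn threads
#pragma omp parallel for schedule(dynamic)
for (int i = 0; i < N1; ++i)
{
    if (toExecute[i])
    {
        #pragma omp parallel for
        for (int j = 0; j < N2; ++j)
        {
            for (int k = x; k <= y; ++k)
            {
                // calculation
            }
        }
    }
}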