Spanning OpenMP parallel regions across multiple functions/objects - C++

Is there a way to span an OpenMP parallel region across multiple functions?
void run()
{
    omp_set_num_threads(2);
    #pragma omp parallel
    {
        foo();
        #pragma omp for
        for (int i = 0; i < 10; ++i)
        {
            // Do stuff here
        }
    }
}
void foo()
{
    #pragma omp for
    for (int j = 0; j < 10; ++j)
    {
        // Have this code be run as a worksharing loop by the OpenMP threads
        // spawned in run()
    }
}
In this example, I want the threads started in the omp parallel region in run to enter foo and execute its loop as a worksharing loop, the same way they execute the for loop in run. Is this what happens by default, or does each thread run the loop independently? How do you test for each case?
In my example, foo and run are member functions of separate classes.
Thanks!

What you describe is exactly how OpenMP works. An #pragma omp for that appears in a function called from inside a parallel region is an orphaned worksharing construct: it binds to the innermost enclosing parallel region, so the threads spawned in run share the iterations of foo's loop instead of each thread running the whole loop.
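To test which behavior you get, here is a minimal sketch (my own, not from the question) that tags each iteration with its thread number. With the worksharing binding, each index is printed exactly once, split between the two threads; if each thread ran the loop independently, you would see every index printed once per thread:

#include <cstdio>
#include <omp.h>

void foo()
{
    // Orphaned worksharing loop: binds to the parallel region in run().
    #pragma omp for
    for (int j = 0; j < 10; ++j)
    {
        printf("iteration %d run by thread %d\n", j, omp_get_thread_num());
    }
}

void run()
{
    omp_set_num_threads(2);
    #pragma omp parallel
    {
        foo();
    }
}

int main()
{
    run();
    return 0;
}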

Related

using omp threads inside function

I'm trying to play a little with OMP threads. In order to keep the main function clean, I want to use OMP threads inside a function which is called by the main function.
Here we have an example:
void func();

int main()
{
    func();
}

void func()
{
    #pragma omp parallel for
    for (int i = 0; i < 5; i++)
    {
        for (int j = 0; j < 5; j++)
        {
            doSomething();
        }
    }
}
With complex computations, when running, the function returns after thread 0 finishes while the other threads haven't finished yet. How can I delay the return until all threads finish?
Using a barrier inside the for loop is impossible, so I don't have another idea.

openmp tasks in nested loops

I am trying to write the following piece of code.
#pragma omp parallel
{
    int ...; // some variables
    for (int x : map) {
        int ...;
        #pragma omp single
        {
            #pragma omp task firstprivate(x, ...) depend(out: a)
            {
                // assign the variables some values
            }
            for (... /* loop over j */)
            {
                #pragma omp task firstprivate(j) depend(in: a) depend(out: b)
                {
                }
                // third loop over k
                #pragma omp task depend(in: a, b)
                {
                }
            }
        }
    }
}
Is this valid? The threads are getting formed, but they are not getting into either the loop over j (i.e. the second loop) or the third loop (I checked with print statements).
Please suggest how to correct this.
I tried printing inside the two loops and saw that nothing is printed, which implies the threads don't enter the loops at all. I was expecting the work to be distributed among the threads, but unfortunately I am unable to achieve this.
Minimal reproducible example (as asked for in the comments):
int a, b;
#pragma omp parallel
{
    #pragma omp single
    {
        for (auto &x : map)
        {
            #pragma omp task
            for (int i = 0; i < x.second; ++i)
            {
                vector<int> val = m2[i];
                for (int j = 0; j < val.size(); ++j)
                {
                    #pragma omp critical
                    {
                        // update a global map m3
                    }
                }
            }
        }
    }
}

OpenMP behavior - Nested Multithreading

My question pertains to nested parallelism and OpenMP. Let's start with the following single threaded code snippet:
void performAnotherTask() {
    // Do something here
}
void performTask() {
    // Do other stuff here
    for (size_t i = 0; i < 100; ++i) {
        performAnotherTask();
    }
}
int main() {
    for (size_t i = 0; i < 100; ++i) {
        performTask();
    }
    return 0;
}
Now let's say we want to make our calls to performAnotherTask in parallel utilizing OpenMP.
So we get the following code:
void performAnotherTask() {
    // Do something here
}
void performTask() {
    // Do other stuff here
    #pragma omp parallel for
    for (size_t i = 0; i < 100; ++i) {
        performAnotherTask();
    }
}
int main() {
    for (size_t i = 0; i < 100; ++i) {
        performTask();
    }
    return 0;
}
My understanding is that the calls to performAnotherTask will be performed in parallel, and by default OpenMP will try to use all available threads on your machine (perhaps this assumption is incorrect).
Let's say we now also want to parallelize the calls to performTask such that we get the following code:
void performAnotherTask() {
    // Do something here
}
void performTask() {
    // Do other stuff here
    #pragma omp parallel for
    for (size_t i = 0; i < 100; ++i) {
        performAnotherTask();
    }
}
int main() {
    #pragma omp parallel for
    for (size_t i = 0; i < 100; ++i) {
        performTask();
    }
    return 0;
}
How will this work? Will both the for loops still be multithreaded? Can we say anything on the number of threads each loop will use? Is there a way to enforce the inner for loop (within performTask) to only utilize a single thread while the outer for loop uses all available threads?
In your last example, the execution behavior depends on a few environment settings.
First, OpenMP does indeed support such patterns, but by default it disables parallel execution in nested parallel regions. To enable it, you must set OMP_NESTED=true or call omp_set_nested(1) in your code. (Since OpenMP 5.0, both are deprecated in favor of OMP_MAX_ACTIVE_LEVELS and omp_set_max_active_levels.)
void performAnotherTask() {
    // Do something here
}
void performTask() {
    // Do other stuff here
    #pragma omp parallel for
    for (size_t i = 0; i < 100; ++i) {
        performAnotherTask();
    }
}
int main() {
    omp_set_nested(1);
    #pragma omp parallel for
    for (size_t i = 0; i < 100; ++i) {
        performTask();
    }
    return 0;
}
Second, when OpenMP reaches the outer parallel region, it might grab all the available cores and assume it can execute a thread on each of them, so you may want to reduce the number of threads at the outer level, leaving some cores available for the nested regions. Say, if you have 32 cores, you could do this:
void performAnotherTask() {
    // Do something here
}
void performTask() {
    // Do other stuff here
    #pragma omp parallel for num_threads(8)
    for (size_t i = 0; i < 100; ++i) {
        performAnotherTask();
    }
}
int main() {
    omp_set_nested(1);
    #pragma omp parallel for num_threads(4)
    for (size_t i = 0; i < 100; ++i) {
        performTask();
    }
    return 0;
}
The outer parallel region will execute with 4 threads, each of which will execute the inner region with 8 threads. Note that each of the 4 outer threads will be the master thread of one of the four concurrently executing nested parallel regions. If you want to be more flexible, you can inject the number of threads to use for each level through the environment variable OMP_NUM_THREADS: setting OMP_NUM_THREADS=4,8 makes the first code snippet I posted behave the same as this one.
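If you want to verify what the runtime actually did, here is a minimal sketch of mine (not part of the original code, and assuming nesting support is enabled): it prints the nesting level and team size from inside the nested region, so with the settings above you should see four lines reporting level 2 and a team of 8 threads, one per outer thread:

#include <cstdio>
#include <omp.h>

int main() {
    omp_set_nested(1); // or omp_set_max_active_levels(2) with OpenMP 5.0
    #pragma omp parallel num_threads(4)
    {
        #pragma omp parallel num_threads(8)
        {
            // One thread per inner team reports its team's size and ancestry.
            #pragma omp single
            printf("level %d: team of %d threads, outer thread %d\n",
                   omp_get_level(), omp_get_num_threads(),
                   omp_get_ancestor_thread_num(1));
        }
    }
    return 0;
}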
The problem with this coding pattern is that you need to balance each level carefully so as not to overload the system or create load imbalances between the nested parallel regions. An alternative solution is to use OpenMP tasks instead:
void performAnotherTask() {
    // Do something here
}
void performTask() {
    // Do other stuff here
    #pragma omp taskloop
    for (size_t i = 0; i < 100; ++i) {
        performAnotherTask();
    }
}
int main() {
    // No omp_set_nested() needed: taskloop does not open nested parallel regions.
    #pragma omp parallel
    #pragma omp single
    #pragma omp taskloop
    for (size_t i = 0; i < 100; ++i) {
        performTask();
    }
    return 0;
}
Here each of the taskloop constructs will generate OpenMP tasks that are scheduled to execute on the threads created by the single parallel region in the code. One caveat: tasks are inherently dynamic in their behavior, so you might lose locality properties, as you do not know where exactly the tasks will execute in the system.
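A quick way to observe this dynamism (a sketch of mine, not from the answer) is to print the executing thread from inside the tasks; across runs the mapping of tasks to threads will generally differ:

#include <cstdio>
#include <omp.h>

int main() {
    #pragma omp parallel
    #pragma omp single
    #pragma omp taskloop
    for (int i = 0; i < 8; ++i) {
        // Each chunk of iterations becomes a task picked up by some team thread.
        printf("task for i=%d executed by thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}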

OpenMP On-Demand Nested Parallelism

I have a list of jobs, which I am processing in parallel with OpenMP:
void processAllJobs()
{
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        processJob(i);
}
All jobs have some sequential parts and parts that could be parallelized if called alone:
void processJob(int i)
{
    for (int iteration = 0; iteration < iterationCount; ++iteration)
    {
        doSomePreparation(i);
        std::vector<Subtask> subtasks = getSubtasks(i);
        #pragma omp parallel for
        for (int j = 0; j < subtasks.size(); ++j)
            subtasks[j].Process();
        doSomePostProcessing(i);
    }
}
When I run processAllJobs(), threads are created for the outer loop (over the jobs) and the inner loop (over the subtasks) is executed sequentially within each thread. This is all fine and intended.
Sometimes there are very large jobs that take a lot of time to process - long enough that all the other threads in the outer loop finish well before the last one and then sit idle. Is there a way to re-purpose the unused threads to parallelize the inner loop as soon as they become free? I imagine something that checks the number of unused threads each time the inner parallel region is entered.
I cannot predict how long a job runs. It might not only be one long-lasting job - maybe there are two or three.
Your description of the problem sounds like OpenMP tasking would be a much better choice. Your code would then look like this:
void processAllJobs()
{
    #pragma omp parallel master // combined construct; requires OpenMP 5.0
    for (int i = 0; i < n; ++i)
        #pragma omp task
        processJob(i);
}
Then the processing of the job would look like this:
void processJob(int i)
{
    for (int iteration = 0; iteration < iterationCount; ++iteration)
    {
        doSomePreparation(i);
        std::vector<Subtask> subtasks = getSubtasks(i);
        #pragma omp taskloop // add a grainsize() clause if Process() is very short
        for (int j = 0; j < subtasks.size(); ++j)
            subtasks[j].Process();
        doSomePostProcessing(i);
    }
}
That way you get natural load balancing (assuming that you have enough tasks) without having to rely on nested parallelism.
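For illustration, a minimal self-contained sketch of the grainsize() clause mentioned in the comment above; the loop body is a stand-in for a very short Process(), and the value 32 is an arbitrary starting point you would tune for your workload:

#include <cstdio>
#include <omp.h>

int main() {
    #pragma omp parallel
    #pragma omp single
    {
        // Each generated task receives at least 32 (and fewer than 64) iterations,
        // amortizing task-creation overhead when the loop body is very cheap.
        #pragma omp taskloop grainsize(32)
        for (int j = 0; j < 1000; ++j) {
            if (j % 100 == 0) // print only a sample of iterations
                printf("iteration %d on thread %d\n", j, omp_get_thread_num());
        }
    }
    return 0;
}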

OpenMP: having a complete 'for' loop into each thread

I have this code:
#pragma omp parallel
{
    #pragma omp single
    {
        for (int i = 0; i < given_number; ++i) myBuffer_1[i] = myObject_1->myFunction();
    }
    #pragma omp single
    {
        for (int i = 0; i < given_number; ++i) myBuffer_2[i] = myObject_2->myFunction();
    }
}
// and so on... up to 5 or 6 of myObject_x
// Then I sum up the buffers and do something with them
float result;
for (int i = 0; i < given_number; ++i)
    result = myBuffer_1[i] + myBuffer_2[i];
// do something with result
If I run this code I get what I expect, but the CPU usage looks quite high. If instead I run it normally without OpenMP, I get the same results with much lower CPU usage, despite running in a single thread.
I don't want to specify a number of threads; I want the program to pick the maximum number of threads according to the CPU capabilities, but each for loop should run entirely in its own thread. How can I do that?
Also, my expectation is that the for loop for myBuffer_1 runs in one thread, the other for loop runs in another thread, and the rest runs in the 'master' thread. Is this correct?
#pragma omp single has an implicit barrier at the end; you need to use #pragma omp single nowait if you want the two single blocks to run concurrently.
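For reference, a minimal sketch of that variant (my addition): with nowait on both single constructs, two different threads can pick up the two blocks and run them concurrently, and the implicit barrier at the end of the parallel region still guarantees both loops are finished before the summation:

#pragma omp parallel
{
    #pragma omp single nowait
    {
        for (int i = 0; i < given_number; ++i) myBuffer_1[i] = myObject_1->myFunction();
    }
    #pragma omp single nowait
    {
        for (int i = 0; i < given_number; ++i) myBuffer_2[i] = myObject_2->myFunction();
    }
} // implicit barrier of the parallel region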
However, for your requirement, using sections might be a better idea:
#pragma omp parallel
{
    #pragma omp sections
    {
        #pragma omp section
        {
            for (int i = 0; i < given_number; ++i) myBuffer_1[i] = myObject_1->myFunction();
        }
        #pragma omp section
        {
            for (int i = 0; i < given_number; ++i) myBuffer_2[i] = myObject_2->myFunction();
        }
    }
}