Share scoped variable in OpenMP - C++

I'm working with OpenMP and would like to share a variable that's declared inside a scoped block between threads. Here's the overall idea of what I'm doing:
#pragma omp parallel
{
    // ...parallel code...
    {
        uint8_t* pixels;
        int pitch;
        #pragma omp barrier
        #pragma omp master
        {
            // SDL video code must be run in main thread
            SDL_LockTexture(renderTexture.get(), nullptr, (void**)&pixels, &pitch);
        }
        #pragma omp barrier
        // parallel code that reads `pixels` and `pitch` and writes to texture
        #pragma omp barrier
        #pragma omp master
        {
            // Once done, main thread must do SDL call again (this will upload texture to GPU)
            SDL_UnlockTexture(renderTexture.get());
        }
    }
}
When compiled as is, pixels and pitch are thread-private and set only in the main thread, leading to a segfault. Is there a way to share those variables without widening their scope (declaring them before #pragma omp parallel) and without needlessly joining and re-creating threads (leaving the parallel region and entering another #pragma omp parallel block)?

One way to overcome this problem is to use OpenMP tasks. Here is an example:
#pragma omp parallel
{
    // ...parallel code...
    // May not be needed
    #pragma omp barrier
    #pragma omp master
    {
        uint8_t* pixels;
        int pitch;
        // SDL video code must be run in main thread
        SDL_LockTexture(renderTexture.get(), nullptr, (void**)&pixels, &pitch);
        // Note that the variables are firstprivate by default for taskloops
        // and that shared variables must be explicitly listed as shared here
        // (as opposed to an omp for).
        #pragma omp taskloop collapse(2) firstprivate(pixels, pitch)
        for(int y=0 ; y<height ; ++y)
        {
            for(int x=0 ; x<width ; ++x)
            {
                // Code reading `pixels` and `pitch` and writing into texture
            }
        }
        // Once done, main thread must do SDL call again (this will upload texture to GPU)
        SDL_UnlockTexture(renderTexture.get());
    }
    // May not be needed
    #pragma omp barrier
}
This task-based implementation benefits from requiring fewer synchronizations, which are costly on many-core systems.
Another possible alternative is to use pointers to share the value of a private variable with other threads. However, this approach requires some shared variables to be declared outside the parallel section, which may not be possible in your case.
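For completeness, here is a minimal sketch of that pointer-based alternative, assuming a shared pointer declared before the parallel region is acceptable (shared_pixels and shared_pitch are illustrative names, not part of the original code):
uint8_t* shared_pixels = nullptr;  // shared: declared before the parallel region
int shared_pitch = 0;
#pragma omp parallel
{
    #pragma omp master
    {
        uint8_t* pixels;
        int pitch;
        SDL_LockTexture(renderTexture.get(), nullptr, (void**)&pixels, &pitch);
        shared_pixels = pixels;  // publish the master thread's private values
        shared_pitch = pitch;
    }
    #pragma omp barrier  // after this, every thread sees shared_pixels/shared_pitch
    // ...parallel code reading shared_pixels and shared_pitch...
}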

Related

In OpenMP how can we run in parallel multiple code blocks where each block contains omp single and omp for loops?

In C++ OpenMP, how could someone run multiple code blocks in parallel, where each block contains omp single and omp for loops?
More precisely, I have 3 functions:
block1();
block2();
block3();
I want each of these 3 functions to run in parallel. However, I do not want each of these functions to be assigned a single thread. If I wanted each of them to use a single thread, I could enclose them in three "#pragma omp single nowait" sections followed by a "#pragma omp barrier" at the end. Instead, each of these three functions may look something like this:
#pragma omp single
{
    //some code here
}
#pragma omp for nowait
for(std::size_t i=0;i<numloops;i++)
{
    //some code here
}
Notice in the above code that I need an omp single region to be executed before each parallel for loop. If I did not have this constraint, I could have simply added a "nowait" to the "omp single". Instead, because the "omp single" has no "nowait", I do not want block2() to have to wait for the "omp single" region in block1() to complete, nor block3() to wait for the "omp single" region in block2(). Any ideas? Thanks
The best solution is to use tasks. Run each block() in a different task, so they run in parallel:
#pragma omp parallel
#pragma omp single nowait
{
    #pragma omp task
    block1();
    #pragma omp task
    block2();
    #pragma omp task
    block3();
}
In block() you can put the code that must execute before the for loop, and then use taskloop to distribute the loop's work among the available threads.
void block1()
{
    //single-thread code here
    {
        //.... this code runs before the loop and independent of block2 and block3
    }
    #pragma omp taskloop
    for(std::size_t i=0;i<numloops;i++)
    {
        //some code here - this is distributed among the remaining threads
    }
}

Optimize loop with OpenMP

I've got the following loop:
while (a != b) {
    #pragma omp parallel
    {
        #pragma omp for
        // first for
        #pragma omp for
        // second for
    }
}
This way, the team is created on each iteration of the while loop. Is it possible to rearrange the code so that a single team is used? The variable "a" is accessed with omp atomic inside the loop, and "b" is a constant.
The only thing that comes to my mind is something like this:
#pragma omp parallel
{
    while (a != b) {
        #pragma omp barrier
        // This barrier ensures that threads
        // wait for each other after evaluating the condition
        // in the while loop
        #pragma omp for
        // first for (implicit barrier)
        #pragma omp for
        // second for (implicit barrier)
        // The second implicit barrier ensures that every
        // thread will have the same view of a
    } // while
} // omp parallel
In this way each thread will evaluate the condition, but every evaluation will be consistent with the others. If you really want a single thread to evaluate the condition, then you should think of transforming your worksharing constructs into task constructs.
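As a hedged sketch of that task-based direction (the loop bound n and the loop bodies are placeholders): a single thread evaluates the condition and distributes each for with taskloop, which by default waits for its child tasks before continuing, so only that thread ever re-evaluates a != b:
#pragma omp parallel
#pragma omp single
{
    while (a != b) {
        #pragma omp taskloop
        for (int i = 0; i < n; ++i) {
            // first for body
        }
        // taskloop waits for its child tasks here (no nogroup clause)
        #pragma omp taskloop
        for (int i = 0; i < n; ++i) {
            // second for body
        }
    }
}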

OpenMP tasks passing "shared" pointers

I would like to use the task pragmas of OpenMP for the following code:
std::vector<Class*> myVectorClass;
#pragma omp parallel
{
    #pragma omp single nowait
    {
        for (std::list<Class*>::iterator it = myClass.begin(); it != myClass.end();) {
            #pragma omp task firstprivate(it)
            (*it)->function(t, myVectorClass);
            ++it;
        }
    }
    #pragma omp taskwait
}
The problem, or one of them, is that myVectorClass is a pointer to an object, so it is not possible to set this vector as shared. myVectorClass is modified by the function. The previous code crashes. Could you tell me how to modify the previous code (without using the for-loop pragmas)?
Thanks
myVectorClass is a vector of pointers. In your current code it is shared. Since your code crashes, I suppose you change the length of myVectorClass in function(). However, std::vector is not thread-safe, so modifying its length from multiple threads will corrupt its data structure.
Depending on what exactly function() does, there may be simple solutions. The basic idea is to use one thread-local vector per thread to collect the results of function() first, then concatenate/merge these vectors into a single one. The code from C++ OpenMP Parallel For Loop - Alternatives to std::vector gives a good example:
std::vector<int> vec;
#pragma omp parallel
{
    std::vector<int> vec_private;
    #pragma omp for nowait //fill vec_private in parallel
    for(int i=0; i<100; i++) {
        vec_private.push_back(i);
    }
    #pragma omp critical
    vec.insert(vec.end(), vec_private.begin(), vec_private.end());
}
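Since the question asks to avoid the for-loop pragmas, the same per-thread-buffer idea can be adapted to tasks. A hedged sketch (the int payload stands in for your Class* results; tied tasks, the default, never migrate between threads, so indexing by omp_get_thread_num() is race-free):
#include <omp.h>
#include <vector>

std::vector<int> vec;
std::vector<std::vector<int>> partial(omp_get_max_threads()); // one buffer per thread
#pragma omp parallel
{
    #pragma omp single
    {
        for (int i = 0; i < 100; i++) {
            #pragma omp task firstprivate(i)
            partial[omp_get_thread_num()].push_back(i); // each task writes only its executing thread's buffer
        }
    } // the implicit barrier here also guarantees all tasks have completed
}
for (const auto& p : partial) // sequential merge, no synchronization needed
    vec.insert(vec.end(), p.begin(), p.end());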

Creating threads within a multithreaded for loop using OpenMP

I am new to OpenMP and I am not able to create threads within each threaded loop iteration. My question may sound naive, please bear with me.
#pragma omp parallel private(a,b) shared(f)
{
    #pragma omp for
    for(...)
    {
        //some operations
        // I want to parallelize the following four lines
        // within the multithreaded for loop:
        int x=func1(a,b);
        int val1=validate(x);
        int y=func2(a,b);
        int val2=validate(y);
    }
}
Within the for loop, all threads are busy with loop iterations, so there are no resources left to execute anything inside an iteration in parallel. And if the work is well balanced, you won't gain any performance anyway.
If it is hard or impossible to balance the work well with a parallel for, you can try generating tasks within the loop and doing the work afterwards. But be aware of the overhead of task generation:
#pragma omp parallel private(a,b) shared(f)
{
    #pragma omp for nowait
    for(...)
    {
        //some operations
        #pragma omp task
        {
            int x=func1(a,b);
            int val1=validate(x);
        }
        #pragma omp task
        {
            int y=func2(a,b);
            int val2=validate(y);
        }
    }
    // wait for all tasks to finish (implicit at the end of the parallel region, i.e. here)
    #pragma omp taskwait
}
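If that task-generation overhead turns out to matter, one hedged option is the task if() clause: when the condition is false, the task is undeferred and executed immediately by the encountering thread, skipping most of the scheduling cost. Here enough_work is an assumed heuristic, not part of the original code:
#pragma omp task if(enough_work)   // deferred only when the work is worth a task
{
    int x = func1(a, b);           // hypothetical use, mirroring the code above
    int val1 = validate(x);
}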

Elegantly initializing OpenMP threads in a parallel for loop

I have a for loop that uses a (somewhat complicated) counter object sp_ct to initialize an array. The serial code looks like
sp_ct.depos(0);
for(int p=0;p<size; p++, sp_ct.increment() ) {
    in[p]=sp_ct.parable_at_basis();
}
My counter supports parallelization because it can be initialized to the state after p increments, leading to the following working code fragment:
int firstloop=-1;
#pragma omp parallel for \
    default(none) shared(size,in) firstprivate(sp_ct,firstloop)
for(int p=0;p<size;p++) {
    if( firstloop == -1 ) {
        sp_ct.depos(p); firstloop=0;
    } else {
        sp_ct.increment();
    }
    in[p]=sp_ct.parable_at_basis();
} // end omp parallel for
I dislike this because of the clutter that obscures what is really going on, and because it has an unnecessary branch inside the loop (yes, I know this is unlikely to have a measurable influence on running time because the branch is so predictable...).
I would prefer to write something like
#pragma omp parallel for default(none) shared(size,in) firstprivate(sp_ct)
for(int p=0;p<size;p++) {
    #pragma omp initialize // or something
    { sp_ct.depos(p); }
    in[p]=sp_ct.parable_at_basis();
    sp_ct.increment();
} // end omp parallel for
Is this possible?
If I generalize your problem, the question is "How do I execute some initialization code once per thread of a parallel section?", is that right? You can use a property of the firstprivate clause: "the initialization or construction of the given variable happens as if it were done once per thread, prior to the thread's execution of the construct".
struct thread_initializer
{
    explicit thread_initializer(
        int size /*initialization params*/) : size_(size) {}
    //Copy constructor that does the init
    thread_initializer(thread_initializer& _it) : size_(_it.size_)
    {
        //Here goes the once-per-thread initialization
        for(int p=0;p<size_;p++)
            sp_ct.depos(p);
    }
    int size_;
    scp_type sp_ct;
};
Then the loop may be written:
thread_initializer init(size);
#pragma omp parallel for \
    default(none) shared(size,in) firstprivate(init)
for(int p=0;p<size;p++) {
    in[p]=init.sp_ct.parable_at_basis();
    init.sp_ct.increment();
}
The bad things are that you have to write this extra initializer and that some code is moved away from its actual execution point. The good thing is that you can reuse it, as well as the cleaner loop syntax.
From what I can tell, you can do this by manually defining the chunks. This looks somewhat like what I was trying to do with induction in Induction with OpenMP: getting range values for a parallelized for loop in OpenMP. So you probably want something like this:
#pragma omp parallel
{
    const int nthreads = omp_get_num_threads();
    const int ithread = omp_get_thread_num();
    const int start = ithread*size/nthreads;
    const int finish = (ithread+1)*size/nthreads;
    Counter_class_name sp_ct;
    sp_ct.depos(start);
    for(int p=start; p<finish; p++, sp_ct.increment()) {
        in[p]=sp_ct.parable_at_basis();
    }
}
Notice that, except for some declarations and the changed range values, this code is almost identical to the serial code.
Also, you don't have to declare anything shared or private: everything declared inside the parallel block is private, and everything declared outside is shared. You don't need firstprivate either. This makes the code cleaner and clearer (IMHO).
I see what you're trying to do, and I don't think it is possible. I'm just going to write some code that I believe would achieve the same thing, and is somewhat clean, and if you like it, sweet!
sp_ct.depos(0);
in[0]=sp_ct.parable_at_basis();
#pragma omp parallel for \
    default(none) shared(size,in) firstprivate(sp_ct)
for(int p = 1; p < size; p++) {
    sp_ct.increment();
    in[p]=sp_ct.parable_at_basis();
} // end omp parallel for
Riko, implement sp_ct.depos() so that it invokes .increment() only as often as necessary to bring the counter to the passed parameter. Then you can use this code:
sp_ct.depos(0);
#pragma omp parallel for \
    default(none) shared(size,in) firstprivate(sp_ct)
for(int p=0;p<size;p++) {
    sp_ct.depos(p);
    in[p]=sp_ct.parable_at_basis();
} // end omp parallel for
This solution has one additional benefit: your implementation only works if each thread receives exactly one contiguous chunk of 0..size, which is the case when specifying schedule(static) and omitting the chunk size (OpenMP 4.0 specification, chapter 2.7.1, page 57). Since you did not specify a schedule, the schedule used is implementation defined (OpenMP 4.0 specification, chapter 2.3.2). If the implementation chooses dynamic or guided, threads will receive multiple chunks with gaps between them: one thread could receive chunk 0-20 and then chunk 70-90, which would make p and sp_ct fall out of sync on the second chunk. The solution above is compatible with all schedules.
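To make that scheduling requirement explicit rather than implementation dependent, the original firstloop version could request static scheduling itself. A sketch mirroring the question's code, with only the schedule clause added:
int firstloop = -1;
#pragma omp parallel for schedule(static) \
    default(none) shared(size,in) firstprivate(sp_ct,firstloop)
for(int p=0;p<size;p++) {
    if( firstloop == -1 ) {
        sp_ct.depos(p); firstloop=0; // first iteration of this thread's single contiguous chunk
    } else {
        sp_ct.increment();
    }
    in[p]=sp_ct.parable_at_basis();
}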