Provide a thread-private preallocated buffer to a parallelized for() loop? - c++

My program contains a for() loop that processes some raw image data, line by line, which I want to parallelize using OpenMP like this:
...
#if defined(_OPENMP)
int const threads = 8;
omp_set_num_threads( threads );
omp_set_dynamic( threads );
#endif
int line = 0;
#pragma omp parallel private( line )
{
    // tell the compiler to parallelize the next for() loop using static
    // scheduling (i.e. balance workload evenly among threads),
    // while letting each thread process exactly one line in a single run
    #pragma omp for schedule( static, 1 )
    for( line = 0; line < max; ++line ) {
        // some processing-heavy code in need of a buffer
    }
} // end of parallel section
....
The question is this:
Is it possible to provide an individual (preallocated) buffer (pointer) to each thread of the team executing my loop using a standard OpenMP pragma/function (thus eliminating the need to allocate a fresh buffer with each loop)?
Thanks in advance.
Bjoern

I may be understanding you wrong, but I think this should do it:
#pragma omp parallel
{
    unsigned char buffer[1024]; // private to each thread
    // while letting each thread process exactly one line in a single run
    #pragma omp for // ... etc
    for( int line = 0; line < max; ++line ) {
        //...
    }
}
If you really meant that you want to share the same buffer across different parallel blocks, you'll have to resort to thread-local storage. (Boost as well as C++11 have facilities for making that easier to do, and more portably, than directly using TlsAlloc and friends.)
Note that this approach places some of the thread-safety burden back on the programmer, because it is perfectly possible to have different omp parallel sections running at the same time, especially when they are nested.
Consider that parallel blocks can end up nested at runtime even though they are not lexically nested. In practice that is usually not good style and often results in poor performance, but it is something you need to be aware of when doing this.
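A minimal sketch of that idea using C++11 thread_local (the function name and buffer size are made up for illustration; this assumes the OpenMP runtime keeps its worker threads alive between regions, which mainstream implementations do):
#include <vector>

// One buffer per thread, created lazily on first use and reused across
// every parallel region that the thread later participates in.
thread_local std::vector<unsigned char> tls_buffer;

void process_lines(unsigned char* image, int max, int line_size)
{
    #pragma omp parallel for schedule( static, 1 )
    for( int line = 0; line < max; ++line )
    {
        if( tls_buffer.size() < static_cast<size_t>(line_size) )
            tls_buffer.resize( line_size );   // allocates at most once per thread
        unsigned char* buf = tls_buffer.data();
        // ... processing-heavy code for image + line * line_size using buf ...
    }
}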

There is threadprivate: http://msdn.microsoft.com/en-us/library/2z1788dd
static int buffer[BUFSIZE];
#pragma omp threadprivate(buffer)
This pragma works on a global/static variable, so you don't need to worry about stack overflow. (If stack size ever becomes a concern, it's not a bad idea to increase it by tweaking the linker options.)
Note that compilers may have different implementation details for threadprivate. For example, the VS 2010 compiler can't make a variable threadprivate if it has a constructor, whereas the Intel C/C++ compiler handles this case fine.
Using separate omp parallel and omp for directives is also a good idea, as sehe showed. However, using threadprivate allows you to use omp parallel for directly.
FYI: Even if you need to allocate your own thread-local storage, in many cases you don't actually need to call an OS-specific function such as TlsAlloc. You may simply allocate an array of N data structures and access them using omp_get_thread_num, which gives a thread ID from 0 to N-1. Of course, you must consider false sharing: insert padding so that each data structure sits on its own cache line (most modern CPUs have 64-byte cache lines).
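A minimal sketch of that scheme (the buffer size and the 64-thread limit are arbitrary choices for the example):
#include <omp.h>

// Pad and align each per-thread buffer to a 64-byte cache line so that
// neighbouring entries never share a line (avoids false sharing).
struct alignas(64) PerThreadBuffer {
    unsigned char data[4096];          // buffer size is arbitrary for this sketch
};

static PerThreadBuffer buffers[64];    // enough slots for up to 64 threads

void process_lines(int max)
{
    #pragma omp parallel for schedule( static, 1 )
    for( int line = 0; line < max; ++line )
    {
        unsigned char* buf = buffers[ omp_get_thread_num() ].data;
        // ... processing-heavy code writing into buf ...
    }
}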

Related

Performance issues of multiple independent for loop with openMp

I am planning to use OpenMP threads for an intense computation. However, I couldn't achieve the performance I expected on my first try. I think there are several issues with it, but I am not sure yet. Generally, I think the performance bottleneck comes from the fork-and-join model. Can you help me in some way?
First, in a routine cycle running on a consumer thread, there are two independent for loops plus some additional functions. The functions are located at the end of the routine cycle and between the for loops, as shown below:
void routineFunction(short* xs, float* xf, float* yf, float* h)
{
    // Casting
    #pragma omp parallel for
    for (int n = 0; n < 1024*1024; n++)
    {
        xf[n] = (float)xs[n];
    }

    memset(yf, 0, 1024*1024*sizeof( float ));

    // Filtering
    #pragma omp parallel for
    for (int n = 0; n < 1024*1024-1024; n++)
    {
        for (int nn = 0; nn < 1024; nn++)
        {
            yf[n] += xf[n+nn]*h[nn];
        }
    }

    status = DftiComputeBackward(hand, yf, yf); // Compute backward transform
}
Note: This code cannot be compiled as-is; I have simplified it for readability by leaving out details.
The OpenMP thread count is set to 8 dynamically. I observed the threads in use in the Windows Task Manager. Although the number of threads in use increased significantly, I didn't observe any performance improvement. I have some guesses, but I still want to discuss them with you for further implementations.
My questions are these.
Does the fork-and-join model correspond to thread creation and destruction? Is the cost the same for the software?
Once routineFunction is called by the consumer, do the OpenMP threads fork and join every time?
During the execution of routineFunction, do the OpenMP threads fork and join at each for loop, or does the compiler let the second loop reuse the existing threads? If the for loops cause two forks and joins, how should I rearrange the code? Is combining the two loops into a single loop sensible for performance, or is using a parallel region (#pragma omp parallel) together with #pragma omp for (not #pragma omp parallel for) the better choice for sharing the work? My concern is that this would force static scheduling on me via thread IDs and thread counts. According to the document at page 34, static scheduling can cause load imbalance. I am actually familiar with static scheduling from CUDA programming, but I still want to avoid it if there is any performance issue. I also read an answer on Stack Overflow, written by Alexey Kukanov, whose last paragraph points out that smart OpenMP implementations do not join the master thread after a parallel region is completed. How can I use OpenMP's busy-wait and sleep settings to avoid joining the master thread after the first loop is completed?
Is there another reason for the performance issue in this code?
This is mostly memory-bound code. Its performance and scalability are limited by the amount of data the memory channel can transfer per unit time. xf and yf take 8 MiB in total, which fits in the L3 cache of most server-grade CPUs but not of most desktop or laptop CPUs. If two or three threads are already able to saturate the memory bandwidth, adding more threads is not going to bring additional performance. Also, casting short to float is a relatively expensive operation - 4 to 5 cycles on modern CPUs.
Does the fork-and-join model correspond to thread creation and destruction? Is the cost the same for the software?
Once routineFunction is called by the consumer, do the OpenMP threads fork and join every time?
No, basically all OpenMP runtimes, including that of MSVC++, implement parallel regions using thread pools, as this is the easiest way to satisfy the requirement of the OpenMP specification that threadprivate variables retain their values between different parallel regions. Only the very first parallel region pays the full cost of starting new threads. Subsequent regions reuse those threads, and an additional price is paid only if more threads are needed than in any of the previously executed parallel regions. There is still some overhead, but it is far lower than that of starting new threads each time.
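A quick way to see this in practice is to time a few consecutive empty parallel regions; with a pool-based runtime only the first one pays the thread-startup cost. This is a minimal, self-contained sketch, and the actual timings will of course vary by machine and runtime:
#include <cstdio>
#include <omp.h>

int main()
{
    for (int rep = 0; rep < 3; ++rep)
    {
        double t0 = omp_get_wtime();
        #pragma omp parallel
        {
            // empty region: only the fork/join overhead is measured
        }
        std::printf("region %d took %f s\n", rep, omp_get_wtime() - t0);
    }
    return 0;
}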
During the execution of routineFunction, do the OpenMP threads fork and join at each for loop, or does the compiler let the second loop reuse the existing threads?
Yes, in your case two separate parallel regions are created. You can manually merge them into one:
#pragma omp parallel
{
    #pragma omp for
    for (int n = 0; n < 1024*1024; n++)
    {
        xf[n] = (float)xs[n];
    }

    #pragma omp single
    {
        memset(yf, 0, 1024*1024*sizeof( float ));
        //
        // Other code that was between the two parallel regions
        //
    }

    // Filtering
    #pragma omp for
    for (int n = 0; n < 1024*1024-1024; n++)
    {
        for (int nn = 0; nn < 1024; nn++)
        {
            yf[n] += xf[n+nn]*h[nn];
        }
    }
}
Is there another reason for the performance issue in this code?
It is memory-bound, or at least the two loops shown here are.
Alright, it's been a while since I did OpenMP stuff so hopefully I didn't mess any of this up... but here goes.
Forking and joining is the same thing as creating and destroying threads. How the cost compares to other threads (such as a C++11 thread) will be implementation dependent. I believe in general OpenMP threads might be slightly lighter-weight than C++11 threads, but I'm not 100% sure about that. You'd have to do some testing.
Currently, each time routineFunction is called you will fork for the first for loop, join, do a memset, fork for the second loop, join, and then call DftiComputeBackward.
You would be better off creating a parallel region as you stated. Not sure why the scheduling is an extra concern. It should be as easy as moving your memset to the top of the function, starting a parallel region using your noted command, and making sure each for loop is marked with #pragma omp for as you mentioned. You may need to put an explicit #pragma omp barrier in between the two for loops to make sure all threads finish the first for loop before starting the second... OpenMP has some implicit barriers but I forgot if #pragma omp for has one or not.
Make sure that the OpenMP compile flag is turned on for your compiler. If it isn't, the pragmas will be ignored, it will compile, and nothing will be different.
Your operations are prime for SIMD acceleration. You might want to see if your compiler supports auto-vectorization and if it is doing it. If not, I'd look into SIMD a bit, perhaps using intrinsics.
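As an illustration of that idea, the inner dot product of the filtering loop can be marked for explicit vectorization. This is a sketch only, assuming an OpenMP 4.0+ compiler and the xf, yf, h arrays from the question:
#pragma omp parallel for
for (int n = 0; n < 1024*1024 - 1024; n++)
{
    float acc = 0.0f;
    #pragma omp simd reduction(+:acc)   // ask the compiler to vectorize the dot product
    for (int nn = 0; nn < 1024; nn++)
    {
        acc += xf[n+nn] * h[nn];
    }
    yf[n] = acc;
}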
How much time does DftiComputeBackward take relative to this code?

What is the best way to parallelise tasks sharing an object but otherwise independent?

I'm coding a physics simulation consisting mainly of a central loop of hundreds of billions of repetitions of operations on an array. These operations are independent from each other (well, actually the array changes along the way), so I'm thinking about parallelising my code so that I can run it on the 4- or 8-core computers in my lab.
It's my first time doing something like this, and I've been advised to look at OpenMP. I've started to code some toy programs with it, but I'm really unsure about how it works, and the documentation is quite cryptic to me. For example, the following code:
int a = 0;
#pragma omp parallel
{
    a++;
}
cout << a << endl;
launched on my computer (a 4-core CPU) sometimes gives me 4, other times 3 or 2. Is it because it doesn't wait for all the cores to execute the instructions? Because I definitely need to know how many iterations were done in my case. Should I look for something other than OpenMP, considering what I want in the end?
When writing concurrently to a shared variable (a in your code), you have a data race. To avoid different threads writing "simultaneously", you must either use an atomic assignment or protect the assignment with a mutex (= mutual exclusion). In OpenMP, the latter is done via a critical region
int a = 0;
#pragma omp parallel
{
    #pragma omp critical
    {
        a++;
    }
}
cout << a << endl;
(of course, this particular program does nothing in parallel, hence will be slower than a serial one doing the same).
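For reference, the atomic alternative mentioned above would look like the following minimal sketch; for a single scalar update, #pragma omp atomic is typically cheaper than a critical region:
int a = 0;
#pragma omp parallel
{
    #pragma omp atomic   // the increment is performed as one atomic update
    a++;
}
cout << a << endl;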
For more info, read the openMP documentation! However, I would advise you to not use OpenMP, but TBB if you're using C++. It's much more flexible.
What you are seeing is a typical example of a race condition. Four threads are trying to increment variable a and they are fighting for it. Some 'lose' the race and their increments are overwritten, so you see a result lower than 4.
What happens is that the a++ statement is actually a set of three instructions: read a from memory and put it in a register, increment the value in the register, then put the value back in memory. If thread 1 reads the value of a after thread 2 has read it but before thread 2 has written the new value back to a, the increment operation of thread 2 will be overwritten. Using #pragma omp critical is a way to ensure that the read/increment/write sequence is not interrupted by another thread.
If you need to parallelize iterations, you can use omp parallel for, for instance to increment all the elements in an array.
Typical use:
#pragma omp parallel for
for (int i = 0; i < N; i++)
    a[i]++;

Shared vectors in OpenMP

I am trying to parallelize a program I am using and have the following question.
Will I lose performance if multiple threads need to read/write the same vector, but different elements of it? I have the feeling that's the reason my program hardly gets any faster upon parallelizing it. Take the following code:
#include <cmath>
#include <vector>

using std::vector;

int main(){
    vector<double> numbers;
    vector<double> results(10);
    double x;

    // write 10 values into vector numbers
    for (int i = 0; i < 10; i++){
        numbers.push_back(cos(i));
    }

    #pragma omp parallel for \
        private(x) \
        shared(numbers, results)
    for (int j = 0; j < 10; j++){
        x = 2 * numbers[j] + 5;
        #pragma omp critical // do I need this ?
        {
            results[j] = x;
        }
    }
    return 0;
}
Obviously the actual program does far more expensive operations, but this example is only meant to illustrate my question. So can the for loop run completely in parallel, or do the different threads have to wait for each other because only one thread at a time can access the vector numbers, for instance, even though they are all reading different elements of it?
Same question for the write operation: do I need the critical pragma, or is it no problem since every thread writes into a different element of the vector results?
I am happy with any help I can get, and it would also be good to know whether there is a better way to do this (maybe not using vectors at all, but plain arrays and pointers, etc.?)
I also read vectors aren't thread safe in certain cases and it is recommended to use a pointer: OpenMP and STL vector
Thanks a lot for your help!
I imagine that most of the issues with vectors across multiple threads arise when the vector has to resize: it then copies its entire contents into a new, larger chunk of memory, and if you are accessing it in parallel at that moment you may end up reading an object that has just been deleted.
If you are not resizing your vector, then I have never had any trouble with concurrent reads and writes into it (obviously as long as I'm not writing to the same element twice).
As for the lack of a performance boost, the OpenMP critical section will slow your program down to probably about the same speed as just using one thread (depending on how much is actually done outside that critical section).
You can remove the critical section statement (with the conditions above in mind).
You get no speedup precisely because of the critical section, which is superfluous, since the same elements will never be modified at the same time. Remove the critical section piece and it will work just fine.
You can play with the schedule strategy as well, because if memory access is not linear (it is in the example you gave), threads might fight over cache (writing elements in the same cache line). OTOH, if the number of iterations is known as in your case and there is no branching in the loop (so the iterations will execute at about the same speed), static, which is IIRC the default, should work the best anyway.
(BTW, you can declare x inside the loop to avoid private(x), and the shared clause is implied IIRC (I never use it).)
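Putting those suggestions together, a minimal sketch of the questioner's loop without the critical section and with x declared inside the loop might look like this:
#include <cmath>
#include <vector>

int main(){
    std::vector<double> numbers;
    std::vector<double> results(10);

    for (int i = 0; i < 10; i++){
        numbers.push_back(cos(i));
    }

    #pragma omp parallel for
    for (int j = 0; j < 10; j++){
        double x = 2 * numbers[j] + 5;   // x is private because it is declared inside the loop
        results[j] = x;                  // each iteration writes a distinct element: no critical needed
    }
    return 0;
}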

C++ OpenMP directives

I have a loop that I'm trying to parallelize, and in it I am filling a container, say an STL map. Consider then the simple pseudocode below, where T1 and T2 are some arbitrary types, while f and g are some functions of an integer argument, returning T1 and T2 types respectively:
#pragma omp parallel for schedule(static) private(i) shared(c)
for (i = 0; i < N; ++i) {
    c.insert(std::make_pair<T1,T2>(f(i), g(i)));
}
This looks rather straightforward and seems like it should be trivially parallelized, but it doesn't speed up as I expected. On the contrary, it leads to run-time errors in my code due to unexpected values being filled in the container, likely due to race conditions. I've even tried putting barriers and what-not, but all to no avail. The only thing that allows it to work is to use a critical directive as below:
#pragma omp parallel for schedule(static) private(i) shared(c)
for (i = 0; i < N; ++i) {
    #pragma omp critical
    {
        c.insert(std::make_pair<T1,T2>(f(i), g(i)));
    }
}
But this rather defeats the whole point of using omp in the above example, since only one thread at a time is executing the bulk of the loop (the container insert statement). What am I missing here? Short of changing the way the code is written, can somebody kindly explain?
This particular example you have is not a good candidate for parallelism unless f() and g() are extremely expensive function calls.
STL containers are not thread-safe. That's why you're getting the race conditions. So accessing them needs to be synchronized - which makes your insertion process inherently sequential.
As the other answer mentions, there's a LOT of overhead for parallelism. So unless f() and g() are extremely expensive, your loop doesn't do enough work to offset the overhead of parallelism.
Now assuming f() and g() are extremely expensive calls, then your loop can be parallelized like this:
#pragma omp parallel for schedule(static) private(i) shared(c)
for (i = 0; i < N; ++i) {
    std::pair<T1,T2> p = std::make_pair<T1,T2>(f(i), g(i));
    #pragma omp critical
    {
        c.insert(p);
    }
}
Running multithreaded code makes you think about thread safety and shared access to your variables. Once you start inserting into c from multiple threads, the collection must be prepared to handle such "simultaneous" calls and keep its data consistent; are you sure it was built that way?
Another thing is that parallelization has its own overhead, and you are not going to gain anything when you try to run a very small task on multiple threads - with the cost of splitting and synchronization you might end up with an even higher total execution time for the task.
c will obviously have data races, as you guessed. An STL map is not thread-safe. Calling its insert method concurrently from multiple threads has very unpredictable behavior, most likely just a crash.
Yes, to avoid the data races, you must have either (1) a mutex such as #pragma omp critical, or (2) a concurrent data structure (a.k.a. a lock-free data structure). However, not all data structures can be lock-free on current hardware. For example, TBB provides tbb::concurrent_hash_map. If you don't need ordering of the keys, you may use it and could get some speedup, as it does not use a conventional mutex.
In case you can use just a hash table and the table is very large, you could take a reduction-like approach (see this link for the concept of reduction). Hash tables do not care about the ordering of the insertion. In this case, you allocate one hash table per thread and let each thread insert N/#threads items in parallel, which will give a speedup. Lookups can also easily be done by accessing these tables in parallel.
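A minimal sketch of that reduction-like approach, using std::unordered_map; the key/value types and the stand-in f() and g() are illustrative, not taken from the question:
#include <unordered_map>
#include <vector>
#include <omp.h>

static int    f(int i) { return i; }        // stand-ins for the expensive
static double g(int i) { return i * 0.5; }  // f() and g() of the question

std::unordered_map<int, double> parallel_fill(int N)
{
    std::vector<std::unordered_map<int, double>> partial(omp_get_max_threads());

    #pragma omp parallel
    {
        // Each thread fills its own private table: no locking needed.
        std::unordered_map<int, double>& local = partial[omp_get_thread_num()];
        #pragma omp for
        for (int i = 0; i < N; ++i)
            local.emplace(f(i), g(i));
    }

    // Serial merge of the per-thread tables; a hash table does not care
    // about insertion order.
    std::unordered_map<int, double> result;
    for (const auto& m : partial)
        result.insert(m.begin(), m.end());
    return result;
}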

Concurrency and optimization using OpenMP

I'm learning OpenMP. To do so, I'm trying to parallelize an existing piece of code. But I seem to get a worse time when using OpenMP than when I don't.
My inner loop:
#pragma omp parallel for
for (unsigned long j = 0; j < c_numberOfElements; ++j)
{
    //int th_id = omp_get_thread_num();
    //printf("thread %d, j = %d\n", th_id, (int)j);

    Point3D current;
    #pragma omp critical
    {
        current = _points[j];
    }

    Point3D next = getNext(current);
    if (!hasConstraint(next))
    {
        continue;
    }

    #pragma omp critical
    {
        _points[j] = next;
    }
}
_points is a pointMap_t, defined as:
typedef boost::unordered_map<unsigned long, Point3D> pointMap_t;
Without OpenMP my running time is 44.904s. With OpenMP enabled, on a computer with two cores, it is 64.224s. What I am doing wrong?
Why have you wrapped your reads and writes to _points[j] in critical sections? I'm not much of a C++ programmer, but it doesn't look to me as if you need those sections at all. As you've written it (unnamed critical sections), each thread is going to wait while the other goes through each of the sections. This could easily make the program slower.
It seems possible that the lookup and write to _points in critical sections is dragging down the performance when you use OpenMP. Single-threaded, this will not result in any contention.
Sharing seed data like this seems counterproductive in a parallel programming context. Can you restructure to avoid these contention points?
You need to show the rest of the code. From a comment to another answer, it seems you are using a map. That is really a bad idea, especially if you are mapping 0..n numbers to values: why don't you use an array?
If you really need to use containers, consider using the ones from the Intel's Thread Building Blocks library.
I agree that it would be best to see some working code.
The ultimate issue here is that there are criticals within a parallel region, and criticals are (a) enormously expensive in and of themselves, and (b) by definition, kill parallelism. The assignment to current certainly doesn't need to be inside a critical, as it is private; I wouldn't have thought the _points[j] assignment would be, either, but I don't know what the map stuff does, so there you go.
But you have a loop in which you have a huge amount of overhead, which grows linearly in the number of threads (the two critical regions) in order to do a tiny amount of actual work (walk along a linked list, it looks like). That's never going to be a good trade...
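Following the suggestions above, here is a sketch of what the loop might look like if the boost::unordered_map were replaced by a plain std::vector indexed by j; Point3D, getNext and hasConstraint are taken from the question, and the container change itself is an assumption:
#include <vector>

void iterate(std::vector<Point3D>& points)
{
    #pragma omp parallel for
    for (long j = 0; j < static_cast<long>(points.size()); ++j)
    {
        Point3D current = points[j];     // local copy, so no critical needed
        Point3D next = getNext(current);
        if (hasConstraint(next))
            points[j] = next;            // each iteration writes a distinct element
    }
}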