I am still confused: if I use the reduction clause in OpenMP, can false sharing happen? (Both code snippets give the correct result.)
A small example, where the maximum of an array is wanted:
#include <omp.h>
#include <limits>

double max_red(double *A, int N){
    // lowest() rather than min(): min() is the smallest *positive* double
    double mx = std::numeric_limits<double>::lowest();
    #pragma omp parallel for reduction(max:mx)
    for(int i=0; i<N; ++i){
        if(A[i]>mx) mx = A[i];
    }
    return mx;
}
This example can also be written with extra padding:
double max_padd(double *A, int N){
    omp_set_num_threads(NUM_THREADS);     // NUM_THREADS assumed to be defined elsewhere
    double local_max[NUM_THREADS][8];     // 8 doubles = 64 bytes of padding per thread
    double res;
    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        local_max[id][0] = std::numeric_limits<double>::lowest();
        #pragma omp for
        for(int i=0; i<N; ++i){
            if(A[i]>local_max[id][0]) local_max[id][0] = A[i];
        }
        #pragma omp single
        {
            res = local_max[0][0];
            for(int i=0; i<NUM_THREADS; ++i){
                if(local_max[i][0] > res) res = local_max[i][0];
            }
        }
    }
    return res;
}
But is the extra padding necessary to completely prevent false sharing, or is the reduction clause safe enough?
Thanks
The padding is not necessary.
Technically this is not mandated by the standard. The standard doesn't say where each thread's private copy is to be located in memory. Remember that false sharing is not a correctness issue, but a (very significant) practical performance issue.
However, it would be extremely surprising if any OpenMP implementation made such a rookie mistake and placed the private copies on the same cache line.
Assume that the implementation has a better understanding of the platform and its performance characteristics than the programmer. Only write manual "performance improvements" such as your second solution if you have proof by measurement that the idiomatic solution (e.g. your first solution) has bad performance that cannot be fixed by tuning.
Practical Note: I'm fairly sure implementations would typically place the private copy on the (private) stack of the executing thread, and afterwards each thread would update the shared variable with a critical section or atomically when supported.
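To make that description concrete, here is a hedged, hand-written sketch of what such an implementation strategy amounts to (illustrative only, not what any particular compiler actually emits):

#include <omp.h>
#include <limits>

// Rough hand-written equivalent of reduction(max:mx): each thread keeps a
// stack-local copy and merges it into the shared result exactly once.
double max_manual(const double *A, int N) {
    double mx = std::numeric_limits<double>::lowest();
    #pragma omp parallel
    {
        // the private copy lives on this thread's stack, so there is no false sharing
        double local = std::numeric_limits<double>::lowest();
        #pragma omp for nowait
        for (int i = 0; i < N; ++i)
            if (A[i] > local) local = A[i];
        // one combine per thread at the end
        #pragma omp critical
        if (local > mx) mx = local;
    }
    return mx;
}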
In theory, a logarithmic-time tree-based reduction is possible. However, implementations don't seem to do that, at least not in all cases.
I have the following piece of code.
for (i = 0; i < n; ++i) {
    ++cnt[offset[i]];
}
where offset is an array of size n containing values in the range [0, m) and cnt is an array of size m initialized to 0. I use OpenMP to parallelize it as follows.
#pragma omp parallel for shared(cnt, offset) private(i)
for (i = 0; i < n; ++i) {
    ++cnt[offset[i]];
}
According to the discussion in this post, if offset[i1] == offset[i2] for i1 != i2, the above piece of code may result in incorrect cnt. What can I do to avoid this?
This code:
#pragma omp parallel for shared(cnt, offset) private(i)
for (i = 0; i < n; ++i) {
    ++cnt[offset[i]];
}
contains a race condition during the updates of the array cnt. To solve it, you need to guarantee mutual exclusion of those updates. That can be achieved with (for instance) #pragma omp atomic update, but as already pointed out in the comments:
However, this resolves just correctness and may be terribly
inefficient due to heavy cache contention and synchronization needs
(including false sharing). The only solution then is to have each
thread its private copy of cnt and reduce these copies at the end.
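For reference, a minimal sketch of that atomic variant (correct, but potentially slow for exactly the reasons quoted above):

#pragma omp parallel for shared(cnt, offset)
for (int i = 0; i < n; ++i) {
    // each increment is performed atomically, so concurrent updates to the
    // same cnt[...] element no longer race
    #pragma omp atomic update
    ++cnt[offset[i]];
}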
The alternative solution is to have a private array per thread, and at the end of the parallel region you perform the manual reduction of all those arrays into one. An example of such an approach can be found here.
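A hedged sketch of that manual approach (assuming cnt points to m zero-initialized counters, and using a std::vector for the per-thread copy) might look like this:

#include <omp.h>
#include <vector>

// cnt: shared histogram of size m; offset: n values in [0, m)
void count_manual(int *cnt, const int *offset, int n, int m) {
    #pragma omp parallel
    {
        std::vector<int> local_cnt(m, 0);   // one private histogram per thread

        #pragma omp for nowait
        for (int i = 0; i < n; ++i)
            ++local_cnt[offset[i]];         // no sharing, no races

        #pragma omp critical                // merge the private histograms one thread at a time
        for (int j = 0; j < m; ++j)
            cnt[j] += local_cnt[j];
    }
}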
Fortunately, with OpenMP 4.5 you can reduce arrays using a dedicated pragma, namely:
#pragma omp parallel for reduction(+:cnt)
You can have a look at this example of how to apply that feature.
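As a minimal sketch (assuming cnt is a pointer, which requires spelling out the array section cnt[:m]; a fixed-size array can be named directly as in the pragma above):

// OpenMP 4.5 array reduction: every thread gets a zero-initialized private
// copy of cnt[0..m-1], and the private copies are summed back into cnt
// when the loop finishes.
#pragma omp parallel for reduction(+:cnt[:m])
for (int i = 0; i < n; ++i) {
    ++cnt[offset[i]];
}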
Worth mentioning, regarding the reduction of arrays versus the atomic approach, as kindly pointed out by @Jérôme Richard:
Note that this is fast only if the array is not huge (the atomic based
solution could be faster in this specific case regarding the platform
and if the values are not conflicting). So that is m << n.
As always, profiling is key! Hence, you should test your code with the aforementioned approaches to find out which one is the most efficient.
I have a nested loop, with a few outer and many inner iterations. In the inner loop, I need to calculate a sum, so I want to use an OpenMP reduction. The outer loop is over a container, so the reduction is supposed to happen on an element of that container.
Here's a minimal contrived example:
#include <omp.h>
#include <vector>
#include <iostream>

int main(){
    constexpr int n { 128 };
    std::vector<int> vec (4, 0);
    for (unsigned int i {0}; i<vec.size(); ++i){
        /* this does not work */
        //#pragma omp parallel for reduction (+:vec[i])
        //for (int j=0; j<n; ++j)
        //    vec[i] += j;

        /* this works */
        int* val { &vec[0] };
        #pragma omp parallel for reduction (+:val[i])
        for (int j=0; j<n; ++j)
            val[i] += j;

        /* this is allowed, but looks very wrong. Produces wrong results
         * for std::vector, but on an Eigen type, it worked. */
        #pragma omp parallel for reduction (+:val[i])
        for (int j=0; j<n; ++j)
            vec[i] += j;
    }
    for (unsigned int i=0; i<vec.size(); ++i) std::cout << vec[i] << " ";
    std::cout << "\n";
    return 0;
}
The problem is that if I write the reduction clause as (+:vec[i]), I get the error ‘vec’ does not have pointer or array type, which is descriptive enough to find a workaround. However, that means I have to introduce a new variable and somewhat change the code logic, and I find it makes it less obvious what the code is supposed to do.
My main question is, whether there is a better/cleaner/more standard way to write a reduction for container elements.
I'd also like to know why and how the third way shown in the code above somewhat works. I'm actually working with the Eigen library, on whose containers that variant seems to work just fine (haven't extensively tested it though), but on std::vector, it produces results somewhere between zero and the actual result (8128). I thought it should work, because vec[i] and val[i] should both evaluate to dereferencing the same address. But alas, apparently not.
I'm using OpenMP 4.5 and gcc 9.3.0.
I'll answer your question in three parts:
1. What is the best way to perform OpenMP reductions in your example above with a std::vector?
i) Use your approach, i.e. create a pointer int* val { &vec[0] };
ii) Declare a new shared variable as @1201ProgramAlarm answered.
iii) Declare a user-defined reduction (which is not really applicable in your simple case, but see 3. below for a more efficient pattern).
2. Why doesn't the third loop work, and why does it work with Eigen?
As the previous answer states, you are telling OpenMP to perform a reduction sum on a memory address X, but you are performing additions on memory address Y, which means that the reduction declaration is ignored and your addition is subject to the usual thread race conditions.
You don't really provide much detail about your Eigen venture, but here are some possible explanations:
i) You're not really using multiple threads (check n = Eigen::nbThreads( )).
ii) You didn't disable Eigen's own parallelism, which can disrupt your own usage of OpenMP, e.g. via the EIGEN_DONT_PARALLELIZE compiler directive.
iii) The race condition is there, but you're not seeing it because Eigen operations take longer, you're using a low number of threads, and you're only writing a small number of values => lower occurrence of threads interfering with each other to produce the wrong result.
3. How should I parallelize this scenario using OpenMP? (Technically not a question you asked explicitly.)
Instead of parallelizing only the inner loop, you should parallelize both at the same time. The less serial code you have, the better. In this scenario each thread has its own private copy of the vec vector, which gets reduced after all the elements have been summed by their respective thread. This solution is optimal for your presented example, but might run into RAM problems if you're using a very large vector and very many threads (or have very limited RAM).
#pragma omp parallel for collapse(2) reduction(vsum : vec)
for (unsigned int i {0}; i<vec.size(); ++i){
    for (int j = 0; j < n; ++j) {
        vec[i] += j;
    }
}
where vsum is a user-defined reduction, i.e.
#pragma omp declare reduction(vsum : std::vector<int> : std::transform(omp_out.begin(), omp_out.end(), omp_in.begin(), omp_out.begin(), std::plus<int>())) initializer(omp_priv = decltype(omp_orig)(omp_orig.size()))
Declare the reduction before the function where you use it, and you'll be good to go.
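Putting the pieces together, a self-contained sketch (reusing vec and n from your example) could look like this:

#include <omp.h>
#include <vector>
#include <algorithm>
#include <functional>
#include <iostream>

// Whole-vector reduction: omp_out/omp_in are the combined/incoming private
// copies, and omp_priv is initialized to a zero vector of the right size.
#pragma omp declare reduction(vsum : std::vector<int> :                  \
        std::transform(omp_out.begin(), omp_out.end(), omp_in.begin(),   \
                       omp_out.begin(), std::plus<int>()))               \
        initializer(omp_priv = decltype(omp_orig)(omp_orig.size()))

int main(){
    constexpr int n { 128 };
    std::vector<int> vec(4, 0);

    #pragma omp parallel for collapse(2) reduction(vsum : vec)
    for (unsigned int i = 0; i < 4; ++i)        // 4 == vec.size()
        for (int j = 0; j < n; ++j)
            vec[i] += j;

    for (int v : vec) std::cout << v << " ";    // expected: 8128 8128 8128 8128
    std::cout << "\n";
    return 0;
}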
For the second example, rather than storing a pointer then always accessing the same element, just use a local variable:
int val = vec[i];
#pragma omp parallel for reduction (+:val)
for (int j=0; j<n; ++j)
    val += j;
vec[i] = val;
With the 3rd loop, I suspect that the problem is that the reduction clause names a variable, but you never update that variable by that name in the loop, so there is nothing that the compiler sees to reduce. Using Eigen may make the code a bit more complicated to analyze, resulting in the loop working.
I'm using OpenMP for this and I'm not confident of my answer either. I really need your help with this. I've been wondering which method (serial or parallel) runs faster here. My #pragma directives (currently commented out) are shown below.
Triangle Triangle::t_ID_lookup(Triangle a[], int ID, int n)
{
    Triangle res; int i;
    //#pragma omp for schedule(static) ordered
    for(i=0; i<n; i++)
    {
        if(ID==a[i].t_ID)
        {
            //#pragma omp ordered
            return (res=a[i]); // <- changed into "res = a[i]" instead of "return(...)"
        }
    }
    return res;
}
It depends on n. If n is small, then the overhead required for the OMP threads makes the OMP version slower. This can be overcome by adding an if clause: #pragma omp parallel if (n > YourThreshold)
If all a[i].t_ID are not unique, then you may receive different results from the same data when using OMP.
If you have more in your function than just a single comparison, consider adding a shared flag variable indicating that the element was found, so that a check if(found) continue; can be added at the beginning of the loop.
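A hedged sketch of that flag idea (the Triangle type and t_ID field come from the question; the function name and flag handling are illustrative, and duplicate IDs are resolved arbitrarily):

// Flag-based early exit: once one thread finds a match, the others skip
// their remaining iterations cheaply instead of doing real work.
Triangle t_ID_lookup_flag(Triangle a[], int ID, int n)
{
    Triangle res;
    int found = 0;                    // shared flag, set once a match is seen

    #pragma omp parallel for shared(res, found)
    for (int i = 0; i < n; i++)
    {
        int done;
        #pragma omp atomic read
        done = found;
        if (done) continue;           // skip the body once someone has found it

        if (ID == a[i].t_ID)
        {
            #pragma omp critical
            {
                res = a[i];
            }
            #pragma omp atomic write
            found = 1;
        }
    }
    return res;
}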
I have no experience with ordered, so if that was the crux of your question, ignore all the above and consider this answer.
Profile. In the end, there is no better answer.
If you still want a theoretical answer, then a random lookup would be O(n) with a mean of n/2, while the OMP version would be roughly n/k, where k is the number of threads/cores, not including overhead.
For an alternative way of writing your loop, see Z Boson's answer to a different question.
I was wondering if there is any reason for preferring the private(var) clause in OpenMP over the local definition of (private) variables, i.e.
int var;
#pragma omp parallel private(var)
{
...
}
vs.
#pragma omp parallel
{
int var;
...
}
Also, I'm wondering what's the point of private clauses then. This question was partially explained in OpenMP: are local variables automatically private?, but I don't like the answer since even C89 doesn't bar you from defining variables in the middle of functions as long as they are at the beginning of a scope (which is automatically the case when you enter a parallel region). So even for old-fashioned C programmers this shouldn't make any difference.
Should I consider this as syntactic sugar that allows for a "define-variables-in-the-beginning-of-your-function" style as used in the good old days?
By the way: in my opinion the second version also prevents programmers from using private variables after the parallel region in the hope that they may contain something useful, so another -1 for the private clause.
But since I'm quite new to OpenMP I don't want to question anything without having a good explanation for it. Thanks in advance for your answers!
It's not just syntactic sugar. One of the goals OpenMP strives for is to not change the serial code when it is not compiled with OpenMP. Any construct you use as part of a pragma is ignored if you don't compile with OpenMP. This way you can use things like private, firstprivate, collapse, and parallel for without changing your code. Changing the code can affect, for example, how the code is optimized by the compiler.
If you have code like
int i,j;
#pragma omp parallel for private(j)
for(i = 0; i < n; i++) {
for(j = 0; j < n; j++) {
}
}
The only way to do this without private in C89 is to change the code by defining j inside the parallel region, e.g.:
int i,j;
#pragma omp parallel
{
    int j;
    #pragma omp for
    for(i = 0; i < n; i++) {
        for(j = 0; j < n; j++) {
        }
    }
}
Here's a C++ example with firstprivate. Let's say you have a vector which you want to be private. If you use firstprivate you don't have to change your code, but if you declare a private copy inside the parallel region you do change your code. If you compile that without OpenMP it makes an unnecessary copy.
vector<int> a;
#pragma omp parallel
{
    // explicit private copy; if compiled without OpenMP, this copy is still made
    vector<int> a_private = a;
}
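For contrast, a minimal sketch of the firstprivate form, which leaves the serial code untouched:

vector<int> a;
// each thread receives its own copy of `a`, initialized from the shared one;
// compiled without OpenMP, this is just the serial code with no extra copy
#pragma omp parallel firstprivate(a)
{
    // work on the thread-private a here
}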
This logic applies to many other constructs, for example collapse: you could manually fuse loops, which changes your code, or you can use collapse and only fuse them when compiling with OpenMP.
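For instance (a sketch; work(), n and m stand in for your own loop body and bounds):

// collapse(2) lets OpenMP distribute the combined i-j iteration space across
// threads without manually fusing the two loops in the source.
#pragma omp parallel for collapse(2)
for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++)
        work(i, j);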
However, having said all that, in practice I often find that I need to change the code anyway to get the best parallel result, so I usually define everything inside parallel sections and don't use features such as private, firstprivate, or collapse (not to mention that OpenMP implementations in C++ often struggle with non-POD types, so it's often better to do it yourself).
With OpenMP 3.1, it is possible to have a reduction clause with min:
double m = std::numeric_limits<double>::max();   // needs <limits>; m must start large
#pragma omp parallel for reduction(min:m)
for (int i=0; i< n; i++){
    if (a[i]*2 < m) {
        m = a[i] * 2;
    }
}
return m;
Suppose I also need the index of the minimal element; is there a way to use the reduction clause for this? I believe the alternative is writing the reduction manually using nowait and critical.
Suppose I also need the index of the minimal element; is there a way to use the reduction clause for this?
Unfortunately, no. The list of possible reductions in OpenMP is very … small. In particular, min and max are the only “higher-level” functions and they aren’t customisable. At all.
I have to admit that I don’t like OpenMP’s approach to reductions, precisely because it’s not extensible in the slightest, it is designed only to work on special cases. Granted, those are interesting special cases but it’s still fundamentally a bad approach.
For such operations, you need to implement the reduction yourself by accumulating thread-local results into thread-local variables and combining them at the end.
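One straightforward way, close to the nowait/critical idea from the question (a hedged sketch reusing a and n from your snippet), is to keep the running minimum and its index in stack-local variables and combine them once per thread:

#include <omp.h>
#include <limits>

double m = std::numeric_limits<double>::max();
int m_index = -1;

#pragma omp parallel
{
    // thread-local candidates live on this thread's stack
    double local_min = std::numeric_limits<double>::max();
    int local_index = -1;

    #pragma omp for nowait
    for (int i = 0; i < n; i++) {
        if (a[i] * 2 < local_min) {
            local_min = a[i] * 2;
            local_index = i;
        }
    }

    // each thread merges its candidate exactly once
    #pragma omp critical
    if (local_min < m) {
        m = local_min;
        m_index = local_index;
    }
}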
Another easy way of doing this (and indeed quite close to how OpenMP implements reductions) is to have an array with one element per thread, and to use omp_get_thread_num() to access your thread's element. Note, however, that this will lead to a performance degradation due to false sharing if the elements in the array share a cache line. To mitigate this, pad the array:
struct min_element_t {
    double min_val;
    size_t min_index;
};

size_t const CACHE_LINE_SIZE = 1024;  // element stride, for example; deliberately generous
int const threadnum = omp_get_max_threads();
std::vector<min_element_t> mins(threadnum * CACHE_LINE_SIZE,
                                {std::numeric_limits<double>::max(), 0});

#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    size_t const index = omp_get_thread_num() * CACHE_LINE_SIZE;
    // operate on mins[index] …, e.g. track the minimum of a[i] * 2 and its position:
    if (a[i] * 2 < mins[index].min_val) {
        mins[index].min_val = a[i] * 2;
        mins[index].min_index = i;
    }
}
// Afterwards, scan mins[0], mins[CACHE_LINE_SIZE], mins[2 * CACHE_LINE_SIZE], … sequentially.