OpenMP: Nested for-loop, barely any difference in execution time - c++

I am doing some image processing and have a nested for loop. I want to parallelize it with OpenMP. The loop looks like this, where I have added the pragma and declared some of the variables as private.
int a,b,j, idx;
#pragma omp parallel for private(b,j,sumG,sumGI)
for(a = 0; a < ny; ++a)
{
    for(b = 0; b < nx; ++b)
    {
        idx = a*ny+b;
        if (imMask[idx] == 0)
        {
            Wshw[idx] = 0;
            continue;
        }
        sumG = 0;
        sumGI = 0;
        for(j = a; j < ny; ++j)
        {
            sumG += shadowM[j-a];
            sumGI += shadowM[j-a] * imBlurred[nx*j + b];
        }
        Wshw[idx] = sumGI / sumG;
    }
}
Both nx and ny are large, and I thought that, using OpenMP, I would get a decent decrease in execution time, but instead there is almost no difference. Am I doing something wrong in how I implement the multi-threading?

You have a race condition on idx. You need to make it private as well.
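A minimal fix along those lines, keeping the original loop structure, could look like this (a sketch only: it assumes sumG and sumGI are doubles, since their declarations are not shown):
// Sketch: every variable written inside an iteration is declared inside
// the loop, so each thread automatically gets its own copy (idx included).
#pragma omp parallel for
for (int a = 0; a < ny; ++a)
{
    for (int b = 0; b < nx; ++b)
    {
        int idx = a*ny + b;            // same indexing as the original code
        if (imMask[idx] == 0)
        {
            Wshw[idx] = 0;
            continue;
        }
        double sumG = 0.0, sumGI = 0.0;
        for (int j = a; j < ny; ++j)
        {
            sumG  += shadowM[j-a];
            sumGI += shadowM[j-a] * imBlurred[nx*j + b];
        }
        Wshw[idx] = sumGI / sumG;
    }
}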
However, instead you could try something like this.
int a,b,j, idx;
#pragma omp parallel for private(a,b,j,sumG,sumGI)
for(idx=0; idx<ny*nx; ++idx) {
    if (imMask[idx] == 0)
    {
        Wshw[idx] = 0;
        continue;
    }
    sumG = 0;
    sumGI = 0;
    a=idx/ny;
    b=idx%ny;
    for(j = a; j < ny; ++j) {
        sumG += shadowM[j-a];
        sumGI += shadowM[j-a] * imBlurred[nx*j + b];
    }
    Wshw[idx] = sumGI / sumG;
}
You might be able to simplify the inner loop as well, expressing it as a function of idx instead of a and b.
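For example, assuming the image is stored row-major with row length nx (i.e. idx = a*nx + b, which matches the imBlurred[nx*j + b] access), the inner loop can step through memory directly from idx (a sketch, not tested against the original data layout):
// Sketch: walk down the column starting at idx, one row (nx elements) at a time.
// Here t plays the role of j - a in the original loop.
sumG = 0;
sumGI = 0;
for (int off = idx, t = 0; off < nx*ny; off += nx, ++t)
{
    sumG  += shadowM[t];
    sumGI += shadowM[t] * imBlurred[off];
}
Wshw[idx] = sumGI / sumG;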

Related

OPENMP Parallel Problem Error for Double Loop

I was getting the error: "free(): corrupted unsorted chunks" when trying to run:
#pragma omp parallel for reduction(+:save) shared(save2)
for (size_t i = 0; i <= N; ++i) {
    vector<float> dist = cdist(i, arestas);
    vector<float> distinv(dist.size());
    for (size_t j = 0; j < N(); ++j) {
        if (arr[j] > 0)
            arrv[j] = (1/N) + (1 / arr[j]);
        else
            arrv[j] = 0;
    }
    save = accumulate(arrv.begin(), arrv.end(), 0.0);
    vector<double>::iterator iter = save2.begin() + i;
    save2.insert(iter, sum);
}
I might be missing the point here, but what about just doing it this way (not tested)?
vector<double> sum2(N);
#pragma omp parallel for num_threads(8)
for ( size_t i = 0; i < N; i++ ) {
    double sum = 0;
    for ( size_t j = 0; j < dist.size(); ++j ) {
        if ( dist[j] > 0 ) {
            sum += 1. / dist[j];
        }
    }
    sum2[i] = sum;
}
There is still some room for improving this version (by removing the if statement for example, in order to help the vectorization), but unless you had some unexplained constraints in your code, I think this version is a good starting point.
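For instance, the if can be rewritten as a conditional expression, which compilers are usually happier to vectorize (a sketch; it assumes dist[j] never holds NaN):
for ( size_t j = 0; j < dist.size(); ++j ) {
    // branchless form: contributes 0 when dist[j] <= 0
    sum += dist[j] > 0 ? 1. / dist[j] : 0.;
}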

How to parallelize histogram addition?

I have this algorithm which scans an image and, for each pixel p, computes a 256-bin histogram holding the values of the pixels inside a patch around p. The algorithm needs to be O(1), so I need to do many histogram additions. I'd like to make the algorithm faster by parallelizing the histogram additions with OpenMP, so I added #pragma omp parallel for before each for loop (just the ones with histogram additions), but it actually makes it 10 times slower. I think I need to create a parallel region outside, but I don't understand how.
Also, I'm afraid the overhead generated by OpenMP outweighs the speed gained by parallelizing a 256-iteration loop, but I don't know for sure.
for (int i = 0; i < src.rows; i++) {
    for (int j = 0; j < src.cols; j++) {
        if (j == 0)
        { ... }
        else {
            if (j > side/2) { // subtract col
                for (int h = 0; h < 256; h++) // THIS ONE
                    histogram[h] -= colHisto[j - (side/2) - 1][h];
            }
            if (j < src.cols - side/2) { // add column
                if (i > side/2) { // subtract pixel
                    colHisto[j + side/2][src.at<uchar>(i - side/2 - 1, j + side/2)]--;
                }
                if (i < src.rows - side/2) { // add pixel
                    colHisto[j + side/2][src.at<uchar>(i + side/2, j + side/2)]++;
                }
                for (int h = 0; h < 256; h++) // AND THIS ONE
                    histogram[h] += colHisto[j + side/2][h];
            }
        }
    }
}
I actually solved it myself by studying OpenMP more; here is the code:
#pragma omp parallel
{
    for (int i = 0; i < src.rows; i++) {
        for (int j = 0; j < src.cols; j++) {
            // printf("%d%d:", i, j);
            if (j == 0) { ... }
            else {
                #pragma omp single
                { ... }
                one = getTickCount();
                #pragma omp for
                for (int h = 0; h < 256; h++)
                    histogram[h] += colHisto[j + side / 2][h];
                printf("histotime = %d\n", getTickCount() - one);
            }
        }
    }
}
It's significantly faster than putting #pragma omp parallel for before each loop, but still slower than the sequential version.
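That is consistent with the overhead concern above: a 256-iteration loop that only adds integers is usually far too little work per omp for (each of which ends in an implicit barrier) to pay for itself. As a purely illustrative sketch, not the algorithm from the post, parallelism tends to pay off only when each thread gets a much larger chunk of histogram work, e.g. merging many column histograms in one loop with an OpenMP 4.5 array-section reduction (nCols and the colHisto layout here are hypothetical):
// Illustrative sketch only: merging many 256-bin histograms in one parallel
// loop gives each thread enough work to amortize the fork/join overhead.
int histogram[256] = {0};
#pragma omp parallel for reduction(+:histogram[:256])
for (int j = 0; j < nCols; j++)
    for (int h = 0; h < 256; h++)
        histogram[h] += colHisto[j][h];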

Very slow mutex in LLVM/OpenMP

I wrote code to test the performance of OpenMP on Windows (Win7 x64, Core i7 3.4 GHz) and on Mac (macOS 10.12.3, Core i7 2.7 GHz).
In Xcode I made a console application with the default compiler settings. I use LLVM 3.7 and OpenMP 5 (in omp.h I found KMP_VERSION_MAJOR = 5, KMP_VERSION_MINOR = 0 and KMP_VERSION_BUILD = 20150701; the runtime is libiomp5) on macOS 10.12.3 (CPU: Core i7, 2.7 GHz).
For Windows I use VS2010 SP1. Additionally I set C/C++ -> Optimization -> Optimization = Maximize Speed (/O2) and C/C++ -> Optimization -> Favor Size Or Speed = Favor Fast Code (/Ot).
If I run the application in a single thread, the time difference roughly corresponds to the ratio of the processors' clock frequencies. But if I run 4 threads, the difference becomes dramatic: the Windows program is about 70 times faster than the Mac program.
#include <cmath>
#include <mutex>
#include <cstdint>
#include <cstdio>
#include <iostream>
#include <omp.h>
#include <boost/chrono/chrono.hpp>

static double ActionWithNumber(double number)
{
    double sum = 0.0f;
    for (std::uint32_t i = 0; i < 50; i++)
    {
        double coeff = sqrt(pow(std::abs(number), 0.1));
        double res = number*(1.0-coeff)*number*(1.0-coeff) * 3.0;
        sum += sqrt(res);
    }
    return sum;
}

static double TestOpenMP(void)
{
    const std::uint32_t len = 4000000;
    double *a;
    double *b;
    double *c;
    double sum = 0.0;
    std::mutex _mutex;

    a = new double[len];
    b = new double[len];
    c = new double[len];
    for (std::uint32_t i = 0; i < len; i++)
    {
        c[i] = 0.0;
        a[i] = sin((double)i);
        b[i] = cos((double)i);
    }

    boost::chrono::time_point<boost::chrono::system_clock> start, end;
    start = boost::chrono::system_clock::now();
    double k = 2.0;
    omp_set_num_threads(4);
    #pragma omp parallel for
    for (int i = 0; i < len; i++)
    {
        c[i] = k*a[i] + b[i] + k;
        if (c[i] > 0.0)
        {
            c[i] += ActionWithNumber(c[i]);
        }
        else
        {
            c[i] -= ActionWithNumber(c[i]);
        }
        std::lock_guard<std::mutex> scoped(_mutex);
        sum += c[i];
    }
    end = boost::chrono::system_clock::now();
    boost::chrono::duration<double> elapsed_time = end - start;

    double sum2 = 0.0;
    for (std::uint32_t i = 0; i < len; i++)
    {
        sum2 += c[i];
        c[i] /= sum2;
    }
    if (std::abs(sum - sum2) > 0.01) printf("Incorrect result.\n");

    delete[] a;
    delete[] b;
    delete[] c;
    return elapsed_time.count();
}

int main()
{
    double sum = 0.0;
    const std::uint32_t steps = 5;
    for (std::uint32_t i = 0; i < steps; i++)
    {
        sum += TestOpenMP();
    }
    sum /= (double)steps;
    std::cout << "Elapsed time = " << sum;
    return 0;
}
I specifically use a mutex here to compare the performance of OpenMP on the Mac and on Windows. On Windows the function returns a time of 0.39 seconds. On the Mac the function returns a time of 25 seconds, i.e. about 70 times slower.
What is the cause of this difference?
First of all, thanks for editing my post (I use a translator to write the text).
In the real app, I update the values of a huge matrix (20000x20000) in random order. Each thread determines a new value and writes it to a particular cell. I create a mutex for each row, since in most cases different threads write to different rows, but apparently there are cases when two threads write to the same row and the lock is held for a long time. At the moment I can't divide the rows between threads, since the order of the writes is determined by the FEM elements.
So simply putting a critical section in there is out of the question, as it would block writes to the entire matrix.
I wrote code similar to the real application:
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <omp.h>
#include <boost/thread.hpp>
#include <boost/chrono/chrono.hpp>

typedef unsigned int u32;   // assumed: u32 is used below but was not defined in the post

static double ActionWithNumber(double number)
{
    const unsigned int steps = 5000;
    double sum = 0.0f;
    for (u32 i = 0; i < steps; i++)
    {
        double coeff = sqrt(pow(std::abs(number), 0.1));
        double res = number*(1.0-coeff)*number*(1.0-coeff) * 3.0;
        sum += sqrt(res);
    }
    sum /= (double)steps;
    return sum;
}

static double RealAppTest(void)
{
    const unsigned int elementsNum = 10000;
    double* matrix;
    unsigned int* elements;
    boost::mutex* mutexes;

    elements = new unsigned int[elementsNum*3];
    matrix = new double[elementsNum*elementsNum];
    mutexes = new boost::mutex[elementsNum];

    for (unsigned int i = 0; i < elementsNum; i++)
        for (unsigned int j = 0; j < elementsNum; j++)
            matrix[i*elementsNum + j] = (double)(rand() % 100);

    for (unsigned int i = 0; i < elementsNum; i++) //build FEM element like Triangle
    {
        elements[3*i] = rand()%(elementsNum-1);
        elements[3*i+1] = rand()%(elementsNum-1);
        elements[3*i+2] = rand()%(elementsNum-1);
    }

    boost::chrono::time_point<boost::chrono::system_clock> start, end;
    start = boost::chrono::system_clock::now();
    omp_set_num_threads(4);
    #pragma omp parallel for
    for (int i = 0; i < elementsNum; i++)
    {
        unsigned int* elems = &elements[3*i];
        for (unsigned int j = 0; j < 3; j++)
        {
            //in here set mutex for row with index = elems[j];
            boost::lock_guard<boost::mutex> lockup(mutexes[i]);
            double res = 0.0;
            for (unsigned int k = 0; k < 3; k++)
            {
                res += ActionWithNumber(matrix[elems[j]*elementsNum + elems[k]]);
            }
            for (unsigned int k = 0; k < 3; k++)
            {
                matrix[elems[j]*elementsNum + elems[k]] = res;
            }
        }
    }
    end = boost::chrono::system_clock::now();
    boost::chrono::duration<double> elapsed_time = end - start;

    delete[] elements;
    delete[] matrix;
    delete[] mutexes;
    return elapsed_time.count();
}

int main()
{
    double sum = 0.0;
    const u32 steps = 5;
    for (u32 i = 0; i < steps; i++)
    {
        sum += RealAppTest();
    }
    sum /= (double)steps;
    std::cout << "Elapsed time = " << sum;
    return 0;
}
You're combining two different sets of threading/synchronization primitives - OpenMP, which is built into the compiler and has a runtime system, and a manually created POSIX mutex via std::mutex. It's probably not surprising that there are some interoperability hiccups with some compiler/OS combinations.
My guess here is that in the slow case, the OpenMP runtime is going overboard to make sure that there's no interactions between higher-level ongoing OpenMP threading tasks and the manual mutex, and that doing so inside a tight loop causes the dramatic slowdown.
For mutex-like behaviour in the OpenMP framework, we can use critical sections:
#pragma omp parallel for
for (int i = 0; i < len; i++)
{
    //...
    // replacing this: std::lock_guard<std::mutex> scoped(_mutex);
    #pragma omp critical
    sum += c[i];
}
or explicit locks:
omp_lock_t sumlock;
omp_init_lock(&sumlock);
#pragma omp parallel for
for (int i = 0; i < len; i++)
{
    //...
    // replacing this: std::lock_guard<std::mutex> scoped(_mutex);
    omp_set_lock(&sumlock);
    sum += c[i];
    omp_unset_lock(&sumlock);
}
omp_destroy_lock(&sumlock);
We get much more reasonable timings:
$ time ./openmp-original
real 1m41.119s
user 1m15.961s
sys 1m53.919s
$ time ./openmp-critical
real 0m16.470s
user 1m2.313s
sys 0m0.599s
$ time ./openmp-locks
real 0m15.819s
user 1m0.820s
sys 0m0.276s
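As an aside (not reflected in the timings above): when the only shared update is a single scalar sum, the lock can be avoided entirely with a reduction clause, which keeps a private partial sum per thread and combines them after the loop:
#pragma omp parallel for reduction(+:sum)
for (int i = 0; i < len; i++)
{
    //...
    // no lock needed: each thread accumulates into its own private copy of sum,
    // and the partial sums are added together when the loop finishes
    sum += c[i];
}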
Updated: There's no problem with using an array of OpenMP locks in exactly the same way as the mutexes:
omp_lock_t sumlocks[elementsNum];
for (unsigned idx=0; idx<elementsNum; idx++)
    omp_init_lock(&(sumlocks[idx]));

//...
#pragma omp parallel for
for (int i = 0; i < elementsNum; i++)
{
    unsigned int* elems = &elements[3*i];
    for (unsigned int j = 0; j < 3; j++)
    {
        //in here set mutex for row with index = elems[j];
        double res = 0.0;
        for (unsigned int k = 0; k < 3; k++)
        {
            res += ActionWithNumber(matrix[elems[j]*elementsNum + elems[k]]);
        }
        omp_set_lock(&(sumlocks[i]));
        for (unsigned int k = 0; k < 3; k++)
        {
            matrix[elems[j]*elementsNum + elems[k]] = res;
        }
        omp_unset_lock(&(sumlocks[i]));
    }
}
for (unsigned idx=0; idx<elementsNum; idx++)
    omp_destroy_lock(&(sumlocks[idx]));

Deadlock on parallel loop

I'm trying to parallelize the code below. It's easy to see that there is a dependency between the values of aux, since they are computed after the inner loop, but they are needed inside that inner loop (note that on the first iteration j = 0, the code inside the inner loop is not executed). On the other hand, there is no dependency between the values of mu because we only update mu[k], but the only values needed for other computations are in mu[j], for 0 <= j < k.
My approach consists in having the elements of aux locked until they are computed. As soon as a given value of aux is computed, the lock of that element is released and every thread can use it. However, with this code a deadlock occurs and I can't figure out why. Does someone have any tips?
Thanks
for (j = 0; j < k; ++j)
    locks[j] = 0;

#pragma omp parallel for num_threads(N_THREADS) private(j, i)
for (j = 0; j < k; ++j)
{
    vals[j] = (long)0;
    for (i = 0; i < j; i++)
    {
        while(!locks[i]);
        vals[j] += mu[j][i] * aux[i];
    }
    aux[j] = (s[j] - vals[j]);
    locks[j] = 1;
    mu[k][j] = aux[j] / c[j];
}
Does it also hang when not optimized?
In optimized code, gcc would not bother reading locks[i] more than once, so this:
for (i = 0; i < j; i++) {
    while(!locks[i]);
would be like writing:
for (i = 0; i < j; i++) {
    if( !locks[i] ) for(;;) {}
Try adding a barrier to force gcc to re-read locks[i]:
#define pause() do { asm volatile("pause;":::"memory"); } while(0)
...
for (i = 0; i < j; i++) {
    while(!locks[i]) pause();
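An alternative sketch (assuming C++11 is available; this is not part of the original suggestion) is to make the flags std::atomic, which both stops the compiler from hoisting the load out of the loop and gives the release/acquire ordering needed to safely publish aux[j]:
#include <atomic>
#include <vector>

std::vector<std::atomic<int>> locks(k);   // value-initialized to 0, one flag per element of aux

#pragma omp parallel for num_threads(N_THREADS) private(j, i)
for (j = 0; j < k; ++j)
{
    vals[j] = (long)0;
    for (i = 0; i < j; i++)
    {
        while (!locks[i].load(std::memory_order_acquire))
            ;                             // spin until aux[i] has been published
        vals[j] += mu[j][i] * aux[i];
    }
    aux[j] = (s[j] - vals[j]);
    locks[j].store(1, std::memory_order_release);   // publish aux[j]
    mu[k][j] = aux[j] / c[j];
}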
HTH

How can I use openMP for this loop?

I'm wondering if it is feasible to make this loop parallel using OpenMP.
Of course there is the issue of race conditions. I'm unsure how to deal with the n in the inner loop being generated by the outer loop, and the race condition on the accumulation D=D+A[n]. Do you think it is practical to try and make this parallel?
for(n=0; n < 10000000; ++n) {
    for (n2=0; n2< 100; ++n2) {
        A[n]=A[n]+B[n2][n+C[n2]+200];
    }
    D=D+A[n];
}
Yes, this is indeed parallelizable assuming none of the pointers are aliased.
int D = 0; // Or whatever the type is.
#pragma omp parallel for reduction(+:D) private(n2)
for (n=0; n < 10000000; ++n) {
    for (n2 = 0; n2 < 100; ++n2) {
        A[n] = A[n] + B[n2][n + C[n2] + 200];
    }
    D += A[n];
}
It could actually be optimized somewhat by accumulating into a local temporary, so that A[n] is not re-read and re-written on every inner iteration:
int D = 0; // Or whatever the type is.
#pragma omp parallel for reduction(+:D) private(n2)
for (n=0; n < 10000000; ++n) {
    int tmp = A[n];
    for (n2 = 0; n2 < 100; ++n2) {
        tmp += B[n2][n + C[n2] + 200];
    }
    A[n] = tmp;
    D += tmp;
}