OpenMP: three for loops with reduction - C++

I need to multiply two 10x10 matrices using OpenMP. I decided to split the rows of one matrix into groups of 3 rows, 3 rows and 4 rows. How do I fix this code for the first three rows?
#pragma omp parallel for reduction(+:m[p][q])
{
    for (p = 0; p < 3; p++)
        for (q = 0; q < 10; q++)
            for (k = 0; k < 10; ++k)
            {
                m[p][q] += l[p][k] * o[k][q];
            }
}

For a start - don't split the matrix yourself, but let OpenMP take care of sharing the work in the loops, e.g.
#pragma omp parallel for
for (int p = 0; p < 10; p++)
    for (int q = 0; q < 10; q++)
        for (int k = 0; k < 10; ++k)
            m[p][q] += l[p][k] * o[k][q];
In this code there is no need for a reduction because all concurrent write operations happen to different elements of m. Even if you collapse(2) the first two loops, you are still fine in that regard.
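For illustration, the collapsed variant could look like this (just a sketch; for a 10x10 problem the plain version above is already enough):
#pragma omp parallel for collapse(2)
for (int p = 0; p < 10; p++)
    for (int q = 0; q < 10; q++)
        for (int k = 0; k < 10; ++k)
            m[p][q] += l[p][k] * o[k][q];   // each (p, q) pair is handled by exactly one thread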
That said, optimizing matrix multiplication is an immensely complex topic on modern hardware, and parallelizing it even more so. If you want performance, use a BLAS implementation that is optimized for your architecture. If you want to learn, I suggest you start with the serial implementation and then go on parallelizing it. There is plenty of educational material available for either.

Related

How to calculate Matrix efficiently in C++?

I am new to C++ and programming, so I think I am writing inefficient code.
I was wondering whether there is any way I can speed up the matrix calculation process.
For example, this is sample code I wrote which finds the maximum difference (in absolute value) between the 3D arrays 'V' and 'Vnew'.
First, I take the subtraction.
Then I put the value of tempdiff[0][0][0] into 'dif'.
Then I compare 'dif' and tempdiff[i][j][k] and replace it if the latter is larger than the former.
This is just a part of my code; there are lots of matrix calculations inside, so I have too many 'for' statements.
So I was wondering whether there is any way I could avoid using 'for' in the matrix calculations.
Thanks in advance.
for (int i = 0; i < Na; i++) {
    for (int j = 0; j < Nd; j++) {
        for (int k = 0; k < Ny; k++) {
            tempdiff[i][j][k] = abs(V[i][j][k] - Vnew[i][j][k]);
        }
    }
}
dif = tempdiff[0][0][0];
for (int i = 0; i < Na; i++) {
    for (int j = 0; j < Nd; j++) {
        for (int k = 0; k < Ny; k++) {
            if (tempdiff[i][j][k] > dif) {
                dif = tempdiff[i][j][k];
            }
            else {
                dif = dif;
            }
        }
    }
}
There's not much you can do about the for loops, as the maximum difference can be located anywhere in the array. You have already succeeded in iterating the array in the correct, linear order.
Compilers are generally quite efficient in optimising, but they apparently fail to flatten a contiguous array, such as float V[Na][Nd][Ny];. After you flatten it manually to float V[Na*Nd*Ny], at least clang can auto-vectorise and produce SIMD code for x64 and arm.
A further optimisation is to avoid doing this in two steps, as the total memory throughput is exactly doubled with the temporary array compared to a one-pass solution.
I was assuming your matrices are of type float -- if you can select int, gcc can auto-vectorise this as well (relates to NaN handling); furthermore int16_t or int8_t types are even quicker to evaluate, as more operations can be packed to a single SIMD instruction.
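As a sketch of that one-pass idea (a hypothetical helper, assuming the arrays from the question are viewed as flat float arrays of length Na*Nd*Ny):
#include <cmath>

// single pass, no temporary array: roughly half the memory traffic
float max_abs_diff(const float *V, const float *Vnew, int total /* Na*Nd*Ny */)
{
    float dif = 0.0f;                             // all differences are non-negative
    for (int idx = 0; idx < total; ++idx) {
        float d = std::fabs(V[idx] - Vnew[idx]);
        if (d > dif)
            dif = d;
    }
    return dif;
}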

Implementation of sequential LU decomposition in C++

I am trying to follow the Gaussian elimination algorithm in https://courses.engr.illinois.edu/cs554/fa2015/notes/06_lu_8up.pdf in order to implement LU factorization and eventually parallelize it with OpenMP. Does the following algorithm look correct, where l is the multiplier and m is the matrix?
void decompose2(double **m) {
    begin = clock();
    int i = 0, j = 0, k = 0;
    for (k = 1; k < size - 1; k++)
    {
        for (i = k + 1; i < size; i++)
        {
            l[i][k] = m[i][k] / m[k][k];
        }
        for (j = k + 1; j < size; j++)
        {
            for (i = k + 1; k < size; k++)
            {
                m[i][j] = m[i][j] - (l[i][k] * m[k][j]);
            }
        }
    }
    end = clock();
}
I don't think it is correct, because the times I am getting after parallelization on the same number of processors are completely different from those reported in another paper.
"Does the following algorithm look correct, …" -- No, because
arrays are 0-index in C++,
double[size][size] (which you are likely using) is not convertible to double**,
int is not a good type for iterators (use size_t instead),
you don't check if m[k][k] might be (close to) zero, when you might have to swap rows.
Please notice that I only looked at the obvious implementation errors, not at possible instances to make the code better, e.g. increasing the stability of the calculation.
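For reference, a corrected sequential version addressing the points above could look roughly like this (a sketch only: 0-indexed, size passed explicitly, pivot checked, still no row swapping):
#include <cmath>
#include <cstddef>
#include <stdexcept>

void decompose2(double **m, double **l, std::size_t n)
{
    for (std::size_t k = 0; k + 1 < n; ++k)
    {
        if (std::fabs(m[k][k]) < 1e-12)               // a real implementation would swap rows here
            throw std::runtime_error("pivot is (close to) zero");
        for (std::size_t i = k + 1; i < n; ++i)
            l[i][k] = m[i][k] / m[k][k];              // multipliers
        for (std::size_t j = k + 1; j < n; ++j)
            for (std::size_t i = k + 1; i < n; ++i)   // note: loop over i, not k
                m[i][j] -= l[i][k] * m[k][j];         // update trailing submatrix
    }
}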

How to optimally parallelize nested loops?

I'm writing a program that should run in both serial and parallel versions. Once I got it to actually do what it is supposed to do, I started trying to parallelize it with OpenMP (which is compulsory).
The thing is, I can't find documentation or references on when to use which #pragma, so I am doing my best at guessing and testing. But testing is not going well with nested loops.
How would you parallelize a series of nested loops like these:
for (int i = 0; i < 3; ++i) {
    for (int j = 0; j < HEIGHT; ++j) {
        for (int k = 0; k < WIDTH; ++k) {
            switch (i) {
            case 0:
                matrix[j][k].a = matrix[j][k] * someValue1;
                break;
            case 1:
                matrix[j][k].b = matrix[j][k] * someValue2;
                break;
            case 2:
                matrix[j][k].c = matrix[j][k] * someValue3;
                break;
            }
        }
    }
}
HEIGHT and WIDTH are usually the same size in the tests I have to run. Some test examples are 32x32 and 4096x4096.
matrix is an array of custom structs with attributes a, b and c
someValue is a double
I know that OpenMP is not always good for nested loops but any help is welcome.
[UPDATE]:
So far I've tried unrolling the loops. It boosts performance, but am I adding unnecessary overhead here? Am I reusing threads? I tried getting the id of the threads used in each for loop but didn't get it right.
#pragma omp parallel
{
    #pragma omp for collapse(2)
    for (int j = 0; j < HEIGHT; ++j) {
        for (int k = 0; k < WIDTH; ++k) {
            //my previous code here
        }
    }

    #pragma omp for collapse(2)
    for (int j = 0; j < HEIGHT; ++j) {
        for (int k = 0; k < WIDTH; ++k) {
            //my previous code here
        }
    }

    #pragma omp for collapse(2)
    for (int j = 0; j < HEIGHT; ++j) {
        for (int k = 0; k < WIDTH; ++k) {
            //my previous code here
        }
    }
}
[UPDATE 2]
Apart from unrolling the loop, I have tried parallelizing the outer loop (worse performance boost than unrolling) and collapsing the two inner loops (more or less the same performance boost as unrolling). These are the times I am getting:
Serial: ~130 milliseconds
Loop unrolling: ~49 ms
Collapsing two innermost loops: ~55 ms
Parallel outermost loop: ~83 ms
What do you think is the safest option? I mean, which should be generally the best for most systems, not only my computer?
The problem with OpenMP is that it's very high-level, meaning that you can't access low-level functionality such as spawning a thread and then reusing it. So let me make clear what you can and what you can't do:
Assuming you don't need any mutex to protect against race conditions, here are your options:
You parallelize your outermost loop; that will use 3 threads, and it's the most peaceful solution you're going to have (sketched below).
You parallelize the first inner loop; then you'll get a performance boost only if the overhead of spawning new threads is much smaller than the effort required to perform the WIDTH-element innermost loop.
You parallelize the innermost loop, but this is the worst solution of all, because you'll respawn the threads 3*HEIGHT times. Never do that!
You don't use OpenMP and instead use something low-level, such as std::thread, where you can create your own thread pool and push all the operations you want to do into a queue.
Hope this helps to put things in perspective.
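A minimal sketch of the first option (the pragma goes on the outermost loop, so at most 3 threads do useful work):
#pragma omp parallel for
for (int i = 0; i < 3; ++i) {
    for (int j = 0; j < HEIGHT; ++j) {
        for (int k = 0; k < WIDTH; ++k) {
            // switch on i and update matrix[j][k].a/.b/.c as in the question
        }
    }
}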
Here's another option, one which recognises that distributing the iterations of the outermost loop when there are only 3 of them might lead to very poor load balancing,
i = 0;
#pragma omp parallel for
for (int j = 0; j < HEIGHT; ++j) {
    for (int k = 0; k < WIDTH; ++k) {
        ...
    }
}

i = 1;
#pragma omp parallel for
for (int j = 0; j < HEIGHT; ++j) {
    for (int k = 0; k < WIDTH; ++k) {
        ...
    }
}

i = 2;
#pragma omp parallel for
for (int j = 0; j < HEIGHT; ++j) {
    for (int k = 0; k < WIDTH; ++k) {
        ...
    }
}
Warning -- check the syntax yourself, this is no more than a sketch of manual loop unrolling.
Try combining this and collapsing the j and k loops.
Oh, and don't complain about code duplication, you've told us you're being scored partly on performance improvements.
You probably want to parallelize this example for simd, so the compiler can vectorize, and collapse the loops, because you use j and k only in the expression matrix[j][k] and there are no dependencies on any other element of the matrix. If nothing modifies someValue1, etc., they should be uniform. Time your loop to be sure these changes really do improve your speed.
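A possible shape of that suggestion, reusing the names from the question (the switch body is elided):
for (int i = 0; i < 3; ++i) {
    #pragma omp parallel for simd collapse(2)
    for (int j = 0; j < HEIGHT; ++j) {
        for (int k = 0; k < WIDTH; ++k) {
            // switch (i) { ... } as in the original loop body
        }
    }
}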

OpenMP collapse gives wrong results

I have a 3D array z, where every element has the value 1.
Now I do:
#pragma omp parallel for collapse(3) shared(z)
for (int i = 0; i < SIZE; ++i) {
    for (int j = 0; j < SIZE; ++j) {
        for (int k = 0; k < SIZE; ++k) {
            for (int n = 0; n < ITERATIONS - 1; ++n) {
                z[i][j][k] += 1;
            }
        }
    }
}
This should add ITERATIONS to each element and it does. If I then change the collapse(3) to collapse(4) (because there are 4 for-loops) I don't get the right result.
Shouldn't I be able to collapse all four loops?
The issue is that the 4th loop isn't parallelisable the same way the first 3 are. Just to convince yourself, look at it with only the last loop in mind. It would become:
int zz = z[i][j][k];
for (int n = 0; n < ITERATIONS - 1; ++n) {
    zz += 1;
}
z[i][j][k] = zz;
In order to parallelise it, you would need to add a reduction(+:zz) clause, right?
Well, it's the same story for your collapse(4). But adding reduction(+:z), if that is even possible (which I'm not sure of), would raise some issues:
The reduction clause for arrays in C or C++ is only supported for OpenMP 4.5 onwards, and I don't know of any compiler supporting it at the moment (although I'm sure some do).
It would probably make the code much slower anyway, due to the complex mechanism of managing the reduction aspect.
So the bottom line is: just stick to collapse(3) or less as you need, or parallelise your loop differently.

Can race conditions lower the code's performance?

I'm running the following code for matrix multiplication, the performance of which I'm supposed to measure:
for (int j = 0; j < COLUMNS; j++)
    #pragma omp for schedule(dynamic, 10)
    for (int k = 0; k < COLUMNS; k++)
        for (int i = 0; i < ROWS; i++)
            matrix_r[i][j] += matrix_a[i][k] * matrix_b[k][j];
Yes, I know it's really slow, but that's not the point - it's purely for performance measuring purposes. I'm running 3 versions of the code depending on where I put the #pragma omp directive, and therefore depending on where the parallelization happens. The code is run in Microsoft Visual Studio 2012 in release mode and profiled in CodeXL.
One thing I've noticed from the measurements is that the option in the code snippet (with parallelization before the k loop) is the slowest, then the version with the directive before the j loop, then the one with it before the i loop. The presented version is also the one which calculates a wrong result because of race conditions - multiple threads accessing the same cell of the result matrix at the same time. I understand why the i loop version is the fastest - all the particular threads process only part of the range of the i variable, increasing the temporal locality. However, I don't understand what causes the k loop version to be the slowest - does it have something to do with the fact that it produces the wrong result?
Of course race conditions can slow the code down. When two or more threads access the same part of memory (the same cache line), that part must be loaded into the cache of the given cores over and over again, as the other thread invalidates the content of the cache by writing into it. They compete for a shared resource.
When two variables located too close together in memory are written and read by multiple threads, the result is also a slowdown. This is known as false sharing. In your case it is even worse: they are not just too close, they actually coincide.
Your assumption is correct. But if we are talking about performance, and not just validating your assumption, there is more to the story.
The order of your indexes is a big issue, multi-threaded or not. Given that the distance between mat[x][y] and mat[x][y+1] is one, while the distance between mat[x][y] and mat[x+1][y] is dim(mat[x]), you want x to be the outer index and y the inner one, to have the minimal distance between iterations. Given __[i][j] += __[i][k] * __[k][j];, you can see that the proper order for spatial locality is i -> k -> j.
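For example, the reordered loops (using the same names as your snippet) would be:
for (int i = 0; i < ROWS; i++)
    for (int k = 0; k < COLUMNS; k++)
        for (int j = 0; j < COLUMNS; j++)
            matrix_r[i][j] += matrix_a[i][k] * matrix_b[k][j];   // j innermost: contiguous in matrix_r and matrix_b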
Whatever the order, there is one value which can be saved for later. Given your snippet
for (int j = 0; j < COLUMNS; j++)
    for (int k = 0; k < COLUMNS; k++)
        for (int i = 0; i < ROWS; i++)
            matrix_r[i][j] += matrix_a[i][k] * matrix_b[k][j];
the value matrix_b[k][j] will be fetched from memory once for every i, i.e. ROWS times. You could have started with
for (int j = 0; j < COLUMNS; j++)
    for (int k = 0; k < COLUMNS; k++)
    {
        int temp = matrix_b[k][j];
        for (int i = 0; i < ROWS; i++)
            matrix_r[i][j] += matrix_a[i][k] * temp;
    }
But since you are writing to matrix_r[i][j], the best access to optimize is matrix_r[i][j], given that writing is slower than reading.
Unnecessary write accesses to memory
for (int i = 0; i < ROWS; i++)
    matrix_r[i][j] += matrix_a[i][k] * matrix_b[k][j];
will write to the memory of matrix_r over and over: across the whole computation, each matrix_r[i][j] is written once per k iteration. Using a temporary variable reduces the write accesses to one per element.
for (int i = 0; i < ...; i++)
    for (int j = 0; j < ...; j++)
    {
        int temp = 0;
        for (int k = 0; k < ...; k++)
            temp += matrix_a[i][k] * matrix_b[k][j];
        matrix_r[i][j] = temp;
    }
This decreases write accesses from n^3 to n^2.
Now you are using threads. To maximize the efficiency of multithreading, you should isolate each thread's memory accesses from the others' as much as possible. One way to do that would be to give each thread a column and prefetch that column once. A simple way to achieve this is to work with the transpose of matrix_b, so that
matrix_r[i][j] += matrix_a[i][k] * matrix_b[k][j]; becomes
matrix_r[i][j] += matrix_a[i][k] * matrix_b_trans[j][k];
so that the innermost loop on k always deals with contiguous memory with respect to both matrix_a and matrix_b_trans:
for (int i = 0; i < ROWS; i++)
    for (int j = 0; j < COLS; j++)
    {
        int temp = 0;
        for (int k = 0; k < SAMEDIM; k++)
            temp += matrix_a[i][k] * matrix_b_trans[j][k];
        matrix_r[i][j] = temp;
    }
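To tie this back to the original question: with this layout you can parallelize over i without any race, since each thread then writes only its own rows of matrix_r. A rough sketch:
#pragma omp parallel for
for (int i = 0; i < ROWS; i++)
    for (int j = 0; j < COLS; j++)
    {
        int temp = 0;
        for (int k = 0; k < SAMEDIM; k++)
            temp += matrix_a[i][k] * matrix_b_trans[j][k];
        matrix_r[i][j] = temp;               // each element is written by exactly one thread
    }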