Can separate CUDA kernels be merged using __syncthreads()? - c++

Say I have this toy code:
#define N (1024*1024)
#define M (1000000)

__global__ void cudakernel1(float *buf)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    buf[i] = 1.0f * i / N;
    for(int j = 0; j < M; j++)
        buf[i] *= buf[i];
}

__global__ void cudakernel2(float *buf)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    for(int j = 0; j < M; j++)
        buf[i] += buf[i];
}

int main()
{
    float data[N];
    float *d_data;
    cudaMalloc(&d_data, N * sizeof(float));
    cudakernel1<<<N/256, 256>>>(d_data);
    cudakernel2<<<N/256, 256>>>(d_data);
    cudaMemcpy(data, d_data, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}
Can I merge the two kernels like so:
#define N (1024*1024)
#define M (1000000)

__global__ void cudakernel1_plus_2(float *buf)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    buf[i] = 1.0f * i / N;
    for(int j = 0; j < M; j++)
        buf[i] *= buf[i];

    __syncthreads();

    for(int j = 0; j < M; j++)
        buf[i] += buf[i];
}

int main()
{
    float data[N];
    float *d_data;
    cudaMalloc(&d_data, N * sizeof(float));
    cudakernel1_plus_2<<<N/256, 256>>>(d_data);
    cudaMemcpy(data, d_data, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}
Is the general case that two consecutive kernels which take the same block and thread parameters can be merged with an intermediate __syncthreads() true?
(My real case is 6 consecutive non-trivial kernels that have a lot of set-up and tear-down overhead).

The simplest, most general answer is no. I only need to find one example for which the paradigm breaks to support that. Let's remind ourselves that:
__syncthreads() is a block level execution barrier, but not a device-wide execution barrier. The only defined device-wide execution barrier is the kernel launch (assuming we're talking about issuing kernels into the same stream, for sequential execution).
threadblocks of a particular kernel launch can execute in any order.
Let's say we have 2 functions:
Reverse the elements of a vector
Sum the vector elements
Let's assume the vector reversal is not an in-place operation (the output is distinct from the input), and that each threadblock handles a block-sized chunk of the vector, reading the elements and storing to the appropriate location in the output vector.
To keep it really simple, we'll imagine we only have (need) two threadblocks. For the first step, block 0 copies the left hand side of the vector to the right hand side (reversing the order) and block 1 copies right-to-left:
1 2 3 4 5 6 7 8
|blk 0 |blk 1 |
   \   |   /
       X
   /   |   \
  v    |    v
8 7 6 5 4 3 2 1
For the second step, in classical parallel reduction fashion, block zero sums the left hand elements of the output vector, and block 1 sums the right hand elements:
8 7 6 5 4 3 2 1
 \     / \     /
  blk0    blk1
   26      10
As long as the first function is issued in kernel1 and the second function is issued in kernel2, into the same stream after kernel1, this all just works. For each kernel, it does not matter if block 0 executes before block 1, or vice-versa.
If we combine the operations so that we have a single kernel, and block 0 copies/reverses the first half of the vector to the second half of the output vector, then executes a __syncthreads(), then sums the first half of the output vector, things are likely to break. If block 0 executes before block 1, then the first step will be fine (copy/reversal of vector) but the second step will be operating on an output array half that has not been populated yet, because block 1 has not begun executing yet. The computed sum will be wrong.
Without trying to give formal proofs, we can see that in the above case where there is data movement from one block's "domain" to another block's "domain", we run the risk of breaking things, because the previous device-wide sync (kernel launch) was necessary for correctness. However, if we can limit the "domain" of a block so that any data consumed by subsequent operations is produced only by previous operations in that block, then a __syncthreads() may be sufficient to allow this strategy with correctness. (The previous silly example could easily be reworked to allow this, simply by having block 0 be responsible for the first half of the output vector, thus copying from the second half of the input vector, and vice versa for the other block.)
Finally, if we limit data scope to a single thread, then we can make such combinations without even using __syncthreads(). These last two cases might have characteristics of "embarrassingly parallel" problems, which exhibit a high degree of independence.
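For illustration, here is a minimal sketch of the reworked version (my own sketch, assuming two blocks, a power-of-two block size, and n = gridDim.x * blockDim.x elements): each block writes only its own chunk of the output, reading from the opposite half of the input, and then reduces only the chunk it produced, so the block-level __syncthreads() between the two phases is sufficient.
__global__ void reverse_then_blocksum(const float *in, float *out,
                                      float *block_sums, int n)
{
    extern __shared__ float sdata[];              // one float per thread
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;      // index into this block's chunk of out

    // Step 1 (was kernel 1): each block fills its own chunk of `out` by reading
    // the mirrored element of `in`, so no block consumes another block's output.
    out[i] = in[n - 1 - i];
    __syncthreads();                              // block-wide barrier is enough here

    // Step 2 (was kernel 2): classical shared-memory reduction over the chunk
    // this block just produced (assumes blockDim.x is a power of two).
    sdata[tid] = out[i];
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0)
        block_sums[blockIdx.x] = sdata[0];
}

// launched, for example, as:
//   reverse_then_blocksum<<<2, 4, 4 * sizeof(float)>>>(d_in, d_out, d_sums, 8);
With two blocks and the input 1..8, this yields the per-block partial sums 26 and 10 from the example above; combining the entries of block_sums would still be a separate step.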

Related

Cuda number of elements is larger than assigned threads

I am new to CUDA programming.
I am curious that what happens if the number of elements is larger than the number of threads?
In this simple vector_add example
__global__
void add(int n, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = x[i] + y[i];
}
Say the number of array elements is 10,000,000. And we call this function using 64 blocks and 256 threads per block:
int n = 1e7;
int grid_size = 64;
int block_size = 256;
Then only 64*256 = 16384 threads are assigned. What would happen to the rest of the array elements?
what would happen to the rest of the array elements?
Nothing at all. They wouldn't be touched and would remain unchanged. Of course, your x array elements don't change anyway. So we are referring to y here. The values of y[0..16383] would reflect the result of the vector add. The values of y[16384..9999999] would be unchanged.
For this reason (to conveniently handle arbitrary data set sizes independent of the chosen grid size), people sometimes suggest a grid-stride-loop kernel design.
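A minimal sketch of that design (the kernel name is illustrative): each thread starts at its global index and then strides by the total number of threads in the grid, so all n elements are covered no matter how small the grid is.
__global__
void add_grid_stride(int n, float *x, float *y)
{
    // Total number of threads launched across the whole grid.
    int stride = blockDim.x * gridDim.x;
    // Each thread handles elements i, i + stride, i + 2*stride, ...
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        y[i] = x[i] + y[i];
}

// launched, for example, with the same configuration as above:
//   add_grid_stride<<<grid_size, block_size>>>(n, d_x, d_y);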

Problem with initialising 2D vector in C++

I was implementing a solution for this problem to get a feel for the language. My reasoning is as follows:
Notice that the pattern on the diagonal is 2*n+1.
The elements to the left and upwards are alternating arithmetic progressions or additions/subtractions of the elements from the diagonal to the boundary.
Create a 2D vector and instantiate all the diagonal elements. Then create a dummy variable to fill in the remaining parts by adding/subtracting the diagonal elements.
My code is as follows:
#include <iostream>
#include <vector>
using namespace std;

const long value = 1e9;
vector<vector<long>> spiral(value, vector<long> (value));
long temp;

void build(){
    spiral[0][0] = 1;
    for(int i = 1; i < 5e8; i++){
        spiral[i][i] = 2*i+1;
        temp = i;
        long counter = temp;
        while(counter){
            if(temp % 2 == 0){
                spiral[i][counter]++;
                spiral[counter][i]--;
                counter--;
                temp--;
            }else{
                spiral[i][counter]--;
                spiral[counter][i]++;
                counter--;
                temp--;
            }
        }
    }
}

int main(){
    spiral[0][0] = 1;
    build();
    int y, x;
    cin >> y >> x;
    cout << spiral[y][x] << endl;
}
The problem is that the programme doesn't output anything. I can't figure out why my vector won't print any elements. I've tested it with spiral[1][1] and all I get is some obscure assembler message after waiting 5 or 10 minutes. What's wrong with my reasoning?
A long is probably 4 or 8 bytes for you (e.g. commonly 4 bytes on Windows, 4 bytes on x86 Linux, and 8 bytes on x64 Linux), so let's assume 4. 1e9 * 4 is 4 gigabytes of contiguous memory for each vector<long> (value).
Then the outer vector creates another 1e9 copies of that, which is 4 exabytes (or 4 million terabytes) with a 4-byte long, or double that with an 8-byte long, ignoring the overhead size of each std::vector. It is highly unlikely that you have that much memory and swapfile, and being a global this is attempted before main() is called.
So you are not going to be able to store all this data directly, you will need to think about what data actually needs to be stored to get the result you desire.
If you run under a debugger set to stop on exceptions, you might see a std::bad_alloc being thrown, with the call stack indicating the cause (e.g. Visual Studio will display something like "dynamic initializer for 'spiral'" in the call stack). On Linux it is possible the OS will simply kill the process first: Linux can over-commit memory (so new etc. succeeds), and only when a program actually reads or writes the memory does it fail (over-committed, nothing free), at which point the kernel SIGKILLs something to free memory. This doesn't seem entirely predictable; I copy-pasted your code onto Ubuntu 18 and on the command line got "terminate called after throwing an instance of 'std::bad_alloc'".
The problem actually asks you to find an analytical formula for the solution, not to simulate the pattern. All you need to do is to carefully analyze the pattern:
#include <algorithm>  // std::max
#include <cassert>
#include <utility>    // std::swap

unsigned int get_n(unsigned int row, unsigned int col) {
    assert(row >= 1 && col >= 1);
    const auto n = std::max(row, col);
    if (n % 2 == 0)
        std::swap(row, col);
    if (col == n)
        return n * n + 1 - row;
    else
        return (n - 1) * (n - 1) + col;
}
Math is your friend here, not std::vector. One of the constraints of this puzzle is a memory limit of 512MB, but a vector big enough for all the tests would require several GB of memory.
Consider how the square is filled. If you choose the maximum between the given x and y (call it w), you have "delimited" a square of size w². Now you have to consider the outer edge of this square to find the actual index.
E.g. take x = 6 and y = 3. The maximum is 6 (even, remember the zig-zag pattern), so the number is (6 - 1)² + 3 = 28
* * * * * 26
* * * * * 27
* * * * * [28]
* * * * * 29
* * * * * 30
36 35 34 33 32 31
Here, a proof of concept.
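For instance, get_n can be checked directly against the worked example above (a minimal sketch of my own, assuming the 1-based row/column convention used by the code):
#include <algorithm>
#include <cassert>
#include <iostream>
#include <utility>

// get_n exactly as defined in the answer above
unsigned int get_n(unsigned int row, unsigned int col) {
    assert(row >= 1 && col >= 1);
    const auto n = std::max(row, col);
    if (n % 2 == 0)
        std::swap(row, col);
    if (col == n)
        return n * n + 1 - row;
    else
        return (n - 1) * (n - 1) + col;
}

int main() {
    std::cout << get_n(3, 6) << '\n';   // 28: row y = 3, column x = 6
    std::cout << get_n(6, 1) << '\n';   // 36: bottom-left corner of the 6x6 square
    std::cout << get_n(1, 1) << '\n';   // 1: the starting cell
}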

Cache, row major and column major

I've been testing the difference in the time it takes to sum the elements of a matrix in row major order
std::vector<double> v( n * n );

// Timing begins
double sum{ 0.0 };
for (std::size_t i = 0; i < n; i++) {
    for (std::size_t j = 0; j < n; j++) {
        sum += v[i * n + j];
    }
}
// Timing ends
and in column major order
std::vector<double> v( n * n );

// Timing begins
double sum{ 0.0 };
for (std::size_t j = 0; j < n; j++) {
    for (std::size_t i = 0; i < n; i++) {
        sum += v[i * n + j];
    }
}
// Timing ends
The code has been compiled with
g++ -std=c++11 -Ofast -fno-tree-vectorize -DNDEBUG main.cpp -o main
We expect the timings of the row major order (blue) to be significantly faster than the column major order (yellow). If I plot the time it takes to run this algorithm (in nanoseconds) divided by the size in bytes of the array, I get the following graph on my computer, which has a Core i7.
The x-axis displays n, and the y-axis displays the time in nanoseconds for the summation divided by the size (in bytes) of v. Everything seems normal. The huge difference between the two starts around n = 850, for which the size of the matrix is about 6MB, which is exactly the size of my L3 cache. The column major order is 10 times slower than the row major order for large n. I am pleased with the results.
Next thing I do is run the same program on Amazon Web Services where they have an E5-2670. Here are the results of the same program.
The column major order is about 10 times slower than the row major order for 700 <= n <= 2000, but for n >= 2100, the cost per bytes of the column major order suddenly drops and it is just 2 times slower than the row major order!!! Does anyone have an explanation for this strange behaviour?
PS: For those who are interested, the full code is available here: https://www.dropbox.com/s/778hwpuriwqbi6o/InsideLoop.zip?dl=0

OpenCL crash on big 2d range

In my program, I need to run the kernel once on every item of a large 2D array. The program works correctly for small ranges - up to around 50x50, sometimes up to 100x100.
For bigger datasets however, calling the kernel causes the video card driver to crash.
I have tested this program on two computers with different AMD cards, and they exhibit the exact same behaviour. Other, one-dimensional kernels work properly, even for huge datasets of ~10 000 x 10 000 items.
Also, removing the i variable from the matrix[i + (N + 1) * j] expression causes the kernel to work without errors.
Am I setting the range incorrectly, making a mistake in the kernel, or does the problem lie elsewhere?
enqueued range:
cl::EnqueueArgs args(queue,cl::NDRange(offset, offset+1),cl::NDRange(N+1, N),cl::NullRange);
kernel:
void kernel sub(global float* matrix, global const float* vec, int N, int offset) {
    int i = get_global_id(0);
    int j = get_global_id(1);
    matrix[i + (N + 1) * j] -= matrix[i + (N + 1) * offset] * vec[j];
}
One possible reason: if your kernel runs for too long, the driver may kill it (a watchdog timeout). Dice the problem area up into smaller blocks.
Consider this, for a 100x100 input array you will use N=100, hence the maximum value of i in your kernel will be 100 because of the N+1 used in the enqueue args, while the maximum for j will be 99. I have assumed that offset = 0. Therefore i + (N + 1) * j = 100 + 101*99 = 10099 which is outside of your 2D array.
When offset = 1, the minimums for i and j will be 1 and 2 respectively, while the maximums will be 101 and 100. Therefore i + (N + 1) * j = 101 + 101*100 = 10201.
In my experience, GPUs are not very good at catching segmentation faults when accessing global memory. Your attempt at purposefully creating one may work on some cards sometimes but no guarantees.
The problem could be caused by the local work size and global work size. It is important to calculate them properly when using two-dimensional ranges. It could be that for big values your global_id(0) is bigger than what you specified in clEnqueueNDRangeKernel().

Efficient layout and reduction of virtual 2d data (abstract)

I use C++ and CUDA/C and want to write code for a specific problem and I ran into a quite tricky reduction problem.
My experience in parallel programming isn't negligible but quite limited, and I cannot totally foresee the specifics of this problem.
I doubt there is a convenient or even "easy" way to handle the problems I am facing but perhaps I am wrong.
If there are any resources (i.e. articles, books, web-links, ...) or key-words covering this or similar problems, please let me know.
I tried to generalize the whole case as well as possible and keep it abstract instead of posting too much code.
The Layout ...
I have a system of N initial elements and N result elements. (I'll use N=8 as an example, but N can be any integral value greater than three.)
static size_t const N = 8;
double init_values[N], result[N];
I need to calculate almost every (not all, I'm afraid) unique permutation of the init-values without self-interference.
This means calculating f(init_values[0],init_values[1]), f(init_values[0],init_values[2]), ..., f(init_values[0],init_values[N-1]), f(init_values[1],init_values[2]), ..., f(init_values[1],init_values[N-1]), ... and so on.
This is in fact a virtual triangular matrix which has the shape seen in the following illustration.
P     0   1   2   3   4   5   6   7
  |--------------------------------
 0|   x
  |
 1|   0   x
  |
 2|   1   2   x
  |
 3|   3   4   5   x
  |
 4|   6   7   8   9   x
  |
 5|  10  11  12  13  14   x
  |
 6|  15  16  17  18  19  20   x
  |
 7|  21  22  23  24  25  26  27   x
Each element is a function of the respective column and row elements in init_values.
P[i] (= P[row(i)][col(i)]) = f(init_values[col(i)], init_values[row(i)])
i.e.
P[11] (= P[5][1]) = f(init_values[1], init_values[5])
There are (N*N-N)/2 = 28 possible, unique combinations (Note: P[1][5]==P[5][1], so we only have a lower (or upper) triangular matrix) using the example N = 8.
The basic problem
The result array is computed from P as a sum of the row elements minus the sum of the respective column elements.
For example the result at position 3 will be calculated as a sum of row 3 minus the sum of column three.
result[3] = (P[3]+P[4]+P[5]) - (P[9]+P[13]+P[18]+P[24])
result[3] = sum_elements_row(3) - sum_elements_column(3)
I tried to illustrate it in a picture with N = 4.
As a consequence the following is true:
N-1 operations (potential concurrent writes) will be performed on each result[i]
result[i] will have N-(i+1) writes from subtractions and i additions
Outgoing from each P[i][j] there will be a subtraction to r[j] and an addition to r[i]
This is where the main problems come into place:
Using one thread to compute each P and updating the result directly will result in multiple threads trying to write to the same result location (N-1 threads each).
Storing the whole matrix P for a subsequent reduction step on the other hand is very expensive in terms of memory consumption and therefore impossible for very large systems.
The idea of having a unique, shared result vector for each thread-block is impossible, too.
(N of 50k makes 2.5 billion P elements and therefore [assuming a maximum number of 1024 threads per block] a minimal number of 2.4 million blocks consuming over 900GiB of memory if each block has its own result array with 50k double elements.)
I think I could handle reduction for a more static behaviour but this problem is rather dynamic in terms of potential concurrent memory write-access.
(Or is it possible to handle it by some "basic" type of reduction?)
Adding some complications ...
Unfortunately, depending on (arbitrary user) input, which is independent of the initial values, some elements of P need to be skipped.
Let's assume we need to skip permutations P[6], P[14] and P[18]. Therefore we have 24 combinations left, which need to be calculated.
How to tell the kernel which values need to be skipped?
I came up with three approaches, each having notable downsides if N is very large (like several ten thousands of elements).
1. Store all combinations ...
... with their respective row and column index (struct combo { size_t row, col; };) that need to be calculated, in a vector<combo>, and operate on this vector. (This is what the current implementation uses.)
std::vector<combo> elements;
// somehow fill
size_t const M = elements.size();
for (size_t i=0; i<M; ++i)
{
    // do the necessary computations using elements[i].row and elements[i].col
}
This solution consumes lots of memory, since only "several" elements are skipped (it may even be tens of thousands of elements, but that's not much in contrast to several billion in total). However, it avoids
index computations
the lookup of removed elements
for each element of P, which are the downsides of the second approach.
2. Operate on all elements of P and find removed elements
If I want to operate on each element of P and avoid nested loops (which I couldn't reproduce very well in CUDA) I need to do something like this:
size_t M = (N*N-N)/2;
for (size_t i=0; i<M; ++i)
{
    // calculate row and column indices from `i`
    double tmp = sqrt(8.0*double(i+1))/2.0 + 0.5;
    double row_d = floor(tmp);
    size_t current_row = size_t(row_d);
    size_t current_col = size_t(floor(row_d*(tmp-row_d)-0.5));
    // check whether the current combo of row and col is not to be removed
    if (!removes[current_row].exists(current_col))
    {
        // do the necessary computations using current_row and current_col
    }
}
The vector removes is very small in contrast to the elements vector in the first example but the additional computations to obtain current_row, current_col and the if-branch are very inefficient.
(Remember we're still talking about billions of evaluations.)
3. Operate on all elements of P and remove elements afterwards
Another idea I had was to calculate all valid and invalid combinations independently.
But unfortunately, due to summation errors the following statement is true:
calc_non_skipped() != calc_all() - calc_skipped()
Is there a convenient, known, high performance way to get the desired results from the initial values?
I know that this question is rather complicated and perhaps limited in relevance. Nevertheless, I hope some illuminative answers will help me to solve my problems.
The current implementation
Currently this is implemented as CPU Code with OpenMP.
I first set up a vector of the above mentioned combos storing every P that needs to be computed and pass it to a parallel for loop.
Each thread is provided with a private result vector and a critical section at the end of the parallel region is used for a proper summation.
First, I was puzzled for a moment why (N**2 - N)/2 yielded only 21 for N=7 ... but for indices 0-7, N is 8, and there are 28 elements in P. Shouldn't try to answer questions like this so late in the day. :-)
But on to a potential solution: Do you need to keep the array P for any other purpose? If not, I think you can get the result you want with just two intermediate arrays, each of length N: one for the sum of the rows and one for the sum of the columns.
Here's a quick-and-dirty example of what I think you're trying to do (subroutine direct_approach()) and how to achieve the same result using the intermediate arrays (subroutine refined_approach()):
#include <cstdlib>
#include <cstdio>

const int N = 7;
const float input_values[N] = { 3.0F, 5.0F, 7.0F, 11.0F, 13.0F, 17.0F, 23.0F };
float P[N][N]; // Yes, I'm wasting half the array. This way I don't have to fuss with mapping the indices.
float result1[N] = { 0.0F, 0.0F, 0.0F, 0.0F, 0.0F, 0.0F, 0.0F };
float result2[N] = { 0.0F, 0.0F, 0.0F, 0.0F, 0.0F, 0.0F, 0.0F };

float f(float arg1, float arg2)
{
    // Arbitrary computation
    return (arg1 * arg2);
}

float compute_result(int index)
{
    float row_sum = 0.0F;
    float col_sum = 0.0F;
    int row;
    int col;

    // Compute the row sum
    for (col = (index + 1); col < N; col++)
    {
        row_sum += P[index][col];
    }

    // Compute the column sum
    for (row = 0; row < index; row++)
    {
        col_sum += P[row][index];
    }

    return (row_sum - col_sum);
}

void direct_approach()
{
    int row;
    int col;
    for (row = 0; row < N; row++)
    {
        for (col = (row + 1); col < N; col++)
        {
            P[row][col] = f(input_values[row], input_values[col]);
        }
    }

    int index;
    for (index = 0; index < N; index++)
    {
        result1[index] = compute_result(index);
    }
}

void refined_approach()
{
    float row_sums[N];
    float col_sums[N];
    int index;

    // Initialize intermediate arrays
    for (index = 0; index < N; index++)
    {
        row_sums[index] = 0.0F;
        col_sums[index] = 0.0F;
    }

    // Compute the row and column sums
    // This can be parallelized by computing row and column sums
    // independently, instead of in nested loops.
    int row;
    int col;
    for (row = 0; row < N; row++)
    {
        for (col = (row + 1); col < N; col++)
        {
            float computed = f(input_values[row], input_values[col]);
            row_sums[row] += computed;
            col_sums[col] += computed;
        }
    }

    // Compute the result
    for (index = 0; index < N; index++)
    {
        result2[index] = row_sums[index] - col_sums[index];
    }
}

void print_result(int n, float * result)
{
    int index;
    for (index = 0; index < n; index++)
    {
        printf(" [%d]=%f\n", index, result[index]);
    }
}

int main(int argc, char **argv)
{
    printf("Data reduction test\n");

    direct_approach();
    printf("Result 1:\n");
    print_result(N, result1);

    refined_approach();
    printf("Result 2:\n");
    print_result(N, result2);

    return (0);
}
Parallelizing the computation is not so easy, since each intermediate value is a function of most of the inputs. You can compute the sums individually, but that would mean performing f(...) multiple times. The best suggestion I can think of for very large values of N is to use more intermediate arrays, computing subsets of the results, then summing the partial arrays to yield the final sums. I'd have to think about that one when I'm not so tired.
To cope with the skip issue: If it's a simple matter of "don't use input values x, y, and z", you can store x, y, and z in a do_not_use array and check for those values when looping to compute the sums. If the values to be skipped are some function of row and column, you can store those as pairs and check for the pairs.
Hope this gives you ideas for your solution!
Update, now that I'm awake: Dealing with "skip" depends a lot on what data needs to be skipped. Another possibility for the first case - "don't use input values x, y, and z" - a much faster solution for large data sets would be to add a level of indirection: create yet another array, this one of integer indices, and store only the indices of the good inputs. F'r instance, if invalid data is in inputs 2 and 5, the valid array would be:
int valid_indices[] = { 0, 1, 3, 4, 6 };
Iterate over the array valid_indices, and use those indices to retrieve the data from your input array to compute the result. On the other paw, if the values to skip depend on both indices of the P array, I don't see how you can avoid some kind of lookup.
Back to parallelizing - No matter what, you'll be dealing with (N**2 - N)/2 computations of f(). One possibility is to just accept that there will be contention for the sum arrays, which would not be a big issue if computing f() takes substantially longer than the two additions. When you get to very large numbers of parallel paths, contention will again be an issue, but there should be a "sweet spot" balancing the number of parallel paths against the time required to compute f().
If contention is still an issue, you can partition the problem several ways. One way is to compute a row or column at a time: for a row at a time, each column sum can be computed independently and a running total can be kept for each row sum.
Another approach would be to divide the data space and, thus, the computation into subsets, where each subset has its own row and column sum arrays. After each block is computed, the independent arrays can then be summed to produce the values you need to compute the result.
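To make the contention-accepting variant concrete, here is a rough CUDA sketch (the kernel names, the linear-index-to-(row, col) mapping, and the use of atomics are my own illustration, not code from the question or this answer): one thread per lower-triangle element accumulates into global row- and column-sum arrays, and a second trivial kernel forms the differences.
// Placeholder for the real f(init_values[col], init_values[row])
__device__ double f(double a, double b) { return a * b; }

// One thread per lower-triangle element P[row][col] (row > col).
// Note: atomicAdd on double requires compute capability 6.0 or newer.
__global__ void accumulate_sums(const double *init_values, size_t n_pairs,
                                double *row_sums, double *col_sums)
{
    size_t k = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (k >= n_pairs) return;

    // Map the linear index k to (row, col) in the lower triangle:
    // row r covers linear indices [r*(r-1)/2, r*(r+1)/2).
    size_t row = (size_t)((1.0 + sqrt(1.0 + 8.0 * (double)k)) / 2.0);
    size_t col = k - row * (row - 1) / 2;

    double v = f(init_values[col], init_values[row]);
    atomicAdd(&row_sums[row], v);   // contention: up to N-1 threads per entry
    atomicAdd(&col_sums[col], v);
}

// result[i] = sum of row i minus sum of column i
__global__ void combine(const double *row_sums, const double *col_sums,
                        double *result, size_t N)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < N) result[i] = row_sums[i] - col_sums[i];
}
The same structure also maps onto the partitioned idea: give each partition its own row_sums/col_sums arrays and add them together at the end instead of using atomics on a single pair of arrays.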
This probably will be one of those naive and useless answers, but it also might help. Feel free to tell me that I'm utterly and completely wrong and I have misunderstood the whole affair.
So... here we go!
The Basic Problem
It seems to me that you can define your result function a little differently and it will lift at least some contention off your intermediate values. Let's suppose that your P matrix is lower-triangular. If you (virtually) fill the upper triangle with the negative of the lower values (and the main diagonal with all zeros), then you can redefine each element of your result as the sum of a single row: (shown here for N=4, and where -i means the negative of the value in the cell marked as i)
P     0   1   2   3
  |----------------
 0|   x  -0  -1  -3
  |
 1|   0   x  -2  -4
  |
 2|   1   2   x  -5
  |
 3|   3   4   5   x
If you launch independent threads (executing the same kernel) to calculate the sum of each row of this matrix, each thread will write a single result element. It seems that your problem size is large enough to saturate your hardware threads and keep them busy.
The caveat, of course, is that you'll be calculating each f(x, y) twice. I don't know how expensive that is, or how much the memory contention was costing you before, so I cannot judge whether this is a worthwhile trade-off to do or not. But unless f was really really expensive, I think it might be.
Skipping Values
You mention that you might have tens of thousands of elements of the P matrix that you need to ignore in your calculations (effectively skip them).
To work with the scheme I've proposed above, I believe you should store the skipped elements as (row, col) pairs, and you have to add the transposed of each coordinate pair too (so you'll have twice the number of skipped values.) So your example skip list of P[6], P[14] and P[18] becomes P(4,0), P(5,4), P(6,3) which then becomes P(4,0), P(5,4), P(6,3), P(0,4), P(4,5), P(3,6).
Then you sort this list, first based on row and then column. This makes our list P(0,4), P(3,6), P(4,0), P(4,5), P(5,4), P(6,3).
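A small host-side sketch of building that doubled, sorted list (the choice of std::pair and std::sort here is my own illustration):
#include <algorithm>
#include <utility>
#include <vector>

// Input: skipped lower-triangle (row, col) pairs; output: the same pairs plus
// their transposes, sorted by row and then by column.
std::vector<std::pair<unsigned, unsigned>> build_skip_list(
    const std::vector<std::pair<unsigned, unsigned>> &skipped)
{
    std::vector<std::pair<unsigned, unsigned>> list;
    list.reserve(2 * skipped.size());
    for (const auto &rc : skipped) {
        list.push_back(rc);                    // (row, col)
        list.push_back({rc.second, rc.first}); // transposed (col, row)
    }
    std::sort(list.begin(), list.end());       // pairs sort by row first, then column
    return list;
}

// e.g. build_skip_list({{4,0}, {5,4}, {6,3}}) yields
// (0,4) (3,6) (4,0) (4,5) (5,4) (6,3), matching the example above.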
If each row of your virtual P matrix is processed by one thread (or a single instance of your kernel or whatever,) you can pass it the values it needs to skip. Personally, I would store all these in a big 1D array and just pass in the first and last index that each thread would need to look at (I would also not store the row indices in the final array that I passed in, since it can be implicitly inferred, but I think that's obvious.) In the example above, for N = 8, the begin and end pairs passed to each thread will be: (note that the end is one past the final value needed to be processed, just like STL, so an empty list is denoted by begin == end)
Thread 0: 0..1
Thread 1: 1..1 (or 0..0 or whatever)
Thread 2: 1..1
Thread 3: 1..2
Thread 4: 2..4
Thread 5: 4..5
Thread 6: 5..6
Thread 7: 6..6
Now, each thread goes on to calculate and sum all the intermediate values in a row. While it is stepping through the indices of columns, it is also stepping through this list of skipped values and skipping any column number that comes up in the list. This is obviously an efficient and simple operation (since the list is sorted by column too. It's like merging.)
Pseudo-Implementation
I don't know CUDA, but I have some experience working with OpenCL, and I imagine the interfaces are similar (since the hardware they are targeting are the same.) Here's an implementation of the kernel that does the processing for a row (i.e. calculates one entry of result) in pseudo-C++:
double calc_one_result (
    unsigned my_id, unsigned N, double const init_values [],
    unsigned skip_indices [], unsigned skip_begin, unsigned skip_end
)
{
    double res = 0;

    for (unsigned col = 0; col < my_id; ++col)
        // "f" seems to take init_values[column] as its first arg
        res += f (init_values[col], init_values[my_id]);

    for (unsigned row = my_id + 1; row < N; ++row)
        res -= f (init_values[my_id], init_values[row]);

    // At this point, "res" is holding "result[my_id]",
    // including the values that should have been skipped

    unsigned i = skip_begin;

    // The second condition is to check whether we have reached the
    // middle of the virtual matrix or not
    for (; i < skip_end && skip_indices[i] < my_id; ++i)
    {
        unsigned col = skip_indices[i];
        res -= f (init_values[col], init_values[my_id]);
    }

    for (; i < skip_end; ++i)
    {
        unsigned row = skip_indices[i];
        res += f (init_values[my_id], init_values[row]);
    }

    return res;
}
Note the following:
The semantics of init_values and function f are as described by the question.
This function calculates one entry in the result array; specifically, it calculates result[my_id], so you should launch N instances of this.
The only shared variable it writes to is result[my_id]. Well, the above function doesn't write to anything, but if you translate it to CUDA, I imagine you'd have to write to that at the end (a rough sketch of such a translation follows after these notes). However, no one else writes to that particular element of result, so this write will not cause any contention or data race.
The two input arrays, init_values and skip_indices, are shared among all the running instances of this function.
All accesses to data are linear and sequential, except for the skipped values, which I believe is unavoidable.
skip_indices contains a list of indices that should be skipped in each row. Its contents and structure are as described above, with one small optimization. Since there was no need, I have removed the row numbers and left only the columns. The row number will be passed into the function as my_id anyway, and the slice of the skip_indices array that should be used by each invocation is determined using skip_begin and skip_end.
For the example above, the array that is passed into all invocations of calc_one_result will look like this:[4, 6, 0, 5, 4, 3].
As you can see, apart from the loops, the only conditional branch in this code is skip_indices[i] < my_id in the third for-loop. Although I believe this is innocuous and totally predictable, even this branch can be easily avoided in the code. We just need to pass in another parameter called skip_middle that tells us where the skipped items cross the main diagonal (i.e. for row #my_id, the index at skipped_indices[skip_middle] is the first that is larger than my_id.)
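As a rough sketch of that CUDA translation (an assumption on my part, not part of the original answer): compile calc_one_result as a __device__ function and wrap it in a kernel in which each thread computes and writes exactly one result element, with the per-row skip_begin/skip_end slice boundaries passed in as arrays.
// The function above, assumed to be compiled as a __device__ function:
__device__ double calc_one_result (
    unsigned my_id, unsigned N, double const init_values [],
    unsigned skip_indices [], unsigned skip_begin, unsigned skip_end);

// One thread per row / result element; each thread performs the single
// write to its own result[my_id], so there is no contention on the output.
__global__ void compute_results(unsigned N, double const init_values [],
                                unsigned skip_indices [],
                                unsigned const skip_begin [],  // per-row slice start into skip_indices
                                unsigned const skip_end [],    // per-row slice end (one past the last)
                                double result [])
{
    unsigned my_id = blockIdx.x * blockDim.x + threadIdx.x;
    if (my_id >= N)
        return;
    result[my_id] = calc_one_result(my_id, N, init_values, skip_indices,
                                    skip_begin[my_id], skip_end[my_id]);
}

// launched, for example, as:
//   compute_results<<<(N + 255) / 256, 256>>>(N, d_init, d_skip, d_begin, d_end, d_result);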
In Conclusion
I'm by no means an expert in CUDA and HPC. But if I have understood your problem correctly, I think this method might eliminate any and all contentions for memory. Also, I don't think this will cause any (more) numerical stability issues.
The cost of implementing this is:
Calling f twice as many times in total (and keeping track of when it is called for row < col so you can multiply the result by -1.)
Storing twice as many items in the list of skipped values. Since the size of this list is in the thousands (and not billions!) it shouldn't be much of a problem.
Sorting the list of skipped values; which again due to its size, should be no problem.
(UPDATE: Added the Pseudo-Implementation section.)