I'm new to CUDA programming and I'm implementing the classical Floyd APSP algorithm. This algorithm consists of 3 nested loops, and all the code inside the two inner loops can be executed in parallel.
As the main parts of my code, here is the kernel:
__global__ void dfloyd(double *dM, size_t k, size_t n)
{
    unsigned int x = threadIdx.x + blockIdx.x * blockDim.x;
    unsigned int y = threadIdx.y + blockIdx.y * blockDim.y;
    unsigned int index = y * n + x;
    double d;
    if (x < n && y < n)
    {
        d = dM[x + k*n] + dM[k + y*n];
        if (d < dM[index])
            dM[index] = d;
    }
}
and here is the part of the main function where the kernels are launched (for readability, I omitted error handling code):
double *dM;
cudaMalloc((void **)&dM, sizeof_M);
cudaMemcpy(dM, hM, sizeof_M, cudaMemcpyHostToDevice);

int dimx = 32;
int dimy = 32;
dim3 block(dimx, dimy);
dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);

for (size_t k = 0; k < n; k++)
{
    dfloyd<<<grid, block>>>(dM, k, n);
    cudaDeviceSynchronize();
}
cudaMemcpy(hM, dM, sizeof_M, cudaMemcpyDeviceToHost);
[For clarity: dM refers to the distance matrix stored on the device side, hM to the one on the host side, and n to the number of nodes.]
The kernels inside the k-loop have to be executed serially, which is why I call cudaDeviceSynchronize() after each kernel launch.
However, I notice that moving this synchronization call outside the loop leads to the same result.
Now, my question. Do the two following pieces of code
for (size_t k = 0; k < n; k++)
{
    dfloyd<<<grid, block>>>(dM, k, n);
    cudaDeviceSynchronize();
}
and
for (size_t k = 0; k < n; k++)
{
    dfloyd<<<grid, block>>>(dM, k, n);
}
cudaDeviceSynchronize();
are equivalent?
They are not equivalent, but they will give the same results. The first one makes the host wait after each kernel call until the kernel has returned, while the second one makes it wait only once.
Perhaps the confusing part is why it works: in CUDA, two consecutive kernel launches on the same stream (in your case, the default stream) are guaranteed to execute serially.
Performance-wise, the second version is advised, as synchronization with the host adds overhead.
Edit: in this specific case, you do not even need to call cudaDeviceSynchronize(), because the cudaMemcpy will synchronize.
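To make that concrete, here is a minimal sketch of the recommended launch pattern, assuming the same dfloyd kernel, grid/block configuration, and sizeof_M as above:

// Launch all n kernels into the default stream; stream ordering
// guarantees they execute one after another on the device.
for (size_t k = 0; k < n; k++)
    dfloyd<<<grid, block>>>(dM, k, n);

// This blocking device-to-host copy waits for every queued kernel
// to finish, so no explicit cudaDeviceSynchronize() is needed.
cudaMemcpy(hM, dM, sizeof_M, cudaMemcpyDeviceToHost);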
I have the following code that I'm trying to implement in CUDA, but I'm having a problem flattening a 3D array to 1D in CUDA.
C++ code:
for (int i = 0; i < w; i++)
    for (int j = 0; j < h; j++)
        for (int k = 0; k < d; k++)
            arr[h*w*i + w*j + k] = (h*w*i + w*j + k) * 2;
This is what I have so far in CUDA:
int w = h = d;
int N = 64;

__global__ void getIndex(float* A)
{
    int i = blockIdx.x;
    int j = blockIdx.y;
    int k = blockIdx.z;
    A[h*w*i + w*j + k] = h*w*i + w*j + k;
}
int main(int argc, char **argv)
{
    float *d_A;
    cudaMalloc((void **)&d_A, w * h * d * sizeof(float));
    getIndex<<<N, 1>>>(d_A);
}
But I'm not getting the result I'm expecting, and I do not know how to get the right i, j and k indices.
Consider a 3D problem of size w x h x d. (This could be a simple array which has to be set like in your question, or any other 3D problem that is easy to parallelize.) I will use your simple set-task for demonstration purposes.
The easiest way to handle this with a CUDA kernel is to launch one thread per array entry, that is w*h*d threads. This answer discusses why one thread per element may not always be the best solution.
Now let us have a look at the following lines of code
dim3 numThreads(w,h,d);
getIndex <<<1, numThreads>>> (d_A, w, h, d);
Here we are launching a kernel with a total of w*h*d threads.
The kernel can then be implemented as
__global__ void getIndex(float* A, int w, int h, int d) // we actually do not need w
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    int k = threadIdx.z;
    A[h*d*i + d*j + k] = h*d*i + d*j + k;
}
But there is a problem with this kernel and the kernel call: the number of threads per thread block is limited (and the number of threads in a specific direction is also bounded; the z direction is generally the most restricted). Since we are only launching one thread block, our problem size cannot exceed these limits (e.g. w*h*d <= 1024).
This is what thread blocks are for. You can practically launch a kernel with as many threads as you want. (Strictly speaking this is not true, but the limits on the maximum number of thread blocks are not likely to be exhausted.)
Calling the kernel this way:
dim3 numBlocks(w/8,h/8,d/8);
dim3 numThreads(8,8,8);
getIndex <<<numBlocks, numThreads>>> (d_A, w, h, d);
will launch the kernel with w/8 * h/8 * d/8 thread blocks, where every block contains 8*8*8 threads. So in total w*h*d threads will be launched.
Now we have to adjust our kernel accordingly:
__global__ void getIndex(float* A, int w, int h, int d) // we actually do not need w
{
    int bx = blockIdx.x;
    int by = blockIdx.y;
    int bz = blockIdx.z;
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int tz = threadIdx.z;
    A[h*d*(8*bx + tx) + d*(8*by + ty) + (8*bz + tz)] = h*d*(8*bx + tx) + d*(8*by + ty) + (8*bz + tz);
}
Note:
You can write a more general kernel by using blockDim.x instead of the fixed size 8, and by calculating w via gridDim.x*blockDim.x. The other two dimensions are handled likewise.
In the proposed example, all three dimensions w, h and d have to be multiples of 8. You can also generalize the kernel to allow arbitrary dimensions. (Then you have to pass all three dimensions to the kernel and check whether the calculated position is still within the problem bounds; see the sketch after these notes.)
As already mentioned, it may be more efficient to process more than one entry of the array per thread. This again has to be considered when calling the kernel. A wrapper function which takes the problem size and the data and calls the kernel with the right block and thread configuration may be useful.
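Here is a minimal sketch of such a generalized kernel and launch, assuming the same set-task as above; the 8x8x8 block shape is just an illustrative choice:

__global__ void getIndex(float* A, int w, int h, int d)
{
    // Global position of this thread in each dimension
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    // Guard against threads that fall outside the problem size
    if (i < w && j < h && k < d)
        A[h*d*i + d*j + k] = h*d*i + d*j + k;
}

// Host-side launch: round the grid up so w, h and d need not be multiples of 8
dim3 numThreads(8, 8, 8);
dim3 numBlocks((w + numThreads.x - 1) / numThreads.x,
               (h + numThreads.y - 1) / numThreads.y,
               (d + numThreads.z - 1) / numThreads.z);
getIndex<<<numBlocks, numThreads>>>(d_A, w, h, d);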
I am new to C++/CUDA. I tried implementing the parallel algorithm "reduce", with the ability to handle any input size and thread count without increasing the asymptotic parallel runtime, by recursing over the output of the kernel (in the kernel wrapper).
For example, in Implementing Max Reduce in Cuda, the top answer's implementation will essentially be sequential when the thread count is small enough.
However, I keep getting a "Segmentation fault" when I compile and run it:
>> nvcc -o mycode mycode.cu
>> ./mycode
Segmentation fault.
Compiled on a K40 with CUDA 6.5.
Here is the kernel; it is basically the same as in the SO post I linked, except that the out-of-bounds check is different:
#include <stdio.h>

/* -------- KERNEL -------- */
__global__ void reduce_kernel(float * d_out, float * d_in, const int size)
{
    // position and threadId
    int pos = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;
    // do reduction in global memory
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1)
    {
        if (tid < s)
        {
            if (pos + s < size) // Handling out of bounds
            {
                d_in[pos] = d_in[pos] + d_in[pos + s];
            }
        }
    }
    // only thread 0 writes result, as thread
    if (tid == 0)
    {
        d_out[blockIdx.x] = d_in[pos];
    }
}
The kernel wrapper I mentioned, which handles the case where 1 block will not contain all of the data:
/* -------- KERNEL WRAPPER -------- */
void reduce(float * d_out, float * d_in, const int size, int num_threads)
{
    // setting up blocks and intermediate result holder
    int num_blocks = ((size) / num_threads) + 1;
    float * d_intermediate;
    cudaMalloc(&d_intermediate, sizeof(float)*num_blocks);
    // recursively solving, will run approximately log base num_threads times.
    do
    {
        reduce_kernel<<<num_blocks, num_threads>>>(d_intermediate, d_in, size);
        // updating input to intermediate
        cudaMemcpy(d_in, d_intermediate, sizeof(float)*num_blocks, cudaMemcpyDeviceToDevice);
        // Updating num_blocks to reflect how many blocks we now want to compute on
        num_blocks = num_blocks / num_threads + 1;
        // updating intermediate
        cudaMalloc(&d_intermediate, sizeof(float)*num_blocks);
    }
    while (num_blocks > num_threads); // if it is too small, compute rest.
    // computing rest
    reduce_kernel<<<1, num_blocks>>>(d_out, d_in, size);
}
Main program to initialize in/out and create bogus data for testing.
/* -------- MAIN -------- */
int main(int argc, char **argv)
{
    // Setting num_threads
    int num_threads = 512;
    // Making bogus data and setting it on the GPU
    const int size = 1024;
    const int size_out = 1;
    float * d_in;
    float * d_out;
    cudaMalloc(&d_in, sizeof(float)*size);
    cudaMalloc((void**)&d_out, sizeof(float)*size_out);
    const int value = 5;
    cudaMemset(d_in, value, sizeof(float)*size);
    // Running kernel wrapper
    reduce(d_out, d_in, size, num_threads);
    printf("sum is element is: %.f", d_out[0]);
}
There are a few things I would point out with your code.
As a general rule/boilerplate, I always recommend using proper CUDA error checking and running your code with cuda-memcheck any time you are having trouble with a CUDA code. However, those methods wouldn't help much with the seg fault, although they may help later (see below).
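As an aside, here is a minimal sketch of such an error-checking macro (a common pattern; the name cudaCheck is just an illustrative choice):

#include <stdio.h>
#include <stdlib.h>

// Wrap every CUDA runtime call, e.g. cudaCheck(cudaMalloc(&p, bytes));
#define cudaCheck(call)                                                \
    do {                                                               \
        cudaError_t err = (call);                                      \
        if (err != cudaSuccess) {                                      \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                \
                    cudaGetErrorString(err), __FILE__, __LINE__);      \
            exit(EXIT_FAILURE);                                        \
        }                                                              \
    } while (0)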
The actual seg fault is occurring on this line:
printf("sum is element is: %.f", d_out[0]);
Here you've broken a cardinal CUDA programming rule: host pointers must not be dereferenced in device code, and device pointers must not be dereferenced in host code. The latter condition applies here. d_out is a device pointer (allocated via cudaMalloc). Such pointers have no meaning if you attempt to dereference them in host code, and doing so will lead to a seg fault.
The solution is to copy the data back to the host before printing it out:
float result;
cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
printf("sum is element is: %.f", result);
Using cudaMalloc in a loop on the same variable, without any cudaFree operations, is not good practice. It may lead to out-of-memory errors in long-running loops, and may also lead to memory leaks if such a construct is used in a larger program:
do
{
    ...
    cudaMalloc(&d_intermediate, sizeof(float)*num_blocks);
}
while...
In this case, I think a better approach and trivial fix would be to cudaFree d_intermediate right before you re-allocate it:
do
{
    ...
    cudaFree(d_intermediate);
    cudaMalloc(&d_intermediate, sizeof(float)*num_blocks);
}
while...
This might not be doing what you think it is:
const int value = 5;
cudaMemset(d_in, value, sizeof(float)*size);
Probably you are aware of this, but cudaMemset, like memset, operates on byte quantities. So you are filling the d_in array with a value corresponding to 0x05050505 (and I have no idea what that bit pattern corresponds to when interpreted as a float quantity). Since you refer to bogus values, you may be cognizant of this already. But it's a common error (e.g. if you were actually trying to initialize the array with the value of 5 in every float location), so I thought I would point it out.
Your code has other issues as well (which you will discover if you make the above fixes then run your code with cuda-memcheck). To learn about how to do good parallel reductions, I would recommend studying the CUDA parallel reduction sample code and presentation. Parallel reductions in global memory are not recommended for performance reasons.
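For reference, here is a minimal sketch of a per-block shared-memory reduction, modeled loosely on the CUDA reduction sample rather than on your code; it assumes blockDim.x is a power of two:

__global__ void block_sum(const float *d_in, float *d_out, int size)
{
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Load one element per thread into shared memory (0 if out of bounds)
    sdata[tid] = (i < size) ? d_in[i] : 0.0f;
    __syncthreads();

    // Tree reduction in shared memory
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    // Thread 0 writes this block's partial sum
    if (tid == 0)
        d_out[blockIdx.x] = sdata[0];
}

// Launch with dynamic shared memory sized to the block:
// block_sum<<<num_blocks, num_threads, num_threads*sizeof(float)>>>(d_in, d_out, size);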
For completeness, here are some of the additional issues I found:
Your kernel code needs an appropriate __syncthreads() statement to ensure that the work of all threads in a block is complete before any threads go on to the next iteration of the for-loop.
Your final write to global memory in the kernel needs to also be conditioned on the read-location being in-bounds. Otherwise, your strategy of always launching an extra block would allow the read from this line to be out-of-bounds (cuda-memcheck will show this).
The reduction logic in your loop in the reduce function is generally messed up and needed to be re-worked in several ways.
I'm not saying this code is defect-free, but it seems to work for the given test case and produce the correct answer (1024):
#include <stdio.h>

/* -------- KERNEL -------- */
__global__ void reduce_kernel(float * d_out, float * d_in, const int size)
{
    // position and threadId
    int pos = blockIdx.x * blockDim.x + threadIdx.x;
    int tid = threadIdx.x;
    // do reduction in global memory
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1)
    {
        if (tid < s)
        {
            if (pos + s < size) // Handling out of bounds
            {
                d_in[pos] = d_in[pos] + d_in[pos + s];
            }
        }
        __syncthreads();
    }
    // only thread 0 writes result, as thread
    if ((tid == 0) && (pos < size))
    {
        d_out[blockIdx.x] = d_in[pos];
    }
}

/* -------- KERNEL WRAPPER -------- */
void reduce(float * d_out, float * d_in, int size, int num_threads)
{
    // setting up blocks and intermediate result holder
    int num_blocks = ((size) / num_threads) + 1;
    float * d_intermediate;
    cudaMalloc(&d_intermediate, sizeof(float)*num_blocks);
    cudaMemset(d_intermediate, 0, sizeof(float)*num_blocks);
    int prev_num_blocks;
    // recursively solving, will run approximately log base num_threads times.
    do
    {
        reduce_kernel<<<num_blocks, num_threads>>>(d_intermediate, d_in, size);
        // updating input to intermediate
        cudaMemcpy(d_in, d_intermediate, sizeof(float)*num_blocks, cudaMemcpyDeviceToDevice);
        // Updating num_blocks to reflect how many blocks we now want to compute on
        prev_num_blocks = num_blocks;
        num_blocks = num_blocks / num_threads + 1;
        // updating intermediate
        cudaFree(d_intermediate);
        cudaMalloc(&d_intermediate, sizeof(float)*num_blocks);
        size = num_blocks*num_threads;
    }
    while (num_blocks > num_threads); // if it is too small, compute rest.
    // computing rest
    reduce_kernel<<<1, prev_num_blocks>>>(d_out, d_in, prev_num_blocks);
}

/* -------- MAIN -------- */
int main(int argc, char **argv)
{
    // Setting num_threads
    int num_threads = 512;
    // Making non-bogus data and setting it on the GPU
    const int size = 1024;
    const int size_out = 1;
    float * d_in;
    float * d_out;
    cudaMalloc(&d_in, sizeof(float)*size);
    cudaMalloc((void**)&d_out, sizeof(float)*size_out);
    //const int value = 5;
    //cudaMemset(d_in, value, sizeof(float)*size);
    float * h_in = (float *)malloc(size*sizeof(float));
    for (int i = 0; i < size; i++) h_in[i] = 1.0f;
    cudaMemcpy(d_in, h_in, sizeof(float)*size, cudaMemcpyHostToDevice);
    // Running kernel wrapper
    reduce(d_out, d_in, size, num_threads);
    float result;
    cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum is element is: %.f\n", result);
}
I need to execute a function about 10^11 times. The function is self-contained and requires one integer as input; let's call it f(n). The range of n is in fact 0 < n < 10^11. We can ignore inclusion of the endpoints; I just need the concept of running something of this magnitude, in terms of indexes, on CUDA.
I want to run this function using CUDA, but I have trouble conceptually. Namely, I know how to simulate my n, mentioned above, using the block and thread indexes, as shown in slide 40 of the nVidia tutorial. But what happens when n > TotalNumberOfThreadsPer_CUDA_Call?
Essentially, do the thread count and block count reset for every call I make to run functions on CUDA? If so, is there a simple way to simulate n, as described earlier, for arbitrarily large n?
Thanks.
A common pattern when you want to process more elements than there are threads is to simply loop over your data in grid-sized chunks:
__global__ void kernel(int* data, size_t size) {
    for (size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
         idx < size;
         idx += gridDim.x * blockDim.x) {
        // do something with data[idx] ...
    }
}
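For example, a host-side sketch of launching that grid-stride kernel (d_data, numElements and the launch configuration are illustrative; with a grid-stride loop the grid does not need to cover the whole range):

int *d_data;
size_t numElements = 1 << 20;          // example size
cudaMalloc(&d_data, numElements * sizeof(int));

int blockSize = 256;
int gridSize = 1024;                   // a fixed, modest grid is fine here
kernel<<<gridSize, blockSize>>>(d_data, numElements);
cudaDeviceSynchronize();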
Another option is to launch several consecutive kernels with a start offset:
__global__ void kernel(int* data, size_t size, size_t offset) {
    size_t idx = blockIdx.x * blockDim.x + threadIdx.x + offset;
    if (idx < size) {
        // do something with data[idx] ...
    }
}

// Host code
dim3 gridSize = ...;
dim3 blockSize = ...;
for (size_t offset = 0; offset < totalWorkSize; offset += gridSize.x * blockSize.x) {
    kernel<<<gridSize, blockSize>>>(data, totalWorkSize, offset);
}
In both cases, you can process an "arbitrarily large" number of elements. You're still limited by size_t, so for 10^11 elements you will need to compile your code for 64 bits.
If you have to store the data instead of just computing it, you will need to do it in an iterative method. 10^11 values of any type are not going to fit in GPU memory.
I haven't compiled this code, but hopefully you'll get the gist.
__device__ double my_function(int value);

__global__ void my_kernel(double* data, size_t offset, size_t chunk_size) {
    size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
    size_t stride = gridDim.x * blockDim.x;
    while (idx < chunk_size) {
        data[idx] = my_function(idx + offset);
        idx += stride;
    }
}

void runKernel(size_t num_values) {
    size_t block_size = 128;
    size_t grid_size = 1024;
    size_t free_mem, total_mem;
    cudaMemGetInfo(&free_mem, &total_mem);
    // number of doubles that fit into the free device memory
    size_t chunk_size = free_mem / sizeof(double);
    double *data;
    cudaMalloc(&data, chunk_size * sizeof(double));
    for (size_t i = 0; i < num_values; i += chunk_size) {
        my_kernel<<<grid_size, block_size>>>(data, i, chunk_size);
        // copy to host and process
        // or call another kernel on device to process further
    }
    // process remainder of values that need to be run assuming num_values % chunk_size != 0
}
This is a follow-up to a question I had. At the moment, in a CPU version of some code, I have many things that look like the following:
for (int i = 0; i < N; i++) {
    dgemm(A[i], B[i], C[i], Size[i][0], Size[i][1], Size[i][2], Size[i][3], 'N', 'T');
}
where A[i] will be a 2D matrix of some size.
I would like to be able to do this on a GPU using CULA (I'm not just doing multiplies, so I need the linear algebra operations in CULA), so for example:
for (int i = 0; i < N; i++) {
    status = culaDeviceDgemm('T', 'N', Size[i][0], Size[i][0], Size[i][0], alpha, GlobalMat_d[i], Size[i][0], NG_d[i], Size[i][0], beta, GG_d[i], Size[i][0]);
}
But I would like to store my B's on the GPU in advance at the start of the program, as they don't change, so I need to have a vector that contains pointers to the set of vectors that make up my B's.
I currently have the following code, which compiles:
double **GlobalFVecs_d;
double **GlobalFPVecs_d;

extern "C" void copyFNFVecs_(double **FNFVecs, int numpulsars, int numcoeff){

    cudaError_t err;
    GlobalFPVecs_d = (double **)malloc(numpulsars * sizeof(double*));
    err = cudaMalloc( (void ***)&GlobalFVecs_d, numpulsars*sizeof(double*) );
    checkCudaError(err);

    for(int i = 0; i < numpulsars; i++){
        err = cudaMalloc( (void **) &(GlobalFPVecs_d[i]), numcoeff*numcoeff*sizeof(double) );
        checkCudaError(err);
        err = cudaMemcpy( GlobalFPVecs_d[i], FNFVecs[i], sizeof(double)*numcoeff*numcoeff, cudaMemcpyHostToDevice );
        checkCudaError(err);
    }

    err = cudaMemcpy( GlobalFVecs_d, GlobalFPVecs_d, sizeof(double*)*numpulsars, cudaMemcpyHostToDevice );
    checkCudaError(err);
}
But if I now try to access it with:
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 dimGrid; //((G + dimBlock.x - 1) / dimBlock.x, (N + dimBlock.y - 1) / dimBlock.y);
dimGrid.x = (numcoeff + dimBlock.x - 1)/dimBlock.x;
dimGrid.y = (numcoeff + dimBlock.y - 1)/dimBlock.y;

for(int i = 0; i < numpulsars; i++){
    CopyPPFNF<<<dimGrid, dimBlock>>>(PPFMVec_d, GlobalFVecs_d[i], numpulsars, numcoeff, i);
}
It seg faults here. Is this not how to get at the data?
The kernel function that I'm calling is just:
__global__ void CopyPPFNF(double *FNF_d, double *PPFNF_d, int numpulsars, int numcoeff, int thispulsar) {

    // Each thread computes one element of C
    // by accumulating results into Cvalue
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;

    int subrow = row - thispulsar*numcoeff;
    int subcol = row - thispulsar*numcoeff;

    __syncthreads();

    if(row >= (thispulsar+1)*numcoeff || col >= (thispulsar+1)*numcoeff) return;
    if(row < thispulsar*numcoeff || col < thispulsar*numcoeff) return;

    FNF_d[row * numpulsars*numcoeff + col] += PPFNF_d[subrow*numcoeff + subcol];
}
What am I not doing right? Note that eventually I would also like to do as in the first example, calling CULA functions on each GlobalFVecs_d[i], but for now not even this works.
Do you think this is the best way to go about doing this? If it were possible to just pass CULA functions a slice of a large contiguous vector I could do that too, but I don't know whether it supports that.
Cheers
Lindley
change this:
CopyPPFNF<<<dimGrid, dimBlock>>>(PPFMVec_d, GlobalFVecs_d[i], numpulsars, numcoeff, i);
to this:
CopyPPFNF<<<dimGrid, dimBlock>>>(PPFMVec_d, GlobalFPVecs_d[i], numpulsars, numcoeff, i);
and I believe it will work.
Your methodology of handling pointers is mostly correct. However, when you put GlobalFVecs_d[i] in the parameter list, you are forcing the kernel setup code (running on the host) to take GlobalFVecs_d (a device pointer, created with cudaMalloc), add an appropriately scaled i to the pointer value, and then dereference the resultant pointer to retrieve the value to pass as a parameter to the kernel. But we are not allowed to dereference device pointers in host code.
However, because your methodology was mostly correct, you have a convenient parallel array of the same pointers that resides on the host. This array (GlobalFPVecs_d) is something that we are allowed to dereference into, in host code, to retrieve the resultant device pointer, to pass to the kernel.
It's an interesting bug because normally kernels do not seg fault (although they may throw an error), so a seg fault on a kernel invocation line is unusual. But in this case, the seg fault is occurring in the kernel setup code, not the kernel itself.
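For illustration, here is a stripped-down sketch of that pattern with hypothetical names (not your actual code), assuming n matrices of len doubles each:

#include <cstdlib>

__global__ void scaleOne(double *m, int len)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < len) m[idx] *= 2.0;   // operate on one matrix
}

void setup(int n, int len)
{
    // Host-side "shadow" array of device pointers: safe to index on the host.
    double **h_ptrs = (double **)malloc(n * sizeof(double*));
    for (int i = 0; i < n; i++)
        cudaMalloc((void **)&h_ptrs[i], len * sizeof(double));

    // Device-side copy of the same pointer array: only dereference it in kernels.
    double **d_ptrs;
    cudaMalloc((void **)&d_ptrs, n * sizeof(double*));
    cudaMemcpy(d_ptrs, h_ptrs, n * sizeof(double*), cudaMemcpyHostToDevice);

    // On the host, always pick an individual device pointer out of the
    // host-side array (h_ptrs[i]), never out of d_ptrs.
    for (int i = 0; i < n; i++)
        scaleOne<<<(len + 255) / 256, 256>>>(h_ptrs[i], len);
}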
Guys, I apologize if the title is confusing. I thought long and hard and couldn't come up with a proper way to phrase the question in a single line. So here's more detail. I am doing a basic image subtraction where the second image has been modified and I need to find the ratio of how much change was done to the image. For this I used the following code. Both images are 128x1024.
for(int i = 0; i < 128; i++)
{
    for(int j = 0; j < 1024; j++)
    {
        den++;
        diff[i * 1024 + j] = orig[i * 1024 + j] - modified[i * 1024 + j];
        if(diff[i * 1024 + j] < error)
        {
            num++;
        }
    }
}
ratio = num/den;
The above code works fine on the CPU, but I want to try to do this in CUDA. For this I can set up CUDA to do the basic subtraction of the images (code below), but I can't figure out how to do the conditional if statement to get my ratio out.
__global__ void calcRatio(float *orig, float *modified, int size, float *result)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if(index < size)
        result[index] = orig[index] - modified[index];
}
So, up to this point it works, but I cannot figure out how to parallelize the num and den counters across threads to calculate the ratio at the end of all the thread executions. To me it feels like the num and den counters are independent of the threads, as every time I have tried to use them it seems they get incremented only once.
Any help will be appreciated as I am just starting out in CUDA and every example I see online never seems to apply to what I need to do.
EDIT: Fixed my naive code. I forgot to type one of the main conditions in the code. It was a long, long day.
for(int i = 0; i < 128; i++)
{
    for(int j = 0; j < 1024; j++)
    {
        if(modified[i * 1024 + j] < 400.0) //400.0 threshold value to ignore noise
        {
            den++;
            diff[i * 1024 + j] = orig[i * 1024 + j] - modified[i * 1024 + j];
            if(diff[i * 1024 + j] < error)
            {
                num++;
            }
        }
    }
}
ratio = num/den;
The operation you need to use to perform global summation across all the threads is known as a "parallel reduction". While you could use atomic operations to do this, I would not recommend it. There is a reduction kernel, and a very good paper discussing the technique, in the CUDA SDK; it is worth reading.
If I were writing code to do what you want, it would probably look like this:
template <int blocksize>
__global__ void calcRatio(float *orig, float *modified, int size, float *result,
                          int *count, const float error)
{
    __shared__ volatile int buff[blocksize];

    int index = threadIdx.x + blockIdx.x * blockDim.x;
    int stride = blockDim.x * gridDim.x;

    // per-thread count of elements with difference below error
    int lcount = 0;
    for (int i = index; i < size; i += stride) {
        float val = orig[i] - modified[i];
        lcount += (val < error);
        result[i] = val;
    }
    buff[threadIdx.x] = lcount;
    __syncthreads();

    // Parallel reduction in shared memory using 1 warp
    if (threadIdx.x < warpSize) {
        for (int i = threadIdx.x + warpSize; i < blocksize; i += warpSize)
            buff[threadIdx.x] += buff[i];
        if (threadIdx.x < 16) buff[threadIdx.x] += buff[threadIdx.x + 16];
        if (threadIdx.x < 8)  buff[threadIdx.x] += buff[threadIdx.x + 8];
        if (threadIdx.x < 4)  buff[threadIdx.x] += buff[threadIdx.x + 4];
        if (threadIdx.x < 2)  buff[threadIdx.x] += buff[threadIdx.x + 2];
        if (threadIdx.x == 0) count[blockIdx.x] = buff[0] + buff[1];
    }
}
The first stanza does what your serial code does: it computes a difference and a thread-local total of elements which are less than error. Note I have written this version so that each thread is designed to process more than one entry of the input data. This has been done to help offset the computational cost of the parallel reduction that follows; the idea is that you would use fewer blocks and threads than there are input data set entries.
The second stanza is the reduction itself, done in shared memory. It is effectively a "tree like" operation, where the set of thread-local subtotals within a single block of threads is first summed down to 32 subtotals, then those subtotals are combined until the final subtotal for the block is reached, and that is then stored as the total for the block. You will wind up with a small list of subtotals in count, one for each block you launched, which can be copied back to the host and the final result you need calculated there.
Please note I coded this in the browser and haven't compiled it, so there might be errors, but it should give an idea about how an "advanced" version of what you are trying to do would work.
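As a rough sketch of the host-side follow-up (hypothetical names: d_count is the device array passed as count, gridsize is the number of blocks launched, and size is the number of pixels considered, which serves as the denominator):

// Copy the per-block subtotals back and finish the sum on the host
int *h_count = (int *)malloc(gridsize * sizeof(int));
cudaMemcpy(h_count, d_count, gridsize * sizeof(int), cudaMemcpyDeviceToHost);

long long num = 0;
for (int b = 0; b < gridsize; b++)
    num += h_count[b];

float ratio = (float)num / (float)size;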
The denominator is pretty simple, since it's just the size.
The numerator is more troublesome, since its value for a given thread depends on all previous values. You're going to have to do that operation serially.
The thing you're looking for is probably atomicAdd. It's very slow, though.
I think you'd find this question relevant. Your num is basically global data.
CUDA array-to-array sum
Alternatively, you could dump the results of the error check into an array. Counting the results could then be parallelized. It would be a little tricky, but I think something like this would scale up: http://tekpool.wordpress.com/2006/09/25/bit-count-parallel-counting-mit-hakmem/
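For what it's worth, here is a minimal sketch of the atomicAdd approach mentioned above (simple, but every qualifying thread contends on the same counter, which is why it can be slow):

__global__ void calcRatio(const float *orig, const float *modified, int size,
                          float *result, unsigned int *num, float error)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index < size) {
        float val = orig[index] - modified[index];
        result[index] = val;
        if (val < error)
            atomicAdd(num, 1u);   // one global counter, incremented atomically
    }
}
// Zero *num with cudaMemset before the launch, then copy it back to the host
// and divide by size to get the ratio.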