CUDA - Optimize mean of matrix rows calculation using shared memory - c++

I am trying to optimize the computation of the mean of each row in my 512 (w) x 1024 (h) image, and then subtract that mean from the row it was computed from. I wrote a piece of code which does it in 1.86 ms, but I want to reduce the runtime. This piece of code works fine, but it does not use shared memory and it relies on for loops. I want to do away with them.
__global__ void subtractMean (const float *__restrict__ img, float *lineImg, int height, int width) {
    // height = 1024, width = 512
    int tidy = threadIdx.x + blockDim.x * blockIdx.x;
    float sum = 0.0f;
    float sumDiv = 0.0f;
    if (tidy < height) {
        for (int c = 0; c < width; c++) {
            sum += img[tidy*width + c];
        }
        sumDiv = (sum/width)/2;
        //__syncthreads();
        for (int cc = 0; cc < width; cc++) {
            lineImg[tidy*width + cc] = img[tidy*width + cc] - sumDiv;
        }
    }
    __syncthreads();
}
I called the above kernel using:
subtractMean <<< 2, 512 >>> (originalImage, rowMajorImage, actualImHeight, actualImWidth);
However, the following code I wrote uses shared memory for the optimization, but it does not work as expected. Any thoughts on what the problem might be?
__global__ void subtractMean (const float *__restrict__ img, float *lineImg, int height, int width) {
    extern __shared__ float perRow[];
    int idx = threadIdx.x;    // set idx along x
    int stride = width/2;
    while (idx < width) {
        perRow[idx] = 0;
        idx += stride;
    }
    __syncthreads();
    int tidx = threadIdx.x;   // set idx along x
    int tidy = blockIdx.x;    // set idx along y
    if (tidy < height) {
        while (tidx < width) {
            perRow[tidx] = img[tidy*width + tidx];
            tidx += stride;
        }
    }
    __syncthreads();
    tidx = threadIdx.x;       // reset idx along x
    tidy = blockIdx.x;        // reset idx along y
    if (tidy < height) {
        float sumAllPixelsInRow = 0.0f;
        float sumDiv = 0.0f;
        while (tidx < width) {
            sumAllPixelsInRow += perRow[tidx];
            tidx += stride;
        }
        sumDiv = (sumAllPixelsInRow/width)/2;
        tidx = threadIdx.x;   // reset idx along x
        while (tidx < width) {
            lineImg[tidy*width + tidx] = img[tidy*width + tidx] - sumDiv;
            tidx += stride;
        }
    }
    __syncthreads();
}
The shared memory function was called using:
subtractMean <<< 1024, 256, sizeof(float)*512 >>> (originalImage, rowMajorImage, actualImHeight, actualImWidth);

Two blocks are hardly enough to saturate the GPU. You are headed in the right direction by utilizing more blocks; however, since you are using Kepler, I would like to present an option that does not use shared memory at all.
Start with 32 threads in a block (this can be changed later using 2D blocks)
With those 32 threads you should do something along the lines of this:
int rowID = blockIdx.x;
int tid = threadIdx.x;
int stride = blockDim.x;
int index = threadIdx.x;
float sum = 0.0;
while (index < width) {
    sum += img[width*rowID + index];
    index += blockDim.x;
}
At this point you will have 32 threads, each holding a partial sum. You next need to add them all together. You can do this without the use of shared memory (since we are within a warp) by utilizing a shuffle reduction. For details on that look here: http://devblogs.nvidia.com/parallelforall/faster-parallel-reductions-kepler/ What you want is the shuffle warp reduce, but you need to change it to use the full 32 threads.
Now that thread 0 in each warp has the sum of its row, you can divide that by the width cast to a float, and broadcast it to the rest of the warp using __shfl(average, 0). http://docs.nvidia.com/cuda/cuda-c-programming-guide/#warp-description
With the average found and the warp synchronized implicitly and explicitly (with shfl), you can continue on in a similar manner with the subtract. A sketch of the whole kernel follows.
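Putting those pieces together, here is a minimal sketch under the assumptions above. It uses the Kepler-era __shfl/__shfl_down intrinsics from the linked article (newer toolkits spell these __shfl_sync/__shfl_down_sync with a full lane mask), and it keeps the question's halved mean, (sum/width)/2:

__global__ void subtractMeanWarp (const float *__restrict__ img, float *lineImg, int height, int width) {
    int rowID = blockIdx.x;   // one 32-thread block (one warp) per row
    int tid = threadIdx.x;
    if (rowID >= height) return;
    // each thread accumulates a strided partial sum over its row
    float sum = 0.0f;
    for (int c = tid; c < width; c += 32)
        sum += img[rowID*width + c];
    // shuffle warp reduction across all 32 lanes
    for (int offset = 16; offset > 0; offset >>= 1)
        sum += __shfl_down(sum, offset);
    // lane 0 now holds the full row sum; form the halved mean and broadcast it
    float sumDiv = __shfl(sum/(float)width/2.0f, 0);
    // subtract the broadcast value from every element of the row
    for (int c = tid; c < width; c += 32)
        lineImg[rowID*width + c] = img[rowID*width + c] - sumDiv;
}

It would be launched with one warp per row, e.g. subtractMeanWarp <<< 1024, 32 >>> (originalImage, rowMajorImage, actualImHeight, actualImWidth);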
Possible further optimizations would be to include more than one warp in a block to improve occupancy, and to manually unroll the loops over the width to improve instruction level parallelism.
Good Luck.

Related

Flattening a 3D array to 1D in CUDA

I have the following code that I'm trying to implement in CUDA, but I'm having a problem flattening a 3D array to 1D in CUDA.
C++ code
for (int i = 0; i < w; i++)
    for (int j = 0; j < h; j++)
        for (int k = 0; k < d; k++)
            arr[h*w*i + w*j + k] = (h*w*i + w*j + k)*2;
This is what I have so far in CUDA:
int w = h = d;
int N = 64;

__global__ void getIndex(float* A)
{
    int i = blockIdx.x;
    int j = blockIdx.y;
    int k = blockIdx.z;
    A[h*w*i + w*j + k] = h*w*i + w*j + k;
}

int main(int argc, char **argv)
{
    float *d_A;
    cudaMalloc((void **)&d_A, w * h * d * sizeof(float));
    getIndex <<<N,1>>> (d_A);
}
But I'm not getting the result I expect; I do not know how to get the right i, j and k indices.
Consider a 3D problem of size w x h x d. (This could be a simple array which has to be set like in your question or any other 3D problem that is easy to parallelize.) I will use your simple set-task for demonstration purpose.
The easiest way to handle this with a CUDA kernel is to launch one thread per array entry, that is w*h*d threads. This answer discusses why one thread per element may not always be the best solution.
Now let us have a look at the following lines of code
dim3 numThreads(w,h,d);
getIndex <<<1, numThreads>>> (d_A, w, h, d);
Here we are launching a kernel with a total of w*h*d threads.
The kernel can then be implemented as
__global__ void getIndex(float* A, int w, int h, int d) // we actually do not need w
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    int k = threadIdx.z;
    A[h*d*i + d*j + k] = h*d*i + d*j + k;
}
But there is a problem with this kernel and the kernel call: the number of threads per thread block is limited (and the number of threads in a specific direction is bounded as well; the z direction is generally the most restricted). As we are only calling one thread block, our problem size cannot exceed these limits (e.g. w*h*d <= 1024).
This is what thread blocks are for. You can practically launch a kernel with as many threads as you want. (This is not strictly true, but the limits on the maximal number of thread blocks are not likely to be exhausted.)
Calling the kernel this way:
dim3 numBlocks(w/8,h/8,d/8);
dim3 numThreads(8,8,8);
getIndex <<<numBlocks, numThreads>>> (d_A, w, h, d);
will launch the kernel with w/8 * h/8 * d/8 thread blocks while every block contains 8*8*8 threads. So in total w*h*d threads will be launched.
Now we have to adjust our kernel accordingly:
__global__ void getIndex(float* A, int w, int h, int d) // we actually do not need w
{
    int bx = blockIdx.x;
    int by = blockIdx.y;
    int bz = blockIdx.z;
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int tz = threadIdx.z;
    A[h*d*(8*bx + tx) + d*(8*by + ty) + (8*bz + tz)] = h*d*(8*bx + tx) + d*(8*by + ty) + (8*bz + tz);
}
Note:
You can write a more general kernel using blockDim.x instead of the fixed size 8, and gridDim.x to calculate w via gridDim.x*blockDim.x. The other two dimensions are handled likewise.
In the proposed example all three dimensions w, h and d have to be multiples of 8. You can also generalize the kernel to allow arbitrary dimensions. (Then you have to pass all three dimensions to the kernel to check whether the calculated position is still in range of the problem.)
As already mentioned, it may be more efficient to edit more than one entry of the array per thread. This again has to be considered when calling the kernel. A wrapper function which takes the problem size and the data and calls the kernel with the right block and thread configuration may be useful; a sketch of the generalized version follows these notes.
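A minimal sketch of that generalized kernel plus a wrapper, under the assumptions of this answer (the rounded-up grid and the range check are the only additions; the wrapper name is illustrative):

__global__ void getIndex(float* A, int w, int h, int d)
{
    // recover the global 3D coordinates from block and thread indices
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    // guard: with a rounded-up grid some threads fall outside the problem
    if (i < w && j < h && k < d)
        A[h*d*i + d*j + k] = h*d*i + d*j + k;
}

void launchGetIndex(float* d_A, int w, int h, int d)
{
    // round the grid up so w, h and d need not be multiples of the block size
    dim3 numThreads(8, 8, 8);
    dim3 numBlocks((w + numThreads.x - 1) / numThreads.x,
                   (h + numThreads.y - 1) / numThreads.y,
                   (d + numThreads.z - 1) / numThreads.z);
    getIndex<<<numBlocks, numThreads>>>(d_A, w, h, d);
}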

CUDA multiple image erosion not working

I'm trying to implement erosion of multiple black (0) and white (255) images with CUDA, using a square (5x5) structuring element. The kernel I implemented takes an unsigned char array buffer in which nImg images of 200x200 px are stored. To allow erosion of multiple images simultaneously I build a grid with a 3D structure:
each block has the dimensions of the strel (5x5)
the grid has height = image_height/blockDim.y, width = image_width/blockDim.x, z = nImg
I tried to implement it by extending this sample.
The problem is that if I store the pixels that a block of threads considers in a buffer shared between the threads of the block (to allow fast memory access), the algorithm doesn't work properly. I tried to change the bindex, which I believe is where the mistake is, but I cannot find a solution.
Any suggestions?
Here's my code:
//strel size
#define STREL_W 5
#define STREL_H 5
// distance from the center of the strel to its width or height
#define R (STREL_H/2)
//size of the 2D region that each block considers, i.e. all the neighbors that each thread in a block considers
#define BLOCK_W (STREL_W+(2*R))
#define BLOCK_H (STREL_H+(2*R))

__global__ void erode_multiple_img_SM(unsigned char * buffer_in,
                                      unsigned char * buffer_out,
                                      int w,
                                      int h ){
    //array stored in shared memory, containing all pixel neighbors that each thread in a block considers
    __shared__ unsigned char fast_acc_arr[BLOCK_W*BLOCK_H];
    // map thread in a 3D structure
    int col = blockIdx.x * STREL_W + threadIdx.x - R;
    int row = blockIdx.y * STREL_H + threadIdx.y - R;
    int plane = blockIdx.z * blockDim.z + threadIdx.z;
    // check whether a foreground px of the strel is not contained in a region of the image with the size of the strel (if even one px is not contained, the image is eroded)
    bool is_contain = true;
    // clamp to edge of image
    col = max(0, col);
    col = min(col, w-1);
    row = max(0, row);
    row = min(row, h-1);
    //map each thread to a one-dim coord, to map the 3D structure (grid) onto the image buffer (1D)
    unsigned int index = (plane * h * w) + (row * w) + col;
    unsigned int bindex = threadIdx.y * blockDim.y + threadIdx.x;
    //each thread copies its pixel of the block to shared memory (shared among the threads of a block)
    fast_acc_arr[bindex] = buffer_in[index];
    __syncthreads();
    //the strel must be contained in the image; thread.x and thread.y are the coords of the center of the mask that corresponds to the strel in the image, and it must be contained in the image
    if((threadIdx.x >= R) && (threadIdx.x < BLOCK_W-R) && (threadIdx.y >= R) && (threadIdx.y < BLOCK_H-R)){
        for(int dy=-R; dy<=R; dy++){
            if(is_contain == false)
                break;
            for (int dx = -R; dx <= R; dx++) {
                //if even one element in the mask differs from the strel value --> the strel is not contained in the mask --> the center of the mask is eroded (and there is no need to consider the other elements of the mask, hence the break)
                if (fast_acc_arr[bindex + (dy * blockDim.x) + dx] != 255){
                    buffer_out[index] = 0;
                    is_contain = false;
                    break;
                }
            }
        }
        // if the strel is contained in the image then the center is not eroded
        if(is_contain == true)
            buffer_out[index] = 255;
    }
}
These are my kernel settings:
dim3 block(5,5,1);
dim3 grid(200/(block.x), 200/(block.y), nImg);
My kernel call:
erode_multiple_img_SM<<<grid,block>>>(dimage_src, dimage_dst, 200, 200);
My image input and output (output shown for 150 buffer elements): [images omitted]
Code without shared memory (low speed):
__global__ void erode_multiple_img(unsigned char * buffer_in,
                                   unsigned char * buffer_out,
                                   int w, int h ){
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int plane = blockIdx.z * blockDim.z + threadIdx.z;
    bool is_contain = true;
    col = max(0, col);
    col = min(col, w-1);
    row = max(0, row);
    row = min(row, h-1);
    for(int dy=-STREL_H/2; dy<=STREL_H/2; dy++){
        if(is_contain == false)
            break;
        for (int dx = -STREL_W/2; dx <= STREL_W/2; dx++) {
            if (buffer_in[(plane * h * w) + (row + dy) * w + (col + dx)] != 255){
                buffer_out[(plane * h * w) + row * w + col] = 0;
                is_contain = false;
                break;
            }
        }
    }
    if(is_contain == true)
        buffer_out[(plane * h * w) + row * w + col] = 255;
}
UPDATED ALGORITHM
I tried to follow this sample to do convolution. I changed the input image, which now has 512x512 size, and I wrote this algorithm:
#define STREL_SIZE 5
#define TILE_W 16
#define TILE_H 16
#define R (STREL_H/2)
#define BLOCK_W (TILE_W+(2*R))
#define BLOCK_H (TILE_H+(2*R))

__global__ void erode_multiple_img_SM_v2(unsigned char * buffer_in,
                                         unsigned char * buffer_out,
                                         int w, int h ){
    // Data cache: threadIdx.x , threadIdx.y
    __shared__ unsigned char data[TILE_W + STREL_SIZE][TILE_W + STREL_SIZE];
    // global mem address of this thread
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int plane = blockIdx.z * blockDim.z + threadIdx.z;
    int gLoc = (plane*h/w) + row*w + col;
    bool is_contain = true;
    // load cache (32x32 shared memory, 16x16 threads blocks)
    // each thread loads four values from global memory into shared mem
    int x, y;   // image based coordinate
    if((col < w) && (row < h)) {
        data[threadIdx.x][threadIdx.y] = buffer_in[gLoc];
        if (threadIdx.y > (h - STREL_SIZE))
            data[threadIdx.x][threadIdx.y + STREL_SIZE] = buffer_in[gLoc + STREL_SIZE];
        if (threadIdx.x > (w - STREL_SIZE))
            data[threadIdx.x + STREL_SIZE][threadIdx.y] = buffer_in[gLoc + STREL_SIZE];
        if ((threadIdx.x > (w - STREL_SIZE)) && (threadIdx.y > (h - STREL_SIZE)))
            data[threadIdx.x + STREL_SIZE][threadIdx.y + STREL_SIZE] = buffer_in[gLoc + 2*STREL_SIZE];
        //wait for all threads to finish reading
        __syncthreads();
        //buffer_out[gLoc] = data[threadIdx.x][threadIdx.y];
        unsigned char min_value = 255;
        for(x = 0; x < STREL_SIZE; x++){
            for(y = 0; y < STREL_SIZE; y++){
                min_value = min( data[threadIdx.x + x][threadIdx.y + y], min_value);
            }
        }
        buffer_out[gLoc] = min_value;
    }
}
My kernel settings now are:
dim3 block(16,16);
dim3 grid(512/(block.x),512/(block.y),nImg);
Input and output images: [images omitted]
It seems that the pixels of the apron are not copied into the output buffer.
You may want to read the following links for a more detailed description and better example code on how to implement an image convolution CUDA kernel function.
http://igm.univ-mlv.fr/~biri/Enseignement/MII2/Donnees/convolutionSeparable.pdf
https://www.evl.uic.edu/sjames/cs525/final.html
Basically using a convolution filter of the size (5 x 5) does not mean setting the size of the thread block to be (5 x 5).
Typically, for a non-separable convolution, you could use a thread block of the size (16 x 16), to calculate a block of (16 x 16) pixels on the output image. To achieve this you need to read a block of ((2+16+2) x (2+16+2)) pixels from the input image to the shared memory, using the (16 x 16) threads collaboratively.
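A minimal sketch of that loading pattern for the 5x5 erosion, for a single image plane (the z/plane index of the multi-image case is left out for clarity; all names here are illustrative, not from the code above):

#define TILE 16
#define RADIUS 2                      // 5x5 strel -> radius 2
#define SM_DIM (TILE + 2*RADIUS)      // 20x20 shared tile

__global__ void erodeTile(const unsigned char *in, unsigned char *out, int w, int h)
{
    __shared__ unsigned char tile[SM_DIM][SM_DIM];
    int outX = blockIdx.x * TILE + threadIdx.x;   // output pixel of this thread
    int outY = blockIdx.y * TILE + threadIdx.y;
    // cooperative load of the (2+16+2) x (2+16+2) input region:
    // the 16x16 threads sweep the 20x20 tile with a stride of 16
    for (int y = threadIdx.y; y < SM_DIM; y += TILE) {
        for (int x = threadIdx.x; x < SM_DIM; x += TILE) {
            int gx = min(max(blockIdx.x * TILE + x - RADIUS, 0), w - 1);  // clamp to edge
            int gy = min(max(blockIdx.y * TILE + y - RADIUS, 0), h - 1);
            tile[y][x] = in[gy*w + gx];
        }
    }
    __syncthreads();
    if (outX < w && outY < h) {
        // erosion of a binary image = minimum over the 5x5 neighborhood
        unsigned char v = 255;
        for (int dy = 0; dy < 2*RADIUS + 1; dy++)
            for (int dx = 0; dx < 2*RADIUS + 1; dx++)
                v = min(v, tile[threadIdx.y + dy][threadIdx.x + dx]);
        out[outY*w + outX] = v;
    }
}

It would be launched with dim3 block(TILE, TILE) and dim3 grid(w/TILE, h/TILE) (rounded up), and extended with a blockIdx.z plane offset for the multi-image case.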

Efficiently find minimum of large array using Opencl

I am working on the implementation of a hierarchical clustering algorithm in OpenCL. For each step, I have to find the minimum value in a very large array (approx. 10^8 entries) so that I know which elements have to be combined into a new cluster. The identification of the minimum value must be done 9999 times. With my current kernels, it takes about 200 seconds to find the minimum value (accumulated over all iterations).
How I approached the problem is by dividing the array into 2560 equally sized fragments (there are 2560 stream processors on my Radeon 7970) and finding the minimum of each fragment individually. Then I run a second kernel that combines these minima into a global minimum.
Is there any more efficient way to approach this problem? The initial idea was to speed up HCA by using OpenCL, but with the amount of time that the identification of the minimum takes, it is much slower than the MATLAB HCA on the CPU. What am I doing wrong?
__kernel void findMinValue(__global float * myArray, __global double * mins, __global int * elementsToWorkOn, __global int * arraysize){
    int gid = get_global_id(0);
    int minloc = 0;
    float mymin = INFINITY;
    int eltoWorkOn = *elementsToWorkOn;
    int offset = gid*eltoWorkOn;
    int target = offset + eltoWorkOn;
    if (offset < *arraysize){
        //make sure the array size is not exceeded
        if (target > *arraysize){
            target = *arraysize;
        }
        //find minimum for the kernel
        for (int i = offset; i < target; i++){
            if (*(myArray + i) < mymin){
                mymin = *(myArray + i);
                minloc = i;
            }
        }
    }
    *(mins + gid * 2) = minloc;
    *(mins + gid * 2 + 1) = mymin;
}
__kernel void getGlobalMin(__global double * mins, __global double * gmin, __global int * pixelsInImage){
    int nWorkitems = 2560;
    float globalMin = INFINITY;
    double globalMinLoc;
    float tempMin;
    for (int i = 0; i < nWorkitems; i++){
        tempMin = *(mins + 2 * i + 1);
        if (tempMin < globalMin){
            globalMin = tempMin;
            globalMinLoc = *(mins + 2 * i);
        }
    }
    *(gmin + 0) = globalMinLoc;
    *(gmin + 1) = globalMin;
}
UPDATE
I redesigned the findMinValue kernel based on your suggestions. The memory access is now coalesced, and I divided the work into work groups so that I can reduce the number of global memory accesses. Before, every work item wrote its minimum value to the global mins buffer. Now only one work item per work group writes one value (i.e. the group minimum). Furthermore, I increased the global work size in order to hide memory latency.
These changes allowed me to reduce the time required for identifying the minima from >200 s to only 59 s! Thank you very much for your help!
Is there anything else I could have missed while optimizing the kernel? Do you have any further suggestions? I could not figure out how to use setArg(). Do I have to pass a pointer to the int value to it (like this: err = clSetKernelArg(kernel[2], 3, sizeof(int), &variable);)? How would the kernel declaration look in this case?
Here is my new kernel:
__kernel void findMinValue(__global float * myArray, __global double * mins, __global int * arraysize, __global int * elToWorkOn, __global int * dummy){
    int gid = get_global_id(0);
    int lid = get_local_id(0);
    int groupID = get_group_id(0);
    int lsize = get_local_size(0);
    int gsize = get_global_id(0);
    int minloc = 0;
    int arrSize = *arraysize;
    int elPerGroup = *elToWorkOn;
    float mymin = INFINITY;
    __local float lmins[128];
    //initialize local memory
    *(lmins + lid) = INFINITY;
    __local int lminlocs[128];
    //this private value will reduce global memory access in the for loop (temp = *(myArray + i);)
    float temp;
    //offset and target of the for loop
    int offset = elPerGroup*groupID + lid;
    int target = elPerGroup*(groupID + 1);
    //prevent target > arrSize (may happen due to rounding errors or arrSize not being a multiple of elPerGroup)
    target = min(arrSize, target);
    //find minimum for the work item
    //offset is different for each lid, leading to sequential memory access
    if (offset < arrSize){
        for (int i = offset; i < target; i += lsize){
            temp = *(myArray + i);
            if (temp < mymin){
                mymin = temp;
                minloc = i;
            }
        }
        //store the work item minimum in local memory
        *(lminlocs + lid) = minloc;
        *(lmins + lid) = mymin;
        //find work group minimum (reduce global memory accesses)
        lsize = lsize >> 1;
        while (lsize > 0){
            if (lid < lsize){
                if (*(lmins + lid) > *(lmins + lid + lsize)){
                    *(lmins + lid) = *(lmins + lid + lsize);
                    *(lminlocs + lid) = *(lminlocs + lid + lsize);
                }
            }
            lsize = lsize >> 1;
        }
    }
    //write group minimum to global buffer
    if (lid == 0){
        *(mins + groupID * 2 + 0) = *(lminlocs + 0);
        *(mins + groupID * 2 + 1) = *(lmins + 0);
    }
}
If each work item iterates through a global array there is ZERO coalescing of reads. If you change it so each work item strides by the warp or wavefront size instead, you'd get a huge speed gain.
It is much more efficient for work items to access consecutive memory rather than scattered memory. In addition, you should sum within work groups first, then pass the result to global memory. And use a single setArg() with an int, not a buffer, for that purpose.
At the very least, you should do it this way:
__kernel void findMinValue(__global float * myArray, __global double * mins, int arraysize){
    int gid = get_global_id(0);
    int minloc = 0;
    float mymin = INFINITY;
    //find minimum for the work item, striding by the global size
    for (int i = gid; i < arraysize; i += get_global_size(0)){
        if (*(myArray + i) < mymin){
            mymin = *(myArray + i);
            minloc = i;
        }
    }
    *(mins + gid * 2) = minloc;
    *(mins + gid * 2 + 1) = mymin;
}
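On the setArg() question from the update: yes, you pass the address of a host-side int to clSetKernelArg, but the kernel receives the value itself, so the parameter is declared as a plain int (as in the kernel just above) rather than a __global pointer. A hedged host-side sketch with illustrative names:

int arraysize = 100000000;   /* hypothetical element count */
/* argument index 2 matches the kernel signature above */
err = clSetKernelArg(kernel, 2, sizeof(int), &arraysize);

The runtime copies the bytes at that address into the kernel argument when the kernel is enqueued; no buffer object is involved.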
The coalesced memory access sped up the calculation by roughly a factor of 4. That was, however, still too slow for our purpose. The brute-force method of recalculating the minima of all entries was just not suitable.
I therefore changed the algorithm so that it retains only the minimum (plus its location) of each row. After the 2 rows and columns are changed in each iteration, the affected row minima are updated if required, and then the global minimum is obtained by finding the minimum of the row minima. Therefore, if we had a 22500*22500 matrix, I only needed to get the minimum of 22500 entries as opposed to 506250000. Of course this implementation requires additional calculations, but in the end we could reduce the amount of time spent searching for minima from 200 s (non-coalesced) over 59 s (coalesced) all the way down to 8 s. A sketch of that final pass over the row minima follows.
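A hedged sketch of that final pass (all names are illustrative, and the cached rowMins/rowMinLocs arrays are assumed to be maintained by the update kernels): a single work group, launched with global size = local size = 256, reduces the row minima with the same local-memory pattern as above.

#define WG 256

__kernel void minOfRowMins(__global const float *rowMins,
                           __global const int *rowMinLocs,
                           const int nRows,
                           __global double *gmin)
{
    __local float lmin[WG];
    __local int lloc[WG];
    int lid = get_local_id(0);
    // each work item scans a strided slice of the cached row minima
    float best = INFINITY;
    int bestLoc = -1;
    for (int i = lid; i < nRows; i += WG) {
        if (rowMins[i] < best) { best = rowMins[i]; bestLoc = rowMinLocs[i]; }
    }
    lmin[lid] = best;
    lloc[lid] = bestLoc;
    barrier(CLK_LOCAL_MEM_FENCE);
    // local-memory tree reduction, as in the kernels above
    for (int s = WG/2; s > 0; s >>= 1) {
        if (lid < s && lmin[lid + s] < lmin[lid]) {
            lmin[lid] = lmin[lid + s];
            lloc[lid] = lloc[lid + s];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    // same output convention as before: location first, then value
    if (lid == 0) { gmin[0] = lloc[0]; gmin[1] = lmin[0]; }
}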
I hope this will help someone in the future :-)

Wrong results with CUDA threads writing to private locations in global memory

EDIT 3:
I need each thread to write to and read from a private location in global memory. Below I post a working code showing my problem. In the following, I'll list the main variables and structures involved.
Variables:
srcArr_h (host) --> srcArr_d (device): array of random floats in the range [0, COLORLEVELS] with dimensions given by ARRDIM
auxD (device): array of dimension ARRDIM * ARRDIM holding the final result on the device
auxH (host): array of dimension ARRDIM * ARRDIM holding the final result on the host
c_glob_d (device): array that reserves a private location of COLORLEVELS floats for each thread, with size given by num_threads * COLORLEVELS
idx (device): identification number of the current thread
My problem: in the kernel, I update c_glob[idx] for each value ic (ic ∈ [0, COLORLEVELS]), i.e. c_glob[idx][ic]. I then use the COLORLEVELS values in c_glob[idx] to compute the final result g0 stored in auxD. My problem is that my final results are wrong. Results copied to auxH show that I get numbers at least one order of magnitude bigger than expected, or even weird numbers suggesting my operation is likely to overflow.
Help: what am I doing wrong? How can I make each thread write to and read from its private location in global memory? Right now I'm debugging with ARRDIM = 512, but my goal is to make it work for ARRDIM ~ 10^4 (thus creating a c_glob array for 10^4 * 10^4 threads). I guess I will have issues with the total number of threads allowed per run. So I was wondering if you could suggest any other solution to my problem.
Thank you.
#include <string>
#include <stdint.h>
#include <iostream>
#include <stdio.h>
#include "cuPrintf.cu"
using namespace std;

#define ARRDIM 512
#define COLORLEVELS 4

__global__ void gpuKernel
(
    float *sa, float *aux,
    size_t memPitchAux, int w,
    float *c_glob
)
{
    float sc_loc[COLORLEVELS];
    float g0 = 0.0f;
    int tidx = blockIdx.x * blockDim.x + threadIdx.x;
    int tidy = blockIdx.y * blockDim.y + threadIdx.y;
    int idx = tidy * memPitchAux/4 + tidx;
    for(int ic=0; ic<COLORLEVELS; ic++)
    {
        sc_loc[ic] = ((float)(ic*ic));
    }
    for(int is=0; is<COLORLEVELS; is++)
    {
        int ic = fabs(sa[tidy*w + tidx]);
        c_glob[tidy * COLORLEVELS + tidx + ic] += 1.0f;
    }
    for(int ic=0; ic<COLORLEVELS; ic++)
    {
        g0 += c_glob[tidy * COLORLEVELS + tidx + ic]*sc_loc[ic];
    }
    aux[idx] = g0;
}
int main(int argc, char* argv[])
{
    /*
     * array src host and device
     */
    int heightSrc = ARRDIM;
    int widthSrc = ARRDIM;
    cudaSetDevice(0);
    float *srcArr_h, *srcArr_d;
    size_t nBytesSrcArr = sizeof(float)*heightSrc * widthSrc;
    srcArr_h = (float *)malloc(nBytesSrcArr);        // Allocate array on host
    cudaMalloc((void **) &srcArr_d, nBytesSrcArr);   // Allocate array on device
    cudaMemset((void*)srcArr_d, 0, nBytesSrcArr);    // set to zero
    int totArrElm = heightSrc*widthSrc;
    for(int ic=0; ic<totArrElm; ic++)
    {
        srcArr_h[ic] = (float)(rand() % COLORLEVELS);
    }
    cudaMemcpy(srcArr_d, srcArr_h, nBytesSrcArr, cudaMemcpyHostToDevice);
    /*
     * auxiliary buffer auxD to save final results
     */
    float *auxD;
    size_t auxDPitch;
    cudaMallocPitch((void**)&auxD, &auxDPitch, widthSrc*sizeof(float), heightSrc);
    cudaMemset2D(auxD, auxDPitch, 0, widthSrc*sizeof(float), heightSrc);
    /*
     * auxiliary buffer auxH allocation + initialization on host
     */
    size_t auxHPitch;
    auxHPitch = widthSrc*sizeof(float);
    float *auxH = (float *) malloc(heightSrc*auxHPitch);
    /*
     * kernel launch specs
     */
    int thpb_x = 16;
    int thpb_y = 16;
    int blpg_x = (int) widthSrc/thpb_x;
    int blpg_y = (int) heightSrc/thpb_y;
    int num_threads = blpg_x * thpb_x + blpg_y * thpb_y;
    /*
     * c_glob: array that reserves a private location of COLORLEVELS floats for each thread
     */
    int cglob_w = COLORLEVELS;
    int cglob_h = num_threads;
    float *c_glob_d;
    size_t c_globDPitch;
    cudaMallocPitch((void**)&c_glob_d, &c_globDPitch, cglob_w*sizeof(float), cglob_h);
    cudaMemset2D(c_glob_d, c_globDPitch, 0, cglob_w*sizeof(float), cglob_h);
    /*
     * kernel launch
     */
    dim3 dimBlock(thpb_x, thpb_y, 1);
    dim3 dimGrid(blpg_x, blpg_y, 1);
    gpuKernel<<<dimGrid,dimBlock>>>(srcArr_d, auxD, auxDPitch, widthSrc, c_glob_d);
    cudaThreadSynchronize();
    cudaMemcpy2D(auxH, auxHPitch,
                 auxD, auxDPitch,
                 auxHPitch, heightSrc,
                 cudaMemcpyDeviceToHost);
    cudaThreadSynchronize();
    float min = auxH[0];
    float max = auxH[0];
    float f;
    string str;
    for(int i=0; i<widthSrc*heightSrc; i++)
    {
        if(min > auxH[i])
            min = auxH[i];
        if(max < auxH[i])
            max = auxH[i];
    }
    cudaFree(srcArr_d);
    cudaFree(auxD);
    cudaFree(c_glob_d);
}
You decided not to show the whole code, nor a reduced version of it reproducing your problem. Therefore, it has not been possible to run tests and verify the possible solution below.
I think you have spotted the source of the problem: multiple threads are trying to write to the same memory locations in parallel. This is a situation leading to race conditions. For an example, see the fourth slide of the presentation "CUDA C: race conditions, atomics, locks, mutex, and warps".
Race conditions have a brute-force solution: atomic functions. They are described in Section B.12 of the CUDA C Programming Guide. So you can try to fix your problem by changing the line
c[ic] += 1.0f;
to
atomicAdd(&c[ic], 1.0f);
You will pay this fix with performance: atomic operations serialize the code to avoid race conditions.
I have mentioned that atomic functions are a brute-force solution to your problem because it may well be that, by properly rethinking the implementation, you can find a way to avoid them. But this is not possible to say as of now due to the very few details you provided. A sketch of one such rethink follows.
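If the goal really is one private row of COLORLEVELS floats per thread, as the question describes, the usual layout is to index c_glob by the thread's linear id times COLORLEVELS, plus ic, instead of adding tidx and ic into one term. A hedged sketch of that addressing inside gpuKernel, assuming a flat (cudaMalloc'd, non-pitched) c_glob of num_threads * COLORLEVELS floats:

// linear thread id over the whole 2D grid (illustrative, not from the original code)
int nThreadsX = gridDim.x * blockDim.x;
int tId = (blockIdx.y * blockDim.y + threadIdx.y) * nThreadsX
        + (blockIdx.x * blockDim.x + threadIdx.x);
int ic = (int)fabsf(sa[tidy*w + tidx]);
c_glob[tId * COLORLEVELS + ic] += 1.0f;   // private row: no other thread touches it

With each thread owning a disjoint slice there is no race condition and no atomics are needed; the price is the num_threads * COLORLEVELS floats of global memory the question already allocates (and num_threads must then really be the total thread count, i.e. a product of the grid dimensions, not a sum).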

CUDA combining thread independent(??) variables during execution

Guys, I apologize if the title is confusing. I thought long and hard and couldn't come up with a proper way to phrase the question in a single line. So here's more detail. I am doing a basic image subtraction where the second image has been modified and I need to find the ratio of how much change was done to the image. For this I used the following code. Both images are 128x1024.
for(int i = 0; i < 128; i++)
{
    for(int j = 0; j < 1024; j++)
    {
        den++;
        diff[i * 1024 + j] = orig[i * 1024 + j] - modified[i * 1024 + j];
        if(diff[i * 1024 + j] < error)
        {
            num++;
        }
    }
}
ratio = num/den;
The above code works fine on the CPU, but I want to try to do this in CUDA. For this, I can set up CUDA to do the basic subtraction of the images (code below), but I can't figure out how to do the conditional if statement to get my ratio out.
__global__ void calcRatio(float *orig, float *modified, int size, float *result)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if(index < size)
        result[index] = orig[index] - modified[index];
}
So, up to this point it works, but I cannot figure out how to parallelize the num and den counters in each thread to calculate the ratio at the end of all the thread executions. It feels to me like the num and den counters are independent of the threads, as every time I have tried to use them it seems they get incremented only once.
Any help will be appreciated as I am just starting out in CUDA and every example I see online never seems to apply to what I need to do.
EDIT: Fixed my naive code. Forgot to type one of the main conditions in the code. It was a long, long day.
for(int i = 0; i < 128; i++)
{
    for(int j = 0; j < 1024; j++)
    {
        if(modified[i * 1024 + j] < 400.0) //400.0 threshold value to ignore noise
        {
            den++;
            diff[i * 1024 + j] = orig[i * 1024 + j] - modified[i * 1024 + j];
            if(diff[i * 1024 + j] < error)
            {
                num++;
            }
        }
    }
}
ratio = num/den;
The operation you need to use to perform global summation across all the threads is known as a "parallel reduction". While you could use atomic operations to do this, I would not recommend it. There is a reduction kernel and a very good paper discussing the technique in the CUDA SDK; it is worth reading.
If I were writing code to do what you want, it would probably look like this:
template <int blocksize>
__global__ void calcRatio(float *orig, float *modified, int size, float *result,
                          int *counts, const float error)
{
    __shared__ volatile int buff[blocksize];

    int index = threadIdx.x + blockIdx.x * blockDim.x;
    int stride = blockDim.x * gridDim.x;

    // grid-stride loop: each thread handles several entries and keeps a local tally
    int count = 0;
    for(int i = index; i < size; i += stride) {
        float val = orig[i] - modified[i];
        count += (val < error);
        result[i] = val;
    }
    buff[threadIdx.x] = count;
    __syncthreads();

    // Parallel reduction in shared memory using 1 warp
    if (threadIdx.x < warpSize) {
        // fold the block's subtotals down to warpSize entries
        for(int i = threadIdx.x + warpSize; i < blocksize; i += warpSize)
            buff[threadIdx.x] += buff[i];
        // warp-synchronous tree reduction of the remaining 32 subtotals
        if (threadIdx.x < 16) buff[threadIdx.x] += buff[threadIdx.x + 16];
        if (threadIdx.x < 8)  buff[threadIdx.x] += buff[threadIdx.x + 8];
        if (threadIdx.x < 4)  buff[threadIdx.x] += buff[threadIdx.x + 4];
        if (threadIdx.x < 2)  buff[threadIdx.x] += buff[threadIdx.x + 2];
        if (threadIdx.x == 0) counts[blockIdx.x] = buff[0] + buff[1];
    }
}
The first stanza does what your serial code does: it computes a difference and a thread-local count of elements which are less than error. Note I have written this version so that each thread is designed to process more than one entry of the input data. This has been done to help offset the computational cost of the parallel reduction that follows; the idea is that you would use fewer blocks and threads than there are input data set entries.
The second stanza is the reduction itself, done in shared memory. It is effectively a "tree like" operation where the set of thread-local subtotals within a single block of threads is first summed down to 32 subtotals, then those subtotals are combined until there is a final subtotal for the block, which is then stored as the total for the block. You will wind up with a small list of subtotals in counts, one for each block you launched, which can be copied back to the host and the final result you need calculated there; a sketch of that host side follows.
Please note I coded this in the browser and haven't compiled it; there might be errors, but it should give an idea about how an "advanced" version of what you are trying to do would work.
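For completeness, a hedged sketch of the host side (block and grid sizes are illustrative, and orig_d, modified_d, result_d and error are assumed to be already set up):

const int blocksize = 256;
const int nblocks = 64;              // deliberately fewer threads than elements
const int size = 128 * 1024;
int *counts_d;
int counts_h[nblocks];
cudaMalloc(&counts_d, nblocks * sizeof(int));

calcRatio<blocksize><<<nblocks, blocksize>>>(orig_d, modified_d, size,
                                             result_d, counts_d, error);

cudaMemcpy(counts_h, counts_d, nblocks * sizeof(int), cudaMemcpyDeviceToHost);
int num = 0;
for (int i = 0; i < nblocks; i++)
    num += counts_h[i];              // final reduction of the per-block counts
float ratio = (float)num / size;     // the denominator is just the size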
The denominator is pretty simple, since it's just the size.
The numerator is more troublesome, since its value for a given thread depends on all previous values. You're going to have to do that operation serially.
The thing you're looking for is probably atomicAdd. It's very slow, though.
I think you'd find this question relevant. Your num is basically global data.
CUDA array-to-array sum
Alternatively, you could dump the results of the error check into an array. Counting the results could then be parallelized. It would be a little tricky, but I think something like this would scale up: http://tekpool.wordpress.com/2006/09/25/bit-count-parallel-counting-mit-hakmem/
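In CUDA specifically, the per-warp version of that parallel-counting idea comes almost for free with warp voting; a hedged sketch (using the pre-sm_70 __ballot spelling, later renamed __ballot_sync with a lane mask, and an assumed int* counter globalNum in global memory):

// inside the kernel, after computing val for this thread:
unsigned int mask = __ballot(val < error);  // one bit per lane passing the check
int warpCount = __popc(mask);               // count the set bits
if ((threadIdx.x & 31) == 0)
    atomicAdd(globalNum, warpCount);        // lane 0 commits the warp's tally

This reduces the atomic traffic to one atomicAdd per warp instead of one per thread.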