I'm trying to create a CUDA program that counts the number of true values (defined as non-zero values) in a long vector using a reduction algorithm. I'm getting odd results: I get either 0 or ceil(N/threadsPerBlock)*threadsPerBlock, neither of which is correct.
__global__ void count_reduce_logical(int * l, int * cntl, int N){
    // cntl is assumed to be blockDim.x long and holds the per-block partial counts
    __shared__ int cache[threadsPerBlock];

    int cidx = threadIdx.x;
    int tid = threadIdx.x + blockIdx.x*blockDim.x;
    int cnt_tmp=0;

    while(tid<N){
        if(l[tid]!=0)
            cnt_tmp++;
        tid+=blockDim.x*gridDim.x;
    }

    cache[cidx]=cnt_tmp;
    __syncthreads();

    //reduce
    int k =blockDim.x/2;
    while(k!=0){
        if(threadIdx.x<k)
            cache[cidx] += cache[cidx];
        __syncthreads();
        k/=2;
    }

    if(cidx==0)
        cntl[blockIdx.x] = cache[0];
}
The host code then collects the cntl results and finishes summation. This is going to be part of a larger project where the data is already on the GPU, so it makes sense to do the computations there, if they work correctly.
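(For reference, a minimal sketch of what that host-side finishing step might look like; cntl_d and blocksPerGrid are assumed names, not from the original post.)

// Hedged sketch of the host-side finish: copy the per-block partial counts back
// and add them up on the CPU. blocksPerGrid is assumed to be a compile-time
// constant here, like threadsPerBlock above.
int cntl_h[blocksPerGrid];
cudaMemcpy(cntl_h, cntl_d, blocksPerGrid * sizeof(int), cudaMemcpyDeviceToHost);

int total = 0;
for (int b = 0; b < blocksPerGrid; ++b)
    total += cntl_h[b];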
You can count the nonzero-values with a single line of code using Thrust. Here's a code snippet that counts the number of 1s in a device_vector.
#include <thrust/count.h>
#include <thrust/device_vector.h>
...
// put three 1s in a device_vector
thrust::device_vector<int> vec(5,0);
vec[1] = 1;
vec[3] = 1;
vec[4] = 1;
// count the 1s
int result = thrust::count(vec.begin(), vec.end(), 1);
// result == 3
If your data does not live inside a device_vector you can still use thrust::count by wrapping the raw pointers.
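For example, a hedged sketch of counting the non-zero entries in a raw device allocation (d_vec and N are assumed names) by wrapping the pointer with thrust::device_pointer_cast:

#include <thrust/count.h>
#include <thrust/device_ptr.h>

// d_vec is a raw int* returned by cudaMalloc and N is its length (assumed names).
// Wrap the raw pointer so Thrust algorithms can use it, then count the zeros
// and subtract from N to get the number of non-zero values.
thrust::device_ptr<int> dp = thrust::device_pointer_cast(d_vec);
int nonzero = N - thrust::count(dp, dp + N, 0);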
In your reduction you're doing:
cache[cidx] += cache[cidx];
Don't you want to be poking at the other half of the block's local values?
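In other words (a minimal sketch of the corrected loop, untested against your full program), each active thread should add the element k positions above its own:

// reduce: each of the first k threads folds in the element k slots above it
int k = blockDim.x/2;
while (k != 0) {
    if (threadIdx.x < k)
        cache[cidx] += cache[cidx + k];   // note the "+ k"
    __syncthreads();
    k /= 2;
}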
Related
I am a newbie with CUDA. I'm learning some basic things because I want to use CUDA in another project. I have written this code to add all the elements of an 8x8 square matrix filled with 1's, so the result must be 64.
#include <stdio.h>
#include <stdlib.h>
#include <iostream>

const int SIZE = 64;

__global__ void add_matrix_values(int* matrix, int sum, int c)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    int j = threadIdx.y + blockIdx.x * blockDim.x;
    sum += matrix[i*c+j];
}

int main()
{
    int* device_matrix;
    int* host_matrix;
    int c = 8; //Squared matrix cxc
    int device_c = 8;
    int device_sum = 0;
    int host_sum = 0;

    //Allocate host memory
    host_matrix = (int*)malloc(sizeof(int)*SIZE);

    //Fill the matrix values with 1's
    for(auto i = 0; i < SIZE; i++)
        host_matrix[i] = 1;

    //Allocate device memory
    cudaMalloc((void**) &device_matrix,sizeof(int)*SIZE);
    cudaMalloc((void**) &device_sum, sizeof(int));
    cudaMalloc((void**) &device_c,sizeof(int));

    //Fill device_matrix with host_matrix values
    cudaMemcpy(&device_matrix,&host_matrix,sizeof(int)*SIZE,cudaMemcpyHostToDevice);
    //Initialize device_sum with a 0
    cudaMemcpy(&device_sum,&host_sum,sizeof(int),cudaMemcpyHostToDevice);
    //Initialize device_c with the correct value
    cudaMemcpy(&device_c,&c,sizeof(int),cudaMemcpyHostToDevice);

    //4 blocks with 16 threads every single block ¿Is this correct?
    add_matrix_values<<<4,16>>>(device_matrix, device_sum,device_c);

    cudaMemcpy(&host_sum,&device_sum,sizeof(int),cudaMemcpyDeviceToHost);

    std::cout<<"The value is: "<<host_sum<<std::endl;

    cudaFree(device_matrix);
    free(host_matrix);
    return 0;
}
The result must be 64 but I'm getting wrong numbers.
migue#migue ~/Escritorio ./program
The value is: 32762
migue#migue ~/Escritorio ./program
The value is: 32608
migue#migue ~/Escritorio ./program
The value is: 32559
I don't know what I'm doing wrong. Could it be the grid size and the block size? Or could it be the i and j indexing in the CUDA kernel? I don't understand those terms very well.
There are a number of issues:
You are creating a 1-D grid of 1-D blocks (both the grid configuration and the block configuration are one-dimensional), so your 2-D indexing in kernel code (i, j, or x, y) doesn't make any sense
You are passing sum by value. You cannot retrieve a result that way. Changes in the kernel to sum won't be reflected in the calling environment. This is a C++ concept, not specific to CUDA. Use a properly allocated pointer instead.
In a CUDA multithreading environment, you cannot have multiple threads update the same location/value without any control. CUDA does not sort out that kind of access for you. You must use a parallel reduction technique, and a simplistic approach here could be to use atomics. You can find many questions here on the cuda tag discussing parallel reductions.
You're generally confusing pass by value and pass by pointer. Items passed by value can be ordinary host variables. You generally don't need a cudaMalloc allocation for those. You also don't use cudaMalloc on any kind of variable except a pointer.
Your use of cudaMemcpy is incorrect. There is no need to take the address of the pointers.
The following code has the above items addressed:
$ cat t135.cu
#include <stdio.h>
#include <stdlib.h>
#include <iostream>

const int SIZE = 64;

__global__ void add_matrix_values(int* matrix, int *sum, int c)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    atomicAdd(sum, matrix[i]);
}

int main()
{
    int* device_matrix;
    int* host_matrix;
    int device_c = 8;
    int *device_sum;
    int host_sum = 0;

    //Allocate host memory
    host_matrix = (int*)malloc(sizeof(int)*SIZE);

    //Fill the matrix values with 1's
    for(auto i = 0; i < SIZE; i++)
        host_matrix[i] = 1;

    //Allocate device memory
    cudaMalloc((void**) &device_matrix,sizeof(int)*SIZE);
    cudaMalloc((void**) &device_sum, sizeof(int));

    //Fill device_matrix with host_matrix values
    cudaMemcpy(device_matrix,host_matrix,sizeof(int)*SIZE,cudaMemcpyHostToDevice);
    //Initialize device_sum with a 0
    cudaMemcpy(device_sum,&host_sum,sizeof(int),cudaMemcpyHostToDevice);

    //4 blocks with 16 threads every single block ¿Is this correct?
    add_matrix_values<<<4,16>>>(device_matrix, device_sum,device_c);

    cudaMemcpy(&host_sum,device_sum,sizeof(int),cudaMemcpyDeviceToHost);

    std::cout<<"The value is: "<<host_sum<<std::endl;

    cudaFree(device_matrix);
    free(host_matrix);
    return 0;
}
$ nvcc -o t135 t135.cu
$ cuda-memcheck ./t135
========= CUDA-MEMCHECK
The value is: 64
========= ERROR SUMMARY: 0 errors
$
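As point 3 above mentions, atomics are only the simplest option. A hedged sketch of the more scalable approach, a shared-memory reduction that issues one atomicAdd per block instead of one per element, might look like this (the kernel name is mine; it assumes a power-of-two block size and that SIZE is a multiple of it):

// Hedged alternative to the per-element atomicAdd above: reduce within each
// block in shared memory, then do a single atomicAdd per block.
__global__ void add_matrix_values_reduce(const int* matrix, int* sum)
{
    extern __shared__ int sdata[];
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    sdata[threadIdx.x] = matrix[i];
    __syncthreads();

    for (int offset = blockDim.x / 2; offset > 0; offset >>= 1) {
        if (threadIdx.x < offset)
            sdata[threadIdx.x] += sdata[threadIdx.x + offset];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        atomicAdd(sum, sdata[0]);   // one atomic per block
}

// launch (shared memory sized to one int per thread):
// add_matrix_values_reduce<<<4, 16, 16 * sizeof(int)>>>(device_matrix, device_sum);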
Link to his slides (Mark Harris's "Optimizing Parallel Reduction in CUDA"):
http://docs.nvidia.com/cuda/samples/6_Advanced/reduction/doc/reduction.pdf
Here's his code for the first version of parallel reduction:
__global__ void reduce0(int *g_idata, int *g_odata) {
    extern __shared__ int sdata[];

    // each thread loads one element from global to shared mem
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    sdata[tid] = g_idata[i];
    __syncthreads();

    // do reduction in shared mem
    for(unsigned int s=1; s < blockDim.x; s *= 2) {
        if (tid % (2*s) == 0) {
            sdata[tid] += sdata[tid + s];
        }
        __syncthreads();
    }

    // write result for this block to global mem
    if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}
which he later optimizes. How is this not just summing all of the ints for each thread block and placing the answer in another vector? Is that what it's meant to do? Isn't *g_odata a vector itself since it's placing the sum at each "blockIdx.x" point in the vector? How do you get the vector g_idata to sum to one single number?
How is this not just summing all of the ints for each thread block and placing the answer in another vector?
It is doing exactly that.
Is that what it's meant to do?
Yes.
Isn't g_odata a vector itself since it's placing the sum at each "blockIdx.x" point in the vector?
Yes, it is the vector containing the block-level sums.
How do you get the vector g_idata to sum to one single number?
Call the kernel twice: once on the original data set, and once on the vector output from the previous call (the block-level sums). Note that this second step uses only a single block and requires that you can launch enough threads per block to cover the entire vector, one thread per sum from the previous step. If you review the CUDA sample code that is intended to accompany the presentation you linked, you will find such a calling sequence, for example at lines 304 and 333 of reduction.cpp. The second call to reduce<T> performs the reduction that sums the partial block sums, as indicated in the comment on line 324:
304:    reduce<T>(n, numThreads, numBlocks, whichKernel, d_idata, d_odata);

        // check if kernel execution generated an error
        getLastCudaError("Kernel execution failed");

        if (cpuFinalReduction)
        {
            // sum partial sums from each block on CPU
            // copy result from device to host
            checkCudaErrors(cudaMemcpy(h_odata, d_odata, numBlocks*sizeof(T), cudaMemcpyDeviceToHost));

            for (int i=0; i<numBlocks; i++)
            {
                gpu_result += h_odata[i];
            }

            needReadBack = false;
        }
        else
        {
324:        // sum partial block sums on GPU
            int s=numBlocks;
            int kernel = whichKernel;

            while (s > cpuFinalThreshold)
            {
                int threads = 0, blocks = 0;
                getNumBlocksAndThreads(kernel, s, maxBlocks, maxThreads, blocks, threads);

333:            reduce<T>(s, threads, blocks, kernel, d_odata, d_odata);
Note that the output d_odata from the first reduction at line 304 is passed as the input to the second reduction on line 333.
Also note that the necessity for, and this method of, kernel decomposition is covered on slides 3-5 of the presentation you linked.
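To make the two-pass idea concrete, here is a minimal host-side sketch using reduce0 itself (not the sample's actual host code). d_in, d_partial, d_out and N are assumed names; it also assumes N is a multiple of the block size and that the resulting number of blocks is a power of two small enough to fit in a single block, since reduce0 does no bounds checking:

// Pass 1 produces one partial sum per block; pass 2 runs a single block
// over those partial sums. The third launch parameter is the dynamic
// shared-memory size needed by the extern __shared__ array in reduce0.
int threads = 256;
int blocks  = N / threads;                 // assumes N % threads == 0

reduce0<<<blocks, threads, threads * sizeof(int)>>>(d_in, d_partial);
reduce0<<<1, blocks, blocks * sizeof(int)>>>(d_partial, d_out);

int result;
cudaMemcpy(&result, d_out, sizeof(int), cudaMemcpyDeviceToHost);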
EDIT 3:
I need each thread to write to and read from a private location in global memory. Below I post working code showing my problem. In the following, I'll list the main variables and structures involved.
Variables:
srcArr_h (host) --> srcArr_d (device) : array of random floats in the range [0, COLORLEVELS] with dimensions given by ARRDIM
auxD (device) : array of dimension ARRDIM * ARRDIM holding the final result in device
auxH (host) : array of dimension ARRDIM * ARRDIM holding the final result in host
c_glob_d (device) : array that reserves a private location of COLORLEVELS floats for each thread, with size given by num_threads * COLORLEVELS
idx (device) : identification number of current thread
My problem: in the kernel, I update c_glob[idx] for each value ic (ic ∈ [0, COLORLEVELS]), i.e. c_glob[idx][ic]. I then use the COLORLEVELS entries of c_glob[idx] to compute the final result g0 stored in auxD. My problem is that my final results are wrong. The results copied to auxH show that I get numbers at least one order of magnitude bigger than expected, or even weird numbers suggesting the operation is overflowing.
Help: what am I doing wrong? How can I make each thread write to and read from its own private location in global memory? Right now I'm debugging with ARRDIM = 512, but my goal is to make it work for ARRDIM ~ 10^4, thus creating a c_glob array for 10^4*10^4 threads. I guess I will have issues with the total number of threads allowed per run, so I was wondering if you could suggest any other solution to my problem.
Thank you.
#include <string>
#include <stdint.h>
#include <iostream>
#include <stdio.h>
#include "cuPrintf.cu"

using namespace std;

#define ARRDIM 512
#define COLORLEVELS 4

__global__ void gpuKernel
(
    float *sa, float *aux,
    size_t memPitchAux, int w,
    float *c_glob
)
{
    float sc_loc[COLORLEVELS];
    float g0=0.0f;

    int tidx = blockIdx.x * blockDim.x + threadIdx.x;
    int tidy = blockIdx.y * blockDim.y + threadIdx.y;
    int idx  = tidy * memPitchAux/4 + tidx;

    for(int ic=0; ic<COLORLEVELS; ic++)
    {
        sc_loc[ic] = ((float)(ic*ic));
    }

    for(int is=0; is<COLORLEVELS; is++)
    {
        int ic = fabs(sa[tidy*w +tidx]);
        c_glob[tidy * COLORLEVELS + tidx + ic] += 1.0f;
    }

    for(int ic=0; ic<COLORLEVELS; ic++)
    {
        g0 += c_glob[tidy * COLORLEVELS + tidx + ic]*sc_loc[ic];
    }

    aux[idx] = g0;
}
int main(int argc, char* argv[])
{
    /*
     * array src host and device
     */
    int heightSrc = ARRDIM;
    int widthSrc = ARRDIM;
    cudaSetDevice(0);

    float *srcArr_h, *srcArr_d;
    size_t nBytesSrcArr = sizeof(float)*heightSrc * widthSrc;

    srcArr_h = (float *)malloc(nBytesSrcArr);       // Allocate array on host
    cudaMalloc((void **) &srcArr_d, nBytesSrcArr);  // Allocate array on device
    cudaMemset((void*)srcArr_d,0,nBytesSrcArr);     // set to zero

    int totArrElm = heightSrc*widthSrc;
    for(int ic=0; ic<totArrElm; ic++)
    {
        srcArr_h[ic] = (float)(rand() % COLORLEVELS);
    }

    cudaMemcpy( srcArr_d, srcArr_h,nBytesSrcArr,cudaMemcpyHostToDevice);

    /*
     * auxiliary buffer auxD to save final results
     */
    float *auxD;
    size_t auxDPitch;
    cudaMallocPitch((void**)&auxD,&auxDPitch,widthSrc*sizeof(float),heightSrc);
    cudaMemset2D(auxD, auxDPitch, 0, widthSrc*sizeof(float), heightSrc);

    /*
     * auxiliary buffer auxH allocation + initialization on host
     */
    size_t auxHPitch;
    auxHPitch = widthSrc*sizeof(float);
    float *auxH = (float *) malloc(heightSrc*auxHPitch);

    /*
     * kernel launch specs
     */
    int thpb_x = 16;
    int thpb_y = 16;
    int blpg_x = (int) widthSrc/thpb_x;
    int blpg_y = (int) heightSrc/thpb_y;
    int num_threads = blpg_x * thpb_x + blpg_y * thpb_y;

    /*
     * c_glob: array that reserves a private location of COLORLEVELS floats for each thread
     */
    int cglob_w = COLORLEVELS;
    int cglob_h = num_threads;

    float *c_glob_d;
    size_t c_globDPitch;
    cudaMallocPitch((void**)&c_glob_d,&c_globDPitch,cglob_w*sizeof(float),cglob_h);
    cudaMemset2D(c_glob_d, c_globDPitch, 0, cglob_w*sizeof(float), cglob_h);

    /*
     * kernel launch
     */
    dim3 dimBlock(thpb_x,thpb_y, 1);
    dim3 dimGrid(blpg_x,blpg_y,1);
    gpuKernel<<<dimGrid,dimBlock>>>(srcArr_d,auxD, auxDPitch, widthSrc, c_glob_d);

    cudaThreadSynchronize();

    cudaMemcpy2D(auxH,auxHPitch,
                 auxD,auxDPitch,
                 auxHPitch, heightSrc,
                 cudaMemcpyDeviceToHost);
    cudaThreadSynchronize();

    float min = auxH[0];
    float max = auxH[0];
    float f;
    string str;

    for(int i=0; i<widthSrc*heightSrc; i++)
    {
        if(min > auxH[i])
            min = auxH[i];
        if(max < auxH[i])
            max = auxH[i];
    }

    cudaFree(srcArr_d);
    cudaFree(auxD);
    cudaFree(c_glob_d);
}
You decided not to show the whole code, nor a reduced-size version that reproduces your problem. Therefore, it has not been possible to run tests and verify the possible solution below.
I think you have spotted the source of the problem: multiple threads are trying to write to the same memory locations in parallel. This is a situation leading to race conditions. For an example, see the fourth slide of the presentation "CUDA C: race conditions, atomics, locks, mutex, and warps".
Race conditions have a brute-force solution: atomic functions. They are described in Section B.12 of the CUDA C Programming Guide. So you can try to fix your problem by changing the line
c_glob[tidy * COLORLEVELS + tidx + ic] += 1.0f;
to
atomicAdd(&c_glob[tidy * COLORLEVELS + tidx + ic], 1.0f);
You will pay for this fix with performance: atomic operations serialize the code to avoid race conditions.
I have mentioned that atomic functions are a brute-force solution to your problem because it may be that, by properly rethinking the implementation, you can find a way to avoid them. But it is not possible to say as of now, given the few details you provided.
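For illustration only (a hedged sketch, not tested against the posted program): if the intent is that every thread owns one private row of COLORLEVELS counters, one such rethinking is to index c_glob by a unique per-thread row, which removes the collisions without atomics. This assumes c_glob is allocated flat (plain cudaMalloc) with widthSrc*heightSrc rows of COLORLEVELS floats; tidx, tidy and ic are as in the kernel above.

// Hedged sketch: a per-thread private row in c_glob (assumes a flat allocation
// of widthSrc*heightSrc rows of COLORLEVELS floats, not the pitched one above).
int t = tidy * (gridDim.x * blockDim.x) + tidx;   // unique linear thread index

// Each thread touches only its own row, so no two threads ever share a slot
// and no atomics are needed.
c_glob[t * COLORLEVELS + ic] += 1.0f;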
I am working on an image processing algorithm using CUDA. In my algorithm I want to find the sum of all pixels of an image using a CUDA kernel, so I wrote a kernel to compute the sum of all pixels of a 16-bit grayscale image, but I got the wrong answer.
So I made a simple program in CUDA to find the sum of the numbers 1 to 100; my code is below.
In my code I do not get the exact sum of the numbers 1 to 100 using the GPU, but I do get the exact sum using the CPU. So what have I done wrong in that code?
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <stdio.h>
#include <conio.h>
#include <malloc.h>
#include <limits>
#include <math.h>

using namespace std;

__global__ void computeMeanValue1(double *pixels,double *sum){
    int x = threadIdx.x;
    sum[0] = sum[0] + (pixels[(x)]);
    __syncthreads();
}

int main(int argc, char **argv)
{
    double *data;
    double *dev_data;
    double *dev_total;
    double *total;

    data=new double[(100) * sizeof(double)];
    total=new double[(1) * sizeof(double)];

    double cpuSum=0.0;
    for(int i=0;i<100;i++){
        data[i]=i+1;
        cpuSum=cpuSum+data[i];
    }
    cout<<"CPU total = "<<cpuSum<<std::endl;

    cudaMalloc( (void**)&dev_data, 100 * sizeof(double));
    cudaMalloc( (void**)&dev_total, 1 * sizeof(double));

    cudaMemcpy(dev_data, data, 100 * sizeof(double), cudaMemcpyHostToDevice);

    computeMeanValue1<<<1,100>>>(dev_data,dev_total);
    cudaDeviceSynchronize();

    cudaMemcpy(total, dev_total, 1* sizeof(double), cudaMemcpyDeviceToHost);
    cout<<"GPU total = "<<total[0]<<std::endl;

    cudaFree(dev_data);
    cudaFree(dev_total);
    free(data);
    free(total);

    getch();
    return 0;
}
All your threads are writing to the same memory location at the same time.
sum[0] = sum[0] + (pixels[(x)]);
You can't do this and expect to get the correct result. Your kernel needs to take a different approach to avoid writing to the same memory from different threads. The pattern usually employed for doing this is reduction. Simply put, with a reduction each thread is responsible for summing a block of elements within the array and then storing the result. By employing a series of these reduction operations it's possible to sum the entire contents of the array.
__global__ void block_sum(const float *input,
                          float *per_block_results,
                          const size_t n)
{
    extern __shared__ float sdata[];

    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

    // load input into __shared__ memory
    float x = 0;
    if(i < n)
    {
        x = input[i];
    }
    sdata[threadIdx.x] = x;
    __syncthreads();

    // contiguous range pattern
    for(int offset = blockDim.x / 2;
        offset > 0;
        offset >>= 1)
    {
        if(threadIdx.x < offset)
        {
            // add a partial sum upstream to our own
            sdata[threadIdx.x] += sdata[threadIdx.x + offset];
        }

        // wait until all threads in the block have
        // updated their partial sums
        __syncthreads();
    }

    // thread 0 writes the final result
    if(threadIdx.x == 0)
    {
        per_block_results[blockIdx.x] = sdata[0];
    }
}
Each thread writes to a different location, sdata[threadIdx.x], so there is no race condition. Threads are free to access other elements of sdata because they only read from them, so there are no race conditions there. Note the use of __syncthreads() to ensure that the operations loading data into sdata are complete before the threads start to read it, and the second call to __syncthreads() to ensure that all the summation operations have completed before the final result is copied from sdata[0]. Note that only thread 0 writes its result to per_block_results[blockIdx.x], so there is no race condition there either.
You can find the complete sample code for the above on Google Code (I did not write this). This slide deck has a reasonable summary of reductions in CUDA. It includes diagrams which really help in understanding how the interleaved memory reads and writes do not conflict with each other.
You can find lots of other material on efficient implementations of reduction on GPUs. Ensuring that your implementation makes most efficient use of memory is key to getting the best performance out of a memory bound operation like reduction.
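For completeness, a hedged sketch of how block_sum might be launched; d_in, d_partial and n are assumed names, and the third launch parameter is the dynamic shared-memory size needed by the kernel's extern __shared__ array:

// Hedged usage sketch for block_sum (d_in, d_partial and n are assumed names).
int threads = 256;
int blocks  = (int)((n + threads - 1) / threads);

// one float of shared memory per thread for the extern __shared__ sdata[] array
block_sum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_partial, n);

// finish by summing the per-block results: either copy d_partial back to the
// host and add them there, or run block_sum again on d_partial.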
In GPU code, we have multiple threads executing in parallel. If all of those threads attempt to update the same location in memory, we have undefined behavior, unless we use special operations, called atomics to do the update.
In your case, since sum is updated by all threads, and sum is a double quantity, we can use the special custom atomic function described in the programming guide to accomplish this.
If I replace your kernel code with the following:
__device__ double atomicAdd(double* address, double val)
{
    unsigned long long int* address_as_ull = (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;

    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);

    return __longlong_as_double(old);
}

__global__ void computeMeanValue1(double *pixels, double *sum){
    int x = threadIdx.x;
    atomicAdd(sum, pixels[x]);
}
And initialize the sum value to zero before the kernel:
double gpuSum = 0.0;
cudaMemcpy(dev_total, &gpuSum, sizeof(double), cudaMemcpyHostToDevice);
Then I think you'll get matching results.
As #AdeMiller pointed out, the faster way to perform parallel sums like this is via classical parallel reduction.
There is a CUDA sample code that demonstrates this and an accompanying presentation that covers the methodology.
I am a newbie in OpenCL. However, I understand the C/C++ basics and OOP.
My question is as follows: is it somehow possible to run the sum computation task in parallel? Is it theoretically possible? Below I will describe what I've tried to do:
The task is, for example:
double* values = new double[1000]; //let's pretend it has some random values inside
double sum = 0.0;

for(int i = 0; i < 1000; i++) {
    sum += values[i];
}
What I tried to do in the OpenCL kernel (and I feel it is wrong, because it probably accesses the same "sum" variable from different threads/tasks at the same time):
__kernel void calculate2dim(__global float* vectors1dim,
                            __global float output,
                            const unsigned int count) {
    int i = get_global_id(0);
    output += vectors1dim[i];
}
This code is wrong. I would highly appreciate it if anyone could tell me whether it is theoretically possible to run such tasks in parallel, and if so, how!
If you want to sum the values of your array in a parallel fashion, you should make sure you reduce contention and make sure there's no data dependencies across threads.
Data dependencies will cause threads to have to wait for each other, creating contention, which is what you want to avoid to get true parallelization.
One way you could do that is to split your array into N arrays, each containing some subsection of your original array, and then calling your OpenCL kernel function with each different array.
At the end, when all kernels have done the hard work, you can just sum up the results of each array into one. This operation can easily be done by the CPU.
The key is to not have any dependencies between the calculations done in each kernel, so you have to split your data and processing accordingly.
I don't know if your data has any actual dependencies from your question, but that is for you to figure out.
The piece of code I've provided for reference should do the job.
E.g. you have N elements, and the size of your workgroup is WS = 64. I assume that N is a multiple of 2*WS (this is important: one workgroup calculates the sum of 2*WS elements). Then you need to run the kernel specifying:
globalSizeX = 2*WS*(N/(2*WS));
As a result, the sum array will have partial sums of 2*WS elements each (e.g. sum[1] will contain the sum of the elements whose indices run from 2*WS to 4*WS-1).
If your globalSizeX is 2*WS or less (which means that you have only one workgroup), then you are done. Just use sum[0] as the result.
If not, you need to repeat the procedure, this time using the sum array as the input array and writing the output to another array (create 2 arrays and ping-pong between them). And so on until you have only one workgroup.
Search also for the Hillis-Steele and Blelloch parallel scan algorithms.
This article could be useful as well
Here is the actual example:
// BLK_SIZ is the work-group size (WS in the text above); a __local array needs a
// compile-time size, so it is fixed with a #define here.
#define BLK_SIZ 64

__kernel void par_sum(__global unsigned int* input, __global unsigned int* sum)
{
    int li      = get_local_id(0);
    int groupId = get_group_id(0);

    // each work-item loads two elements of its group's 2*BLK_SIZ chunk
    __local unsigned int our_h[2 * BLK_SIZ];
    our_h[2*li + 0] = input[2*BLK_SIZ*groupId + 2*li + 0];
    our_h[2*li + 1] = input[2*BLK_SIZ*groupId + 2*li + 1];

    // sweep up
    int width  = 2;
    int num_el = 2*BLK_SIZ/width;
    int wby2   = width>>1;
    for(int i = 2*BLK_SIZ>>1; i > 0; i >>= 1)
    {
        barrier(CLK_LOCAL_MEM_FENCE);
        if(li < num_el)
        {
            int idx = width*(li+1) - 1;
            our_h[idx] = our_h[idx] + our_h[idx - wby2];
        }
        width <<= 1;
        wby2   = width>>1;
        num_el >>= 1;
    }
    barrier(CLK_LOCAL_MEM_FENCE);

    // the last element now holds the sum of this work-group's 2*BLK_SIZ elements
    if(0 == li)
        sum[groupId] = our_h[2*BLK_SIZ - 1]; // save sum
}