Having parallelized with OpenMP before, I'm trying to wrap my head around CUDA, which doesn't seem too intuitive to me. At this point, I'm trying to understand exactly how to loop through an array in a parallelized fashion.
Cuda by Example is a great start.
The snippet on page 43 shows:
__global__ void add( int *a, int *b, int *c ) {
int tid = blockIdx.x; // handle the data at this index
if (tid < N)
c[tid] = a[tid] + b[tid];
}
Whereas in OpenMP the programmer chooses the number of times the loop will run and OpenMP splits that into threads for you, in CUDA you have to tell it (via the number of blocks and number of threads in <<<...>>>) to run the kernel enough times to cover your array, using a thread ID number as the iterator. In other words, you could have a CUDA kernel always run 10,000 times, which means the above code will work for any array up to N = 10,000 (and of course, for smaller arrays, you're wasting cycles on threads that drop out at if (tid < N)).
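For reference, the book launches that kernel with one block per element, so blockIdx.x is the array index; a sketch of the launch (dev_a, dev_b, dev_c being the device copies of the arrays):
// One block of one thread per element: tid = blockIdx.x indexes the array.
add<<<N, 1>>>(dev_a, dev_b, dev_c);

// Or over-provision as described above: a fixed 10,000 blocks works for any N <= 10,000,
// with the surplus blocks dropping out at the if (tid < N) check.
add<<<10000, 1>>>(dev_a, dev_b, dev_c);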
For pitched memory (2D and 3D arrays), the CUDA Programming Guide has the following example:
// Host code
int width = 64, height = 64;
float* devPtr; size_t pitch;
cudaMallocPitch(&devPtr, &pitch, width * sizeof(float), height);
MyKernel<<<100, 512>>>(devPtr, pitch, width, height);
// Device code
__global__ void MyKernel(float* devPtr, size_t pitch, int width, int height)
{
    for (int r = 0; r < height; ++r) {
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}
This example doesn't seem too useful to me. First they declare an array that is 64 x 64, then the kernel is set to execute 512 x 100 times. That's fine only because the kernel does nothing other than iterate through the array (so each of the 51,200 threads loops over the entire 64 x 64 array).
According to this answer the iterator for when there are blocks of threads going on will be
int tid = (blockIdx.x * blockDim.x) + threadIdx.x;
So if I wanted to run the first snippet in my question for a pitched array, I could just make sure I had enough blocks and threads to cover every element including the padding that I don't care about. But that seems wasteful.
So how do I iterate through a pitched array without going through the padding elements?
In my particular application I have a 2D FFT and I'm trying to calculate arrays of the magnitude and angle (on the GPU to save time).
After reviewing the valuable comments and answers from JackOLantern, and re-reading the documentation, I was able to get my head straight. Of course the answer is "trivial" now that I understand it.
In the code below, I define CFPtype (Complex Floating Point) and FPtype so that I can quickly change between single and double precision. For example, #define CFPtype cufftComplex.
I still can't wrap my head around the number of threads used to call the kernel. If it's too large, it simply won't go into the function at all. The documentation doesn't seem to say anything about what number should be used - but this is all for a separate question.
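As a side note, the hard per-device limits can at least be queried at runtime; a minimal sketch (error checking omitted):
// Query device 0 for its block-size limits instead of guessing.
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
printf("max threads per block: %d\n", prop.maxThreadsPerBlock);
printf("max block dimensions:  %d x %d x %d\n",
       prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);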
The key in getting my whole program to work (2D FFT on pitched memory and calculating magnitude and argument) was realizing that even though CUDA gives you plenty of "apparent" help in allocating 2D and 3D arrays, everything is still in units of bytes. It's obvious in a malloc call that the sizeof(type) must be included, but I totally missed it in calls of the type allocate(width, height). Noob mistake, I guess. Had I written the library I would have made the type size a separate parameter, but whatever.
So given an image of dimensions width x height in pixels, this is how it comes together:
Allocating memory
I'm using pinned memory on the host side because it's supposed to be faster. That's allocated with cudaHostAlloc, which is straightforward. For pitched memory, you need to store the pitch for each different width and type, because it can change. In my case the dimensions are all the same (it's a complex-to-complex transform), but I also have arrays of real numbers, so I store both a complexPitch and a realPitch. The pitched memory is allocated like this:
cudaMallocPitch(&inputGPU, &complexPitch, width * sizeof(CFPtype), height);
To copy memory to/from pitched arrays you cannot use plain cudaMemcpy; you need cudaMemcpy2D, which takes the pitches into account:
cudaMemcpy2D(inputGPU, complexPitch,                 // destination and destination pitch
             inputPinned, width * sizeof(CFPtype),   // source and source pitch (= the row width in bytes, since the host array is not padded)
             width * sizeof(CFPtype), height,        // width of the copied region in bytes, and its height in rows
             cudaMemcpyKind::cudaMemcpyHostToDevice);
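Copying results back to the host works the same way, with source and destination (and their pitches) swapped; a sketch, where outputPinned is a hypothetical pinned host buffer and outputGPU is the transform output used below:
cudaMemcpy2D(outputPinned, width * sizeof(CFPtype),  // destination and destination pitch (dense host array)
             outputGPU, complexPitch,                // source and source pitch (pitched device array)
             width * sizeof(CFPtype), height,        // width of the copied region in bytes, and its height in rows
             cudaMemcpyKind::cudaMemcpyDeviceToHost);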
FFT plan for pitched arrays
JackOLantern provided this answer, which I couldn't have done without. In my case the plan looks like this:
int n[] = {height, width};
int nembed[] = {height, (int)(complexPitch / sizeof(CFPtype))}; //row length in elements, not bytes
result = cufftPlanMany(
&plan,
2, n, //transform rank and dimensions
nembed, 1, //input array physical dimensions and stride
1, //input distance to next batch (irrelevant because we are only doing 1)
nembed, 1, //output array physical dimensions and stride
1, //output distance to next batch
cufftType::CUFFT_C2C, 1);
Executing the FFT is trivial:
cufftExecC2C(plan, inputGPU, outputGPU, CUFFT_FORWARD);
So far I have had little to optimize. Now I wanted to get magnitude and phase out of the transform, hence the question of how to traverse a pitched array in parallel. First I define a function to call the kernel with the "correct" threads per block and enough blocks to cover the entire image. As suggested by the documentation, creating 2D structures for these numbers is a great help.
void GPUCalcMagPhase(CFPtype *data, size_t dataPitch, int width, int height,
                     FPtype *magnitude, FPtype *phase, size_t magPhasePitch, int cudaBlockSize)
{
    dim3 threadsPerBlock(cudaBlockSize, cudaBlockSize);
    dim3 numBlocks((unsigned int)ceil(width  / (double)threadsPerBlock.x),
                   (unsigned int)ceil(height / (double)threadsPerBlock.y));

    CalcMagPhaseKernel<<<numBlocks, threadsPerBlock>>>(data, dataPitch, width, height, magnitude, phase, magPhasePitch);
}
Setting the blocks and threads per block is equivalent to writing the (up to 3) nested for-loops. So you have to have enough blocks * threads to cover the array, and then in the kernel you must make sure that you are not exceeding the array size. By using 2D elements for threadsPerBlock and numBlocks, you avoid having to go through the padding elements in the array.
Traversing a pitched array in parallel
The kernel uses the standard pointer arithmetic from the documentation:
__global__ void CalcMagPhaseKernel(CFPtype *data, size_t dataPitch, int width, int height,
                                   FPtype *magnitude, FPtype *phase, size_t magPhasePitch)
{
    int threadX = threadIdx.x + blockDim.x * blockIdx.x;
    if (threadX >= width)
        return;

    int threadY = threadIdx.y + blockDim.y * blockIdx.y;
    if (threadY >= height)
        return;

    CFPtype *threadRow = (CFPtype *)((char *)data + threadY * dataPitch);
    CFPtype complex = threadRow[threadX];

    FPtype *magRow = (FPtype *)((char *)magnitude + threadY * magPhasePitch);
    FPtype *magElement = &(magRow[threadX]);

    FPtype *phaseRow = (FPtype *)((char *)phase + threadY * magPhasePitch);
    FPtype *phaseElement = &(phaseRow[threadX]);

    *magElement = sqrt(complex.x*complex.x + complex.y*complex.y);
    *phaseElement = atan2(complex.y, complex.x);
}
The only wasted threads here are for the cases where the width or height are not multiples of the number of threads per block.
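For completeness, a call could look like this; magnitudeGPU and phaseGPU are hypothetical names for real-valued arrays allocated with cudaMallocPitch (using realPitch), and 16 is an arbitrary block edge giving 256 threads per block:
// 16 x 16 = 256 threads per block; the grid is computed inside GPUCalcMagPhase.
GPUCalcMagPhase(outputGPU, complexPitch, width, height,
                magnitudeGPU, phaseGPU, realPitch, 16);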
Related
The question is quite straightforward. I'll also explain what I do, in case there is a faster overall approach than optimizing this specific part.
I go through an image and its RGB values. Each color channel's 256 possible values are split into bins (32 in the example below). So for every pixel I calculate the 3 bins of its RGB values. The bins essentially give me the index to access data for that specific color in a large vector. With this data, I do some calculations which are irrelevant here. What I want to optimize is the accessing part.
Keep in mind that the large vector has an extra dimension. Every pixel belongs to some defined areas of the image. For every area it belongs to, it has an element in the big vector. So, if a pixel belongs to 4 areas (e.g. 3, 9, 12, 13), then the data I want to access is: data[colorIndex][3], data[colorIndex][9], data[colorIndex][12], data[colorIndex][13].
I think that's enough to explain the code which is the following:
//Just filling with data for the sake of the example
int cols = 200; int rows = 200;
cv::Mat image(rows, cols, CV_8UC3);
image.setTo(cv::Scalar(100, 100, 100));
int numberOfAreas = 50;
//For every pixel (first dimension) we have a vector<int> containing ones for every area the pixel belongs to.
//For this example, every pixel belongs to every area.
vector<vector<int>> areasThePixelBelongs(200 * 200, vector<int>(numberOfAreas, 1));
int numberOfBins = 32;
int sizeOfBin = 256 / numberOfBins;
vector<vector<float>> data(pow(numberOfBins, 3), vector<float>(numberOfAreas, 1));
//Filling complete
//Part I need to optimize
uchar* matPointer;
for (int y = 0; y < rows; y++) {
    matPointer = image.ptr<uchar>(y);
    for (int x = 0; x < cols; x++) {
        int red   = matPointer[x * 3 + 2];
        int green = matPointer[x * 3 + 1];
        int blue  = matPointer[x * 3];
        int binNumberRed   = red / sizeOfBin;
        int binNumberGreen = green / sizeOfBin;
        int binNumberBlue  = blue / sizeOfBin;
        //Instead of a 3d vector where I access the elements like: color[binNumberRed][binNumberGreen][binNumberBlue]
        //I use a 1d vector where I just have to calculate the 1d index as follows
        int index = binNumberRed * numberOfBins * numberOfBins + binNumberGreen * numberOfBins + binNumberBlue;

        vector<int>& areasOfPixel = areasThePixelBelongs[y * cols + x];
        int numberOfPixelAreas = areasOfPixel.size();
        for (int i = 0; i < numberOfPixelAreas; i++) {
            float valueOfInterest = data[index][areasOfPixel[i]];
            //Some calculations here...
        }
    }
}
Would it be better accessing each mat element as a Vec3b? I think I'm essentially accessing an element 3 times for each pixel using uchar. Would accessing one Vec3b be faster?
First of all, vector<vector<T>> is not stored efficiently in memory, as it is not contiguous. This often has a big impact on performance and should be avoided as much as possible (especially when the inner arrays all have the same size). Instead of this, you can use std::array for fixed-size arrays or a flattened std::vector (of size dim1 * dim2 * ... * dimN).
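As a minimal sketch of what the flattened layout could look like for the data vector from the question (names are mine):
// One contiguous block instead of vector<vector<float>>.
const int numColors = numberOfBins * numberOfBins * numberOfBins;
std::vector<float> dataFlat(static_cast<size_t>(numColors) * numberOfAreas, 1.0f);

// data[index][areasOfPixel[i]] then becomes:
float valueOfInterest = dataFlat[static_cast<size_t>(index) * numberOfAreas + areasOfPixel[i]];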
Moreover, the loop is a good candidate for parallelization. You can parallelize this code easily with OpenMP, assuming Some calculations here can be implemented in a thread-safe way (you should be careful about shared writes, if any). If this code is embarrassingly parallel, the resulting parallel code can be much faster. Still, using multi-threading introduces some overhead which may be too big compared to the overall computation time (which is highly dependent on the content of Some calculations here).
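For illustration, the parallelization could be as simple as the following sketch, assuming the per-pixel work only writes to thread-local (or per-pixel) data:
#pragma omp parallel for schedule(static)
for (int y = 0; y < rows; y++) {
    const uchar* rowPtr = image.ptr<uchar>(y);   // per-thread row pointer instead of the shared matPointer
    for (int x = 0; x < cols; x++) {
        // ... same binning, index computation and "Some calculations here..." as in the question,
        // keeping intermediate results in local variables
    }
}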
Finally, regarding the content of Some calculations here, it may or may not be possible to adapt the code so the compiler uses SIMD instructions. The data[index][areasOfPixel[i]] access will likely prevent most compilers from doing that, but the computation that follows it could be vectorized. Note that software prefetching and gather instructions may help speed up the data[index][areasOfPixel[i]] access a bit.
Note that the way you access pixels should not have a significant impact on the runtime, as the computation should be bound by the inner loop iterating over the areas, which contains some unknown code (unless that unknown code actually accesses pixels too).
I'm implementing an algorithm to convert an image to grayscale using CUDA. I've got it working right now, but I'm looking for ways to improve performance.
Right now, the entire color image is transferred to the device memory, after which each thread calculates the gray pixel value by looking up the corresponding three (r,g,b) color values.
I have already made sure that the global memory accesses are coalesced, though this did not really improve my performance (a 36 MB image took 0.003 s less after the memory access was coalesced...). Right now I'm wondering whether using shared memory could improve my performance. Here's what I have right now:
My CUDA kernel:
__global__ void darkenImage(const unsigned char * inputImage,
unsigned char * outputImage, const int width, const int height, int iteration){
int x = ((blockIdx.x * blockDim.x) + (threadIdx.x + (iteration * MAX_BLOCKS * nrThreads))) * 3;
if(x+2 < (3 * width*height)){
float grayPix = 0.0f;
float r = static_cast< float >(inputImage[x]);
float g = static_cast< float >(inputImage[x+1]);
float b = static_cast< float >(inputImage[x+2]);
grayPix = __fadd_rn(__fadd_rn(__fmul_rn(0.3f, r),__fmul_rn(0.59f, g)), __fmul_rn(0.11f, b));
grayPix = fma(grayPix,0.6f,0.5f);
outputImage[(x/3)] = static_cast< unsigned char >(grayPix);
}
}
My question really is: since no data is shared between any two threads, using shared memory shouldn't really help here, should it? Or did I misunderstand?
Regards,
Linus
If you are not using the same value more than once, using shared memory (cache) will not improve performance. But you can try to remove the iteration parameter and process more data with each block. Try to have a single kernel launch and a loop within the kernel so that each thread calculates more than one output element.
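For illustration, a grid-stride version of the grayscale kernel might look roughly like this (a sketch, not tested; the kernel name and launch configuration are up to you):
__global__ void grayscaleGridStride(const unsigned char *inputImage,
                                    unsigned char *outputImage, const int width, const int height) {
    const int numPixels = width * height;
    const int stride = blockDim.x * gridDim.x;
    // Each thread starts at its global index and strides over the whole image,
    // so a single launch covers any image size.
    for (int p = blockIdx.x * blockDim.x + threadIdx.x; p < numPixels; p += stride) {
        float r = static_cast<float>(inputImage[3 * p]);
        float g = static_cast<float>(inputImage[3 * p + 1]);
        float b = static_cast<float>(inputImage[3 * p + 2]);
        float grayPix = 0.3f * r + 0.59f * g + 0.11f * b;
        outputImage[p] = static_cast<unsigned char>(grayPix * 0.6f + 0.5f);
    }
}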
No, you are correct: shared memory won't help because you are not accessing the data more than once.
//This is my kernel function
__global__ void createSCM(Pixel *pixelMat,   //image
                          int imgRows,       //image dimensions
                          int imgCols,
                          int *matrizSCM,    //co-occurrence matrix
                          int numNiveles,    //co-occurrence matrix levels = 256
                          int delta_R,       //value = {-1, 0 or 1}
                          int delta_C)       //value = {-1, 0 or 1}
{
    int i = blockIdx.y*blockDim.y + threadIdx.y;
    int j = blockIdx.x*blockDim.x + threadIdx.x;
    int cols = numNiveles;
    int posx, posy;

    if ( (j + delta_C) < imgCols && (i + delta_R) < imgRows &&
         ((j + delta_C) >= 0) && ((i + delta_R) >= 0) )
    {
        posx = pixelMat[i*imgCols+j].channel_0;
        posy = pixelMat[(i + delta_R)*imgCols+(j + delta_C)].channel_0;
        matrizSCM[posx*cols+posy]++;
        matrizSCM[posy*cols+posx]++;
    }
}
struct Pixel {
int channel_0;
};
I have counting errors in the co-occurrence matrix, because
pixelMat[i*imgCols+j] and pixelMat[(i + delta_R)*imgCols+(j + delta_C)]
are two different positions accessed by the same thread.
This is my kernel call
int Grid_Dim_x=imagenTest.rows, Grid_Dim_y=imagenTest.cols;
int Block_Dim_x=1, Block_Dim_y=1;
dim3 Grid(Grid_Dim_x, Grid_Dim_y);
dim3 Block(Block_Dim_x, Block_Dim_y);
createSCM<<<Grid,Block>>>(...)
There is just one thread in each block, and each block represents a pixel. Is there a nice solution to this problem?
Thanks :)
Reading from different memory cells of immutable input incurs no parallel hazard that you would have to deal with. The problem lies within the matrizSCM where the same memory cell can be incremented by multiple threads at once.
An atomicAdd(addr, 1) is a quick fix: it should make the algorithm correct, but it may be fairly slow. Making it correct should be the first step; then you can look at available examples of histogram computation and parallel reduction on the web and check whether they can be applied to your problem.
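Concretely, the quick fix amounts to replacing the two plain increments in the kernel with atomic ones (a sketch):
// Atomic increments: safe even when several threads hit the same cell.
atomicAdd(&matrizSCM[posx*cols+posy], 1);
atomicAdd(&matrizSCM[posy*cols+posx], 1);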
Finally, as Robert pointed out in the comment, launching just one thread in a block is very inefficient. You need a multiple of 32 to utilize the hardware SIMD unit, and usually about 256 threads to hide various memory latencies.
Also, if your image is big and you still need thousands of 256-thread blocks, you may consider launching fewer blocks (around 60-120) but having each block process multiple pixels sequentially. If you do that, you might be able to put a copy of matrizSCM in shared memory. This will make a separate copy of matrizSCM for each block, resulting in fewer atomic conflicts between the blocks. Obviously, at the end of the kernel, your block will still need to "submit" the partial result into the global one, but that would be a single-step operation.
I am developing a CUDA ray-plane intersection kernel.
Let's suppose, my plane (face) struct is:
typedef struct _Face {
int ID;
int matID;
int V1ID;
int V2ID;
int V3ID;
float V1[3];
float V2[3];
float V3[3];
float reflect[3];
float emmision[3];
float in[3];
float out[3];
int intersects[RAYS];
} Face;
I pasted the whole struct so you can get an idea of its size. RAYS equals 625 in the current configuration. In the following code, assume that the size of the faces array is, say, 1270 (generally: thousands).
Now until today I have launched my kernel in a very naive way:
const int tpb = 64; //threads per block
dim3 grid = (n +tpb-1)/tpb; // n - face count in array
dim3 block = tpb;
//.. some memory allocation etc.
theKernel<<<grid,block>>>(dev_ptr, n);
and inside the kernel I had a loop:
__global__ void theKernel(Face* faces, int faceCount) {
    int offset = threadIdx.x + blockIdx.x*blockDim.x;
    if(offset >= faceCount)
        return;
    Face f = faces[offset];
    //..some initialization
    int RAY = -1;
    for(float alpha=0.0f; alpha<=PI; alpha+= alpha_step ){
        for(float beta=0.0f; beta<=PI; beta+= beta_step ){
            RAY++;
            //..calculation per ray in (alpha,beta) direction ...
            faces[offset].intersects[RAY] = ...; //some assignment
        }
    }
}
That's about it. I looped through all the directions and updated the faces array. It worked correctly, but was hardly any faster than the CPU code.
So today I tried to optimize the code, and launch the kernel with a much bigger number of threads. Instead of having 1 thread per face I want 1 thread per face's ray (meaning 625 threads work for 1 face). The modifications were simple:
dim3 grid = (n*RAYS +tpb-1)/tpb; //before launching . RAYS = 625, n = face count
and the kernel itself:
__global__ void theKernel(Face *faces, int faceCount){
    int threadNum = threadIdx.x + blockIdx.x*blockDim.x;
    int offset = threadNum/RAYS; //RAYS is a global #define
    int rayNum = threadNum - offset*RAYS;
    if(offset >= faceCount || rayNum != 0)
        return;
    Face f = faces[offset];
    //initialization and the rest.. again ..
}
And this code does not work at all. Why? Theoretically, only the 1st thread (of the 625 per Face) should work, so why does this result in bad (hardly any) computation?
Kind regards,
e.
The maximum size of a grid in any dimension is 65535 (CUDA programming guide, Appendix F). If your grid size was 1000 before the change, you have increased it to 625000. That's bigger than the limit, so the kernel won't run correctly.
If you define the grid size as
dim3 grid((n + tpb - 1) / tpb, RAYS);
then all grid dimensions will be smaller than the limit. You'll also have to change the way blockIdx is used in the kernel.
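For illustration only, a sketch of how the indexing inside the kernel could look with that 2D grid (the per-ray math stays exactly as before):
__global__ void theKernel(Face *faces, int faceCount) {
    int offset = threadIdx.x + blockIdx.x * blockDim.x;  // face index, as in the original kernel
    int rayNum = blockIdx.y;                             // ray index taken from the second grid dimension
    if (offset >= faceCount || rayNum >= RAYS)
        return;
    Face f = faces[offset];
    //..compute the (alpha, beta) direction corresponding to rayNum ...
    faces[offset].intersects[rayNum] = ...; //same assignment as before
}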
As Heatsink pointed out, you are probably exceeding available resources. It is a good idea to check after kernel execution whether there was an error.
Here is C++ code I use:
#include <cutil_inline.h>
#include <iostream>

void check_error(const char* str, cudaError_t err_code) {
    if (err_code != ::cudaSuccess)
        std::cerr << str << " -- " << cudaGetErrorString(err_code) << "\n";
}
Then when I invoke a kernel:
my_kernel <<<block_grid, thread_grid >>>(args);
check_error("my_kernel", cudaGetLastError());
I have some matrices with unknown sizes varying from 10 to 20,000 in both directions.
I designed a CUDA kernel with (x;y) blocks and (x;y) threads.
Since the matrices' width/height aren't multiples of my block dimensions, it was a terrible pain to get things to work, and the code is becoming more and more complicated in order to get coalesced memory reads.
Besides all that, the kernel is growing in size, using more and more registers to check for correctness... so I think this is not the approach I should adopt.
My question is: what if I totally eliminate blocks and just create a grid of x;y threads? Will an SM have problems without many blocks?
Can I eliminate blocks and use a large amount of threads or is the block subdivision necessary?
You can't really just make a "grid of threads", since you have to organize threads into blocks and you can have a maximum of 512 threads per block. However, you could effectively do this by using 1 thread per block, which will result in an X by Y grid of 1x1 blocks. But this will result in pretty terrible performance due to several factors:
According to the CUDA Programming Guide, an SM can handle a maximum of 8 blocks at any time. This will limit you to 8 threads per SM, which isn't enough to fill even a single warp. If you have, say, 48 SMs, you will only be able to handle 384 threads at any given time.
With only 8 threads available on an SM, there will be too few warps to hide memory latencies. The GPU will spend most of its time waiting for memory accesses to complete, rather than doing any computation.
You will be unable to coalesce memory reads and writes, resulting in poor memory bandwidth usage.
You will be effectively unable to leverage shared memory, as this is a shared resource between threads in a block.
While having to ensure correctness for threads in a block is annoying, your performance will be vastly better than with your "grid of threads" idea.
Here's the code I use to divide a given task requiring num_threads into blocks and a grid. Yes, you might end up launching a few too many blocks, and you will probably end up with more actual threads than required, but it's easy and efficient this way. See the second code example below for my simple in-kernel boundary check.
PS: I always have block_size == 128 because it has been a good tradeoff between multiprocessor occupancy, register usage, shared memory requirements and coalesced access for all of my kernels.
Code to calculate a good grid size (host):
#define GRID_SIZE 65535
//calculate grid size (store result in grid/block)
void kernelUtilCalcGridSize(unsigned int num_threads, unsigned int block_size, dim3* grid, dim3* block) {
//block
block->x = block_size;
block->y = 1;
block->z = 1;
//number of blocks
unsigned int num_blocks = kernelUtilCeilDiv(num_threads, block_size);
unsigned int total_threads = num_blocks * block_size;
assert(total_threads >= num_threads);
//calculate grid size
unsigned int gy = kernelUtilCeilDiv(num_blocks, GRID_SIZE);
unsigned int gx = kernelUtilCeilDiv(num_blocks, gy);
unsigned int total_blocks = gx * gy;
assert(total_blocks >= num_blocks);
//grid
grid->x = gx;
grid->y = gy;
grid->z = 1;
}
//ceil division (rounding up)
unsigned int kernelUtilCeilDiv(unsigned int numerator, unsigned int denominator) {
return (numerator + denominator - 1) / denominator;
}
Code to calculate the unique thread id and check boundaries (device):
//some kernel
__global__ void kernelFoo(unsigned int num_threads, ...) {
//calculate unique id
const unsigned int thread_id = threadIdx.x;
const unsigned int block_id = blockIdx.x + blockIdx.y * gridDim.x;
const unsigned int unique_id = thread_id + block_id * blockDim.x;
//check range
if (unique_id >= num_threads) return;
//do the actual work
...
}
I don't think that's a lot of effort/registers/lines-of-code to check for correctness.
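For example, a hypothetical launch using the two helpers above (num_threads being the total number of work items the kernel should cover):
// Host side: compute a suitable grid/block for num_threads work items and launch.
dim3 grid, block;
kernelUtilCalcGridSize(num_threads, 128, &grid, &block);
kernelFoo<<<grid, block>>>(num_threads /*, ... the kernel's other arguments */);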