How to efficiently repeat a vector to a matrix in CUDA? - C++

I want to repeat a vector to form a matrix in CUDA, avoiding too many memory copies. Both the vector and the matrix are allocated on the GPU.
For example:
I have a vector:
a = [1 2 3 4]
expand it to a matrix:
b = [1 2 3 4;
     1 2 3 4;
     .......
     1 2 3 4]
What I have tried is to assign each element of b, but this involves a lot of GPU-to-GPU memory copies.
I know this is easy in MATLAB (using repmat), but how can I do it efficiently in CUDA? I didn't find any such routine in CUBLAS.

EDIT: Based on the comments, I've updated the code to a version that handles either row-major or column-major underlying storage.
Something like this should be reasonably fast:
// for row_major, blocks*threads should be a multiple of vlen
// for column_major, blocks should be equal to vlen
template <typename T>
__global__ void expand_kernel(const T* vector, const unsigned vlen, T* matrix, const unsigned mdim, const unsigned col_major=0){
  if (col_major){
    int idx = threadIdx.x+blockIdx.x*mdim;
    T myval = vector[blockIdx.x];
    while (idx < ((blockIdx.x+1)*mdim)){
      matrix[idx] = myval;
      idx += blockDim.x;
    }
  }
  else{
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    T myval = vector[idx%vlen];
    while (idx < mdim*vlen){
      matrix[idx] = myval;
      idx += gridDim.x*blockDim.x;
    }
  }
}
This assumes your matrix is of dimensions mdim rows x vlen columns (which seems to be what you have outlined in the question).
You can tune the grid and block dimensions to find out what works fastest for your particular GPU. For the row-major case, start with 256 or 512 threads per block, and set the number of blocks equal to or greater than 4 times the number of SMs in your GPU. Choose the product of grid and block dimensions to be equal to an integer multiple of your vector length vlen. If this is difficult, choosing an arbitrary, but "large" threadblock size, such as 250 or 500, should not result in much lost efficiency.
For the column-major case, choose 256 or 512 threads per block, and choose the number of blocks equal to vlen, the vector length. If vlen > 65535, you will need to compile this for compute capability 3.0 or higher. If vlen is small, perhaps less than 32, the efficiency of this method may be significantly reduced. Some mitigation will be found if you increase the threads per block to the maximum for your GPU, either 512 or 1024. There may be other "expand" realizations better suited to the column-major "narrow" matrix case. For example, a straightforward modification to the column-major code would allow two or four blocks per vector element, and the total number of launched blocks would then be 2*vlen or 4*vlen, respectively.
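For instance, here is a minimal sketch of deriving a row-major launch configuration from the SM count at run time (d_vec, d_mat, vlen and mdim are placeholder names for your device pointers and sizes):
// sketch: size the grid from the device's SM count (row-major case)
int dev = 0, numSMs = 0;
cudaGetDevice(&dev);
cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, dev);
unsigned threads = 256;          // or 512
unsigned blocks  = 4 * numSMs;   // at least 4 blocks per SM
// ideally adjust blocks so blocks*threads is a multiple of vlen
expand_kernel<<<blocks, threads>>>(d_vec, vlen, d_mat, mdim);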
Here's a fully worked example, along with a run of bandwidthTest, to demonstrate that the above code achieves ~90% of the throughput indicated by bandwidthTest:
$ cat t546.cu
#include <stdio.h>
#define W 512
#define H (512*1024)
// for row_major, blocks*threads should be a multiple of vlen
// for column_major, blocks should be equal to vlen
template <typename T>
__global__ void expand_kernel(const T* vector, const unsigned vlen, T* matrix, const unsigned mdim, const unsigned col_major=0){
  if (col_major){
    int idx = threadIdx.x+blockIdx.x*mdim;
    T myval = vector[blockIdx.x];
    while (idx < ((blockIdx.x+1)*mdim)){
      matrix[idx] = myval;
      idx += blockDim.x;
    }
  }
  else{
    int idx = threadIdx.x + blockDim.x * blockIdx.x;
    T myval = vector[idx%vlen];
    while (idx < mdim*vlen){
      matrix[idx] = myval;
      idx += gridDim.x*blockDim.x;
    }
  }
}
template <typename T>
__global__ void check_kernel(const T* vector, const unsigned vlen, T* matrix, const unsigned mdim, const unsigned col_major=0){
  unsigned i = 0;
  while (i<(vlen*mdim)){
    unsigned idx = (col_major)?(i/mdim):(i%vlen);
    if (matrix[i] != vector[idx]) {printf("mismatch at offset %d\n",i); return;}
    i++;
  }
}
int main(){
  int *v, *m;
  cudaMalloc(&v, W*sizeof(int));
  cudaMalloc(&m, W*H*sizeof(int));
  int *h_v = (int *)malloc(W*sizeof(int));
  for (int i = 0; i < W; i++)
    h_v[i] = i;
  cudaMemcpy(v, h_v, W*sizeof(int), cudaMemcpyHostToDevice);
  // test row-major
  cudaEvent_t start, stop;
  cudaEventCreate(&start);
  cudaEventCreate(&stop);
  cudaEventRecord(start);
  expand_kernel<<<44, W>>>(v, W, m, H);
  cudaEventRecord(stop);
  float et;
  cudaEventSynchronize(stop);
  cudaEventElapsedTime(&et, start, stop);
  printf("row-majortime: %fms, bandwidth: %.0fMB/s\n", et, W*H*sizeof(int)/(1024*et));
  check_kernel<<<1,1>>>(v, W, m, H);
  cudaDeviceSynchronize();
  // test col-major
  cudaEventRecord(start);
  expand_kernel<<<W, 256>>>(v, W, m, H, 1);
  cudaEventRecord(stop);
  cudaEventSynchronize(stop);
  cudaEventElapsedTime(&et, start, stop);
  printf("col-majortime: %fms, bandwidth: %.0fMB/s\n", et, W*H*sizeof(int)/(1024*et));
  check_kernel<<<1,1>>>(v, W, m, H, 1);
  cudaDeviceSynchronize();
  return 0;
}
$ nvcc -arch=sm_20 -o t546 t546.cu
$ ./t546
row-majortime: 13.066944ms, bandwidth: 80246MB/s
col-majortime: 12.806720ms, bandwidth: 81877MB/s
$ /usr/local/cuda/samples/bin/x86_64/linux/release/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: Quadro 5000
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 5864.2
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 6333.1
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 88178.6
Result = PASS
$
CUDA 6.5, RHEL 5.5
This can also be implemented using a CUBLAS Rank-1 update function but it will be considerably slower than the above method.
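For reference, a minimal sketch of that CUBLAS route, using the rank-1 update cublasSger on a zero-initialized mdim x vlen column-major matrix; d_A, d_ones (a device vector of mdim ones) and d_a (the device vector to repeat) are placeholder names:
#include <cublas_v2.h>
// A = alpha * ones * a^T + A, so every row of the zeroed A becomes a copy of a
cublasHandle_t handle;
cublasCreate(&handle);
float alpha = 1.0f;
cublasSger(handle, mdim, vlen, &alpha, d_ones, 1, d_a, 1, d_A, mdim);
cublasDestroy(handle);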

Related

Multi-GPU batched 1D FFTs: only a single GPU seems to work

I have three Tesla V100s on RHEL 8 with CUDA toolkit version 10.2.89.
I'm attempting to compute a batch of 1D FFTs of the columns of a row-major matrix. In the example below, the matrix is 16x8, so with three GPUs I'd expect GPU 0 to perform the FFTs of the first 3 columns, GPU 1 to perform FFTs of the next 3, and GPU 2 to perform FFTs of the final 2.
The plan created in the example works as expected on a single GPU, but when running on three only the first three columns are computed (correctly), the remainder are untouched.
When I inspect the descriptor that is filled by cufftXtMalloc, I see that it has allocated space for 123 elements on GPUs 0 and 1, and 122 on GPU 2. This seems weird: I would expect 48=16*3 on GPUs 0 and 1 and 32=16*2 on GPU 2. Indeed this is the size of the workspaces filled by cufftMakePlanMany. When I inspect the data that was copied, elements 0-122 are in the buffer on GPU 0, and elements 123-127 are at the beginning of the buffer on GPU 1. The remainder of that buffer and the buffer on GPU 2 are junk.
In addition, when I increase the number of rows to 1024, I get a SIGABRT on the cufftXtFree call with the message 'free(): corrupted unsorted chunks'.
#include "cufft.h"
#include "cufftXt.h"
#include <vector>
#include <cuComplex.h>
#include <cassert>
#define CUDA_CHECK(x) assert(x == cudaSuccess)
#define CUFFT_CHECK(x) assert(x == CUFFT_SUCCESS)
int main() {
  static const int numGPUs = 3;
  int gpus[numGPUs] = {0, 1, 2};
  int nr = 16;
  int nc = 8;
  // Fill with junk data
  std::vector<cuFloatComplex> h_x(nr * nc);
  for (int i = 0; i < nr * nc; ++i) {
    h_x[i].x = static_cast<float>(i);
  }
  cufftHandle plan;
  CUFFT_CHECK(cufftCreate(&plan));
  CUFFT_CHECK(cufftXtSetGPUs(plan, numGPUs, gpus));
  std::vector<size_t> workSizes(numGPUs);
  int n[] = {nr};
  CUFFT_CHECK(cufftMakePlanMany(plan,
                                1,  // rank
                                n,  // n
                                n,  // inembed
                                nc, // istride
                                1,  // idist
                                n,  // onembed
                                nc, // ostride
                                1,  // odist
                                CUFFT_C2C,
                                nc,
                                workSizes.data()));
  cudaLibXtDesc *d_x;
  CUFFT_CHECK(cufftXtMalloc(plan, &d_x, CUFFT_XT_FORMAT_INPLACE));
  CUFFT_CHECK(cufftXtMemcpy(plan, d_x, (void *)h_x.data(), CUFFT_COPY_HOST_TO_DEVICE));
  CUFFT_CHECK(cufftXtExecDescriptorC2C(plan, d_x, d_x, CUFFT_FORWARD));
  std::vector<cuFloatComplex> h_out(nr * nc);
  CUFFT_CHECK(cufftXtMemcpy(plan, (void *)h_out.data(), d_x, CUFFT_COPY_DEVICE_TO_HOST));
  CUFFT_CHECK(cufftXtFree(d_x));
  CUFFT_CHECK(cufftDestroy(plan));
  CUDA_CHECK(cudaDeviceReset());
  return 0;
}
Thanks to @RobertCrovella for the answer:
As of CUDA 10.2.89, according to the documentation, strided input and output are not supported for multi-GPU transforms.
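One possible workaround (an untested sketch, not something the answer confirms): transpose the data on the host so that each column is stored contiguously, and build the plan with unit strides, since only the strided layout appears to be unsupported. Multi-GPU plans carry further layout restrictions, so check the cuFFT documentation for your toolkit version.
// hypothetical contiguous (non-strided) plan: each transform contiguous, batch = nc
int n[] = {nr};
CUFFT_CHECK(cufftMakePlanMany(plan,
                              1,    // rank
                              n,    // n
                              n,    // inembed
                              1,    // istride (contiguous within each transform)
                              nr,   // idist
                              n,    // onembed
                              1,    // ostride
                              nr,   // odist
                              CUFFT_C2C,
                              nc,   // batch
                              workSizes.data()));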

Cuda: XOR single bitset with array of bitsets

I want to XOR a single bitset with a bunch of other bitsets (~100k) and count the set bits of every xor-result. The size of a single bitset is around 20k bits.
The bitsets are already converted to arrays of unsigned int so that the __popc() intrinsic can be used. The 'bunch' already resides contiguously in device memory.
My current kernel code looks like this:
// Grid/Blocks used for kernel invocation
dim3 block(32);
dim3 grid((bunch_size / 31) + 32);
__global__ void kernelXOR(uint * bitset, uint * bunch, int * set_bits, int bitset_size, int bunch_size) {
  int tid = blockIdx.x*blockDim.x + threadIdx.x;
  if (tid < bunch_size){ // 1 Thread for each bitset in the 'bunch'
    int sum = 0;
    uint xor_res = 0;
    for (int i = 0; i < bitset_size; ++i){ // Iterate through every uint-block of the bitsets
      xor_res = bitset[i] ^ bunch[bitset_size * tid + i];
      sum += __popc(xor_res);
    }
    set_bits[tid] = sum;
  }
}
However, compared to a parallelized C++/Boost version, I see no benefit from using CUDA.
Is there any potential in optimizing this kernel?
"Is there any potential in optimizing this kernel?"
I see 2 problems here (and they are the first two classical primary optimization objectives for any CUDA programmer):
You want to try to efficiently use global memory. Your accesses to bitset and bunch are not coalesced. (efficiently use the memory subsystems)
The use of 32 threads per block is generally not recommended and could limit your overall occupancy. One thread per bitset is also potentially problematic. (expose enough parallelism)
Whether addressing those issues will meet your definition of benefit is impossible to say without a comparison test case. Furthermore, simple memory-bound problems like this are rarely interesting in CUDA when considered by themselves. However, we can (probably) improve the performance of your kernel.
We'll use a laundry list of ideas:
have each block handle a bitset, rather than each thread, to enable coalescing
use shared memory to load the comparison bitset, and reuse it
use just enough blocks to saturate the GPU, along with striding loops
use const ... __restrict__ style decoration to possibly benefit from RO cache
Here's a worked example:
$ cat t1649.cu
#include <iostream>
#include <cstdlib>
const int my_bitset_size = 20000/(32);
const int my_bunch_size = 100000;
typedef unsigned uint;
//using one thread per bitset in the bunch
__global__ void kernelXOR(uint * bitset, uint * bunch, int * set_bits, int bitset_size, int bunch_size) {
  int tid = blockIdx.x*blockDim.x + threadIdx.x;
  if (tid < bunch_size){ // 1 Thread for each bitset in the 'bunch'
    int sum = 0;
    uint xor_res = 0;
    for (int i = 0; i < bitset_size; ++i){ // Iterate through every uint-block of the bitsets
      xor_res = bitset[i] ^ bunch[bitset_size * tid + i];
      sum += __popc(xor_res);
    }
    set_bits[tid] = sum;
  }
}
const int nTPB = 256;
// one block per bitset; each block strides over multiple bitsets
__global__ void kernelXOR_imp(const uint * __restrict__ bitset, const uint * __restrict__ bunch, int * __restrict__ set_bits, int bitset_size, int bunch_size) {
  __shared__ uint sbitset[my_bitset_size]; // could also be dynamically allocated for varying bitset sizes
  __shared__ int ssum[nTPB];
  // load shared, block-stride loop
  for (int idx = threadIdx.x; idx < bitset_size; idx += blockDim.x) sbitset[idx] = bitset[idx];
  __syncthreads();
  // stride across all bitsets in bunch
  for (int bidx = blockIdx.x; bidx < bunch_size; bidx += gridDim.x){
    int my_sum = 0;
    for (int idx = threadIdx.x; idx < bitset_size; idx += blockDim.x) my_sum += __popc(sbitset[idx] ^ bunch[bidx*bitset_size + idx]);
    // block level parallel reduction
    ssum[threadIdx.x] = my_sum;
    for (int ridx = nTPB>>1; ridx > 0; ridx >>=1){
      __syncthreads();
      if (threadIdx.x < ridx) ssum[threadIdx.x] += ssum[threadIdx.x+ridx];
    }
    if (!threadIdx.x) set_bits[bidx] = ssum[0];
  }
}
int main(){
  // data setup
  uint *d_cbitset, *d_bitsets, *h_cbitset, *h_bitsets;
  int *d_r, *h_r, *h_ri;
  h_cbitset = new uint[my_bitset_size];
  h_bitsets = new uint[my_bitset_size*my_bunch_size];
  h_r = new int[my_bunch_size];
  h_ri = new int[my_bunch_size];
  for (int i = 0; i < my_bitset_size*my_bunch_size; i++){
    h_bitsets[i] = rand();
    if (i < my_bitset_size) h_cbitset[i] = rand();
  }
  cudaMalloc(&d_cbitset, my_bitset_size*sizeof(uint));
  cudaMalloc(&d_bitsets, my_bitset_size*my_bunch_size*sizeof(uint));
  cudaMalloc(&d_r, my_bunch_size*sizeof(int));
  cudaMemcpy(d_cbitset, h_cbitset, my_bitset_size*sizeof(uint), cudaMemcpyHostToDevice);
  cudaMemcpy(d_bitsets, h_bitsets, my_bitset_size*my_bunch_size*sizeof(uint), cudaMemcpyHostToDevice);
  // original
  // Grid/Blocks used for kernel invocation
  dim3 block(32);
  dim3 grid((my_bunch_size / 31) + 32);
  kernelXOR<<<grid, block>>>(d_cbitset, d_bitsets, d_r, my_bitset_size, my_bunch_size);
  cudaMemcpy(h_r, d_r, my_bunch_size*sizeof(int), cudaMemcpyDeviceToHost);
  // improved
  dim3 iblock(nTPB);
  dim3 igrid(640);
  kernelXOR_imp<<<igrid, iblock>>>(d_cbitset, d_bitsets, d_r, my_bitset_size, my_bunch_size);
  cudaMemcpy(h_ri, d_r, my_bunch_size*sizeof(int), cudaMemcpyDeviceToHost);
  for (int i = 0; i < my_bunch_size; i++)
    if (h_r[i] != h_ri[i]) {std::cout << "mismatch at i: " << i << " was: " << h_ri[i] << " should be: " << h_r[i] << std::endl; return 0;}
  std::cout << "Results match." << std::endl;
  return 0;
}
$ nvcc -o t1649 t1649.cu
$ cuda-memcheck ./t1649
========= CUDA-MEMCHECK
Results match.
========= ERROR SUMMARY: 0 errors
$ nvprof ./t1649
==18868== NVPROF is profiling process 18868, command: ./t1649
Results match.
==18868== Profiling application: ./t1649
==18868== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 97.06% 71.113ms 2 35.557ms 2.3040us 71.111ms [CUDA memcpy HtoD]
2.26% 1.6563ms 1 1.6563ms 1.6563ms 1.6563ms kernelXOR(unsigned int*, unsigned int*, int*, int, int)
0.59% 432.68us 1 432.68us 432.68us 432.68us kernelXOR_imp(unsigned int const *, unsigned int const *, int*, int, int)
0.09% 64.770us 2 32.385us 31.873us 32.897us [CUDA memcpy DtoH]
API calls: 78.20% 305.44ms 3 101.81ms 11.373us 304.85ms cudaMalloc
18.99% 74.161ms 4 18.540ms 31.554us 71.403ms cudaMemcpy
1.39% 5.4121ms 4 1.3530ms 675.30us 3.3410ms cuDeviceTotalMem
1.26% 4.9393ms 388 12.730us 303ns 530.95us cuDeviceGetAttribute
0.11% 442.37us 4 110.59us 102.61us 125.59us cuDeviceGetName
0.03% 128.18us 2 64.088us 21.789us 106.39us cudaLaunchKernel
0.01% 35.764us 4 8.9410us 2.9670us 18.982us cuDeviceGetPCIBusId
0.00% 8.3090us 8 1.0380us 540ns 1.3870us cuDeviceGet
0.00% 5.9530us 3 1.9840us 310ns 3.9900us cuDeviceGetCount
0.00% 2.8800us 4 720ns 574ns 960ns cuDeviceGetUuid
$
In this case, on my Tesla V100, for your problem size, I witness about a 4x improvement in kernel performance. However, the kernel time here is tiny compared to the cost of data movement. So it's unlikely that these sorts of optimizations would make a significant difference in your comparison test case, if this is the only thing you are doing on the GPU.
The code above uses striding-loops at the block level and at the grid level, which means it should behave correctly for almost any choice of threadblock size (multiple of 32 please) as well as grid size. That doesn't mean that any/all choices will perform equally. The choice of the threadblock size is to allow the possibility for nearly full occupancy (so don't choose 32). The choice of the grid size is the number of blocks to achieve full occupancy per SM, times the number of SMs. These should be nearly optimal choices, but according to my testing e.g. a larger number of blocks doesn't really reduce performance, and the performance should be roughly constant for nearly any threadblock size (except 32), assuming the number of blocks is calculated accordingly.
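As an illustration, the grid size of 640 used above can be derived at run time with the occupancy API (a sketch; kernelXOR_imp, nTPB and the device pointers come from the code above):
int dev = 0, numSMs = 0, blocksPerSM = 0;
cudaGetDevice(&dev);
cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, dev);
cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, kernelXOR_imp, nTPB, 0);
dim3 igrid(blocksPerSM * numSMs);  // e.g. 8 blocks/SM * 80 SMs = 640 on a Tesla V100
kernelXOR_imp<<<igrid, nTPB>>>(d_cbitset, d_bitsets, d_r, my_bitset_size, my_bunch_size);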

Flattening a 3D array to 1D in cuda

I have the following code that I'm trying to implement in CUDA, but I'm having a problem flattening a 3D array to 1D in CUDA.
C++ code
for(int i=0; i<w; i++)
  for(int j=0; j<h; j++)
    for(int k=0; k<d; k++)
      arr[h*w*i+ w*j+ k] = (h*w*i+ w*j+ k)*2;
This is what I have so far in Cuda
int w = h = d;
int N = 64;
__global__ void getIndex(float* A)
{
  int i = blockIdx.x;
  int j = blockIdx.y;
  int k = blockIdx.z;
  A[h*w*i+ w*j+ k] = h*w*i+ w*j+ k;
}
int main(int argc, char **argv)
{
  float *d_A;
  cudaMalloc((void **)&d_A, w * h * d * sizeof(float) );
  getIndex <<<N,1>>> (d_A);
}
But I'm not getting the result I'm expecting; I do not know how to get the right i, j and k indices.
Consider a 3D problem of size w x h x d. (This could be a simple array which has to be set like in your question or any other 3D problem that is easy to parallelize.) I will use your simple set-task for demonstration purpose.
The easiest way to handle this with a CUDA kernel is to launch one thread per array entry, that is w*h*d threads. This answer discusses why one thread per element may not always be the best solution.
Now let us have a look at the following lines of code
dim3 numThreads(w,h,d);
getIndex <<<1, numThreads>>> (d_A, w, h, d);
Here we are launching a kernel with a total of w*h*d threads.
The kernel can then be implemented as
__global__ void getIndex(float* A, int w, int h, int d) // we actually do not need w
{
  int i = threadIdx.x;
  int j = threadIdx.y;
  int k = threadIdx.z;
  A[h*d*i+ d*j+ k] = h*d*i+ d*j+ k;
}
But there is a problem with this kernel and the kernel call: the number of threads per thread block is limited (and the number of threads in a specific direction is also bounded; the z direction is generally the most restricted). As we are only launching one thread block, our problem size cannot exceed these limits (e.g. w*h*d <= 1024).
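As a quick aside, these per-block limits can be queried at run time; a minimal sketch:
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
// prop.maxThreadsPerBlock  -> total threads per block (typically 1024)
// prop.maxThreadsDim[0..2] -> per-dimension limits (typically 1024, 1024, 64)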
This is what thread blocks are for. In practice you can launch a kernel with as many threads as you want. (This is not strictly true, but the limits on the maximum number of thread blocks are unlikely to be exhausted.)
Calling the kernel this way:
dim3 numBlocks(w/8,h/8,d/8);
dim3 numThreads(8,8,8);
getIndex <<<numBlocks, numThreads>>> (d_A, w, h, d);
will launch the kernel for w/8 * h/8 * d/8 thread blocks while every block contains 8*8*8 threads. So in total w*h*d threads will be called.
Now we have to adjust our kernel accordingly:
__global__ void getIndex(float* A, int w, int h, int d) // we actually do not need w
{
  int bx = blockIdx.x;
  int by = blockIdx.y;
  int bz = blockIdx.z;
  int tx = threadIdx.x;
  int ty = threadIdx.y;
  int tz = threadIdx.z;
  A[h*d*(8*bx + tx)+ d*(8*by + ty)+ (8*bz + tz)] = h*d*(8*bx + tx)+ d*(8*by + ty)+ (8*bz + tz);
}
Note:
You can write a more general kernel using blockDim.x instead of the fixed size 8 and gridDim.x to calculate w via gridDim.x*blockDim.x. The other two dimensions are handled likewise.
In the proposed example all three dimensions w, h and d have to be multiples of 8. You can also generalize the kernel to allow arbitrary dimensions (then you have to pass all three dimensions to the kernel and check whether the calculated position is still within the problem; see the sketch below).
As already mentioned, it may be more efficient to edit more than one entry of the array per thread. This again has to be considered when calling the kernel. A wrapper function which takes the problem size and the data and calls the kernel with the right block and thread configuration may be useful.
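Here is a hedged sketch of such a generalized kernel and launch; getIndexGeneral is an illustrative name, and the bounds check means w, h and d no longer have to be multiples of the block dimensions:
__global__ void getIndexGeneral(float* A, int w, int h, int d)
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  int j = blockIdx.y*blockDim.y + threadIdx.y;
  int k = blockIdx.z*blockDim.z + threadIdx.z;
  if (i < w && j < h && k < d)
    A[h*d*i + d*j + k] = h*d*i + d*j + k;
}
// launch: round the grid up so the whole volume is covered
// dim3 numThreads(8,8,8);
// dim3 numBlocks((w+7)/8, (h+7)/8, (d+7)/8);
// getIndexGeneral<<<numBlocks, numThreads>>>(d_A, w, h, d);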

What is the total thread count (executed over time, not parallel) for CUDA?

I need to execute a function about 10^11 times. The function is self-contained and requires one integer as input, let's call it f(n). The range of n is in fact 0 < n < 10^11. We can ignore inclusion of the endpoints; I just need the concept of running something of this magnitude in terms of indexes on CUDA.
I want to run this function using CUDA, but I have trouble conceptually. Namely, I know how to simulate my n, mentioned above, using the block and thread indexes, as shown in slide 40 of the nVidia tutorial. But what happens when n > TotalNumberOfThreadsPer_CUDA_Call?
Essentially, do the thread count and block count reset for every call I make to run functions on CUDA? If so, is there a simple way to simulate n, as described earlier, for arbitrarily large n?
Thanks.
A common pattern when you want to process more elements than there are threads is to simply loop over your data in grid-sized chunks:
__global__ void kernel(int* data, size_t size) {
  for (size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
       idx < size;
       idx += gridDim.x * blockDim.x) {
    // do something with data[idx] ...
  }
}
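A hedged usage sketch of the kernel above; d_data and N are placeholders for your device buffer and element count:
int threads = 256;
int blocks = 1024;                       // any reasonable grid size; the loop strides over the rest
kernel<<<blocks, threads>>>(d_data, N);  // fine even when N is much larger than blocks*threads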
Another option is to launch several consecutive kernels with a start offset:
__global__ void kernel(int* data, size_t size, size_t offset) {
  size_t idx = blockIdx.x * blockDim.x + threadIdx.x + offset;
  if (idx < size) {
    // do something with data[idx] ...
  }
}
// Host code
dim3 gridSize = ...;
dim3 blockSize = ...;
for (size_t offset = 0; offset < totalWorkSize; offset += (size_t)gridSize.x * blockSize.x) {
  kernel<<<gridSize, blockSize>>>(data, totalWorkSize, offset);
}
In both cases, you can process an "arbitrarily large" number of elements. You're still limited by size_t, so for 10^11 elements you will need to compile your code for 64 bits.
If you have to store the data instead of just computing it, you will need to do it in an iterative method. 10^11 values of any type are not going to fit in GPU memory.
I haven't compiled this code, but hopefully you'll get the gist.
__device__ double my_function(size_t value);
__global__ void my_kernel(double* data, size_t offset, size_t chunk_size) {
  size_t idx = blockIdx.x * blockDim.x + threadIdx.x;
  size_t stride = gridDim.x*blockDim.x;
  while(idx<chunk_size){
    data[idx]=my_function(idx+offset);
    idx+=stride;
  }
}
void runKernel(size_t num_values){
  size_t block_size = 128;
  size_t grid_size = 1024;
  size_t free_mem, total_mem;
  cudaMemGetInfo(&free_mem, &total_mem);
  size_t chunk_size = free_mem/sizeof(double); // how many doubles fit in free memory (leave some headroom in practice)
  double *data;
  cudaMalloc(&data, chunk_size*sizeof(double));
  for(size_t i=0; i<num_values; i+=chunk_size){
    my_kernel<<<grid_size, block_size>>>(data, i, chunk_size);
    //copy to host and process
    //or call another kernel on device to process further
  }
  //process remainder of values that need to be run assuming num_values%chunk_size!=0
}

CUDA kernel error when increasing thread number

I am developing a CUDA ray-plane intersection kernel.
Let's suppose, my plane (face) struct is:
typedef struct _Face {
  int ID;
  int matID;
  int V1ID;
  int V2ID;
  int V3ID;
  float V1[3];
  float V2[3];
  float V3[3];
  float reflect[3];
  float emmision[3];
  float in[3];
  float out[3];
  int intersects[RAYS];
} Face;
I pasted the whole struct so you can get an idea of its size. RAYS equals 625 in the current configuration. In the following code, assume that the size of the faces array is e.g. 1270 (generally, thousands).
Until today I have launched my kernel in a very naive way:
const int tpb = 64; //threads per block
dim3 grid = (n +tpb-1)/tpb; // n - face count in array
dim3 block = tpb;
//.. some memory allocation etc.
theKernel<<<grid,block>>>(dev_ptr, n);
and inside the kernel I had a loop:
__global__ void theKernel(Face* faces, int faceCount) {
  int offset = threadIdx.x + blockIdx.x*blockDim.x;
  if(offset >= faceCount)
    return;
  Face f = faces[offset];
  //..some initialization
  int RAY = -1;
  for(float alpha=0.0f; alpha<=PI; alpha+= alpha_step ){
    for(float beta=0.0f; beta<=PI; beta+= beta_step ){
      RAY++;
      //..calculation per ray in (alpha,beta) direction ...
      faces[offset].intersects[RAY] = ...; //some assignment
This is about it. I looped through all the directions and updated the faces array. It worked correctly, but was hardly any faster than the CPU code.
So today I tried to optimize the code, and launch the kernel with a much bigger number of threads. Instead of having 1 thread per face I want 1 thread per face's ray (meaning 625 threads work for 1 face). The modifications were simple:
dim3 grid = (n*RAYS +tpb-1)/tpb; //before launching . RAYS = 625, n = face count
and the kernel itself:
__global__ void theKernel(Face *faces, int faceCount){
  int threadNum = threadIdx.x + blockIdx.x*blockDim.x;
  int offset = threadNum/RAYS; //RAYS is a global #define
  int rayNum = threadNum - offset*RAYS;
  if(offset >= faceCount || rayNum != 0)
    return;
  Face f = faces[offset];
  //initialization and the rest.. again ..
And this code does not work at all. Why? Theoretically, only the 1st thread (of the 625 per Face) should work, so why does this result in bad (hardly any) computation?
Kind regards,
e.
The maximum size of a grid in any dimension is 65535 (CUDA programming guide, Appendix F). If your grid size was 1000 before the change, you have increased it to 625000. That's bigger than the limit, so the kernel won't run correctly.
If you define the grid size as
dim3 grid((n + tpb - 1) / tpb, RAYS);
then all grid dimensions will be smaller than the limit. You'll also have to change the way blockIdx is used in the kernel.
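For example, a sketch of how the kernel indexing could look with that 2D grid (only the index handling is shown; the per-ray computation is elided):
__global__ void theKernel(Face *faces, int faceCount){
  int offset = threadIdx.x + blockIdx.x*blockDim.x;  // face index
  int rayNum = blockIdx.y;                           // ray index, 0..RAYS-1
  if(offset >= faceCount)
    return;
  // ...compute (alpha, beta) from rayNum and test a single ray...
  // faces[offset].intersects[rayNum] = ...;
}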
As @Heatsink pointed out, you are probably exceeding available resources. It is a good idea to check after kernel execution whether there was an error.
Here is the C++ code I use:
#include <cutil_inline.h>
#include <iostream>
void
check_error(const char* str, cudaError_t err_code) {
  if (err_code != ::cudaSuccess)
    std::cerr << str << " -- " << cudaGetErrorString(err_code) << "\n";
}
Then when I invoke the kernel:
my_kernel <<<block_grid, thread_grid >>>(args);
check_error("my_kernel", cudaGetLastError());
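Note that cudaGetLastError catches launch and configuration errors; to also catch errors that occur while the kernel is running, you can additionally check the result of a synchronization, e.g.:
my_kernel <<<block_grid, thread_grid >>>(args);
check_error("my_kernel launch", cudaGetLastError());
check_error("my_kernel execution", cudaDeviceSynchronize());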