I'm working on a (recurrent) neural network in C++ & OpenCL to get some low-level experience with deep learning. Right now I have a simple forward propagation kernel that's yielding oddly low performance; the setup is memory-limited, as most deep learning setups are, and based on some crude profiling the memory bandwidth I'm getting is around 2 GB/s. A call to clGetDeviceInfo() confirms that I'm using my dedicated GPU (GTX 960M); I suspect that the memory I'm allocating with clCreateBuffer() is somehow ending up on the CPU side, which would lead to transfer rates hovering around 2 GB/s, as suggested by this article. The buffers I'm allocating shouldn't be too large for the GPU; the largest are 1024*1024*4 bytes = 4 MB (weights), and only 12 of those are created.
The calls to clCreateBuffer(), with some context:
NVector::NVector(int size) {
    empty = false;
    numNeurons = size;
    // One float per neuron for activations, parameters, and derivatives.
    activationsMem = clCreateBuffer(RNN::clContext, CL_MEM_READ_WRITE, sizeof(float) * numNeurons, NULL, NULL);
    parametersMem  = clCreateBuffer(RNN::clContext, CL_MEM_READ_WRITE, sizeof(float) * numNeurons, NULL, NULL);
    derivativesMem = clCreateBuffer(RNN::clContext, CL_MEM_READ_WRITE, sizeof(float) * numNeurons, NULL, NULL);
}
//...
void NVector::connect(NVector& other) {
    int numWeights = other.numNeurons * numNeurons;
    cl_mem weightMem = clCreateBuffer(RNN::clContext, CL_MEM_READ_WRITE, sizeof(float) * numWeights, NULL, NULL);

    // Initialize the weights with small random values (roughly in [-0.1, 0.1)).
    float weightAmplitude = 0.2f;
    float* weightData = new float[numWeights];
    for (int i = 0; i < numWeights; i++) {
        weightData[i] = ((rand() % 256) / 256.0f - 0.5f) * weightAmplitude;
    }
    clEnqueueWriteBuffer(RNN::clQueue, weightMem, CL_TRUE, 0, sizeof(float) * numWeights, weightData, 0, NULL, NULL);
    delete[] weightData; // safe to free here: the write above is blocking (CL_TRUE)

    connections.push_back(&other);
    weightsMem.push_back(weightMem);
}
What are some reasons that OpenCL might allocate memory to the CPU instead of the active device? What can I do to force memory to be allocated on the GPU?
EDIT: a simple test yielded this value for memory bandwidth, which is in accordance with the suggested 5-6 GB/s bandwidth between the CPU and GPU.
operating device name: GeForce GTX 960M
2.09715 seconds
1.00663e+10 bytes
4.8e+09 bytes / second
Press any key to continue . . .
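For reference, a minimal sketch of this kind of bandwidth test might look like the following (hypothetical; the actual test code differs, and this reuses the RNN::clContext / RNN::clQueue objects from the snippets above and assumes <CL/cl.h>, <vector>, <chrono> and <iostream> are included):

// Time repeated blocking writes of a large host buffer to a device buffer
// and report the achieved bytes/second.
const size_t numFloats = 64 * 1024 * 1024;   // 256 MB per transfer (arbitrary size)
const size_t numBytes = numFloats * sizeof(float);
std::vector<float> host(numFloats, 1.0f);

cl_mem buf = clCreateBuffer(RNN::clContext, CL_MEM_READ_WRITE, numBytes, NULL, NULL);

const int repetitions = 10;
auto t0 = std::chrono::high_resolution_clock::now();
for (int i = 0; i < repetitions; i++) {
    clEnqueueWriteBuffer(RNN::clQueue, buf, CL_TRUE, 0, numBytes, host.data(), 0, NULL, NULL);
}
auto t1 = std::chrono::high_resolution_clock::now();

double seconds = std::chrono::duration<double>(t1 - t0).count();
double totalBytes = (double)numBytes * repetitions;
std::cout << seconds << " seconds\n"
          << totalBytes << " bytes\n"
          << totalBytes / seconds << " bytes / second\n";

clReleaseMemObject(buf);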
Related
I just started learning CUDA and I'm having trouble interpreting my experiment results. I wanted to compare CPU vs GPU in a simple program that adds two vectors together. The code is as follows:
// Note: StopWatchInterface and the sdk*Timer helpers come from helper_timer.h in the CUDA samples.
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <cuda_runtime.h>
#include <helper_timer.h>

__global__ void add(int *a, int *b, int *c, long long n) {
    long long tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < n) {
        c[tid] = a[tid] + b[tid];
    }
}

void add_cpu(int* a, int* b, int* c, long long n) {
    for (long long i = 0; i < n; i++) {
        c[i] = a[i] + b[i];
    }
}

void check_results(int* gpu, int* cpu, long long n) {
    for (long long i = 0; i < n; i++) {
        if (gpu[i] != cpu[i]) {
            printf("Different results!\n");
            return;
        }
    }
}

int main(int argc, char* argv[]) {
    long long n = atoll(argv[1]);
    int num_of_blocks = atoi(argv[2]);
    int num_of_threads = atoi(argv[3]);

    int* a = new int[n];
    int* b = new int[n];
    int* c = new int[n];
    int* c_cpu = new int[n];

    int *dev_a, *dev_b, *dev_c;
    cudaMalloc((void **) &dev_a, n * sizeof(int));
    cudaMalloc((void **) &dev_b, n * sizeof(int));
    cudaMalloc((void **) &dev_c, n * sizeof(int));

    for (long long i = 0; i < n; i++) {
        a[i] = i;
        b[i] = i * 2;
    }

    cudaMemcpy(dev_a, a, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_c, c, n * sizeof(int), cudaMemcpyHostToDevice);

    StopWatchInterface *timer = NULL;
    sdkCreateTimer(&timer);
    sdkResetTimer(&timer);
    sdkStartTimer(&timer);

    add <<<num_of_blocks, num_of_threads>>>(dev_a, dev_b, dev_c, n);
    cudaDeviceSynchronize();

    sdkStopTimer(&timer);
    float time = sdkGetTimerValue(&timer);
    sdkDeleteTimer(&timer);

    cudaMemcpy(c, dev_c, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);

    clock_t start = clock();
    add_cpu(a, b, c_cpu, n);
    clock_t end = clock();

    check_results(c, c_cpu, n);
    printf("%f %f\n", (double)(end - start) * 1000 / CLOCKS_PER_SEC, time);
    return 0;
}
I ran this code in a loop with a bash script:
for i in {1..2560}
do
n="$((1024 * i))"
out=`./vectors $n $i 1024`
echo "$i $out" >> "./vectors.txt"
done
Here 2560 is the maximum number of blocks my GPU supports, and 1024 is the maximum number of threads per block. So I ran it with the maximum block size (1024 threads) for problem sizes from one block up to the maximum my GPU can handle, in steps of one block (1024 ints in the vector).
Here is my GPU info:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA GeForce RTX 2070 SUPER"
CUDA Driver Version / Runtime Version 11.3 / 11.0
CUDA Capability Major/Minor version number: 7.5
Total amount of global memory: 8192 MBytes (8589934592 bytes)
(040) Multiprocessors, (064) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1785 MHz (1.78 GHz)
Memory Clock rate: 7001 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 4194304 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 65536 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1024
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.3, CUDA Runtime Version = 11.0, NumDevs = 1
Result = PASS
After running the experiment I gathered the results and plotted them:
So what bothers me is this 256-block-wide period in the GPU execution time. I have no clue why this happens. Why is executing 512 blocks much slower than executing 513 blocks of threads?
I also checked this with a constant number of blocks (2560) as well as with different block sizes, and it always gives this period of 256 * 1024 in vector size (so for a block size of 512 it is every 512 blocks, not every 256 blocks). So maybe this has something to do with memory, but I can't figure out what.
I would appreciate any ideas on why this is happening.
This is by no means a complete or precise answer. However, I believe the periodic pattern you are observing is at least partly due to a one-time or first-time kernel launch overhead. Good benchmarking practice is usually to do something other than what you are doing: for example, run the kernel multiple times and take an average, or do some other kind of statistical measurement.
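For instance, a minimal sketch of that averaging approach might look like this (hypothetical; it reuses the add kernel, device pointers, and launch configuration from the code in the question):

// Warm-up launch (not timed), then time NUM_RUNS launches with CUDA events
// and report the average per-launch duration.
const int NUM_RUNS = 10;
add<<<num_of_blocks, num_of_threads>>>(dev_a, dev_b, dev_c, n);
cudaDeviceSynchronize();

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
for (int run = 0; run < NUM_RUNS; run++) {
    add<<<num_of_blocks, num_of_threads>>>(dev_a, dev_b, dev_c, n);
}
cudaEventRecord(stop);
cudaEventSynchronize(stop);

float total_ms = 0.0f;
cudaEventElapsedTime(&total_ms, start, stop);
printf("average kernel time: %f ms\n", total_ms / NUM_RUNS);

cudaEventDestroy(start);
cudaEventDestroy(stop);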
When I run your code using your script on a GTX 960 GPU, I get the following graph (only plotting the GPU data, vertical axis is in milliseconds):
When I modify your code as follows:
cudaMemcpy(dev_c, c, n * sizeof(int), cudaMemcpyHostToDevice);
// next two lines added:
add <<<num_of_blocks, num_of_threads>>>(dev_a, dev_b, dev_c, n);
cudaDeviceSynchronize();
StopWatchInterface *timer=NULL;
sdkCreateTimer(&timer);
sdkResetTimer(&timer);
sdkStartTimer(&timer);
add <<<num_of_blocks, num_of_threads>>>(dev_a, dev_b, dev_c, n);
cudaDeviceSynchronize();
Doing a "warm-up" run first, then timing the second run, I witness data like this:
So the data without the warm-up shows a periodicity. After the warm-up, the periodicity disappears. I conclude that the periodicity is due to some kind of 1-time or first-time behavior. Some typical things that might be in this category are caching effects and cuda "lazy" initialization effects (for example, the time taken to JIT-compile the GPU code, which is certainly happening in your case, or the time to load the GPU code into GPU memory). I won't be able to go farther with any explanation of what kind of first-time effect exactly is giving rise to the periodicity.
Another observation is that while my data shows an expected "average slope" to each graph, indicating that the kernel duration associated with 2560 blocks is approximately 5 times the kernel duration associated with 512 blocks, I don't see that kind of trend in your data. It ought to be there, however. Your GPU will "saturate" at about 40 blocks. Thereafter, the average kernel duration should increase in an approximately linear fashion, such that the kernel duration associated with 2560 blocks is 4-5x the kernel duration associated with 512 blocks. I can't explain your data in this respect at all; I suspect a graphing or data processing error, or else a characteristic in your environment (e.g. a shared GPU with other users, a broken CUDA install, etc.) that is not present in my environment and which I'm unable to guess at.
Finally, my conclusion is that GPU "expected" behavior is more evident in the presence of good benchmarking techniques.
I am trying to release allocated GPU memory using OpenCL.
int arraySize = 130000000;
cl_int* A = new cl_int[arraySize];

cl::Buffer gpuA(context, CL_MEM_READ_ONLY, sizeof(cl_int) * arraySize);
gpuA.setDestructorCallback(&notNeed);

cl::Event event;
queue.enqueueWriteBuffer(gpuA, CL_TRUE, 0, sizeof(cl_int) * arraySize, A, NULL, &event);
event.setCallback(CL_COMPLETE, &whenWritten);
event.wait();
and after a few seconds the whenWritten callback runs (printing the text "Complete").
The program's memory usage increases, and using Task Manager in Windows 10 (GPU > Dedicated memory usage) I can see the memory level increasing in the chart too.
Very good ;-)
Then I run a sleep for 10 s,
and now I would like to free the memory on the GPU.
For the local array A I use
delete[] A; // local memory usage decreases
but when I use
clReleaseMemObject(gpuA());
I don't see any change in GPU memory usage.
What am I doing wrong? What is the best solution for this?
Thanks for any advice.
OK, so I replaced the buffer creation and release with the C API instead of C++:
cl_mem gpuA2 = clCreateBuffer(context(), CL_MEM_READ_ONLY, sizeof(cl_int) * arraySize, NULL, NULL);
clEnqueueWriteBuffer(queue(), gpuA2, CL_TRUE, 0, sizeof(cl_int) * arraySize, A, 0, NULL, NULL);
queue.finish();
clReleaseMemObject(gpuA2);
delete[] A;
and after running it I still don't see any difference.
My computer has a GeForce 1080Ti. With 11GB of VRAM, I don't expect a memory issue, so I'm at a loss to explain why the following breaks my code.
I execute the kernel on the host with this code.
cl_mem buffer = clCreateBuffer(context.GetContext(), CL_MEM_READ_WRITE, n * n * sizeof(int), NULL, &error);
error = clSetKernelArg(context.GetKernel(myKernel), 1, n * n, m1);
error = clSetKernelArg(context.GetKernel(myKernel), 0, sizeof(cl_mem), &buffer);
size_t globalSize = 10, localSize = 10; // global/local work sizes must be passed by pointer
error = clEnqueueNDRangeKernel(context.GetCommandQueue(0), context.GetKernel(myKernel), 1, NULL, &globalSize, &localSize, 0, NULL, NULL);
clFinish(context.GetCommandQueue(0));
error = clEnqueueReadBuffer(context.GetCommandQueue(0), buffer, true, 0, n * n * sizeof(int), results, 0, NULL, NULL);
results is a pointer to an n-by-n int array. m1 is a pointer to an n-by-n-bit array. The variable n is divisible by 8, so we can interpret the array as a char array.
The first ten values of the array are set to 1025 by the kernel (the value isn't important):
__kernel void PopCountProduct (__global int *results)
{
    results[get_global_id(0)] = 1025;
}
When I print out the result on the host, the first 10 indices are 1025. All is well and good.
Suddenly it stops working when I introduce an additional argument:
__kernel void PopCountProduct (__global int *results, __global char *m)
{
    results[get_global_id(0)] = 1025;
}
Why is this happening? Am I missing something crucial about OpenCL?
You can't pass a host pointer to clSetKernelArg in OpenCL 1.2. Something similar can only be done in OpenCL 2.0+ via clSetKernelArgSVMPointer with an SVM pointer, if supported. But most probably, creating a buffer object on the GPU and copying the host memory into it is what you need.
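A minimal sketch of that approach might look like this (hypothetical; it reuses n, m1, buffer and the context/queue/kernel helpers from the question, and assumes m1 points to n*n/8 bytes of packed bits, since n is divisible by 8):

// Wrap the host bit-matrix in a device buffer and pass that as the kernel argument.
size_t mBytes = (size_t)n * n / 8;
cl_mem mBuffer = clCreateBuffer(context.GetContext(),
                                CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                mBytes, m1, &error);

error = clSetKernelArg(context.GetKernel(myKernel), 0, sizeof(cl_mem), &buffer);
error = clSetKernelArg(context.GetKernel(myKernel), 1, sizeof(cl_mem), &mBuffer);

// ... enqueue the kernel and read back `buffer` as before ...

clReleaseMemObject(mBuffer);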
I implemented pincushion distortion correction using CUDA to support real-time processing (more than 40 fps) of 3680*2456 image sequences.
It takes 130 ms with CUDA on an NVIDIA GeForce GT 610 (2 GB DDR3),
but only 60 ms with the CPU and OpenMP on a quad-core Core i7 at 3.4 GHz.
Please tell me what to do to speed it up.
Thanks.
Full source can be downloaded here.
https://drive.google.com/file/d/0B9SEJgsu0G6QX2FpMnRja0o5STA/view?usp=sharing
https://drive.google.com/file/d/0B9SEJgsu0G6QOGNPMmVQLWpSb2c/view?usp=sharing
The code is as follows.
__global__
void undistort(int N, float k, int width, int height, int depth, int pitch, float R, float L, unsigned char* in_bits, unsigned char* out_bits)
{
    // Get the array index from the GPU grid/block/thread indices and dimensions.
    int i, j;
    i = blockIdx.y * blockDim.y + threadIdx.y;
    j = blockIdx.x * blockDim.x + threadIdx.x;

    // If out of the array bounds, do nothing.
    if (i >= height || j >= width)
    {
        return;
    }

    // Calculate the undistortion equation.
    // On the CPU we used fast approximations of atan and sqrt - it makes it 2 times faster.
    // On the GPU there is no need for approximation functions, as the built-in ones are already fast.
    int cx = width * 0.5;
    int cy = height * 0.5;
    int xt = j - cx;
    int yt = i - cy;

    float distance = sqrt((float)(xt*xt + yt*yt));
    float r = distance*k / R;

    float theta = 1;
    if (r == 0)
        theta = 1;
    else
        theta = atan(r)/r;

    theta = theta*L;

    float tx = theta*xt + cx;
    float ty = theta*yt + cy;

    // When we correct the frame, its size will be greater than the original,
    // so we should crop it.
    if (tx < 0)
        tx = 0;
    if (tx >= width)
        tx = width - 1;
    if (ty < 0)
        ty = 0;
    if (ty >= height)
        ty = height - 1;

    // Output the result (bilinear interpolation of the four neighbouring pixels).
    int ux = (int)(tx);
    int uy = (int)(ty);
    tx = tx - ux;
    ty = ty - uy;

    unsigned char *p = (unsigned char*)out_bits + i*pitch + j*depth;
    unsigned char *q00 = (unsigned char*)in_bits + uy*pitch + ux*depth;
    unsigned char *q01 = q00 + depth;
    unsigned char *q10 = q00 + pitch;
    unsigned char *q11 = q10 + depth;

    unsigned char newVal[4] = {0};
    // Note: this loop variable k shadows the float parameter k above.
    for (int k = 0; k < depth; k++)
    {
        newVal[k] = (q00[k]*(1-tx)*(1-ty) + q01[k]*tx*(1-ty) + q10[k]*(1-tx)*ty + q11[k]*tx*ty);
        memcpy(p + k, &newVal[k], 1);
    }
}
void wideframe_correction(char* bits, int width, int height, int depth)
{
    // Find the device.
    // Initialize the nVIDIA Device.
    cudaSetDevice(0);
    cudaDeviceProp deviceProp;
    cudaGetDeviceProperties(&deviceProp, 0);

    // This works for Calculating GPU Time.
    cudaProfilerStart();

    // This works for Measuring Total Time
    long int dwTime = clock();

    // Setting Distortion Parameters
    // Note that Multiplying 0.5 works faster than divide into 2.
    int cx = (int)(width * 0.5);
    int cy = (int)(height * 0.5);
    float k = -0.73f;
    float R = sqrt((float)(cx*cx + cy*cy));

    // Set the Radius of the Result.
    float L = (float)(width<height ? width:height);
    L = L/2.0f;
    L = L/R;
    L = L*L*L*0.3333f;
    L = 1.0f/(1-L);

    // Create the GPU Memory Pointers.
    unsigned char* d_img_in = NULL;
    unsigned char* d_img_out = NULL;

    // Allocate the GPU Memory2D with pitch for fast performance.
    size_t pitch;
    cudaMallocPitch( (void**) &d_img_in, &pitch, width*depth, height );
    cudaMallocPitch( (void**) &d_img_out, &pitch, width*depth, height );
    _tprintf(_T("\nPitch : %d\n"), pitch);

    // Copy RAM data to VRAM.
    cudaMemcpy2D( d_img_in, pitch,
                  bits, width*depth, width*depth, height,
                  cudaMemcpyHostToDevice );
    cudaMemcpy2D( d_img_out, pitch,
                  bits, width*depth, width*depth, height,
                  cudaMemcpyHostToDevice );

    // Create Variables for Timing
    cudaEvent_t startEvent, stopEvent;
    cudaError_t err = cudaEventCreate(&startEvent, 0);
    assert( err == cudaSuccess );
    err = cudaEventCreate(&stopEvent, 0);
    assert( err == cudaSuccess );

    // Execution of the version using global memory
    float elapsedTime;
    cudaEventRecord(startEvent);

    // Process image
    dim3 dGrid(width / BLOCK_WIDTH + 1, height / BLOCK_HEIGHT + 1);
    dim3 dBlock(BLOCK_WIDTH, BLOCK_HEIGHT);
    undistort<<< dGrid, dBlock >>> (width*height, k, width, height, depth, pitch, R, L, d_img_in, d_img_out);
    cudaThreadSynchronize();
    cudaEventRecord(stopEvent);
    cudaEventSynchronize( stopEvent );

    // Estimate the GPU Time.
    cudaEventElapsedTime( &elapsedTime, startEvent, stopEvent);

    // Calculate the Total Time.
    dwTime = clock() - dwTime;

    // Save Image data from VRAM to RAM
    cudaMemcpy2D( bits, width*depth,
                  d_img_out, pitch, width*depth, height,
                  cudaMemcpyDeviceToHost );

    _tprintf(_T("GPU Processing Time(ms) : %d\n"), (int)elapsedTime);
    _tprintf(_T("VRAM Memory Read/Write Time(ms) : %d\n"), dwTime - (int)elapsedTime);
    _tprintf(_T("Total Time(ms) : %d\n"), dwTime );

    // Free GPU Memory
    cudaFree(d_img_in);
    cudaFree(d_img_out);
    cudaProfilerStop();
    cudaDeviceReset();
}
I haven't read the source code, but there are some things you can't get around.
Your GPU has nearly the same performance as your CPU:
Adapt the following figures to your actual GPU/CPU models.
Specification | GPU        | CPU
------------------------------------------
Bandwidth     | 14.4 GB/s  | 25.6 GB/s
GFLOPS        | 155 (FMA)  | 135
We can conclude that for memory-bound kernels your GPU will never be faster than your CPU.
GPU information found here: http://www.nvidia.fr/object/geforce-gt-610-fr.html#pdpContent=2
CPU information found here: http://ark.intel.com/products/75123/Intel-Core-i7-4770K-Processor-8M-Cache-up-to-3_90-GHz?q=Intel%20Core%20i7%204770K
and here: http://www.ocaholic.ch/modules/smartsection/item.php?page=6&itemid=1005
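To make the memory-bound argument concrete, here is a rough back-of-the-envelope estimate (assuming depth = 4 bytes per pixel and a practical PCIe rate of about 6 GB/s, both of which are guesses):

3680 * 2456 * 4 bytes               ~ 36 MB per frame
kernel traffic (read + write)       ~ 72 MB -> 72 MB / 14.4 GB/s ~ 5 ms on the GT 610
PCIe copies (frame in + frame out)  ~ 72 MB -> 72 MB / 6 GB/s    ~ 12 ms

So roughly 15-20 ms of the measured 130 ms is unavoidable memory and transfer time on this hardware, before any kernel inefficiency is considered.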
One does not simply optimize code just by looking at the source. First of all, you should use the NVIDIA Visual Profiler https://developer.nvidia.com/nvidia-visual-profiler and see which part of your GPU code is taking too much time. You might wish to write a unit test first, however, just to be sure that only the part of your project under investigation is measured.
Additionally, you can use Callgrind http://valgrind.org/docs/manual/cl-manual.html to profile your CPU code's performance.
In general, it is not very surprising that your GPU-"optimized" code is slower than the "unoptimized" one. Individual CUDA cores are usually several times slower than CPU cores, and you have to actually introduce a lot of parallelism to notice a significant speed-up.
EDIT, response to your comment:
As a unit testing framework I strongly recommend GoogleTest. Here you can learn how to use it. Apart from its obvious functionality (code testing), it allows you to run only specific methods from your class interfaces for performance analysis.
In general, the NVIDIA profiler is just a tool that runs your code and tells you how much time each of your kernels consumes. Please look at their documentation.
By "a lot of parallelism" I meant this: on your processor you can run 8 x 3.4 GHz threads, while your GPU has one SM (streaming multiprocessor) with an 810 MHz clock and, let's say, 1024 threads per SM (I do not have the exact data, but you can run the deviceQuery NVIDIA sample to find the exact parameters). Therefore, if your GPU code can run only (3.4*8)/0.81 = 33 computations in parallel, you will achieve exactly nothing: the execution times of your CPU and GPU code will be the same (neglecting GPU L-cache memory copying, which is expensive). Conclusion: your GPU code should be able to compute at least ~40 operations in parallel to give any speed-up. On the other hand, let's say that you are able to fully use your GPU's potential and keep all 1024 threads per SM busy all the time. In that case your code will run only (0.81*1024)/(8*3.4) = 30 times faster (approximately, remembering that we neglect GPU L-cache operations), which is impossible in most cases, because usually you are not able to parallelize your serial code with such efficiency.
Wish you good luck with your research!
Yes, put nvprof to good use, it's a great tool.
What I could see from your code...
1. Consider using linear (1D) thread blocks instead of flat (2D) blocks; it could save some integer operations.
2. Manual correction of image borders and/or thread indices leads to massive divergence and/or impacts coalescing. Consider using texture fetches and/or pre-padding data.
3. memcpy of a single value from inside the kernel is generally a bad idea (see the sketch after this list).
4. Try to minimize type conversions.
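As an illustration of point 3, here is a hedged sketch of the kernel's inner loop without the per-byte memcpy (it reuses the interpolation variables from the kernel in the question; c replaces the loop variable k, which shadows the float parameter k):

// Store the interpolated value directly instead of memcpy-ing it one byte at a time.
for (int c = 0; c < depth; c++)
{
    float v = q00[c]*(1-tx)*(1-ty) + q01[c]*tx*(1-ty)
            + q10[c]*(1-tx)*ty     + q11[c]*tx*ty;
    p[c] = (unsigned char)v;
}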
I read about global memory optimization in OpenCL. In one of the slide shows, a very simple kernel (below) is used to demonstrate the importance of memory coalescing.
__kernel void measure(__global float* idata, __global float* odata, int offset) {
    int xid = get_global_id(0) + offset;
    odata[xid] = idata[xid];
}
Please see my code below, which measures the running time of the kernel:
ret = clFinish(command_queue);
size_t local_item_size = MAX_THREADS;
size_t global_item_size = INPUTSIZE;
struct timeval t0,t1;
gettimeofday(&t0, 0 );
//ret = clFinish(command_queue);
ret = clEnqueueNDRangeKernel(command_queue, measure, 1, NULL,
&global_item_size, &local_item_size, 0, NULL, NULL);
ret = clFlush(command_queue);
ret = clFinish(command_queue);
gettimeofday(&t1,0);
double elapsed = (t1.tv_sec-t0.tv_sec)*1000000 + (t1.tv_usec-t0.tv_usec);
printf("time taken = %lf microseconds\n", elapsed);
I transfer around 0.5 GB of data:
#define INPUTSIZE (1024 * 1024 * 128)
int main (int argc, char *argv[])
{
int offset = atoi(argv[1]);
float* input = (float*) malloc(sizeof(float) * INPUTSIZE);
Now, the results are a bit random. With offset = 0, I get times as low as 21 microseconds. With offset = 1, I get times ranging from 53 microseconds to 24400 microseconds.
Can someone please tell me what is going on? I thought that offset = 0 would be the fastest, because all the threads would access consecutive locations, and hence the minimum number of memory transactions would take place.
Bandwidth is a measure of how fast data can be transferred, and is typically measured in bytes/second in these situations (usually GB/s for GPU memory bandwidth).
To compute the bandwidth of a compute kernel, you just need to know how much data the kernel reads/writes from/to memory, and then divide that by the time your kernel took to execute.
Your example kernel has each work-item (or CUDA thread) read a single float, and write a single float. If you launch this kernel to copy 2^20 floats, then you will be reading 2^20 * sizeof(float) bytes, and writing the same amount (so 8MB in total). If this kernel takes 1ms to execute, then you have achieved a bandwidth of 8MB / 0.001s = 8GB/s.
Your new code snippet that shows your kernel timing approach indicates that you are only timing the kernel enqueue, not the amount of time it actually takes to run the kernel. This is why you are getting very low kernel timings (0.5GB / 0.007ms ~= 71TB/s!). You should add calls to clFinish() to obtain proper timing. I typically also take timings over several runs, to allow the device to warm-up, which usually gives more consistent timing:
// Warm-up run (not timed)
clEnqueueNDRangeKernel(command_queue, ...);
clFinish(command_queue);
// start timing
start = ...
for (int i = 0; i < NUM_RUNS; i++)
{
    clEnqueueNDRangeKernel(command_queue, ...);
}
clFinish(command_queue);
// stop timing
end = ...
// Compute time taken, bandwidth etc
average_time = (end-start)/NUM_RUNS;
...
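For the copy kernel in the question, the achieved bandwidth then follows directly from the amount of data moved per run (hypothetical sketch, reusing INPUTSIZE from the question and assuming the start/end timestamps are in seconds):

// Each run reads INPUTSIZE floats and writes INPUTSIZE floats.
double bytes_per_run = 2.0 * INPUTSIZE * sizeof(float);
double bandwidth_gbs = (bytes_per_run / average_time) / 1e9;
printf("achieved bandwidth: %.2f GB/s\n", bandwidth_gbs);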
Question from comment:
Why does offset=0 perform better than offset=1,4 or 6?
On NVIDIA GPUs, work-items are grouped into 'warps' of size 32, which execute in lockstep (other devices have similar approaches, just with different sizes). Memory transactions are aligned to multiples of the cacheline size (e.g. 64 bytes, 128 bytes, etc.). Consider what happens when each work-item in a warp attempts to read a single 4-byte value (assuming they are contiguous, as per your example), with a cacheline size of 64 bytes.
This warp is reading a total of 128 bytes of data. If the start of this 128-byte chunk is aligned to a 64-byte boundary (i.e. if offset=0), then this can be serviced in two 64-byte transactions. However, if this chunk is not aligned to a 64-byte boundary (offset=1, 4, 6, etc.), then it will require three memory transactions to fetch all of the data. This is where your performance difference comes from.
If you set the offset to be a multiple of the cacheline size (e.g. 64), then you will likely get performance equivalent to offset=0.
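As a rough illustration of the transaction counts described above (assuming a 64-byte cacheline and a warp of 32 work-items each reading one 4-byte float, with the offset counted in floats as in the kernel):

/*
 * offset = 0  -> warp reads bytes [  0, 128) -> cachelines [0,64) + [64,128)             -> 2 transactions
 * offset = 1  -> warp reads bytes [  4, 132) -> cachelines [0,64) + [64,128) + [128,192) -> 3 transactions
 * offset = 16 -> warp reads bytes [ 64, 192) -> cachelines [64,128) + [128,192)          -> 2 transactions
 */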