How to benchmark CUDA programs? - c++

I was trying to benchmark my first CUDA application that adds two arrays first using the CPU and then using the GPU.
Here is the program.
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include<iostream>
#include<chrono>
using namespace std;
using namespace std::chrono;
// add two arrays
void add(int n, float *x, float *y) {
    for (int i = 0; i < n; i++) {
        y[i] += x[i];
    }
}

__global__ void addParallel(int n, float *x, float *y) {
    int i = threadIdx.x;
    if (i < n)
        y[i] += x[i];
}

void printElapseTime(std::chrono::microseconds elapsed_time) {
    cout << "completed in " << elapsed_time.count() << " microseconds" << endl;
}

int main() {
    // generate two arrays of million float values each
    cout << "Generating two lists of a million float values ... ";
    int n = 1 << 28;
    float *x, *y;
    cudaMallocManaged(&x, sizeof(float)*n);
    cudaMallocManaged(&y, sizeof(float)*n);

    // begin benchmark array generation
    auto begin = high_resolution_clock::now();
    for (int i = 0; i < n; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }
    // end benchmark array generation
    auto end = high_resolution_clock::now();
    auto elapsed_time = duration_cast<microseconds>(end - begin);
    printElapseTime(elapsed_time);

    // begin benchmark addition cpu
    begin = high_resolution_clock::now();
    cout << "Adding both arrays using CPU ... ";
    add(n, x, y);
    // end benchmark addition cpu
    end = high_resolution_clock::now();
    elapsed_time = duration_cast<microseconds>(end - begin);
    printElapseTime(elapsed_time);

    // begin benchmark addition gpu
    begin = high_resolution_clock::now();
    cout << "Adding both arrays using GPU ... ";
    addParallel<<<1, 1024>>>(n, x, y);
    cudaDeviceSynchronize();
    // end benchmark addition gpu
    end = high_resolution_clock::now();
    elapsed_time = duration_cast<microseconds>(end - begin);
    printElapseTime(elapsed_time);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
Surprisingly though, the program is generating the following output.
Generating two lists of a million float values ... completed in 13343211 microseconds
Adding both arrays using CPU ... completed in 543994 microseconds
Adding both arrays using GPU ... completed in 3030147 microseconds
I wonder where exactly I am going wrong. Why is the GPU computation taking 6 times longer than the one running on the CPU?
For reference, I'm running Windows 10 on an Intel i7-8750H with an Nvidia GTX 1060.

Note that each of your unified memory arrays contains 268 million floats (n = 1 << 28), i.e. roughly 1 GB per array, so about 2 GB of data has to migrate to the device when you invoke your kernel. Run a GPU profiler (nvprof, nvvp, or Nsight) and you should see host-to-device transfers taking the bulk of your measured GPU time.
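If you want to separate the cost of that data migration from the cost of the kernel itself, one option is to time the kernel with CUDA events after a warm-up launch has already pulled the managed pages onto the device. Here is a rough sketch (not a drop-in replacement for the code above) that reuses n, x, y and the launch configuration from the question; the warm-up launch performs one extra addition, which doesn't matter for timing purposes:
addParallel<<<1, 1024>>>(n, x, y);   // warm-up launch: migrates the managed memory to the GPU
cudaDeviceSynchronize();
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start);
addParallel<<<1, 1024>>>(n, x, y);   // timed launch: pages are already resident on the device
cudaEventRecord(stop);
cudaEventSynchronize(stop);
float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);   // elapsed GPU time in milliseconds
cout << "kernel alone completed in " << ms << " ms" << endl;
cudaEventDestroy(start);
cudaEventDestroy(stop);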

Related

Why does using global variables make the multi-threaded execution 2x slower in one environment while in another it makes it 2x faster?

The code below was taken from an example compiled with g++, where the multi-threaded version was 2x faster than the single-threaded one.
When I run it in Visual Studio 2019 the results are the opposite: the single-threaded version is 2x faster than the multi-threaded one.
#include<thread>
#include<iostream>
#include<chrono>
using namespace std;
using ll = long long;
ll odd, even;
void par(const ll ini, const ll fim)
{
for (auto i = ini; i <= fim; i++)
if (!(i & 1))
even += i;
}
void impar(const ll ini, const ll fim)
{
for (auto i = ini; i <= fim; i++)
if (i & 1)
odd += i;
}
int main()
{
const ll start = 0;
const ll end = 190000000;
/* SINGLE THREADED */
auto start_single = chrono::high_resolution_clock::now();
par(start, end);
impar(start, end);
auto end_single = chrono::high_resolution_clock::now();
auto single_duration = chrono::duration_cast<chrono::microseconds>(end_single - start_single).count();
cout << "SINGLE THREADED\nEven sum: " << even << "\nOdd sum: " << odd << "\nTime: " << single_duration << "ms\n\n\n";
/* END OF SINGLE*/
/* MULTI THREADED */
even = odd = 0;
auto start_multi= chrono::high_resolution_clock::now();
thread t(par, start, end);
thread t2(impar, start, end);
t.join();
t2.join();
auto end_multi = chrono::high_resolution_clock::now();
auto multi_duration = chrono::duration_cast<chrono::microseconds>(end_multi - start_multi).count();
cout << "MULTI THREADED\nEven sum: " << even << "\nOdd sum: " << odd << "\nTime: " << multi_duration << "ms\n";
/*END OF MULTI*/
cout << "\n\nIs multi faster than single? => " << boolalpha << (multi_duration < single_duration) << '\n';
}
However, if I make a small modification to my functions, as shown below:
void par(const ll ini, const ll fim)
{
ll temp = 0;
for (auto i = ini; i <= fim; i++)
if (!(i & 1))
temp += i;
even = temp;
}
void impar(const ll ini, const ll fim)
{
ll temp = 0;
for (auto i = ini; i <= fim; i++)
if (i & 1)
temp += i;
odd = temp;
}
The multi-threaded version performs better. I would like to know what leads to this behavior (what differences in implementation could explain it).
Also, I compiled with gcc on www.onlinegdb.com and the results are similar to Visual Studio's on my machine.
You are a victim of false sharing.
odd and even reside next to each other, and accessing them from two threads leads to cache-line contention (a.k.a. false sharing).
You can fix it by separating them by 64 bytes to make sure they reside in different cache lines, for example like this:
alignas(64) ll odd, even;
With that change I get good speedup with 2 threads:
SINGLE THREADED
Even sum: 9025000095000000
Odd sum: 9025000000000000
Time: 825954ms
MULTI THREADED
Even sum: 9025000095000000
Odd sum: 9025000000000000
Time: 532420ms
As for g++'s performance: it might be performing for you the optimization you made manually. MSVC is more conservative when it comes to optimizing accesses to global variables.
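If you prefer not to depend on the relative placement of two adjacent globals, another common pattern is to pad each accumulator into its own cache line. A minimal sketch, assuming a 64-byte cache line (C++17's std::hardware_destructive_interference_size can be used instead of the hard-coded 64 where available); the struct and variable names here are illustrative, not from the original code:
// Each accumulator gets its own 64-byte-aligned storage, so the two threads
// never write to the same cache line.
struct alignas(64) PaddedSum {
    long long value = 0;
};
PaddedSum odd_sum, even_sum;   // each thread accumulates into odd_sum.value / even_sum.value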

CUDA performance depending on how a variable is declared

Are there any tips for improving CUDA performance in a case like this, such as how variables are declared (global vs. local), how parameters are passed, or how memory is copied?
I'm trying to figure out why the performance of sum_gpu_FAST and sum_gpu_SLOW differs so much in the example below.
Here you can see the whole example code.
#include <iostream>
#include <chrono>
#define N 10000000
__global__
void sum_gpu_FAST(int (&data)[N][2], int& sum, int n) { // runtime : 436.64ms
int s = 0;
for (int i = 0; i < n; i++)
s += data[i][0] * 10 + data[i][1];
sum = s;
}
__global__
void sum_gpu_SLOW(int (&data)[N][2], int& sum, int n) { // runtime : 2.42342s
sum = 0;
for (int i = 0; i < n; i++) {
sum += data[i][0] * 10 + data[i][1];
}
}
void sum_cpu(int (*data)[2], int& sum, int n) {
for (int i = 0; i < n; i++) {
sum += data[i][0] * 10 + data[i][1];
}
}
int main()
{
int (*v)[2] = new int[N][2];
for (int i = 0; i < N; i++)
v[i][0] = 1, v[i][1] = 3;
printf ("-CPU------------------------------------------------\n");
{
int sum = 0;
auto start = std::chrono::system_clock::now();
sum_cpu(v, sum, N);
auto end = std::chrono::system_clock::now();
// print output
std::cout << sum << " / " << (end-start).count() / 1000000 << "ms" << std::endl;
}
printf ("-GPU-Ready------------------------------------------\n");
int *dev_sum = nullptr;
int (*dev_v)[N][2] = nullptr;
cudaMalloc((void **)&dev_v, sizeof(int[N][2]));
cudaMalloc((void **)&dev_sum, sizeof(int));
cudaMemcpy(dev_v, v, sizeof(int[N][2]), cudaMemcpyHostToDevice);
printf("-GPU-FAST-------------------------------------------\n");
{
int sum = 0;
auto start = std::chrono::system_clock::now();
sum_gpu_FAST<<<1, 1>>> (*dev_v, *dev_sum, N);
cudaDeviceSynchronize(); // wait until end of kernel
auto end = std::chrono::system_clock::now();
// print output
cudaMemcpy( &sum, dev_sum, sizeof(int), cudaMemcpyDeviceToHost );
std::cout << sum << " / " << (end-start).count() / 1000000 << "ms" << std::endl;
}
printf("-GPU-SLOW-------------------------------------------\n");
{
int sum = 0;
auto start = std::chrono::system_clock::now();
sum_gpu_SLOW<<<1, 1>>> (*dev_v, *dev_sum, N);
cudaDeviceSynchronize(); // wait until end of kernel
auto end = std::chrono::system_clock::now();
// print output
cudaMemcpy( &sum, dev_sum, sizeof(int), cudaMemcpyDeviceToHost );
std::cout << sum << " / " << (end-start).count() / 1000000 << "ms" << std::endl;
}
printf("----------------------------------------------------\n");
return 0;
}
I'm trying to figure out why the performance of sum_gpu_FAST and sum_gpu_SLOW differs so much in the example below.
In the fast case, you are creating a local variable which is contained (presumably) in a register:
int s = 0;
During the loop iterations, reads are occurring from global memory, but the only write operation is to a register:
for (int i = 0; i < n; i++)
s += data[i][0] * 10 + data[i][1];
In the slow case, the running sum is contained in a variable resident in global memory:
sum = 0;
therefore, at each loop iteration, the updated value is written to global memory:
for (int i = 0; i < n; i++) {
sum += data[i][0] * 10 + data[i][1];
Therefore the loop has additional overhead to write to global memory at each iteration, which is slower than maintaining the sum in a register.
I'm not going to completely dissect the SASS code to compare these two cases, because the compiler is making other decisions in the fast case around loop unrolling and possibly other factors, but my guess is that the lack of a need to store results to global memory during the loop iterations considerably assists with loop unrolling as well. However we can make a simple deduction based on the tail end of the SASS code for each case:
Function : _Z12sum_gpu_FASTRA10000000_A2_iRii
.headerflags #"EF_CUDA_SM70 EF_CUDA_PTX_SM(EF_CUDA_SM70)"
/*0000*/ MOV R1, c[0x0][0x28] ; /* 0x00000a0000017a02 */
/* 0x000fd00000000f00 */
...
/*0b00*/ STG.E.SYS [R2], R20 ; /* 0x0000001402007386 */
/* 0x000fe2000010e900 */
/*0b10*/ EXIT ; /* 0x000000000000794d */
/* 0x000fea0003800000 */
In the fast case above, we see that there is a single global store (STG) instruction at the end of the kernel, right before the return statement (EXIT), and outside of any loops in the kernel. Although I haven't shown it all, indeed there are no other STG instructions in the fast kernel, except the one at the end. We see a different story looking at the tail end of the slow kernel:
code for sm_70
Function : _Z12sum_gpu_SLOWRA10000000_A2_iRii
.headerflags #"EF_CUDA_SM70 EF_CUDA_PTX_SM(EF_CUDA_SM70)"
/*0000*/ IMAD.MOV.U32 R1, RZ, RZ, c[0x0][0x28] ; /* 0x00000a00ff017624 */
/* 0x000fd000078e00ff */
...
/*0460*/ STG.E.SYS [R2], R7 ; /* 0x0000000702007386 */
/* 0x0005e2000010e900 */
/*0470*/ @!P0 BRA 0x2f0 ; /* 0xfffffe7000008947 */
/* 0x000fea000383ffff */
/*0480*/ EXIT ; /* 0x000000000000794d */
/* 0x000fea0003800000 */
In the slow kernel, the STG instruction sits inside a loop (the predicated branch right after it jumps back to 0x2f0). The slow kernel also has many instances of the STG instruction throughout, presumably because of compiler loop unrolling.
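For reference, SASS listings like the ones above can be obtained from a compiled binary with cuobjdump; the file name below is just a placeholder:
$ nvcc -arch=sm_70 -o sum_example sum_example.cu
$ cuobjdump -sass ./sum_example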

thrust::max_element slow in comparison to cublasIsamax - More efficient implementation?

I need a fast and efficient implementation for finding the index of the maximum value in an array in CUDA. This operation needs to be performed several times. I originally used cublasIsamax for this, however, it sadly returns the index of the maximum absolute value, which is not what I want. Instead, I'm using thrust::max_element, however the speed is rather slow in comparison to cublasIsamax. I use it in the following manner:
//d_vector is a pointer on the device pointing to the beginning of the vector, containing nrElements floats.
thrust::device_ptr<float> d_ptr = thrust::device_pointer_cast(d_vector);
thrust::device_vector<float>::iterator d_it = thrust::max_element(d_ptr, d_ptr + nrElements);
max_index = d_it - (thrust::device_vector<float>::iterator)d_ptr;
The number of elements in the vector ranges between 10'000 and 20'000. The difference in speed between thrust::max_element and cublasIsamax is rather big. Perhaps I'm performing several memory transactions without knowing it?
A more efficient implementation would be to write your own max-index reduction code in CUDA. It's likely that cublasIsamax is using something like this under the hood.
We can compare 3 approaches:
thrust::max_element
cublasIsamax
custom CUDA kernel
Here's a fully worked example:
$ cat t665.cu
#include <cublas_v2.h>
#include <thrust/extrema.h>
#include <thrust/device_ptr.h>
#include <thrust/device_vector.h>
#include <iostream>
#include <stdlib.h>
#define DSIZE 10000
// nTPB should be a power-of-2
#define nTPB 256
#define MAX_KERNEL_BLOCKS 30
#define MAX_BLOCKS ((DSIZE/nTPB)+1)
#define MIN(a,b) ((a>b)?b:a)
#define FLOAT_MIN -1.0f
#include <time.h>
#include <sys/time.h>
unsigned long long dtime_usec(unsigned long long prev){
#define USECPSEC 1000000ULL
timeval tv1;
gettimeofday(&tv1,0);
return ((tv1.tv_sec * USECPSEC)+tv1.tv_usec) - prev;
}
__device__ volatile float blk_vals[MAX_BLOCKS];
__device__ volatile int blk_idxs[MAX_BLOCKS];
__device__ int blk_num = 0;
template <typename T>
__global__ void max_idx_kernel(const T *data, const int dsize, int *result){
__shared__ volatile T vals[nTPB];
__shared__ volatile int idxs[nTPB];
__shared__ volatile int last_block;
int idx = threadIdx.x+blockDim.x*blockIdx.x;
last_block = 0;
T my_val = FLOAT_MIN;
int my_idx = -1;
// sweep from global memory
while (idx < dsize){
if (data[idx] > my_val) {my_val = data[idx]; my_idx = idx;}
idx += blockDim.x*gridDim.x;}
// populate shared memory
vals[threadIdx.x] = my_val;
idxs[threadIdx.x] = my_idx;
__syncthreads();
// sweep in shared memory
for (int i = (nTPB>>1); i > 0; i>>=1){
if (threadIdx.x < i)
if (vals[threadIdx.x] < vals[threadIdx.x + i]) {vals[threadIdx.x] = vals[threadIdx.x+i]; idxs[threadIdx.x] = idxs[threadIdx.x+i]; }
__syncthreads();}
// perform block-level reduction
if (!threadIdx.x){
blk_vals[blockIdx.x] = vals[0];
blk_idxs[blockIdx.x] = idxs[0];
if (atomicAdd(&blk_num, 1) == gridDim.x - 1) // then I am the last block
last_block = 1;}
__syncthreads();
if (last_block){
idx = threadIdx.x;
my_val = FLOAT_MIN;
my_idx = -1;
while (idx < gridDim.x){
if (blk_vals[idx] > my_val) {my_val = blk_vals[idx]; my_idx = blk_idxs[idx]; }
idx += blockDim.x;}
// populate shared memory
vals[threadIdx.x] = my_val;
idxs[threadIdx.x] = my_idx;
__syncthreads();
// sweep in shared memory
for (int i = (nTPB>>1); i > 0; i>>=1){
if (threadIdx.x < i)
if (vals[threadIdx.x] < vals[threadIdx.x + i]) {vals[threadIdx.x] = vals[threadIdx.x+i]; idxs[threadIdx.x] = idxs[threadIdx.x+i]; }
__syncthreads();}
if (!threadIdx.x)
*result = idxs[0];
}
}
int main(){
int nrElements = DSIZE;
float *d_vector, *h_vector;
h_vector = new float[DSIZE];
for (int i = 0; i < DSIZE; i++) h_vector[i] = rand()/(float)RAND_MAX;
h_vector[10] = 10; // create definite max element
cublasHandle_t my_handle;
cublasStatus_t my_status = cublasCreate(&my_handle);
cudaMalloc(&d_vector, DSIZE*sizeof(float));
cudaMemcpy(d_vector, h_vector, DSIZE*sizeof(float), cudaMemcpyHostToDevice);
int max_index = 0;
unsigned long long dtime = dtime_usec(0);
//d_vector is a pointer on the device pointing to the beginning of the vector, containing nrElements floats.
thrust::device_ptr<float> d_ptr = thrust::device_pointer_cast(d_vector);
thrust::device_vector<float>::iterator d_it = thrust::max_element(d_ptr, d_ptr + nrElements);
max_index = d_it - (thrust::device_vector<float>::iterator)d_ptr;
cudaDeviceSynchronize();
dtime = dtime_usec(dtime);
std::cout << "thrust time: " << dtime/(float)USECPSEC << " max index: " << max_index << std::endl;
max_index = 0;
dtime = dtime_usec(0);
my_status = cublasIsamax(my_handle, DSIZE, d_vector, 1, &max_index);
cudaDeviceSynchronize();
dtime = dtime_usec(dtime);
std::cout << "cublas time: " << dtime/(float)USECPSEC << " max index: " << max_index << std::endl;
max_index = 0;
int *d_max_index;
cudaMalloc(&d_max_index, sizeof(int));
dtime = dtime_usec(0);
max_idx_kernel<<<MIN(MAX_KERNEL_BLOCKS, ((DSIZE+nTPB-1)/nTPB)), nTPB>>>(d_vector, DSIZE, d_max_index);
cudaMemcpy(&max_index, d_max_index, sizeof(int), cudaMemcpyDeviceToHost);
dtime = dtime_usec(dtime);
std::cout << "kernel time: " << dtime/(float)USECPSEC << " max index: " << max_index << std::endl;
return 0;
}
$ nvcc -O3 -arch=sm_20 -o t665 t665.cu -lcublas
$ ./t665
thrust time: 0.00075 max index: 10
cublas time: 6.3e-05 max index: 11
kernel time: 2.5e-05 max index: 10
$
Notes:
CUBLAS returns an index 1 higher than the others because CUBLAS uses 1-based indexing.
CUBLAS might be quicker if you used CUBLAS_POINTER_MODE_DEVICE, however for validation you would still have to copy the result back to the host.
CUBLAS with CUBLAS_POINTER_MODE_DEVICE should be asynchronous, so the cudaDeviceSynchronize() will be desirable for the host based timing I've shown here. In some cases, thrust can be asynchronous as well.
For convenience and results comparison between CUBLAS and the other methods, I am using all nonnegative values for my data. You may want to adjust the FLOAT_MIN value if you are using negative values as well.
If you really care about performance, you can try tuning the nTPB and MAX_KERNEL_BLOCKS parameters to see if you can max out performance on your specific GPU. The kernel code also arguably leaves some performance on the table by not switching carefully into a warp-synchronous mode for the final stages of the (two) threadblock reductions (see the sketch after these notes).
The threadblock reduction kernel uses a block-draining/last-block strategy to avoid the overhead of an additional kernel launch to perform the final reduction.
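As a rough illustration of the warp-synchronous idea mentioned above (a sketch only, assuming CUDA 9 or later for the _sync shuffle intrinsics; it is not wired into the kernel shown here), the last 32 partial results of a block reduction can be combined with warp shuffles instead of shared memory and __syncthreads():
// Reduce a (value, index) pair across the 32 lanes of a full warp.
// After the loop, lane 0 holds the maximum value and its index.
__device__ void warp_max_idx(float &val, int &idx) {
    for (int offset = 16; offset > 0; offset >>= 1) {
        float other_val = __shfl_down_sync(0xffffffff, val, offset);
        int   other_idx = __shfl_down_sync(0xffffffff, idx, offset);
        if (other_val > val) { val = other_val; idx = other_idx; }
    }
}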

Why is cublas on a GTX Titan slower than single-threaded CPU code?

I am testing Nvidia Cublas Library on my GTX Titan. I have the following code:
#include "cublas.h"
#include <stdlib.h>
#include <conio.h>
#include <Windows.h>
#include <iostream>
#include <iomanip>
/* Vector size */
#define N (1024 * 1024 * 32)
/* Main */
int main(int argc, char** argv)
{
LARGE_INTEGER frequency;
LARGE_INTEGER t1, t2;
float* h_A;
float* h_B;
float* d_A = 0;
float* d_B = 0;
/* Initialize CUBLAS */
cublasInit();
/* Allocate host memory for the vectors */
h_A = (float*)malloc(N * sizeof(h_A[0]));
h_B = (float*)malloc(N * sizeof(h_B[0]));
/* Fill the vectors with test data */
for (int i = 0; i < N; i++)
{
h_A[i] = rand() / (float)RAND_MAX;
h_B[i] = rand() / (float)RAND_MAX;
}
QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&t1);
/* Allocate device memory for the vectors */
cublasAlloc(N, sizeof(d_A[0]), (void**)&d_A);
cublasAlloc(N, sizeof(d_B[0]), (void**)&d_B);
/* Initialize the device matrices with the host vectors */
cublasSetVector(N, sizeof(h_A[0]), h_A, 1, d_A, 1);
cublasSetVector(N, sizeof(h_B[0]), h_B, 1, d_B, 1);
/* Performs operation using cublas */
float res = cublasSdot(N, d_A, 1, d_B, 1);
/* Memory clean up */
cublasFree(d_A);
cublasFree(d_B);
QueryPerformanceCounter(&t2);
double elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
std::cout << "GPU time = " << std::setprecision(16) << elapsedTime << std::endl;
std::cout << "GPU result = " << res << std::endl;
QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&t1);
float sum = 0.;
for (int i = 0; i < N; i++) {
sum += h_A[i] * h_B[i];
}
QueryPerformanceCounter(&t2);
elapsedTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
std::cout << "CPU time = " << std::setprecision(16) << elapsedTime << std::endl;
std::cout << "CPU result = " << sum << std::endl;
free(h_A);
free(h_B);
/* Shutdown */
cublasShutdown();
getch();
return EXIT_SUCCESS;
}
When I run the code I get the following result:
GPU time = 164.7487009845991
GPU result = 8388851
CPU time = 45.22368030957917
CPU result = 7780599.5
Why is using the cublas library on a GTX Titan 3 times slower than the calculation on one 2.4 GHz Ivy Bridge Xeon core?
When I increase or decrease the vector size, I get the same result: the GPU is slower than the CPU. Using double precision doesn't change it.
Because a dot product uses each vector element only once, the time to send the data to the video card is much greater than the time to compute everything on the CPU: PCI Express is much slower than RAM.
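A rough back-of-the-envelope calculation illustrates this (the ~6 GB/s effective PCI-E throughput is an assumption, not a measurement of your system):
2 vectors x 33.5M floats x 4 bytes  ~ 268 MB to transfer
268 MB / ~6 GB/s                    ~ 45 ms on the PCI-E bus alone
That is already on the order of your entire measured CPU time, before the GPU has done a single multiply-add.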
I think you should read this:
http://blog.theincredibleholk.org/blog/2012/12/10/optimizing-dot-product/
There are three main points; I will briefly comment on them:
GPUs are good at hiding latency with lots of computation (if you can balance calculation against data transfer). Here memory is accessed a lot (a bandwidth-limited problem) and there isn't enough computation to hide the latency, which kills your performance.
Furthermore, the data is read only once, so caching isn't exploited at all, while CPUs are extremely good at prefetching the data that will be accessed next.
You're also timing the allocations and the transfer over the PCI-E bus, which is very slow compared to main memory access (see the timing sketch at the end of this answer).
All of the above makes the example you posted a case in which the CPU outperforms a massively parallel architecture like your GPU.
Optimizations for such a problem could be:
Keeping the data on the device as much as possible
Having each thread calculate more elements (and thus hide latency)
Also: http://www.nvidia.com/object/nvidia_research_pub_001.html
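As a concrete illustration of the timing point above, here is a rough sketch against the question's variables that brackets only the cublasSdot call; it assumes the allocations (cublasAlloc) and transfers (cublasSetVector) have already been done before the timer starts:
/* d_A and d_B are assumed to be allocated and filled beforehand */
QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&t1);
float res = cublasSdot(N, d_A, 1, d_B, 1); /* blocking call: returns once the result is on the host */
QueryPerformanceCounter(&t2);
double dotTime = (t2.QuadPart - t1.QuadPart) * 1000.0 / frequency.QuadPart;
std::cout << "cublasSdot only = " << dotTime << " ms" << std::endl;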

Performance of swapping integers vs doubles

For some reason my code performs swaps on doubles faster than swaps on integers, and I have no idea why this is happening.
On my machine the double-swap loop completes 11 times faster than the integer-swap loop. What property of doubles/integers makes them perform this way?
Test setup
Visual Studio 2012 x64
cpu core i7 950
Build as Release and run exe directly, VS Debug hooks skew things
Output:
Process time for ints 1.438 secs
Process time for doubles 0.125 secs
#include <iostream>
#include <ctime>
using namespace std;
#define N 2000000000
void swap_i(int *x, int *y) {
int tmp = *x;
*x = *y;
*y = tmp;
}
void swap_d(double *x, double *y) {
double tmp = *x;
*x = *y;
*y = tmp;
}
int main () {
int a = 1, b = 2;
double d = 1.0, e = 2.0, iTime, dTime;
clock_t c0, c1;
// Time int swaps
c0 = clock();
for (int i = 0; i < N; i++) {
swap_i(&a, &b);
}
c1 = clock();
iTime = (double)(c1-c0)/CLOCKS_PER_SEC;
// Time double swaps
c0 = clock();
for (int i = 0; i < N; i++) {
swap_d(&d, &e);
}
c1 = clock();
dTime = (double)(c1-c0)/CLOCKS_PER_SEC;
cout << "Process time for ints " << iTime << " secs" << endl;
cout << "Process time for doubles " << dTime << " secs" << endl;
}
It seems that VS only optimized away one of the loops, as Blastfurnace explained.
When I disable all compiler optimizations and inline the swap code inside the loops, I get the following results (I also switched my timer to std::chrono::high_resolution_clock):
Process time for ints 1449 ms
Process time for doubles 1248 ms
You can find the answer by looking at the generated assembly.
Using Visual C++ 2012 (32-bit Release build) the body of swap_i is three mov instructions but the body of swap_d is completely optimized away to an empty loop. The compiler is smart enough to see that an even number of swaps has no visible effect. I don't know why it doesn't do the same with the int loop.
Just changing #define N 2000000000 to #define N 2000000001 and rebuilding causes the swap_d body to perform actual work. The final times are close on my machine with swap_d being about 3% slower.
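If you want to check this on your own build, one way (a sketch; the source file name is just a placeholder) is to ask the compiler for an assembly listing and compare the two loop bodies, or look at the Disassembly window while paused inside each loop:
rem Release-style build that also writes an assembly listing (swap_bench.asm)
cl /O2 /EHsc /FA swap_bench.cpp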