Intel MKL Mismatch results of LAPACKE_dgesvd - c++

I have a call to the LAPACKE_dgesvd function in my code. This code is covered by an autotest. As part of a compiler migration we decided to upgrade MKL too, from 11.3.4 to 2019.0.5, and the tests turned red. After some investigation I found that this function no longer returns the same U & V matrices.
I extracted the code and ran it in a separate environment/project, with the same observation: the first column of U and the first row of V have the opposite sign.
Could you please tell me what I'm doing wrong here, or how I should use the new version to get the old results?
I made a simple project that makes it easy to reproduce the issue. Here is the code:
// MKL.cpp : This file contains the 'main' function. Program execution begins and ends there
#include <iostream>
#include <algorithm>
#include <mkl.h>

int main()
{
    const int rows(3), cols(3);
    double covarMatrix[rows*cols] = {  0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
                                      -0.6088405718211041,  0.9992441421012869, -0.3357678733652218,
                                      -0.4935146797825398, -0.3357678733652218,  0.9992441421012761 };
    double U[rows*rows] = { -1,-1,-1,
                            -1,-1,-1,
                            -1,-1,-1 };
    double V[cols*cols] = { -1,-1,-1,
                            -1,-1,-1,
                            -1,-1,-1 };
    double superb[std::min(rows, cols) - 1];
    double eigenValues[std::max(rows, cols)];   // these are the singular values, despite the name

    MKL_INT info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A',
                                  rows, cols, covarMatrix, cols, eigenValues, U, rows, V, cols, superb);
    if (info > 0)
        std::cout << "not converged!\n";

    std::cout << "U\n";
    for (int row(0); row < rows; ++row)
    {
        for (int col(0); col < rows; ++col)
            std::cout << U[row * rows + col] << " ";
        std::cout << std::endl;
    }

    std::cout << "V\n";
    for (int row(0); row < cols; ++row)
    {
        for (int col(0); col < cols; ++col)
            std::cout << V[row * cols + col] << " ";   // stride is cols (rows == cols here, so the output is the same)
        std::cout << std::endl;
    }
    std::cout << "Converged!\n";
}
Here are more numerical details:
A = 0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
-0.6088405718211041, 0.9992441421012869, -0.3357678733652218,
-0.4935146797825398, -0.3357678733652218, 0.9992441421012761
Results on 11.3.4:
U
-0.765774 -0.13397 0.629
0.575268 -0.579935 0.576838
0.2875 0.803572 0.521168
V
-0.765774 0.575268 0.2875
-0.13397 -0.579935 0.803572
0.629 0.576838 0.521168

Results on 2019.0.5 & 2020.1.216:
U
0.765774 -0.13397 0.629
-0.575268 -0.579935 0.576838
-0.2875 0.803572 0.521168
V
0.765774 -0.575268 -0.2875
-0.13397 -0.579935 0.803572
0.629 0.576838 0.521168
I tested using scipy, and the result is identical to the one from version 11.3.4.
from scipy import linalg
from numpy import array
A = array([[0.9992441421012894, -0.6088405718211041, -0.4935146797825398], [-0.6088405718211041, 0.9992441421012869, -0.3357678733652218], [-0.4935146797825398, -0.3357678733652218, 0.9992441421012761]])
print(A)
u,s,vt,info = linalg.lapack.dgesvd(A)
print(u)
print(s)
print(vt)
print(info)
Thanks for your help and best regards
Mokhtar

The singular value decomposition is not unique. For example, if we have an SVD (i.e. a set of matrices U, S, V) such that A = U S V^T, then the set of matrices (-U, S, -V) is also an SVD, because (-U) S (-V)^T = U S V^T = A. More generally, if D is a diagonal matrix whose diagonal entries are equal to -1 or 1, then (UD, S, VD) is also an SVD, because (UD) S (VD)^T = U (D S D) V^T = U S V^T = A.
For that reason it is not a good idea to validate an SVD by comparing two sets of matrices. The LAPACK Users' Guide, like many other publications, recommends checking the following conditions for the computed SVD:
1. || A V - U S || / || A || should be small enough
2. || U^T U - I || should be close to zero
3. || V^T V - I || should be close to zero
4. all diagonal entries of S must be positive and sorted in decreasing order
The error bounds for all the expressions above can be found at https://www.netlib.org/lapack/lug/node97.html.
Both MKL versions mentioned in the post return singular values and singular vectors that satisfy all four error bounds. Because of that, and because the SVD is not unique, both results are correct. The change of sign in the first singular vectors happened because, for very small matrices, a different and faster method for the reduction to bidiagonal form is now used.
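For reference, here is a minimal sketch (my own illustration, not part of the original answer) of how those four checks can be coded for the 3x3 example above, instead of comparing U and V entry by entry. Note that the matrix LAPACKE_dgesvd returns in the "V" argument actually holds V^T.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <mkl.h>

// Frobenius norm of an r x c matrix stored row-major
static double fnorm(const double* M, int r, int c) {
    double s = 0.0;
    for (int i = 0; i < r * c; ++i) s += M[i] * M[i];
    return std::sqrt(s);
}

int main() {
    const int n = 3;
    double A[n * n] = {  0.9992441421012894, -0.6088405718211041, -0.4935146797825398,
                        -0.6088405718211041,  0.9992441421012869, -0.3357678733652218,
                        -0.4935146797825398, -0.3357678733652218,  0.9992441421012761 };
    double Acopy[n * n];
    std::copy(A, A + n * n, Acopy);                          // dgesvd overwrites its input
    double U[n * n], vt[n * n], s[n], superb[n - 1];
    LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A', n, n, Acopy, n, s, U, n, vt, n, superb);

    double R1[n * n], R2[n * n], R3[n * n];
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double av = 0.0, utu = 0.0, vtv = 0.0;
            for (int k = 0; k < n; ++k) {
                av  += A[i * n + k] * vt[j * n + k];         // (A*V)(i,j), since V = vt^T
                utu += U[k * n + i] * U[k * n + j];          // (U^T*U)(i,j)
                vtv += vt[i * n + k] * vt[j * n + k];        // (V^T*V)(i,j)
            }
            R1[i * n + j] = av - U[i * n + j] * s[j];        // A*V - U*S
            R2[i * n + j] = utu - (i == j ? 1.0 : 0.0);      // U^T*U - I
            R3[i * n + j] = vtv - (i == j ? 1.0 : 0.0);      // V^T*V - I
        }

    std::printf("||A*V - U*S|| / ||A|| = %g\n", fnorm(R1, n, n) / fnorm(A, n, n));
    std::printf("||U^T*U - I||         = %g\n", fnorm(R2, n, n));
    std::printf("||V^T*V - I||         = %g\n", fnorm(R3, n, n));
    for (int i = 0; i + 1 < n; ++i)                          // check 4: non-negative, decreasing
        if (s[i] < s[i + 1] || s[i + 1] < 0.0) std::printf("ordering check failed\n");
    return 0;
}
Both the 11.3.4 and the 2019.0.5 results pass these checks, which is the sense in which both are correct.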

Related

Diagonal matrix properly in armadillo

My code works, but I'm curious to see whether someone knows how to do this properly using the Armadillo library.
Thanks for your time :)
arma::mat W = arma::mat(4, 4, arma::fill::ones);
arma::mat D = arma::mat(4, 4, arma::fill::zeros);

for (size_t i = 0; i < W.n_rows; i++)
{
    for (size_t j = 0; j < W.n_cols; j++)
    {
        D(i, i) += W(i, j);
    }
}

std::cout << "W = \n" << W << std::endl;
std::cout << "D = \n" << D << std::endl;
It seems you are summing the elements in each row in the W matrix and putting the result in the diagonal of the D matrix. That is, you are summing elements over the "columns" dimension. This is very easy to do in armadillo and does not require any manual loop.
Armadillo has a sum function with a few overloads. One of these overloads receives a second parameter that you can use to specify in which dimension you want to perform the sum. Just specify the second dimension (index 1) and you get the proper result.
However, the result you get from arma::sum(W, 1) will be a column vector. That makes sense, since you are summing over one of the dimensions of the matrix. Just pass the result to arma::diagmat and you get the same D matrix as with your original code. Your code can then be replaced by
arma::mat W = arma::mat(4, 4, arma::fill::ones);
arma::mat D = arma::mat(4, 4, arma::fill::zeros);
W.print("W");
arma::diagmat(arma::sum(W, 1)).print("D");
Note: I have used the .print method to print the matrices, in case you don't know about it. It is easier than using std::cout.

(c++, armadillo) Replace a part of column vector from a matrix

I'm using Rcpp with the Armadillo library. My algorithm has a for-loop where I update the j-th column without its j-th element at every step. Therefore, after one cycle, the input matrix will have all off-diagonal elements replaced with new values. To this end, I wrote Rcpp code like the below.
arma::mat submatrix(
    arma::mat A,
    arma::uvec rowid){

  for(int j = 0; j < A.n_rows; j++){
    A.submat(rowid, "j") = randu(A.n_rows - 1);
  }

  return A;
}
However, I'm not sure how the submatrix view will work in the for-loop.
If you replace "j" in the above code with any of the options below, then this toy example
submatrix(matrix(rnorm(3 * 4), nrow = 3, ncol = 4), c(1:2))
will return an error message:
(uvec) j : error: Mat::elem(): incompatible matrix dimensions: 2x0 and 2x1
j or (unsigned int) j : error: no matching member function for call to 'submat'
How could I handle this issue? Any comment would be very appreciated!
I have to confess that I do not fully understand your question -- though I think I get the idea of replacing 'all but one' element of a given row or column.
But your code has a number of problems. The following code is simplified (as I replace the full row), but it assigns row by row. You probably want something like X.submat( first_row, first_col, last_row, last_col ), possibly in two chunks (assign above the diagonal, then below). There is a bit more in the Armadillo documentation about indexing, and there is more at the Rcpp Gallery.
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
arma::mat submatrix(arma::mat A, arma::uvec rowid, int k) {
  for (arma::uword j = 0; j < A.n_rows; j++) {
    A.row(j) = arma::randu(A.n_cols).t();   // a full random row; its length must match n_cols
  }
  return A;
}
/*** R
M <- matrix(1:16,4,4)
submatrix(M, 1, 1)
*/
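For the "all but the j-th element of column j" update the question is actually after, here is one possible sketch using a two-index submat view; the function name and the regspace/find construction are my own illustration, not something from the original answer, so treat it as a starting point rather than the canonical way:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
arma::mat replace_offdiagonal(arma::mat A) {
  for (arma::uword j = 0; j < A.n_cols; j++) {
    // all row indices except j
    arma::uvec rows = arma::find(arma::regspace<arma::uvec>(0, A.n_rows - 1) != j);
    arma::uvec col(1);
    col(0) = j;
    // (n_rows - 1) x 1 view of column j without its diagonal element
    A.submat(rows, col) = arma::randu<arma::vec>(A.n_rows - 1);
  }
  return A;
}

/*** R
replace_offdiagonal(matrix(0, 4, 4))   # the diagonal stays 0, everything else is replaced
*/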

Euclidean distance between each record with other records in an array

So I have an n x m array, and I need to write C++ code that computes the Euclidean distance between each row and every other row in the array, storing it in a new n x n distance array where each cell's value is the distance between the corresponding pair of rows.
distance-array:
     r0  r1  ...  rn
r0   0
r1       0
.            .
.               .
rn                 0
The Euclidean distance between two rows (two records) is computed as follows. Assume we have these two records:
r0: 1 8 7
r1: 2 5 3
r2
.
.
rn
Euclidean distance between r0 and r1 = sqrt((1-2)^2 + (8-5)^2 + (7-3)^2)
To code this I used 4 loops (which I think is too many), but I couldn't get it right. Can someone help me code this without using a 3-D array?
This is my code:
int norarr1[row][column] = { 1,1,1,2,2,2,3,3,3 };
int i = 0; int j = 0; int k = 0; int l = 0;

for (i = 0; i < column; i++){
    for(j = 0; j < column; j++){
        sumd = 0;
        for (k = 0; k < row; k++) {
            for (l = 0; l < row; l++) {
                dist = sqrt((norarr1[i][k] - norarr1[j][l]) ^ 2);
                sumd = sumd + dist;
                cout << "sumd =" << sumd << " ";
            }
            cout << endl;
        }
        disarr[j][i] = sumd;
        disarr[i][j] = sumd;
        cout << disarr[i][j];
    }
    cout << endl;
}
There are several problems with your code. For now, let's ignore the for loops. We'll get to that later.
The first thing is that ^ is the bitwise exclusive or (XOR) operator. It does not do exponentiation like in some other languages. Instead, you need to use std::pow().
Second, you are summing square roots, which is not the correct way to calculate Euclidean distance. Instead, you need to calculate a sum and then take the square root.
Now let's think about the for loops. Assume that you already know which two rows you want to calculate the distance between. Call these r1 and r2. Now you just need to pair one coordinate from r1 with one coordinate from r2. Note that these coordinates will always be in the same column. This means that you only need one loop to calculate the squares of the differences of each pair of coordinates. Then you sum these squares. Finally after this single loop you take the square root.
With that out of the way, we need to iterate over the rows to choose each r1 and r2. Okay, this will take two loops since we want each of these to take on the value of each row.
In total, we will need three for loops. You can make this easier to understand by designing your code well. For example, you can create a class or struct that holds each row. If you know that every row is only three dimensions, then create a point or vector3 class. Now you can write a function which calculates the distance between two points. Finally, store the list of points as a 1D array. In fact, breaking up the data and calculation in this way makes the previous discussion about calculating the distance even easier to understand.
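To make that structure concrete, here is a minimal sketch along those lines (the names and the std::vector layout are my own, not from the original answer): one function computes the distance between two rows with a single loop, and two outer loops fill the symmetric distance matrix.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Euclidean distance between two rows: one loop, sum the squares first, take sqrt once at the end
double rowDistance(const std::vector<int>& a, const std::vector<int>& b) {
    double sum = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) {
        double diff = a[k] - b[k];
        sum += diff * diff;
    }
    return std::sqrt(sum);
}

int main() {
    std::vector<std::vector<int>> rows = { {1, 1, 1}, {2, 2, 2}, {3, 3, 3} };
    const std::size_t n = rows.size();
    std::vector<std::vector<double>> dist(n, std::vector<double>(n, 0.0));

    // two loops to pick each pair of rows; the matrix is symmetric with a zero diagonal
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            dist[i][j] = dist[j][i] = rowDistance(rows[i], rows[j]);

    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j)
            std::cout << dist[i][j] << " ";
        std::cout << "\n";
    }
    return 0;
}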

calculating the eigenvector from a complex eigenvalue in opencv

I am trying to calculate the eigenvector of a 4x4 matrix in opencv.
For this I first calculate the eigenvalue according to this formula:
Det( A - lambda * identity matrix ) = 0
From wiki on eigenvalues and eigenvectors.
After solving this, it gives me 4 eigenvalues that look something like this:
0.37789 + 1.91687i
0.37789 - 1.91687i
0.412312 + 1.87453i
0.412312 - 1.87453i
From these 4 eigenvalues I take the highest value, and I want to use it with this formula:
( A - lambda * identity matrix ) v = 0
I tried to use my original matrix A with the OpenCV function "eigen()", but this doesn't give me the results I am looking for.
I also tried to use RREF (reduced row echelon form); however, I don't know how to do this with complex eigenvalues.
So my question is, how would you calculate this eigenvector?
I plugged my data in to wolframalpha to see what my results should be.
OpenCV already has a function for calculating eigenvalues and eigenvectors, cv::eigen(). I advise using it instead of writing the algorithm yourself.
Here is a good blog that explains how to do this in C, C++ and Python.
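For completeness, here is a minimal usage sketch of cv::eigen() (my own illustration, not from this answer). Keep in mind that cv::eigen() is documented for symmetric input, which is also why it cannot return the complex eigenpairs the question is asking about:
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // a small symmetric example matrix
    cv::Mat A = (cv::Mat_<double>(3, 3) <<
                 2, 1, 0,
                 1, 2, 1,
                 0, 1, 2);

    cv::Mat eigenvalues, eigenvectors;
    cv::eigen(A, eigenvalues, eigenvectors);   // eigenvectors are returned as rows

    std::cout << "eigenvalues:\n" << eigenvalues << "\n";
    std::cout << "eigenvectors (one per row):\n" << eigenvectors << "\n";
    return 0;
}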
So I solved the problem using the 'ComplexEigenSolver' from the Eigen library.
//create a multichannel matrix
Mat a_com = Mat::zeros(4,4,CV_32FC2);
for(int i = 0; i<4; i++)
{
    for(int j = 0; j<4; j++)
    {
        a_com.at<Vec2f>(i,j)[0] = a.at<double>(i,j);
        a_com.at<Vec2f>(i,j)[1] = 0;
    }
}

MatrixXcf eigenA;
cv2eigen(a_com,eigenA); //convert OpenCV to Eigen
ComplexEigenSolver<MatrixXcf> ces;
ces.compute(eigenA);

cout << "The eigenvalues of A are:\n" << ces.eigenvalues() << endl;
cout << "The matrix of eigenvectors, V, is:\n" << ces.eigenvectors() << endl;
This gives me the following output (which is more or less what I was looking for):
The eigenvalues of A are:
(0.3951,-1.89571)
(0.3951,1.89571)
(0.3951,1.89571)
(0.3951,-1.89571)
The matrix of eigenvectors, V, is:
(-0.704546,0) (-5.65862e-009,-0.704546) (-0.064798,-0.0225427) (0.0167534,0.0455606)
(-2.22328e-008,0.707107) (0.707107,-1.65536e-008) (0.0206999,-0.00474562) (-0.0145628,-0.0148895)
(-6.07644e-011,0.0019326) (0.00193259,-4.52426e-011) (-0.706729,6.83797e-005) (-0.000121153,0.706757)
(-1.88954e-009,0.0600963) (0.0600963,-1.40687e-009) (0.00200449,0.703827) (-0.70548,-0.00151068)

Advice on CUDA algorithm to sum columns of a matrix [duplicate]

Windows 7, NVidia GeForce 425M.
I wrote a simple CUDA code which calculates the row sums of a matrix.
The matrix has uni-dimensional representation (pointer to a float).
The serial version of code is below (it has 2 loops, as expected):
void serial_rowSum (float* m, float* output, int nrow, int ncol) {
    float sum;
    for (int i = 0 ; i < nrow ; i++) {
        sum = 0;
        for (int j = 0 ; j < ncol ; j++)
            sum += m[i*ncol+j];
        output[i] = sum;
    }
}
Inside the CUDA code, I call the kernel function sweeping the matrix by rows. Below, the kernel call snippet:
dim3 threadsPerBlock((unsigned int) nThreadsPerBlock); // has to be multiple of 32
dim3 blocksPerGrid((unsigned int) ceil(nrow/(float) nThreadsPerBlock));
kernel_rowSum<<<blocksPerGrid, threadsPerBlock>>>(d_m, d_output, nrow, ncol);
and the kernel function which performs the parallel sum of the rows (still has 1 loop):
__global__ void kernel_rowSum(float *m, float *s, int nrow, int ncol) {
    int rowIdx = threadIdx.x + blockIdx.x * blockDim.x;
    if (rowIdx < nrow) {
        float sum = 0;
        for (int k = 0 ; k < ncol ; k++)
            sum += m[rowIdx*ncol+k];
        s[rowIdx] = sum;
    }
}
So far so good. The serial and parallel (CUDA) results are equal.
The whole point is that the CUDA version takes almost twice the time of the serial one to compute, even if I change the nThreadsPerBlock parameter: I tested nThreadsPerBlock from 32 to 1024 (maximum number of threads per block allowed for my card).
IMO, the matrix dimension is big enough to justify parallelization: 90,000 x 1,000.
Below, I report the time elapsed for the serial and parallel versions using different nThreadsPerBlock. Time reported in msec over an average of 100 samples:
Matrix: nrow = 90000 x ncol = 1000
Serial: Average Time Elapsed per Sample in msec (100 samples): 289.18.
CUDA (32 ThreadsPerBlock): Average Time Elapsed per Sample in msec (100 samples): 497.11.
CUDA (1024 ThreadsPerBlock): Average Time Elapsed per Sample in msec (100 samples): 699.66.
In case it matters: the 32-thread version is the fastest and the 1024-thread version is the slowest.
I understand that there is a kind of overhead when copying from Host to Device and the other way around, but maybe the slowness is because I am not implementing the fastest code.
Since I am far from being a CUDA expert:
Am I coding the fastest version for this task? How could I improve my code?
Can I get rid of the loop in the kernel function?
Any thoughts appreciated.
EDIT 1
Although I describe a standard rowSum, I am interested in the AND/OR of rows that have {0,1} values, like rowAND/rowOR. That said, this doesn't allow me to exploit the cuBLAS trick of multiplying by a column vector of 1s, as suggested by some commenters.
EDIT 2
As suggested by other users and endorsed here:
FORGET ABOUT TRYING TO WRITE YOUR OWN FUNCTIONS, use Thrust library instead and the magic comes.
Since you mentioned you need a general reduction algorithm rather than sum only, I will try to give 3 approaches here. The kernel approach may have the highest performance. The Thrust approach is easiest to implement. The cuBLAS approach works only with sum but has good performance.
Kernel Approach
Here's a very good doc introducing how to optimize a standard parallel reduction. A standard reduction can be divided into 2 stages:
Multiple thread blocks each reduce one part of the data;
One thread block reduces the results of stage 1 to the final single element.
For your multi-reduction (reducing the rows of a matrix) problem, only stage 1 is needed. The idea is to reduce 1 row per thread block. For further considerations like multiple rows per thread block or 1 row per multiple thread blocks, you can refer to the paper provided by @Novak. This may improve the performance further, especially for poorly shaped matrices.
Thrust Approach
A general multi-reduction can be done with thrust::reduce_by_key in a few minutes. You can find some discussion here: Determining the least element and its position in each matrix column with CUDA Thrust.
However, thrust::reduce_by_key does not assume that each row has the same length, so you will pay a performance penalty. Another post, How to normalize matrix columns in CUDA with max performance?, gives a profiling comparison between thrust::reduce_by_key and the cuBLAS approach for the sum of rows. It may give you a basic understanding of the performance.
cuBLAS Approach
The sum of the rows/columns of a matrix A can be seen as a matrix-vector multiplication where the elements of the vector are all ones. It can be represented by the following MATLAB code:
y = A * ones(size(A,2),1);
where y is the sum of the rows of A.
The cuBLAS library provides a high-performance matrix-vector multiplication function, cublas<t>gemv(), for this operation.
Timing results show that this routine is only 10~50% slower than simply reading all the elements of A once, which can be seen as the theoretical upper limit of the performance for this operation.
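To make the trick concrete, here is a minimal host-side sketch (my own illustration, not code from this answer) of the y = A * ones row-sum with cublasSgemv. The matrix is stored row-major, so column-major cuBLAS sees its transpose, and CUBLAS_OP_T undoes that:
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <iostream>
#include <vector>

int main() {
    const int nrow = 3, ncol = 4;
    std::vector<float> h_A(nrow * ncol, 1.f);      // row-major matrix of ones
    std::vector<float> h_ones(ncol, 1.f);          // the all-ones vector
    std::vector<float> h_y(nrow, 0.f);             // will receive the row sums

    float *d_A, *d_ones, *d_y;
    cudaMalloc(&d_A,    nrow * ncol * sizeof(float));
    cudaMalloc(&d_ones, ncol * sizeof(float));
    cudaMalloc(&d_y,    nrow * sizeof(float));
    cudaMemcpy(d_A,    h_A.data(),    nrow * ncol * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_ones, h_ones.data(), ncol * sizeof(float),        cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.f, beta = 0.f;
    // Row-major A (nrow x ncol) read column-major is A^T (ncol x nrow); with
    // CUBLAS_OP_T, gemv therefore computes y = A * ones, i.e. the row sums.
    cublasSgemv(handle, CUBLAS_OP_T, ncol, nrow, &alpha, d_A, ncol,
                d_ones, 1, &beta, d_y, 1);

    cudaMemcpy(h_y.data(), d_y, nrow * sizeof(float), cudaMemcpyDeviceToHost);
    for (float v : h_y) std::cout << v << " ";     // expect: 4 4 4
    std::cout << "\n";

    cublasDestroy(handle);
    cudaFree(d_A); cudaFree(d_ones); cudaFree(d_y);
    return 0;
}
The second answer's approach #4 below uses the same call, with Thrust device vectors and a timing wrapper around it.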
Reducing the rows of a matrix can be done with CUDA Thrust in three ways (they may not be the only ones, but addressing that point is out of scope here). As the same OP also recognized, using CUDA Thrust is preferable for this kind of problem. An approach using cuBLAS is also possible.
APPROACH #1 - reduce_by_key
This is the approach suggested at this Thrust example page. It includes a variant using make_discard_iterator.
APPROACH #2 - transform
This is the approach suggested by Robert Crovella at CUDA Thrust: reduce_by_key on only some values in an array, based off values in a “key” array.
APPROACH #3 - inclusive_scan_by_key
This is the approach suggested by Eric at How to normalize matrix columns in CUDA with max performance?.
APPROACH #4 - cublas<t>gemv
It uses cuBLAS gemv to multiply the relevant matrix by a column of 1's.
THE FULL CODE
Here is the code condensing the four approaches. The Utilities.cu and Utilities.cuh files are maintained here and omitted. The TimingGPU.cu and TimingGPU.cuh files are maintained here and are omitted as well.
#include <cublas_v2.h>

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <thrust/random.h>
#include <thrust/sequence.h>

#include <stdio.h>
#include <iostream>

#include "Utilities.cuh"
#include "TimingGPU.cuh"

// --- Required for approach #2
__device__ float *vals;

/**************************************************************/
/* CONVERT LINEAR INDEX TO ROW INDEX - NEEDED FOR APPROACH #1 */
/**************************************************************/
template <typename T>
struct linear_index_to_row_index : public thrust::unary_function<T,T> {

    T Ncols; // --- Number of columns

    __host__ __device__ linear_index_to_row_index(T Ncols) : Ncols(Ncols) {}

    __host__ __device__ T operator()(T i) { return i / Ncols; }
};

/******************************************/
/* ROW_REDUCTION - NEEDED FOR APPROACH #2 */
/******************************************/
struct row_reduction {

    const int Ncols;    // --- Number of columns

    row_reduction(int _Ncols) : Ncols(_Ncols) {}

    __device__ float operator()(float& x, int& y ) {
        float temp = 0.f;
        for (int i = 0; i<Ncols; i++)
            temp += vals[i + (y*Ncols)];
        return temp;
    }
};

/**************************/
/* NEEDED FOR APPROACH #3 */
/**************************/
template<typename T>
struct MulC: public thrust::unary_function<T, T>
{
    T C;
    __host__ __device__ MulC(T c) : C(c) { }
    __host__ __device__ T operator()(T x) { return x * C; }
};

/********/
/* MAIN */
/********/
int main()
{
    const int Nrows = 5;     // --- Number of rows
    const int Ncols = 8;     // --- Number of columns

    // --- Random uniform integer distribution between 10 and 99
    thrust::default_random_engine rng;
    thrust::uniform_int_distribution<int> dist(10, 99);

    // --- Matrix allocation and initialization
    thrust::device_vector<float> d_matrix(Nrows * Ncols);
    for (size_t i = 0; i < d_matrix.size(); i++) d_matrix[i] = (float)dist(rng);

    TimingGPU timerGPU;

    /***************/
    /* APPROACH #1 */
    /***************/
    timerGPU.StartCounter();
    // --- Allocate space for row sums and indices
    thrust::device_vector<float> d_row_sums(Nrows);
    thrust::device_vector<int>   d_row_indices(Nrows);

    // --- Compute row sums by summing values with equal row indices
    //thrust::reduce_by_key(thrust::make_transform_iterator(thrust::counting_iterator<int>(0), linear_index_to_row_index<int>(Ncols)),
    //                      thrust::make_transform_iterator(thrust::counting_iterator<int>(0), linear_index_to_row_index<int>(Ncols)) + (Nrows*Ncols),
    //                      d_matrix.begin(),
    //                      d_row_indices.begin(),
    //                      d_row_sums.begin(),
    //                      thrust::equal_to<int>(),
    //                      thrust::plus<float>());

    thrust::reduce_by_key(
                thrust::make_transform_iterator(thrust::make_counting_iterator(0), linear_index_to_row_index<int>(Ncols)),
                thrust::make_transform_iterator(thrust::make_counting_iterator(0), linear_index_to_row_index<int>(Ncols)) + (Nrows*Ncols),
                d_matrix.begin(),
                thrust::make_discard_iterator(),
                d_row_sums.begin());

    printf("Timing for approach #1 = %f\n", timerGPU.GetCounter());

    // --- Print result
    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix[i * Ncols + j] << " ";
        std::cout << "] = " << d_row_sums[i] << "\n";
    }

    /***************/
    /* APPROACH #2 */
    /***************/
    timerGPU.StartCounter();
    thrust::device_vector<float> d_row_sums_2(Nrows, 0);
    float *s_vals = thrust::raw_pointer_cast(&d_matrix[0]);
    gpuErrchk(cudaMemcpyToSymbol(vals, &s_vals, sizeof(float *)));
    thrust::transform(d_row_sums_2.begin(), d_row_sums_2.end(), thrust::counting_iterator<int>(0), d_row_sums_2.begin(), row_reduction(Ncols));

    printf("Timing for approach #2 = %f\n", timerGPU.GetCounter());

    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix[i * Ncols + j] << " ";
        std::cout << "] = " << d_row_sums_2[i] << "\n";
    }

    /***************/
    /* APPROACH #3 */
    /***************/
    timerGPU.StartCounter();
    thrust::device_vector<float> d_row_sums_3(Nrows, 0);
    thrust::device_vector<float> d_temp(Nrows * Ncols);
    thrust::inclusive_scan_by_key(
                thrust::make_transform_iterator(thrust::make_counting_iterator(0), linear_index_to_row_index<int>(Ncols)),
                thrust::make_transform_iterator(thrust::make_counting_iterator(0), linear_index_to_row_index<int>(Ncols)) + (Nrows*Ncols),
                d_matrix.begin(),
                d_temp.begin());
    thrust::copy(
                thrust::make_permutation_iterator(
                        d_temp.begin() + Ncols - 1,
                        thrust::make_transform_iterator(thrust::make_counting_iterator(0), MulC<int>(Ncols))),
                thrust::make_permutation_iterator(
                        d_temp.begin() + Ncols - 1,
                        thrust::make_transform_iterator(thrust::make_counting_iterator(0), MulC<int>(Ncols))) + Nrows,
                d_row_sums_3.begin());

    printf("Timing for approach #3 = %f\n", timerGPU.GetCounter());

    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix[i * Ncols + j] << " ";
        std::cout << "] = " << d_row_sums_3[i] << "\n";
    }

    /***************/
    /* APPROACH #4 */
    /***************/
    cublasHandle_t handle;

    timerGPU.StartCounter();
    cublasSafeCall(cublasCreate(&handle));

    thrust::device_vector<float> d_row_sums_4(Nrows);
    thrust::device_vector<float> d_ones(Ncols, 1.f);

    float alpha = 1.f;
    float beta  = 0.f;
    cublasSafeCall(cublasSgemv(handle, CUBLAS_OP_T, Ncols, Nrows, &alpha, thrust::raw_pointer_cast(d_matrix.data()), Ncols,
                               thrust::raw_pointer_cast(d_ones.data()), 1, &beta, thrust::raw_pointer_cast(d_row_sums_4.data()), 1));

    printf("Timing for approach #4 = %f\n", timerGPU.GetCounter());

    for(int i = 0; i < Nrows; i++) {
        std::cout << "[ ";
        for(int j = 0; j < Ncols; j++)
            std::cout << d_matrix[i * Ncols + j] << " ";
        std::cout << "] = " << d_row_sums_4[i] << "\n";
    }

    return 0;
}
TIMING RESULTS (tested on a Kepler K20c)
Matrix size     #1      #1-v2   #2      #3      #4      #4 (no plan)
100  x 100      0.63    1.00    0.10    0.18    139.4   0.098
1000 x 1000     1.25    1.12    3.25    1.04    101.3   0.12
5000 x 5000     8.38    15.3    16.05   13.8    111.3   1.14
100  x 5000     1.25    1.52    2.92    1.75    101.2   0.40
5000 x 100      1.35    1.99    0.37    1.74    139.2   0.14
It seems that approaches #1 and #3 outperform approach #2, except in the cases of small numbers of columns. The best approach, however, is approach #4, which is significantly more convenient than the others, provided that the time needed to create the plan can be amortized during the computation.
If this is the extent (summing the rows) of the operations you need to do with this data, I wouldn't expect a sizable benefit from the GPU. You have exactly one arithmetic operation per data element, and for that you are paying the cost of transferring that data element to the GPU. And beyond a certain problem size (whatever it takes to keep the machine busy) you get no added benefit from larger problem sizes, because the arithmetic intensity is O(n).
So this isn't a particularly exciting problem to solve on the GPU.
But as talonmies has indicated, you have a coalescing problem in the way you have crafted it, which will further slow things down. Let's take a look at a small example:
C1 C2 C3 C4
R1 11 12 13 14
R2 21 22 23 24
R3 31 32 33 34
R4 41 42 43 44
Above is a simple pictorial example of a small portion of your matrix. The machine data storage is such that elements (11), (12), (13), and (14) are stored in adjacent memory locations.
For coalesced access, we want an access pattern such that adjacent memory locations are requested from the same instruction, executed across the warp.
We need to think about execution of your code from the standpoint of a warp, that is 32 threads executing in lock-step. What is your code doing? Which elements is it retrieving (asking for) at each step/instruction? Let's take a look at this line of code:
sum+=m[rowIdx*ncol+k];
Adjacent threads in the warp have adjacent (i.e. consecutive) values for rowIdx as you have created that variable. So when k = 0, which data element is being asked for by each thread when we try to retrieve the value m[rowIdx*ncol+k] ?
In block 0, thread 0 has a rowIdx of 0. Thread 1 has a rowIdx of 1, etc. So the values being asked for by each thread at this instruction are:
Thread: Memory Location: Matrix Element:
0 m[0] (11)
1 m[ncol] (21)
2 m[2*ncol] (31)
3 m[3*ncol] (41)
But this is not coalesced access! Elements (11), (21), etc. are not adjacent in memory. For coalesced access, we would like that Matrix Element row to read like this:
Thread: Memory Location: Matrix Element:
0 m[?] (11)
1 m[?] (12)
2 m[?] (13)
3 m[?] (14)
If you then work backwards to determine what the value of ? should be, you will come up with an instruction something like this:
sum+=m[k*ncol+rowIdx];
This will give coalesced access, but it will not give you the correct answer, because we are now summing matrix columns instead of matrix rows. We can fix this by re-organizing your data storage to be in column-major order rather than row-major order. (You should be able to google that for ideas, right?) Conceptually, this is equivalent to transposing your matrix m. Whether this is convenient for you to do or not is outside the scope of your question, as I see it, and not really a CUDA issue. It may be a simple thing for you to do as you are creating the matrix on the host or transferring the matrix from host to device. But in summary, I don't know of a way to sum the matrix rows with 100% coalesced access, if the matrix is stored in row-major order. (You could resort to a sequence of row-reductions but that looks painful to me.)
It's not uncommon, when we are thinking about ways to accelerate code on the GPU, to consider re-organizing our data storage to facilitate the GPU. This is one example.
And, yes, what I'm outlining here still retains a loop in the kernel.
As an additional comment, I would suggest timing the data copy portions, and kernel (compute) portions separately. I can't tell from your question whether you are timing just the kernel or the entire (GPU) operation, including the data copies. If you time the data copies separately, you may discover that just the data copy time exceeds your CPU time. Any effort put into optimizing your CUDA code will not affect the data copy time. This might be a useful data point before you spend much time on this.