Why is A[k][i][j] better for spatial locality in a 3D array? (where i, j, k are row, col, depth) (CMU lecture, around the 55-minute mark)
I think that OP's question
why is A[k][i][j] better for spatial locality in a 3D array? ( where i,j,k are row, col, depth)
comes from a misunderstanding of the exercise given as an example of spatial locality, where the reader is asked to
permute the loops so that the function ... has good spatial locality
and this code is given:
int sum_array_3d(int a[M][N][N])
{
int i, j, k, sum = 0;
for (i = 0; i < M; i++)
for (j = 0; j < N; j++)
for (k = 0; k < N; k++)
sum += a[k][i][j];
return sum;
}
My interpretation of this task is that the students are asked to either rewrite the inner statement as sum += a[i][j][k]; or change the order of the loops:
int sum_array_3d(int a[M][N][N])
{
int i, j, k, sum = 0;
for (k = 0; k < M; k++) // <-- those are reordered
for (i = 0; i < N; i++)
for (j = 0; j < N; j++)
sum += a[k][i][j]; // <-- this is maintained, verbatim
return sum;
}
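For completeness, the first option (keep the loop order and change only the indexing, so the innermost loop walks the contiguous last dimension) would look like this:
int sum_array_3d(int a[M][N][N])
{
    int i, j, k, sum = 0;
    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                sum += a[i][j][k]; // index order now matches loop order
    return sum;
}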
Actually, the exercise's code as given is wrong: the k loop iterates 0..N-1, but k indexes rank 0 of the array, whose extent is M. Unless M == N, you'll be reading the wrong elements (or running past the end of the array).
The goal is to have your loop iteratively access physically-adjacent locations in memory by manipulating the order of the loops.
Whenever your program reads a value, the CPU requests it from the cache controller. If it's not in cache, that value - and those near it - are retrieved from memory and stored in the cache.
If you then read the next element, it should (usually) already be in the cache, so there's no slow round-trip out to the next cache or host RAM.
If your loop is walking all over the place rather than taking advantage of spatial locality, then you run the risk of suffering far more cache misses, which makes things slow.
In short: getting stuff from the cache is fast, getting it from RAM is slow, and ordering your loops so that they touch adjacent locations helps keep the cache happy.
In graphics, we typically do this:
int a[M*N*N];
for(int offset=0; offset < M*N*N; ++offset)
{
//int y = offset / cols;
//int x = offset % cols;
sum += a[offset];
}
If you need an element by its X,Y, just
offset = Y * cols + X;
int val = a[offset];
or for 3D
offset = Z*N*N + Y*N + X;
or
offset = Z * rows * cols + Y * cols + X;
... and skip all the multidimensional array silliness.
Personally, I'd just do this:
int *p = &a[0][0][0]; // could also just cast, e.g. int *p = (int*)a, but this is clearer
//... array gets populated somehow
for(int i=0;i<M*N*N;++i)
{
sum += p[i];
}
... but that assumes the array is a regular contiguous array, not an array of pointers, or an array of arrays of pointers.
Related
I have the following piece of C++ code. The scale of the problem is N and M. Running the code takes about two minutes on my machine (compiled with g++ -O3). Is there any way to further accelerate it, on the same machine? Any kind of option is on the table: choosing a better data structure, a library, GPU, parallelism, etc.
#include <vector>
using std::vector;

void demo() {
int N = 1000000;
int M=3000;
vector<vector<int> > res(M);
for (int i =0; i <N;i++) {
for (int j=1; j < M; j++){
res[j].push_back(i);
}
}
}
int main() {
demo();
return 0;
}
Additional info: the second loop above, for (int j=1; j < M; j++), is a simplified version of the real problem. In fact, j could be in a different range for each i (of the outer loop), but the number of iterations is about 3000.
With the exact code as shown when writing this answer, you could create the inner vector once, with the specific size, and call iota to initialize it. Then just pass this vector along to the outer vector constructor to use it for each element.
Then you don't need any explicit loops at all, and instead use the (highly optimized, hopefully) standard library to do all the work for you.
Perhaps something like this:
#include <numeric>   // std::iota
#include <vector>

void demo()
{
static int const N = 1000000;
static int const M = 3000;
std::vector<int> data(N);
std::iota(begin(data), end(data), 0);
std::vector<std::vector<int>> res(M, data);
}
Alternatively, you could initialize just one vector with those elements, and then create the other vectors by copying that memory using std::memcpy or std::copy.
Another optimization would be to allocate the memory in advance (e.g. calling reserve on each inner vector).
Also, if you're sure that all the members of the outer vector are identical, you could do a hack: create a single vector with the million elements just once, and have the 3,000 entries of res all refer to that same vector, instead of holding 3,000 separate copies.
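As a rough sketch of the copy-and-preallocate ideas above (assuming, as in the simplified example, that every used inner vector should end up holding 0..N-1; demo_copy is just an illustrative name):
#include <algorithm>
#include <numeric>
#include <vector>

void demo_copy() {
    const int N = 1000000;
    const int M = 3000;
    std::vector<int> prototype(N);
    std::iota(prototype.begin(), prototype.end(), 0);        // fill 0..N-1 once
    std::vector<std::vector<int>> res(M);
    for (int j = 1; j < M; j++) {
        res[j].resize(N);                                    // allocate in advance
        std::copy(prototype.begin(), prototype.end(), res[j].begin()); // one bulk copy instead of N push_backs
    }
}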
On my machine, which has enough memory to avoid swapping, your original code took 86 seconds.
Adding reserve:
for (auto& v : res)
{
v.reserve(N);
}
made basically no difference (85 seconds but I only ran each version once).
Swapping the loop order:
for (int j = 1; j < M; j++) {
for (int i = 0; i < N; i++) {
res[j].push_back(i);
}
}
reduced the time to 10 seconds. This is likely due to a combination of allowing the compiler to use SIMD optimisations and improving cache locality by accessing memory in sequential order.
Creating one vector and copying it into the others:
for (int i = 0; i < N; i++) {
res[1].push_back(i);
}
for (int j = 2; j < M; j++) {
res[j] = res[1];
}
reduced the time to 4 seconds.
Using a single vector:
#include <algorithm>   // std::copy
#include <vector>
using std::vector;

void demo() {
size_t N = 1000000;
size_t M = 3000;
vector<int> res(M*N);
size_t offset = N;
for (size_t i = 0; i < N; i++) {
res[offset++] = i;
}
for (size_t j = 2; j < M; j++) {
std::copy(res.begin() + N, res.begin() + N * 2, res.begin() + offset);
offset += N;
}
}
also took 4 seconds. There probably isn't much improvement because you have 3,000 4 MB vectors; there would likely be more difference if N were smaller or M larger.
I have a matrix of size 50000x100 and I need to sort each row using Cuda in C++. My architecture is a K80 NVidia card.
Since the number of columns is small, I am currently running the sorting algorithm inside a kernel. I am using a modified bubble sort that runs on all rows of the matrix.
I am wondering if there is a more efficient way to proceed. I tried to use thrust::sort inside my kernel but it is much slower. I also tried a merge sort algorithm but the recursive part of the algorithm didn't work inside my kernel.
==edit==
here is my kernel:
__global__ void computeQuantilesKernel(float *matIn, int nRows, int nCols, int nQuantiles, float *outsideValues, float *quantilesAve, int param2)
{
int idx = blockIdx.x * blockDim.x + threadIdx.x;
float values[100];//big enough for 100 columns
int keys[100];
int nQuant[100];//big enough for 100 quantiles (percentiles)
float thisQuantile[100];
int quant;
if (idx >= nRows) return;
//read matIn from global memory
for (int i = 0; i < nCols; i++)
{
values[i] = matIn[idx * nCols + i + param2 * nCols * nRows];
keys[i] = i;
}
//bubble Sort:
int i, j;
int temp;
float tempVal;
for (i = 0; i < nCols - 1; i++)
{
for (j = 0; j < nCols - i - 1; j++)
{
if (values[j + 1] < values[j]) // ascending order simply changes to <
{
tempVal = values[j]; // swap elements
temp = keys[j]; // swap elements
values[j] = values[j + 1];
keys[j] = keys[j + 1];
values[j + 1] = tempVal;
keys[j + 1] = temp;
}
}
}
//end of bubble sort
//reset nQuant and thisQuantile
for (int iQuant = 0; iQuant < nQuantiles; iQuant++)
{
nQuant[iQuant] = 0;
thisQuantile[iQuant] = 0;
}
//Compute sum of outsideValues for each quantile
for (int i = 0; i < nCols; i++)
{
quant = (int)(((float)i + 0.5) / ((float)nCols / (float)nQuantiles));//quantile like Matlab
nQuant[quant]++;
thisQuantile[quant] += outsideValues[idx * nCols + keys[i]];
}
//Divide by the size of each quantile to get averages
for (int iQuant = 0; iQuant < nQuantiles; iQuant++)
{
quantilesAve[idx + nRows * iQuant + param2 * nQuantiles * nRows] = thisQuantile[iQuant] / (float)nQuant[iQuant];
}
}
Your code as it stands uses a single thread to handle each of your rows separately. As a result you are starving for quick scratch memory (registers, L1 cache, shared memory). You are allocating at least 1600 bytes per thread - that is a lot! You want to stay at around 128 bytes per thread (32 registers of 32 bits each). Secondly, you are using local arrays addressable at run-time -- those arrays will be spilled into local memory, trash your L1 cache and end up in global memory again (1600B x 32 threads gives 51KB, which is already at or above the limits of shmem/L1).
For that reason I would suggest handling a single row per block of 64 or 128 threads instead, and keep the row you sort in shared memory. Bubble sort is actually very easy to implement in parallel:
__shared__ float values[nCols];
... load the data ...
__syncthreads();
for (int i = 0; i < nCols/2; i++)
{
int j = threadIdx.x;
if (j % 2 == 0 && j<nCols-1)
if (values[j+1] < values[j])
swap(values[j+1], values[j]);
__syncthreads();
if (j % 2 == 1 && j<nCols-1)
if (values[j+1] < values[j])
swap(values[j+1], values[j]);
__syncthreads();
}
Notice how your inner for j = ... loop is replaced by threadIdx, but the core idea of the algorithm stays the same. In each iteration I perform bubble swap first only on even pairs and then only on odd pairs to avoid parallel conflicts.
I assume that nCols is lower than the dimension of your block, which for 100 elements is easily achievable.
There are many ways that the above code can be improved further, for example
Cut the thread count in half and assume j=threadIdx.x*2 for the first half of the loop body, and j=threadIdx.x*2+1 for the second half. This way no thread stays idle (a sketch follows below).
Use only 32 threads, each handling two values of j sequentially. This way your problem will fit a single warp, allowing you to drop __syncthreads() altogether. With 32 threads, you might be able to use warp shuffle intrinsics.
Experiment with #pragma unroll, although the amount of produced code may make that infeasible. Profiling will help.
Also consider experimenting with hardcoded merge sort instead of bubble sort. If my memory serves me right, when I implemented a warp-sized bubble sort and merge-sort with all loops unrolled, merge sort performed almost twice as fast as bubble sort. Note, it was several years ago, on the first generation of CUDA-capable cards.
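To make the first suggestion concrete, here is a hedged sketch of the half-thread-count variant: one row per block, blockDim.x == nCols/2 threads, and nCols assumed to be a compile-time constant (NCOLS below). Only the sort step is shown; in the real kernel the keys array would be permuted alongside values in the same way. swapf and the kernel name are illustrative, not part of the original code.
#define NCOLS 100   // assumed compile-time column count

__device__ inline void swapf(float &a, float &b) { float t = a; a = b; b = t; }

// launch as sortRowsKernel<<<nRows, NCOLS/2>>>(matIn, nRows);
__global__ void sortRowsKernel(float *matIn, int nRows)
{
    __shared__ float values[NCOLS];
    int row = blockIdx.x;        // one row per block
    int t = threadIdx.x;         // NCOLS/2 threads, each owns two elements
    if (row >= nRows) return;

    values[2*t]     = matIn[row * NCOLS + 2*t];
    values[2*t + 1] = matIn[row * NCOLS + 2*t + 1];
    __syncthreads();

    // odd-even transposition sort: NCOLS phases guarantee the row is sorted
    for (int i = 0; i < NCOLS / 2; i++)
    {
        int j = 2*t;                                    // even phase: pairs (0,1), (2,3), ...
        if (values[j + 1] < values[j]) swapf(values[j], values[j + 1]);
        __syncthreads();

        j = 2*t + 1;                                    // odd phase: pairs (1,2), (3,4), ...
        if (j + 1 < NCOLS && values[j + 1] < values[j]) swapf(values[j], values[j + 1]);
        __syncthreads();
    }

    matIn[row * NCOLS + 2*t]     = values[2*t];
    matIn[row * NCOLS + 2*t + 1] = values[2*t + 1];
}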
I'm attempting to implement a set associative cache that uses least recently used replacement techniques. So far, my code is underestimating the amount of cache hits, and I'm not sure why. Posted below is my function, setAssoc, which takes in an int value that denotes the associativity of the cache, and also a vector of pairs that are a series of data accesses.
The function uses two 2D arrays, one to store the cache blocks, and one to store the "age" of each block in the cache.
For this particular implementation, it's okay to not worry about tag bits or anything of that nature; simply using the address divided by the block size is enough to determine the block number, then using the block number modulo the number of sets to determine the set number is sufficient.
Any insight as to why I may not be accurately predicting the right number of cache hits is appreciated!
int setAssoc(int associativity, vector<pair<unsigned long long, int>>& memAccess){
int blockNum, setNum;
int hitRate = 0;
int numOfSets = 16384 / (associativity * 32);
int cache [numOfSets][associativity];//used to store blocks
int age [numOfSets][associativity];//used to store ages
int maxAge = 0;
int hit;//use this to signal a hit in the cache
//set up cache here
for(int i = 0; i < numOfSets; i++){
for(int j = 0; j < associativity; j++){
cache[i][j] = -1;//initialize all blocks to -1
age[i][j] = 0;//initialize all ages to 0
}//end for int j
}//end for int i
for(int i = 0; i < memAccess.size(); i++){
blockNum = int ((memAccess[i].first) / 32);
setNum = blockNum % numOfSets;
hit = 0;
for(int j = 0; j < associativity; j++){
age[setNum][j]++;//age each entry in the cache
if(cache[setNum][j] == blockNum){
hitRate++;//increment hitRate if block is in cache
age[setNum][j] = 0;//reset age of block since it was just accessed
hit = 1;
}//end if
}//end for int j
if(!hit){
for(int j = 0; j < associativity; j++){
//loop to find the least recently used block
if(age[setNum][j] > maxAge){
maxAge = j;
}//end if
}//end for int j
cache[setNum][maxAge] = blockNum;
age[setNum][maxAge] = 0;
}
}//end for int i
return hitRate;
}//end setAssoc function
Not sure if that's the only problem in this code, but you seem to be confusing the ages with the way number. By assigning maxAge = j, you put an arbitrary way number in your age variable (which will interfere with finding the LRU way). You then use it as a way index.
I would suggest splitting this into 2 variables:
if(!hit){
for(int j = 0; j < associativity; j++){
//loop to find the least recently used block
if(age[setNum][j] > maxAge){
maxAgeWay = j;
maxAge = age[setNum][j];
}//end if
}//end for int j
cache[setNum][maxAgeWay] = blockNum;
age[setNum][maxAgeWay] = 0;
}
(with the proper initialization and bound checking of course)
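Spelling that initialization out, a minimal sketch of the miss path (variable names follow the question's code; the key point is that maxAge is reset on every miss, not once for the whole function):
if(!hit){
    int maxAge = -1;      // reset for every miss
    int maxAgeWay = 0;    // defaults to way 0 if all ages are equal
    for(int j = 0; j < associativity; j++){
        if(age[setNum][j] > maxAge){
            maxAge = age[setNum][j];
            maxAgeWay = j;
        }
    }
    cache[setNum][maxAgeWay] = blockNum;
    age[setNum][maxAgeWay] = 0;
}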
Here is code to find the determinant of an n x n matrix.
#include <iostream>
using namespace std;
int determinant(int *matrix[], int size);
void ijMinor(int *matrix[], int *minorMatrix[], int size, int row, int column);
int main()
{
int size;
cout << "What is the size of the matrix for which you want to find the determinant?:\t";
cin >> size;
int **matrix;
matrix = new int*[size];
for (int i = 0 ; i < size ; i++)
matrix[i] = new int[size];
cout << "\nEnter the values of the matrix seperated by spaces:\n\n";
for(int i = 0; i < size; i++)
for(int j = 0; j < size; j++)
cin >> matrix[i][j];
cout << "\nThe determinant of the matrix is:\t" << determinant(matrix, size) << endl;
return 0;
}
int determinant(int *matrix[], int size){
if(size==1)return matrix[0][0];
else{
int result=0, sign=-1;
for(int j = 0; j < size; j++){
int **minorMatrix;
minorMatrix = new int*[size-1];
for (int k = 0 ; k < size-1 ; k++)
minorMatrix[k] = new int[size-1];
ijMinor(matrix, minorMatrix, size, 0, j);
sign*=-1;
result+=sign*matrix[0][j]*determinant(minorMatrix, size-1);
for(int i = 0; i < size-1; i++){
delete [] minorMatrix[i];
}
}
return result;
}
}
void ijMinor(int *matrix[], int *minorMatrix[], int size, int row, int column){
for(int i = 0; i < size; i++){
for(int j = 0; j < size; j++){
if(i < row){
if(j < column)minorMatrix[i][j] = matrix[i][j];
else if(j == column)continue;
else minorMatrix[i][j-1] = matrix[i][j];
}
else if(i == row)continue;
else{
if(j < column)minorMatrix[i-1][j] = matrix[i][j];
else if(j == column)continue;
else minorMatrix[i-1][j-1] = matrix[i][j];
}
}
}
}
After adding OpenMP pragmas, I've changed the determinant function and now it looks like this:
int determinant(int *matrix[], int size){
if(size==1)return matrix[0][0];
else{
int result=0, sign=-1;
#pragma omp parallel for default(none) shared(size,matrix,sign) private(j,k) reduction(+ : result)
for(int j = 0; j < size; j++){
int **minorMatrix;
minorMatrix = new int*[size-1];
for (int k = 0 ; k < size-1 ; k++)
minorMatrix[k] = new int[size-1];
ijMinor(matrix, minorMatrix, size, 0, j);
sign*=-1;
result+=sign*matrix[0][j]*determinant(minorMatrix, size-1);
for(int i = 0; i < size-1; i++){
delete [] minorMatrix[i];
}
}
return result;
delete [] matrix;
}
}
My problem is that the result is different every time. Sometimes it gives the correct value, but most often it is wrong. I think it's because of the sign variable. I am following the cofactor (Laplace) expansion along the first row:
det(A) = sum_{j=0..n-1} (-1)^j * a[0][j] * det(minor(0, j))
As you can see, in every iteration of my for loop there should be a different sign, but when I use OpenMP something goes wrong. How can I make this program run correctly with OpenMP?
Finally, my second issue is that using OpenMP does not make the program run quicker than without OpenMP. I also tried to make a 100,000 x 100,000 matrix, but my program reports an error about allocating memory. How can I run this program with very large matrices?
Your issues as I see it are as follows:
1) As noted by Hristo, your threads are stomping over each other's data with respect to the sign variable. It should be private to each thread so that they have full read/write access to it without having to worry about race conditions. Then, you simply need an algorithm to compute whether sign is plus or minus 1 depending on the iteration j independently from the other iterations. With a little thinking, you'll see that Hristo's suggestion is correct: sign = (j % 2) ? -1 : 1; should do the trick.
2) Your determinant() function is recursive. As is, that means that in every iteration of the loop, after forming your minors, you then call your function again on that minor. Therefore, a single thread is going to be performing its iteration, enter the recursive function, and then try to split itself up into nthreads more threads. You can see now how you are oversubscribing your system by launching many more threads than you physically have cores. A few easy ways to handle this (a sketch of the first follows below):
Call your original serial function from within the omp parallel code. This is the fastest way to do it because this would avoid any OpenMP-startup overhead.
Turn off nested parallelism by calling omp_set_nested(0); before your first call to determinant().
Add an if clause to your parallel for directive so that only the outermost call is parallelized: if(!omp_in_parallel())
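A minimal sketch of the first option, assuming the original serial determinant() is kept around (renamed determinant_serial here purely for illustration) and ijMinor is unchanged; sign is now computed from j, so no iteration depends on another:
int determinant_parallel(int *matrix[], int size) {
    if (size == 1) return matrix[0][0];
    int result = 0;
    #pragma omp parallel for reduction(+ : result)
    for (int j = 0; j < size; j++) {
        int **minorMatrix = new int*[size - 1];
        for (int k = 0; k < size - 1; k++)
            minorMatrix[k] = new int[size - 1];
        ijMinor(matrix, minorMatrix, size, 0, j);
        int sign = (j % 2) ? -1 : 1;               // independent of the other iterations
        result += sign * matrix[0][j] * determinant_serial(minorMatrix, size - 1);
        for (int i = 0; i < size - 1; i++)
            delete [] minorMatrix[i];
        delete [] minorMatrix;
    }
    return result;
}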
3) Your memory issues are because every iteration of your recursion, you are allocating more memory. If you fix problem #2, then you should be using comparable amounts of memory in the serial case as the parallel case. That being said, it would be much better to allocate all the memory you want before entering your algorithm. Allocating large chunks of memory (and then freeing it!), especially in parallel, is a terrible bottleneck in your code.
Compute the amount of memory you would need (on paper) before entering the first loop and allocate it all at once. I would also strongly suggest you consider allocating your memory contiguously (aka in 1D) to take better advantage of caching as well. Remember that each thread should have its own separate area to work with. Then, change your function to:
int determinant(int *matrix, int *startOfMyWorkspace, int size).
Instead of allocating a new (size-1)x(size-1) matrix inside of your loop, you would simply utilize the next (size-1)*(size-1) integers of your workspace, update what startOfMyWorkspace would be for the next recursive call, and continue along.
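As an illustration only (not the poster's code), a rough sketch of that workspace idea with a flat, row-major matrix; determinant_ws and the layout are assumptions. The workspace for one top-level call needs at least the sum of k*k ints for k = 1..size-1, and each thread would get its own region:
int determinant_ws(const int *matrix, int size, int *workspace) {
    if (size == 1) return matrix[0];
    int *minor = workspace;                                     // next (size-1)*(size-1) ints
    int *childWorkspace = workspace + (size - 1) * (size - 1);  // remainder is for the recursion
    int result = 0;
    for (int j = 0; j < size; j++) {
        // build the (row 0, column j) minor directly into the workspace
        for (int r = 1; r < size; r++)
            for (int c = 0, cc = 0; c < size; c++)
                if (c != j)
                    minor[(r - 1) * (size - 1) + cc++] = matrix[r * size + c];
        int sign = (j % 2) ? -1 : 1;
        result += sign * matrix[j] * determinant_ws(minor, size - 1, childWorkspace);
    }
    return result;
}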
I would like to optimize this simple loop:
unsigned int i;
while(j-- != 0){ //j is an unsigned int with a start value of about N = 36,000,000
float sub = 0;
i=1;
unsigned int c = j+s[1];
while(c < N) {
sub += d[i][j]*x[c];//d[][] and x[] are arrays of float
i++;
c = j+s[i];// s[] is an array of unsigned int with 6 entries.
}
x[j] -= sub; // only one memory-write per j
}
The loop has an execution time of about one second with a 4000 MHz AMD Bulldozer. I thought about SIMD and OpenMP (which I normally use to get more speed), but this loop carries a dependency: each x[j] uses entries x[j + s[i]] that were updated in earlier iterations, so it can't be parallelized naively.
Any suggestions?
I think you may want to transpose the matrix d -- that is, store it in such a way that the indices are exchanged, so that i becomes the inner (contiguous) index:
sub += d[j][i]*x[c];
instead of
sub += d[i][j]*x[c];
This should result in better cache performance.
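A minimal sketch of that transposition, assuming Diagonals (d) holds M rows of N floats, as the loops in the question suggest; transpose_diagonals and dT are illustrative names:
#include <vector>

// Build a transposed copy of d (M rows of N floats) so that, for a fixed j,
// the values d[1][j], d[2][j], ... become contiguous as dT[j][1], dT[j][2], ...
std::vector<std::vector<float>> transpose_diagonals(float **d, unsigned int M, unsigned int N) {
    std::vector<std::vector<float>> dT(N, std::vector<float>(M));
    for (unsigned int i = 0; i < M; i++)
        for (unsigned int j = 0; j < N; j++)
            dT[j][i] = d[i][j];
    return dT;
}
// the inner statement in the question's loop then becomes: sub += dT[j][i] * x[c];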
I agree with transposing for better caching (but see my comments on that at the end), and there's more to do, so let's see what we can do with the full function...
Original function, for reference (with some tidying for my sanity):
void MultiDiagonalSymmetricMatrix::CholeskyBackSolve(float *x, float *b){
//We want to solve L D Lt x = b where D is a diagonal matrix described by Diagonals[0] and L is a unit lower triagular matrix described by the rest of the diagonals.
//Let D Lt x = y. Then, first solve L y = b.
float *y = new float[n];
float **d = IncompleteCholeskyFactorization->Diagonals;
unsigned int *s = IncompleteCholeskyFactorization->StartRows;
unsigned int M = IncompleteCholeskyFactorization->m;
unsigned int N = IncompleteCholeskyFactorization->n;
unsigned int i, j;
for(j = 0; j != N; j++){
float sub = 0;
for(i = 1; i != M; i++){
int c = (int)j - (int)s[i];
if(c < 0) break;
if(c==j) {
sub += d[i][c]*b[c];
} else {
sub += d[i][c]*y[c];
}
}
y[j] = b[j] - sub;
}
//Now, solve x from D Lt x = y -> Lt x = D^-1 y
// Took this one out of the while, so it can be parallelized now, which speeds up, because division is expensive
#pragma omp parallel for
for(j = 0; j < N; j++){
x[j] = y[j]/d[0][j];
}
while(j-- != 0){
float sub = 0;
for(i = 1; i != M; i++){
if(j + s[i] >= N) break;
sub += d[i][j]*x[j + s[i]];
}
x[j] -= sub;
}
delete[] y;
}
Because of the comment about parallel divide giving a speed boost (despite being only O(N)), I'm assuming the function itself gets called a lot. So why allocate memory? Just mark x as __restrict__ and change y to x everywhere (__restrict__ is a GCC extension, taken from C99. You might want to use a define for it. Maybe the library already has one).
Similarly, if you can change the signature, you can make the function take only a single parameter and modify it in place. b is never used once x or y has been set. That would also mean you can get rid of the branch in the first loop, which runs ~N*M times. Use memcpy at the start if you must keep 2 parameters.
And why is d an array of pointers? Must it be? This seems too deep in the original code, so I won't touch it, but if there's any possibility of flattening the stored array, it will be a speed boost even if you can't transpose it (multiply, add, dereference is faster than dereference, add, dereference).
So, new code:
void MultiDiagonalSymmetricMatrix::CholeskyBackSolve(float *__restrict__ x){
// comments removed so that suggestions are more visible. Don't remove them in the real code!
// these definitions got long. Feel free to remove const; it does nothing for the optimiser
const float *const __restrict__ *const __restrict__ d = IncompleteCholeskyFactorization->Diagonals;
const unsigned int *const __restrict__ s = IncompleteCholeskyFactorization->StartRows;
const unsigned int M = IncompleteCholeskyFactorization->m;
const unsigned int N = IncompleteCholeskyFactorization->n;
unsigned int i;
unsigned int j;
for(j = 0; j < N; j++){ // don't use != as an optimisation; compilers can do more with <
float sub = 0;
for(i = 1; i < M && j >= s[i]; i++){
const unsigned int c = j - s[i];
sub += d[i][c]*x[c];
}
x[j] -= sub;
}
// Consider using processor-specific optimisations for this
#pragma omp parallel for
for(j = 0; j < N; j++){
x[j] /= d[0][j];
}
for( j = N; (j --) > 0; ){ // changed for clarity
float sub = 0;
for(i = 1; i < M && j + s[i] < N; i++){
sub += d[i][j]*x[j + s[i]];
}
x[j] -= sub;
}
}
Well it's looking tidier, and the lack of memory allocation and reduced branching, if nothing else, is a boost. If you can change s to include an extra UINT_MAX value at the end, you can remove more branches (both the i<M checks, which again run ~N*M times).
Now we can't make any more loops parallel, and we can't combine loops. The boost now will be, as suggested in the other answer, to rearrange d. Except… the work required to rearrange d has exactly the same cache issues as the work to do the loop. And it would need memory allocated. Not good. The only options to optimise further are: change the structure of IncompleteCholeskyFactorization->Diagonals itself, which will probably mean a lot of changes, or find a different algorithm which works better with data in this order.
If you want to go further, your optimisations will need to impact quite a lot of the code (not a bad thing; unless there's a good reason for Diagonals being an array of pointers, it seems like it could do with a refactor).
I want to give an answer to my own question: the bad performance was caused by cache conflict misses, due to the fact that (at least) Win7 aligns big memory blocks to the same boundary. In my case, all the buffers had the same alignment (buffer address % 4096 was the same for every buffer), so they all fell into the same cache sets of the L1 cache. I changed the memory allocation to align the buffers to different boundaries to avoid cache conflict misses, and got a speedup of a factor of 2. Thanks for all the answers, especially the answers from Dave!
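For readers who want to try the same thing, a minimal sketch of the staggered-allocation idea; the 64-byte per-buffer offset and the helper name are illustrative choices, not the poster's exact code:
#include <cstddef>
#include <cstdlib>

struct StaggeredBuffer {
    char  *raw;    // keep this pointer for free()
    float *data;   // use this pointer in the computation
};

// Give each large buffer a different starting offset so they do not all map
// to the same L1 cache sets.
StaggeredBuffer allocStaggered(std::size_t count, int bufferIndex) {
    std::size_t stagger = 64u * (std::size_t)(bufferIndex + 1);   // different offset per buffer
    char *raw = static_cast<char *>(std::malloc(count * sizeof(float) + stagger));
    return { raw, reinterpret_cast<float *>(raw + stagger) };
}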