cudaMemcpy2D for device memory copies - C++

I have some memory allocated on the device as a single malloc of H*W*sizeof(float) bytes.
This represents an H*W matrix.
I have code where I need to swap the quadrants of the matrix. Can I use cudaMemcpy2D to accomplish this? Would I just need to specify spitch and dpitch as W*sizeof(float) and use pointers to each quadrant of the matrix?
Also, when the documentation for these cudaMemcpy functions says the memory areas must not overlap - does that mean src and dst cannot overlap at all? As in, if I had a 10-byte-wide array that I wanted to shift left by one position, would it fail?
Thanks

You can use cudaMemcpy2D for moving around sub-blocks which are part of larger pitched linear memory allocations. There is no problem in doing that. The non-overlapping requirement is non-negotiable and it will fail if you try it. The source and destination can come from the same allocation, but the address ranges of the source and destination cannot overlap. If you need to do some "in-situ" copying where there is overlap, you might be better served to write a kernel to do it (see the matrix transpose example in the SDK as a sound way to do that kind of thing).
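For illustration, here is a hedged sketch of one such quadrant copy, assuming d_src and d_dst are device pointers to H*W float allocations with H and W even (they may even be the same allocation, as long as the source and destination quadrants' address ranges don't overlap):
// Copy the top-left quadrant of d_src into the bottom-right quadrant
// of d_dst. dpitch/spitch are the full row width in bytes; the copied
// region is W/2 floats wide and H/2 rows tall.
size_t pitch = W * sizeof(float);
cudaMemcpy2D(d_dst + (H / 2) * W + (W / 2), // dst: start of bottom-right quadrant
             pitch,                         // dpitch: full row width in bytes
             d_src,                         // src: start of top-left quadrant
             pitch,                         // spitch: full row width in bytes
             (W / 2) * sizeof(float),       // width of copied region in bytes
             H / 2,                         // number of rows
             cudaMemcpyDeviceToDevice);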

I suggest writing a simple kernel to do this matrix manipulation. I think it would be easier to write than using cudaMemcpy(2D), and almost certainly faster, assuming you write it so the memory accesses coalesce well.
It's probably easiest to do an out-of-place transform (i.e. different input and output arrays) to avoid clobbering the input matrix. Each thread would simply read from its input offset and write to the transformed offset.
It would be similar to a matrix transpose. There is a matrix transpose example in the CUDA SDK.
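A minimal sketch of such a kernel, assuming an out-of-place swap of diagonally opposite quadrants of an H x W float matrix with H and W even (names and launch configuration are illustrative):
__global__ void swapQuadrants(const float* in, float* out, int H, int W)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;

    // Shifting by half the size in each dimension (with wrap-around)
    // swaps top-left with bottom-right and top-right with bottom-left.
    int dstX = (x + W / 2) % W;
    int dstY = (y + H / 2) % H;
    out[dstY * W + dstX] = in[y * W + x];
}

// Example launch:
// dim3 block(32, 8);
// dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
// swapQuadrants<<<grid, block>>>(d_in, d_out, H, W);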

Related

Use shared memory for neighboring array elements?

I'd like to process an image with CUDA. Each pixel's new value is calculated based on the two neighboring pixels in one row. Would it make sense to use __shared__ memory for the pixel values, since each value will be used only twice? Isn't tiling also the wrong approach, since it doesn't suit the problem structure? My approach would be to run a thread per pixel and have each thread load the neighboring pixel values itself.
All currently supported CUDA architectures have caches.
From compute capability 3.5 onwards these are particularly efficient for read-only data, as read-write data is cached only in L2 while the L1/texture cache is reserved for read-only data. If you mark the pointer to the input data as const __restrict__, the compiler will most likely load it via the L1 texture cache. You can also force this by explicitly using the __ldg() builtin.
While it is possible to explicitly manage the reuse of data from neighboring pixels via shared memory, you will probably find this to provide no benefit over just relying on the cache.
Of course, whether or not you use shared memory, you want to maximize the blocksize in x-direction and use a blockSize.y of 1 for optimal access locality.
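A minimal sketch of this cache-based approach, assuming a float image and a filter that averages the two horizontal neighbors (the pixel type and the formula are placeholders):
__global__ void rowFilter(const float* __restrict__ in, float* out,
                          int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || x >= width - 1 || y >= height) return; // skip borders

    int idx = y * width + x;
    // const __restrict__ lets the compiler route these loads through the
    // read-only cache; __ldg(&in[idx - 1]) would force that path explicitly.
    out[idx] = 0.5f * (in[idx - 1] + in[idx + 1]);
}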
Combine using shared memory with taking advantage of coalesced memory accesses. All you need to do is ensure that the image is stored row-wise. Each block then processes a chunk of the linear array. Because of data reuse (every pixel except the first and last in a row takes part in three calculations), it is beneficial to copy the values of all pixels that the block will process into shared memory at the beginning of your kernel, as in the sketch below.
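A hedged sketch of that shared-memory staging, under the same assumptions as above (float pixels, two-neighbor average): each block loads its chunk of the row plus a one-pixel halo on each side.
__global__ void rowFilterShared(const float* in, float* out,
                                int width, int height)
{
    extern __shared__ float tile[];  // blockDim.x + 2 floats (dynamic)
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y;              // blockDim.y == 1, grid.y == height

    // Stage the block's pixels plus halos, clamping at the row ends.
    tile[threadIdx.x + 1] = in[y * width + min(max(x, 0), width - 1)];
    if (threadIdx.x == 0)                       // left halo
        tile[0] = in[y * width + max(x - 1, 0)];
    if (threadIdx.x == blockDim.x - 1)          // right halo
        tile[blockDim.x + 1] = in[y * width + min(x + 1, width - 1)];
    __syncthreads();

    if (x < width)
        out[y * width + x] = 0.5f * (tile[threadIdx.x] + tile[threadIdx.x + 2]);
}

// Launched with the halo included in the dynamic shared-memory size:
// rowFilterShared<<<grid, block, (block.x + 2) * sizeof(float)>>>(d_in, d_out, w, h);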

Efficient (time and space complexity) data structure for dense and sparse matrix

I have to read a file in which a matrix of cars is stored (1 = BlueCar, 2 = RedCar, 0 = Empty).
I need to write an algorithm to move the cars of the matrix in that way:
blue ones move downward;
red ones move rightward;
there is a turn in which all the blue ones move and a turn to move all the red ones.
Before the file is read I don't know the matrix size, nor whether it's dense or sparse, so I have to implement two data structures (one for dense and one for sparse) and two algorithms.
I need to reach the best time and space complexity possible.
Since the matrix size is unknown, I plan to store the data on the heap.
If the matrix is dense, I plan to use something like:
short int** M = new short int*[m];        // table of row pointers
short int* M_data = new short int[m * n]; // one contiguous block
for (int i = 0; i < m; ++i)
{
    M[i] = M_data + i * n;                // each row points into the block
}
With this structure I can allocate one contiguous block of memory, and it is also simple to access with M[i][j].
Now the problem is which structure to choose for the sparse case. I also have to consider how the algorithm can move the cars in the simplest way: for example, when I evaluate a car, I need to find out easily whether the next position (downward or rightward) holds another car or is empty.
Initially I thought of defining BlueCar and RedCar objects that inherit from a general Car object. In these objects I can save the matrix coordinates and then put them in:
std::vector<BlueCar> sparseBlu;
std::vector<RedCar> sparseRed;
Otherwise I can do something like:
std::vector<std::tuple<int, int, short>> sparseMatrix; // (row, column, value)
But the problem of finding what's in the next position still remains.
Probably this is not the best way to do it, so how can I implement the sparse case in an efficient way? (ideally with a single structure for the sparse case)
Why not simply create a memory mapping directly over the file? (assuming your data 0,1,2 is stored in contiguous bytes (or bits) in the file, and the position of those bytes also represents the coordinates of the cars)
This way you don't need to allocate extra memory and read in all the data, and the data can simply and efficiently be accessed with M[i][j].
Going over the rows would be L1-cache friendly.
In case of very sparse data, you could scan through the data once and keep a list of the empty regions/blocks in memory (only need to store startpos and size), which you could then skip (and adjust where needed) in further runs.
With memory mapping, only frequently accessed pages are kept in memory. This means that once you have scanned for the empty regions, memory will only be allocated for the frequently accessed non-empty regions (all this will be done automagically by the kernel - no need to keep track of it yourself).
Another benefit is that you are accessing the OS disk cache directly. Thus no need to keep copying and moving data between kernel space and user space.
To further optimize space- and memory usage, the cars could be stored in 2 bits in the file.
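A minimal POSIX sketch of that mapping, assuming one byte per cell (0, 1, or 2) stored row-major with no file header; mapMatrix and its parameters are illustrative names:
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

// Maps the whole file read-write; cell (i, j) of an n-column matrix
// is then data[i * n + j], backed directly by the OS page cache.
unsigned char* mapMatrix(const char* path, size_t& size)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) return nullptr;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }
    size = static_cast<size_t>(st.st_size);

    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);  // the mapping remains valid after closing the descriptor
    return p == MAP_FAILED ? nullptr : static_cast<unsigned char*>(p);
}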
Update:
I'll have to move cars with OpenMP and MPI... Will the memory mapping work also with concurrent threads?
You could certainly use multithreading, but I'm not sure whether OpenMP would be the best solution here, because if you work on different parts of the data at the same time, you may need to check some overlapping regions (i.e. a car could move from one block to another).
Or you could let the threads work on the middle parts of the blocks, and then start other threads to do the boundaries (with red cars that would be one byte, with blue cars a full row).
You would also need a locking mechanism for adjusting the list of the sparse regions. I think the best way would be to launch separate threads (depending on the size of the data of course).
In a somewhat similar task, I simply made use of Compressed Row Storage.
The Compressed Row and Column (in the next section) Storage formats are the most general: they make absolutely no assumptions about the sparsity structure of the matrix, and they don't store any unnecessary elements. On the other hand, they are not very efficient, needing an indirect addressing step for every single scalar operation in a matrix-vector product or preconditioner solve.
You will need to be a bit more specific about time and space complexity requirements. CSR requires an extra indexing step for simple operations, but that is a minor amount of overhead if you're just doing simple matrix operations.
There's already an existing C++ implementation available online as well.
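For reference, a minimal sketch of what CRS could look like for this car matrix (field names are illustrative; only non-empty cells are stored):
#include <vector>

struct CsrMatrix {
    int rows, cols;
    std::vector<int> rowStart;   // size rows + 1; row i occupies
                                 // [rowStart[i], rowStart[i + 1])
    std::vector<int> colIndex;   // column of each stored cell
    std::vector<short> value;    // 1 = blue car, 2 = red car

    // Returns 0 (empty), 1, or 2 for cell (i, j): a linear scan of the
    // row; binary search also works since columns are kept sorted.
    short at(int i, int j) const {
        for (int k = rowStart[i]; k < rowStart[i + 1]; ++k)
            if (colIndex[k] == j) return value[k];
        return 0;
    }
};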

Calling multiple kernels, global memory performances - CUDA

I have four CUDA kernels working on matrices in the following way:
convolution<<<>>>(A,B);
multiplybyElement1<<<>>>(B);
multiplybyElement2<<<>>>(A);
multiplybyElement3<<<>>>(C);
// A + B + C with CUBLAS' cublasSaxpy
Every kernel (except the first one, the convolution) basically performs an element-wise matrix multiplication by a fixed value hardcoded in its constant memory (to speed things up).
Should I join these kernels into a single one by calling something like
multiplyBbyX_AbyY_CbyZ<<<>>>(B,A,C)
?
The data should already be in global memory on the device, so merging probably would not help, but I'm not totally sure.
If I understood correctly, you're asking if you should merge the three "multiplybyElement" kernels into one, where each of those kernels reads an entire (different) matrix, multiplying each element by a constant, and storing the new scaled matrix.
Given that these kernels will be memory bandwidth bound (practically no computation, just one multiply for every element) there is unlikely to be any benefit from merging the kernels unless your matrices are small, in which case you would be making inefficient use of the GPU since the kernels will execute in series (same stream).
If merging the kernels means that you can do only one pass over the memory, then you may see a 3x speedup.
Can you multiply the fixed values together up front and then do a single multiply in a single kernel?
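A hedged sketch of what the merged kernel could look like, assuming same-sized matrices flattened to n elements and scale factors in constant memory (all names are illustrative). Note that this is one launch instead of three; as noted above, the benefit is mainly launch overhead unless merging lets you avoid extra passes over the data:
__constant__ float kB, kA, kC;   // set from the host via cudaMemcpyToSymbol

__global__ void scaleAll(float* B, float* A, float* C, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Each element of each matrix is read and written exactly once.
    B[i] *= kB;
    A[i] *= kA;
    C[i] *= kC;
}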

How to create an array with size more than C++ limits

I have a little problem here: I'm writing C++ code to create an array, but when I set the array size to 100,000,000 or more I get an error.
This is my code:
int n = 10000;                   // n * n = 100,000,000
double *a = new double[n * n];   // fails here
This part is very important for my project.
When you think you need an array of 100,000,000 elements, what you actually need is a different data structure that you probably have never heard of before. Maybe a hash map, or maybe a sparse matrix.
If you tell us more about the actual problem you are trying to solve, we can provide better help.
In general, the only reasons that would fail are lack of memory, memory fragmentation, or exhausted address space: 100,000,000 doubles is 800 MB of memory. Granted, I have no idea why your system's virtual memory can't handle that, but maybe you allocated a bunch of other stuff. It doesn't matter.
Your alternatives are tricks like memory-mapped files, sparse arrays, and so forth, instead of an explicit C-style array.
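A small sketch of detecting that failure explicitly rather than crashing, assuming a 10,000 x 10,000 matrix as in the question:
#include <iostream>
#include <new>

int main()
{
    const long long n = 10000;          // n * n = 100,000,000 doubles ~ 800 MB
    try {
        double* a = new double[n * n];  // throws std::bad_alloc on failure
        // ... use a ...
        delete[] a;
    } catch (const std::bad_alloc& e) {
        std::cerr << "allocation failed: " << e.what() << '\n';
        // fall back to a chunked or file-backed approach here
    }
}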
If you do not have sufficient memory, you may need to use a file to store your data and process it in smaller chunks.
I don't know if IMSL provides what you are looking for; however, if you want to work on smaller chunks, you might devise an algorithm that calls IMSL functions on these small chunks and later merges the results. For example, you can do matrix multiplication by combining multiplications of sub-matrices, as in the sketch below.
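A hedged sketch of that sub-matrix idea for a plain matrix product, with hypothetical loadTile/storeTile helpers standing in for whatever file I/O your format needs; only three TILE x TILE tiles are in memory at any time:
#include <vector>

const int TILE = 1024;  // tile edge length; tune to the memory you can spare

// Hypothetical helpers: read/write one TILE x TILE tile of a matrix
// stored on disk; their implementation depends on your file format.
std::vector<double> loadTile(const char* file, int ti, int tj);
void storeTile(const char* file, int ti, int tj,
               const std::vector<double>& t);

// C = A * B, where each matrix is (tilesPerSide * TILE) square.
void multiplyTiled(const char* A, const char* B, const char* C,
                   int tilesPerSide)
{
    for (int i = 0; i < tilesPerSide; ++i)
        for (int j = 0; j < tilesPerSide; ++j) {
            std::vector<double> acc(TILE * TILE, 0.0);
            for (int k = 0; k < tilesPerSide; ++k) {
                std::vector<double> a = loadTile(A, i, k);
                std::vector<double> b = loadTile(B, k, j);
                for (int r = 0; r < TILE; ++r)          // acc += a * b
                    for (int m = 0; m < TILE; ++m)
                        for (int c = 0; c < TILE; ++c)
                            acc[r * TILE + c] += a[r * TILE + m] * b[m * TILE + c];
            }
            storeTile(C, i, j, acc);
        }
}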

Layout of Pixel-data in Memory?

I'm writing a C++ library for an image format that is based on PNG. One stopping point for me is that I'm unsure as to how I ought to lay out the pixel data in memory; as far as I'm aware, there are two practical approaches:
An array of size (width * height); each pixel can be accessed by array[y*width + x].
An array of size (height), containing pointers to arrays of size (width).
The standard reference implementation for PNG (libpng) uses method 2 of the above, while I've seen others use method 1. Is one better than the other, or is each a method with its own pros and cons, to where a compromise must be made? Further, which format do most graphical display systems use (perhaps for ease of using the output of my library into other APIs)?
Off the top of my head:
The one thing that would make me choose #2 is the fact that your memory requirements are a little relaxed. If you were to go for #1, the system needs to be able to allocate height * width elements of contiguous memory. Whereas, in case of #2, it has the freedom to allocate smaller chunks of contiguous memory of size width (could as well be height) from areas that are free. (When you factor in the channels per pixel, #1 may fail for even moderately sized images.)
Further, it may be slightly better when swapping rows (or columns) if required for image manipulation purposes (pointer swap suffices).
The downside for #2 is of course the extra level of indirection that seeps in for every access, and the array of pointers to be maintained. But this is hardly a concern given today's processor speed and memory.
The second downside for #2 is that the rows aren't necessarily next to each other in memory, which makes it harder for the processor to load the right memory pages into the cache.
The advantage of method 2 (cutting up the array in rows) is that you can perform memory operations in steps, e.g. resizing or shuffling the image without reallocating the entire chunk of memory at once. For really large images this may be an advantage.
The advantage of a single array is that your calculations are simpler, i.e. to go one row down you do
pos += width;
instead of having to dereference pointers. For small to medium images this is probably faster. Unless you're dealing with images of hundreds of MB, I would go with method 1.
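For concreteness, a small sketch of both access patterns for a single-channel width x height image (std::vector is used in place of raw new[] for brevity):
#include <cstddef>
#include <vector>

// Method 1: one contiguous allocation, index arithmetic.
unsigned char pixelFlat(const std::vector<unsigned char>& img,
                        std::size_t width, std::size_t x, std::size_t y)
{
    return img[y * width + x];
}

// Method 2: a table of per-row arrays, one extra indirection per access
// (this mirrors libpng's row_pointers layout).
unsigned char pixelRows(const std::vector<std::vector<unsigned char>>& rows,
                        std::size_t x, std::size_t y)
{
    return rows[y][x];
}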
I suspect that libpng does that (style 2) for a few possible reasons:
Avoid large allocations (as mentioned), and may ease handling VERY large PNGs, especially in systems without VM
(perhaps) allow for interleaved decode, à la progressive JPEG (PNG does support Adam7 interlacing)
Ease of certain transformations (vertical flip) (unlikely)
Ease of scaling (insert or remove lines without needing a full second buffer, or widen/narrow lines) (unlikely but possible)
The problem with this approach (assuming each line is an allocation) is a LOT more allocation/free overhead, and it may encourage memory fragmentation.
Unless you have a good reason, use style 1 (single allocation), and perhaps round to a "good" boundary for the architecture you're using (could be 4, 8, 16 or perhaps even more bytes). Note that many library functions may look for style 1 with no padding - think about how you'll be using this and where you'll be passing them to.
Windows itself uses a variation of method 1. Each row of the image is padded to a multiple of 4 bytes, and the order of the colors is B,G,R instead of the more normal R,G,B. Also the first row of the buffer is the bottom row of the image.
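A small sketch of that Windows-style addressing, assuming 24-bit B,G,R pixels; dibStride is an illustrative name, not a Windows API:
#include <cstddef>

// Each row is padded up to a multiple of 4 bytes.
std::size_t dibStride(std::size_t width, std::size_t bytesPerPixel)
{
    return (width * bytesPerPixel + 3) / 4 * 4;
}

// With rows stored bottom-up, pixel (x, y) of a height-row image lives at:
// buffer + (height - 1 - y) * dibStride(width, 3) + x * 3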