Sparse matrix storage format designed for row/col manipulation? - c++

I am working with a program that needs to access and store data in a sparse matrix. About 40-60% of the matrix will be non-zero, and its dimensions are anywhere from 14K to 22K elements square.
Here's my problem: I'm going to be performing a lot of row and column manipulations, mainly adding, removing, and swapping. I've looked at most of the existing well-known sparse matrix formats (CRS, CCS, COO, the block formats, etc.), and most of them don't seem very receptive to these kinds of operations. The moment you start adding and removing entire rows or columns, you end up having to update all the elements to either side of the manipulated row/col, and that's something I'd like to avoid if possible (it has occurred to me that you could probably manage elements in such a way that their coordinates in the matrix are actually stored as a pair of pointers to a common row or column index, saving yourself from manually updating thousands of elements by simply incrementing or decrementing that value).
Is there anything like that out there?

I would consider a layer of indirection that serves as an efficient mechanism for swapping, inserting, and deleting rows and columns.
This would be analogous to how virtual memory is managed in modern operating systems. Just a brief aside: the CPU's memory management unit (MMU), with the host OS's help, maps actual physical RAM addresses into each process's linear virtual address space. The addresses used by pointers and other objects in each process are not real RAM addresses; they are virtual, and are translated by the hardware MMU into actual physical RAM addresses. This is why idle processes can be paged out by the host operating system into a swap file or partition when the system is short on memory, and then loaded back in later, into an entirely different area of physical RAM, without the process even being aware of what happened.
Anyway, back on topic, this would be an analogous approach here.
Consider a plain, garden variety two-dimensional std::map. Here's your sparse matrix:
std::map<size_t, std::map<size_t, value_t>>
The first map's dimension, or key, is the row, which gives you a second map, whose dimension/key is the column, and which finally contains your value.
This is pretty straightforward. Nothing earth-shattering here. But as you can see, moving, swapping, and inserting rows and columns becomes quite a bear.
OK, so let's introduce your own, personalized MMU: your map management unit. It will work pretty much like a hardware MMU:
std::map<size_t, size_t> rows;
std::map<size_t, size_t> columns;
So, to look up the value for virtual row R, column C:
1) rows[R] gives you the "physical" row number.
2) columns[C] gives you the "physical" column number.
3) Now, you take the "physical" row and column numbers, then go to your two-dimensional std::map, and look up your value given the physical row and column.
So, what do we get here? Well, moving an entire row or column now involves a simple update to your map management unit. Remove a value from the rows or the columns map, and put it back with a different key, the new "logical" row or column. The values in the "physical", two-dimensional map, remain untouched.
That's it. Moving, swapping, or inserting rows becomes a simple operation on your mmu object.
There are two other details that will need to be taken care of:
A) Keep track of which physical rows and columns are unused, so they can be assigned to new logical rows and columns when those get added.
B) Depending on your use case, it may also be necessary to map physical rows and columns back into the virtual row and column numbers.
Both of these are fairly trivial, straightforward tasks, which can be your homework assignment.
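To make the scheme concrete, here is a minimal sketch of the idea (the class and member names are illustrative, not any library's API; value_t is assumed to be double):

#include <cstddef>
#include <map>
#include <utility>

using value_t = double;

struct MappedSparseMatrix {
    std::map<std::size_t, std::map<std::size_t, value_t>> cells; // "physical" storage
    std::map<std::size_t, std::size_t> rows;    // logical row    -> physical row
    std::map<std::size_t, std::size_t> columns; // logical column -> physical column

    // Steps 1-3 from above: translate, then access the physical map.
    // (operator[] creates missing entries, which is fine for a sketch)
    value_t& at(std::size_t r, std::size_t c) {
        return cells[rows[r]][columns[c]];
    }

    // Swapping two logical rows only touches the translation table;
    // the physical two-dimensional map is left untouched.
    void swapRows(std::size_t r1, std::size_t r2) {
        std::swap(rows[r1], rows[r2]);
    }
};

Inserting a row is similar: pick an unused physical row number (detail A above) and add a single entry to rows.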

Related

Efficient (time and space complexity) data structure for dense and sparse matrix

I have to read a file in which a matrix of cars is stored (1=BlueCar, 2=RedCar, 0=Empty).
I need to write an algorithm to move the cars of the matrix in this way:
blue ones move downward;
red ones move rightward;
there is a turn in which all the blue ones move and a turn to move all the red ones.
Before the file is read, I don't know the matrix size or whether it's dense or sparse, so I have to implement two data structures (one for dense and one for sparse) and two algorithms.
I need to reach the best time and space complexity possible.
Since the matrix size is unknown, I plan to store the data on the heap.
If the matrix is dense, I plan to use something like:
short int** M = new short int*[m];        // one pointer per row
short int* M_data = new short int[m*n];   // one contiguous block for all cells
for(int i = 0; i < m; ++i)
{
    M[i] = M_data + i * n;                // each row pointer points into the block
}
With this structure I can allocate a contiguous space of memory and it is also simple to access with M[i][j].
Now the problem is the structure to choose for the sparse case, and I also have to consider how I can move the cars through the algorithm in the simplest way: for example, when I evaluate a car, I need to find easily whether the next position (downward or rightward) holds another car or is empty.
Initially I thought of defining BlueCar and RedCar objects that inherit from a general Car object. In these objects I can save the matrix coordinates and then put them in:
std::vector<BluCar> sparseBlu;
std::vector<RedCar> sparseRed;
Otherwise I can do something like:
vector< tuple< row, column, value >> sparseMatrix
But the problem of finding what's in the next position still remains.
Probably this is not the best way to do it, so how can I implement the sparse case in an efficient way? (also using a single structure for the sparse case)
Why not simply create a memory mapping directly over the file? (assuming your data 0,1,2 is stored in contiguous bytes (or bits) in the file, and the position of those bytes also represents the coordinates of the cars)
This way you don't need to allocate extra memory and read in all the data, and the data can simply and efficiently be accessed with M[i][j].
Going over the rows would be L1-cache friendly.
In case of very sparse data, you could scan through the data once and keep a list of the empty regions/blocks in memory (only need to store startpos and size), which you could then skip (and adjust where needed) in further runs.
With memory mapping, only frequently accessed pages are kept in memory. This means that once you have scanned for the empty regions, memory will only be allocated for the frequently accessed non-empty regions (all this will be done automagically by the kernel - no need to keep track of it yourself).
Another benefit is that you are accessing the OS disk cache directly. Thus no need to keep copying and moving data between kernel space and user space.
To further optimize space- and memory usage, the cars could be stored in 2 bits in the file.
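For what it's worth, here is a minimal POSIX sketch of that mapping, assuming one byte per cell stored row by row with no header (error handling omitted; the 2-bit packing above would only change the indexing arithmetic):

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the whole car file and return a pointer to its cells; writes through
// the returned pointer go back to the file (MAP_SHARED).
unsigned char* map_cars(const char* path, std::size_t& length) {
    int fd = open(path, O_RDWR);
    struct stat st;
    fstat(fd, &st);
    length = static_cast<std::size_t>(st.st_size);
    void* p = mmap(nullptr, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                          // the mapping stays valid after close
    return static_cast<unsigned char*>(p);
}

// With known dimensions, cell (i, j) is then simply data[i * cols + j],
// which is the M[i][j] access mentioned above.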
Update:
I'll have to move cars with openMP and MPI... Will the memory mapping
work also with concurrent threads?
You could certainly use multithreading, but I'm not sure OpenMP would be the best solution here, because if you work on different parts of the data at the same time, you may need to check some overlapping regions (i.e. a car could move from one block to another).
Or you could let the threads work on the middle parts of the blocks, and then start other threads to do the boundaries (with red cars that would be one byte, with blue cars a full row).
You would also need a locking mechanism for adjusting the list of the sparse regions. I think the best way would be to launch separate threads (depending on the size of the data of course).
In a somewhat similar task, I simply made use of Compressed Row Storage.
The Compressed Row and Column (in the next section) Storage formats
are the most general: they make absolutely no assumptions about the
sparsity structure of the matrix, and they don't store any unnecessary
elements. On the other hand, they are not very efficient, needing an
indirect addressing step for every single scalar operation in a
matrix-vector product or preconditioner solve.
You will need to be a bit more specific about time and space complexity requirements. CSR requires an extra indexing step for simple operations, but that is a minor amount of overhead if you're just doing simple matrix operations.
There's already an existing C++ implementation available online as well.
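For reference, the layout the quote describes is tiny; a bare-bones CSR structure (a sketch, not the linked implementation) looks like this:

#include <cstddef>
#include <vector>

struct CsrMatrix {
    std::size_t cols = 0;
    std::vector<short>       values;    // non-zero values, stored row by row
    std::vector<std::size_t> colIndex;  // column of each stored value
    std::vector<std::size_t> rowStart;  // rowStart[i]..rowStart[i+1] delimit row i
                                        // (size = number of rows + 1)
};

// Iterating over row i:
//   for (std::size_t k = m.rowStart[i]; k < m.rowStart[i + 1]; ++k)
//       visit(m.colIndex[k], m.values[k]);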

Reordering of work dimensions may cause a huge performance boost, but why?

I am using OpenCL for stereo image processing on the GPU, and after I ported a C++ implementation to OpenCL I was playing around with optimizations. A very simple experiment was to swap the dimensions around.
Consider a simple kernel, which is executed for every pixel along the two-dimensional work space (e.g. 640x480). In my case it was a Census transform.
Swapping from:
int globalU = get_global_id(0);
int globalV = get_global_id(1);
to
int globalU = get_global_id(1);
int globalV = get_global_id(0);
while adjusting the NDRange in the same way, gave a performance boost of about 500%. Other experiments in 3D space reduced the execution time from 72 ms to 2 ms, only by reordering the dimensions.
Can anybody explain to me how this happens?
Is it just an effect of memory pipelines and cache usage?
EDIT: The image has a standard memory layout. That's why I wondered about the effects. I expected the best speed when the iteration order matches the way the image is stored in memory, which is not the case.
After some reading of the AMD APP SDK documentation, I found some interesting details about the memory channels. That could be a reason.
When you access an element in memory, it is first loaded into the CPU's cache. The CPU does not load a single element (say, 1 byte); instead it loads a whole cache line (for example, 64 adjacent bytes). This is because you are likely to access the subsequent elements, so the CPU will not need to access RAM again.
This makes a huge difference, since to access cache memory an electrical signal does not even have to leave the CPU chip, while if the CPU needs to load data from RAM, the signal has to travel to a separate chip, and probably more than one signal from the CPU is required, since it generally needs to specify a row and a column in RAM to access part of it (read "What Every Programmer Should Know About Memory" for more info). In practice, a cache access may take only 0.5 ns while a RAM access costs around 100 ns.
So computer algorithms should take this into account. If you traverse all the elements in a matrix, you should traverse them so that elements located near each other in memory are accessed at approximately the same time. So if your matrix has the following layout in memory:
m_0_0, m_0_1, m_0_2, ... m_1_0, m_1_1 (first column, second column, etc.)
you should access elements in order: m_0_0, m_0_1, m_0_2 (by column)
If you use a different access order (say by row in this case), the CPU loads part of the first column into cache when you access the first element of the first column, then part of the second column when you access the first element of the second column, and so on. By the time you have traversed the first row and come back to the second element of the first column, the cached values for the first column are no longer present, since the cache has a limited (and very small) size. Such an algorithm therefore effectively defeats the cache entirely.
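To illustrate the point in code (assuming the usual row-major layout of a flat C++ array, not the column layout in the example above), the two loops below do identical work and differ only in traversal order; the first walks memory contiguously, the second jumps a full row width on every access and thrashes the cache:

#include <cstddef>
#include <vector>

void traversal_orders() {
    const std::size_t rows = 4096, cols = 4096;
    std::vector<float> m(rows * cols, 1.0f);
    float sum = 0.0f;

    // Cache-friendly: consecutive accesses fall on the same cache line.
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            sum += m[i * cols + j];

    // Cache-hostile: each access jumps 'cols' elements ahead, so nearly
    // every access misses the cache.
    for (std::size_t j = 0; j < cols; ++j)
        for (std::size_t i = 0; i < rows; ++i)
            sum += m[i * cols + j];

    (void)sum;   // keep the result "used"
}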

Searching in large memory mapped files

I have a large data structure stored in a memory-mapped file. The data structure is very simple:
struct Header {
    ...some metadata...
    uint32_t index_size;
    uint64_t index[];
};
This header is placed at the beginning of the file. It uses the struct hack: a variable-sized structure whose last member's length is not fixed and can change.
char* mmaped_region = ...; // This memory comes from memory mapped file!
Header* pheader = reinterpret_cast<Header*>(mmaped_region);
The memory-mapped region starts with a Header, and Header::index_size contains the correct length of the Header::index array. This array contains offsets of the data elements, so we can do this:
uint64_t offset = pheader->index[x];
DataItem* item = reinterpret_cast<DataItem*>(mmaped_region + offset);
// At this point, variable item contains pointer to data element
// if variable x contains correct index value (less than pheader->index_size)
All the data elements are sorted (a less-than relation is defined for data elements). They are stored in the same memory-mapped region as the Header, but filled from the end toward the beginning. Data elements can't be moved, because they are of variable size; instead, the indexes in the header are moved during the sort procedure. This is very much like a B-tree page in modern databases, where the index array is usually called an indirection vector.
Searches
This data structure is searched with the interpolation search algorithm (with a limited number of steps) and then with binary search. First, with the whole index array to search, I try to calculate where the searched element would be stored if the distribution were uniform. I look at the element at that calculated index, and it usually doesn't match. Then I narrow the search range and repeat. The number of interpolation search steps is limited to some small number, after which the data structure is searched with binary search. This works very well with small data sets, because the distribution is usually uniform: a few iterations of the interpolation search and we're done.
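For illustration, a single bounded interpolation step looks roughly like this (here the keys are shown as a plain in-memory array; in the layout above each key would be reached through pheader->index[i] and the mapped region, and the invariant keys[lo] <= key <= keys[hi] is assumed):

#include <cstddef>
#include <cstdint>

std::size_t interpolate_step(const uint64_t* keys, std::size_t lo,
                             std::size_t hi, uint64_t key) {
    if (keys[hi] == keys[lo]) return lo;
    // Guess where 'key' would sit if the keys were uniformly distributed.
    double frac = double(key - keys[lo]) / double(keys[hi] - keys[lo]);
    std::size_t guess = lo + static_cast<std::size_t>(frac * double(hi - lo));
    return guess < hi ? guess : hi;    // stay inside the current range
}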
Problem definition.
The memory-mapped region can be very large in reality. For testing I use a 32 GB file-backed store and search for random keys. This is very slow because this pattern causes a lot of random disk reads (all the data can't be cached in memory).
What can be done here? I think that setting MADV_RANDOM with the madvise syscall can help, but probably not very much. I want to get on par with B-tree search speed. Maybe it is possible to use the mincore syscall to check which data elements can be painlessly checked during the interpolation search? Maybe I can use prefetching of some sort?
The interpolation search appears to be a good idea here. It usually has a small benefit, but in this case even a small number of iterations saved helps a lot, since they're so slow (disk I/O).
However, real databases duplicate the actual key values in their indices. The space overhead for that is fully justified by the performance improvement. B-trees are a further improvement because they pack multiple related nodes into a single contiguous block of memory, further reducing disk seeks.
This is probably the correct solution for you as well. You should duplicate the keys to avoid disk I/O. You can probably get away with duplicating the keys in a separate structure and keeping that fully in memory, if you can't alter the existing header.
A compromise is possible, where you just cache the top (2^N)-1 keys for the first N levels of binary search. That means you have to give up your interpolation for that part of the search, but as noted before interpolation is not a huge win anyway. The disk seeks saved will easily pay off. Even caching just the median key (N=1) will already save you one disk seek per lookup. And you can still use interpolation once you've run out of the cache.
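Here is a sketch of that compromise in a slightly different but equivalent form: instead of storing the tree of midpoints explicitly, sample every stride-th key (stride roughly count / 2^N) into an ordinary in-memory vector, search that first, and only then touch the memory-mapped file inside the narrowed range. The load_key callback is a hypothetical stand-in for reading a key through pheader->index; distinct keys are assumed for brevity:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

struct TopLevelCache {
    std::size_t stride = 0;
    std::vector<uint64_t> sample;   // sample[s] is the key of element s * stride

    void build(std::size_t count, std::size_t stride_,
               const std::function<uint64_t(std::size_t)>& load_key) {
        stride = stride_;
        for (std::size_t i = 0; i < count; i += stride)
            sample.push_back(load_key(i));       // the disk seeks are paid once
    }

    // Returns a half-open range [lo, hi) of indices that must contain 'key'.
    std::pair<std::size_t, std::size_t> narrow(uint64_t key,
                                               std::size_t count) const {
        auto it = std::upper_bound(sample.begin(), sample.end(), key);
        std::size_t s = (it == sample.begin()) ? 0
                        : static_cast<std::size_t>(it - sample.begin()) - 1;
        return {s * stride, std::min(count, (s + 1) * stride)};
    }
};

Interpolation (or plain binary search) then runs only inside the returned range, saving the first N disk seeks of every lookup.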
In comparison, any attempt to fiddle with memory mapping parameters will give you a few percent speed improvement at best. "On par with B-trees" is not going to happen. If your algorithm needs those physical seeks, you lose. No magical pixie dust will fix a bad algorithm or a bad data structure.

MPI synchronize matrix of vectors

Excuse me if this question is common or trivial, I am not very familiar with MPI so bear with me.
I have a matrix of vectors. Each vector is empty or has a few items in it.
std::vector<someStruct*> partitions[matrix_size][matrix_size];
When I start the program each process will have the same data in this matrix, but as the code progresses each process might remove several items from some vectors and put them in other vectors.
So when I reach a barrier I somehow have to make sure each process has the latest version of this matrix. The big problem is that each process might manipulate any or all vectors.
How would I go about to make sure that every process has the correct updated matrix after the barrier?
EDIT:
I am sorry I was not clear. Each process may move one or more objects to another vector but only one process may move each object. In other words each process has a list of objects it may move, but the matrix may be altered by everyone. And two processes can't move the same object ever.
In that case you'll need to send messages using MPI_Bcast that inform the other processes about this and instruct them to do the same. Alternatively, if the ordering doesn't matter until you hit the barrier, you can send the messages only to the root process, which performs the permutations and then, after the barrier, sends the result to all the others using MPI_Bcast.
One more thing: vectors of pointers are usually quite a bad idea, as you'll need to manage the memory manually in there. If you can use C++11, use std::unique_ptr or std::shared_ptr instead (depending on what your semantics are), or use Boost which provides very similar facilities.
And lastly, representing a matrix as a fixed-size array of fixed-size arrays is really bad. First: the matrix size is fixed. Second: adjacent rows are not necessarily stored in contiguous memory, slowing your program down like crazy (it literally can be orders of magnitude). Instead, represent the matrix as a linear array of size Nrows*Ncols, and then index the elements as Ncols*i + j, where Ncols is the number of columns and i and j are the row and column indices, respectively. If you want column-major storage instead, address the elements by i + Nrows*j. You can wrap this index juggling in inline functions that have virtually zero overhead.
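A minimal sketch of such a wrapper (illustrative name, row-major storage):

#include <cstddef>
#include <vector>

template <typename T>
struct FlatMatrix {
    std::size_t nrows = 0, ncols = 0;
    std::vector<T> data;                 // one contiguous block of nrows * ncols

    FlatMatrix(std::size_t r, std::size_t c) : nrows(r), ncols(c), data(r * c) {}

    // Row-major index juggling, inlined away by any optimizing compiler.
    T&       operator()(std::size_t i, std::size_t j)       { return data[i * ncols + j]; }
    const T& operator()(std::size_t i, std::size_t j) const { return data[i * ncols + j]; }
};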
I would suggest to lay out the data differently:
Each process has a map of its objects and their positions in the matrix. How that is implemented depends on how you identify objects. If all local objects are numbered, you could just use a vector<pair<int,int>>.
Treat that as the primary structure you manipulate, and communicate that structure with MPI_Allgather (each process sends its data to all other processes; at the end everyone has all the data). If you need fast lookup by coordinates, you can build up a cache.
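Under the assumption that an object's position can be flattened to a few plain ints, a minimal sketch of that exchange could use MPI_Allgatherv (the variable-count variant of MPI_Allgather, since each process may contribute a different number of entries); the Move struct and function name are illustrative:

#include <cstddef>
#include <mpi.h>
#include <vector>

struct Move { int id, row, col; };   // plain data: three ints, sent as MPI_INT

std::vector<Move> allgather_moves(const std::vector<Move>& local, MPI_Comm comm) {
    int nprocs = 0;
    MPI_Comm_size(comm, &nprocs);

    // 1) Everyone learns how many ints each process will contribute.
    int mycount = static_cast<int>(local.size()) * 3;
    std::vector<int> counts(nprocs), displs(nprocs);
    MPI_Allgather(&mycount, 1, MPI_INT, counts.data(), 1, MPI_INT, comm);

    int total = 0;
    for (int p = 0; p < nprocs; ++p) { displs[p] = total; total += counts[p]; }

    // 2) Everyone receives everyone's (id, row, col) triples.
    std::vector<Move> all(static_cast<std::size_t>(total) / 3);
    MPI_Allgatherv(local.data(), mycount, MPI_INT,
                   all.data(), counts.data(), displs.data(), MPI_INT, comm);
    return all;
}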
That may or may not be performing well. Other optimizations (like sharing 'transactions') totally depend on your objects and the operations you perform on them.

What is the most efficient (yet sufficiently flexible) way to store multi-dimensional variable-length data?

I would like to know what the best practice is for efficiently storing (and subsequently accessing) sets of multi-dimensional data arrays with variable length. The focus is on performance, but I also need to be able to handle changing the length of an individual data set during runtime without too much overhead.
Note: I know this is a somewhat lengthy question, but I have looked around quite a lot and could not find a solution or example which describes the problem at hand with sufficient accuracy.
Background
The context is a computational fluid dynamics (CFD) code that is based on the discontinuous Galerkin spectral element method (DGSEM) (cf. Kopriva (2009), Implementing Spectral Methods for Partial Differential Equations). For the sake of simplicity, let us assume a 2D data layout (it is in fact in three dimensions, but the extension from 2D to 3D should be straightforward).
I have a grid that consists of K square elements k (k = 0,...,K-1) that can be of different (physical) sizes. Within each grid element (or "cell") k, I have N_k^2 data points. N_k is the number of data points in each dimension, and can vary between different grid cells.
At each data point n_k,i (where i = 0,...,N_k^2-1) I have to store an array of solution values, which has the same length nVars in the whole domain (i.e. everywhere), and which does not change during runtime.
Dimensions and changes
The number of grid cells K is of O(10^5) to O(10^6) and can change during runtime.
The number of data points N_k in each grid cell is between 2 and 8 and can change during runtime (and may be different for different cells).
The number of variables nVars stored at each grid point is around 5 to 10 and cannot change during runtime (it is also the same for every grid cell).
Requirements
Performance is the key issue here. I need to be able to regularly iterate in an ordered fashion over all grid points of all cells in an efficient manner (i.e. without too many cache misses). Generally, K and N_k do not change very often during the simulation, so for example a large contiguous block of memory for all cells and data points could be an option.
However, I do need to be able to refine or coarsen the grid (i.e. delete cells and create new ones, the new ones may be appended to the end) during runtime. I also need to be able to change the approximation order N_k, so the number of data points I store for each cell can change during runtime as well.
Conclusion
Any input is appreciated. If you have experience yourself, or just know a few good resources that I could look at, please let me know. However, while the solution will be crucial to the performance of the final program, it is just one of many problems, so the solution needs to be of an applied nature and not purely academic.
Should this be the wrong venue to ask this question, please let me know what a more suitable place would be.
Often, these sorts of dynamic mesh structures can be very tricky to deal with efficiently, but in block-structured adaptive mesh refinement codes (common in astrophysics, where complex geometries aren't important) or your spectral element code where you have large block sizes, it is often much less of an issue. You have so much work to do per block/element (with at least 10^5 cells x 2 points/cell in your case) that the cost of switching between blocks is comparatively minor.
Keep in mind, too, that you can't generally do too much of the hard work on each element or block until a substantial amount of that block's data is already in cache. You're already going to have had to flush most of block N's data out of cache before getting much work done on block N+1 anyway. (There might be some operations in your code which are exceptions to this, but those are probably not the ones where you're spending much time anyway, cache or no cache, because there's not a lot of data reuse - e.g., elementwise operations on cell values.) So keeping the blocks/elements beside each other isn't necessarily a huge deal; on the other hand, you definitely want the blocks/elements themselves to be contiguous.
Also notice that you can move blocks around to keep them contiguous as things get resized, but not only are all those memory copies going to wipe your cache, the copies themselves get very expensive. If your problem is filling a significant fraction of memory (and aren't we always?), say 1 GB, and you have to move 20% of that around after a refinement to make things contiguous again, that's 0.2 GB (read + write) / ~20 GB/s ~ 20 ms, compared to reloading (say) 16k cache lines at ~100 ns each ~ 1.5 ms. And your cache is trashed after the shuffle anyway. This might still be worth doing if you knew that you were going to do the refinement/derefinement very seldom.
But as a practical matter, most adaptive mesh codes in astrophysical fluid dynamics (where I know the codes well enough to say) simply maintain a list of blocks and their metadata and don't worry about their contiguity. YMMV of course. My suggestion would be - before spending too much time crafting the perfect data structure - to first just test the operation on two elements, twice; the first, with the elements in order and computing on them 1-2, and the second, doing the operation in the "wrong" order, 2-1, and timing the two computations several times.
For each cell, store the offset at which to find that cell's data in one contiguous array. This offset mapping is very efficient and widely used. You can reorder the cells for cache reuse in traversals. When the order or number of cells changes, create a new array and interpolate, then throw away the old arrays. This storage is much better for external analysis, because operations like inner products in Krylov methods and stages in Runge-Kutta methods can be performed without reference to the mesh. It also requires minimal memory per vector (e.g. in Krylov bases and with time integration).
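A sketch of that offset scheme under the assumptions above (nVars values per point, N_k^2 points per cell; all names illustrative):

#include <cstddef>
#include <vector>

struct Mesh {
    std::size_t nVars = 5;                  // fixed number of variables per point
    std::vector<std::size_t> N;             // N[k]: points per dimension in cell k
    std::vector<std::size_t> offset;        // offset[k]: start of cell k in 'data'
    std::vector<double> data;               // all solution values, contiguous

    // After refinement/coarsening (N changed), recompute offsets and allocate
    // the new contiguous array; in practice you would interpolate the old
    // values into it and then discard the old array.
    void rebuildOffsets() {
        offset.resize(N.size());
        std::size_t total = 0;
        for (std::size_t k = 0; k < N.size(); ++k) {
            offset[k] = total;
            total += N[k] * N[k] * nVars;   // N_k^2 points, nVars values each
        }
        data.assign(total, 0.0);
    }

    // Value v at point i of cell k (i = 0..N[k]^2-1, v = 0..nVars-1).
    double& value(std::size_t k, std::size_t i, std::size_t v) {
        return data[offset[k] + i * nVars + v];
    }
};

Because vector-like operations (norms, inner products, Runge-Kutta stage updates) only ever touch data as one flat array, they need no knowledge of the mesh at all.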