I would like to know how a 3D complex array is laid out in a computer's memory. An example in Fortran would help.
I'm trying to do a 1-dimensional sine transformation of a 3d complex array and I'm using a routine from FFTW. http://www.fftw.org/fftw3_doc/Advanced-Complex-DFTs.html#Advanced-Complex-DFTs
So I need to figure out the values for the parameters (i.e. howmany, istride and idist) in the FFT routine. Thanks.
Complex Fortran scalar values are stored as pairs of values [real, imaginary]. Fortran arrays are stored in column-major order, which means that the first index is the fastest-varying one. Each element of a complex array is then a (real, imaginary) pair.
A Fortran array
complex(c_float_complex) :: A(nx, ny, nz)
is like a C array
float A[nz][ny][nx][2]
(yes, I know there is an optional complex type in modern C as well).
So the sequence goes like this:
(re(1,1,1),im(1,1,1)), (re(2,1,1),im(2,1,1)), (re(3,1,1),im(3,1,1)) ... (re(nx,1,1),im(nx,1,1)), (re(1,2,1),im(1,2,1)) ...
See the column-major order link for more details.
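To connect this layout to the FFTW advanced-interface parameters the question asks about, here is a pure-Python sketch. The offset formula follows directly from the column-major ordering above; the howmany/istride/idist values are my reading of the FFTW docs for a transform along the first dimension, not something I have run against FFTW itself.

```python
# Sketch: linear offsets in a column-major (Fortran-order) nx x ny x nz complex
# array, and the matching FFTW advanced-interface parameters for 1-D transforms
# along the first dimension. The parameter choices are an assumption based on
# the FFTW documentation, not verified against the library.

def offset(i, j, k, nx, ny, nz):
    """0-based linear offset of Fortran element A(i, j, k) (1-based indices)."""
    return (i - 1) + (j - 1) * nx + (k - 1) * nx * ny

nx, ny, nz = 4, 3, 2

# Transforming along the first dimension: the elements of one "pencil" are
# contiguous, so istride = 1, and consecutive pencils start nx elements apart.
howmany = ny * nz   # number of 1-D transforms
istride = 1         # step between elements within one transform
idist   = nx        # step between the starts of successive transforms

# Check: element t of transform m sits at idist*m + istride*t.
for m in range(howmany):
    j, k = m % ny, m // ny
    for t in range(nx):
        assert idist * m + istride * t == offset(t + 1, j + 1, k + 1, nx, ny, nz)
print("first-dimension layout checks out")
```

For transforms along the second or third dimension the pencils are no longer contiguous, so istride becomes nx (or nx*ny) and the batch indexing gets more involved; the offset function above is the tool for working those cases out.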
Related
I'm exclusively using Armadillo matrices in my C++ code for both 2D and 1D arrays, for the sake of simplicity and ease of maintenance. For instance, I make use of the vector initializer list, but immediately convert it to a matrix before it is actually used:
mat bArray = mat(colvec({0.1, 0.2, 0.3}));
Most of my code consists of 1D arrays and there are only a few places where I truly need the 2D Armadillo matrix.
Would I gain a significant performance increase if I converted all (n × 1) Armadillo matrices to Armadillo column vectors? Are there any other nuances between these two data structures that I should know about?
An arma::vec is the same as arma::colvec; both are just typedefs for Col<double>. You can see this in the typedef_mat.hpp file in the Armadillo source code. Also, arma::Col<T> inherits from Mat<T>, as can be seen in the Col_bones.hpp file. Essentially, a colvec is just a matrix with a single column. You wouldn't gain anything, except avoiding the unnecessary copy from the temporary vector to the matrix; and since the colvec is a temporary, that copy may actually be a move, so even that gain could be marginal.
Nevertheless, if you want a 1D array use a colvec instead of mat at least for the sake of being clear about it.
I understand what an array and a matrix are. I want to learn how to create 3D graphics, and I want to know if a multi-dimensional array is the same as a matrix.
There are several uses of the term "matrix". Normally, however, we say that a matrix is a 2-dimensional array of scalar (integer or floating-point) values, with known dimensions, an entry for every position (no missing values allowed), and arranged such that the columns represent observations about, or operations on, the rows of another matrix. So a matrix with four columns only makes sense together with another matrix or vector with four rows to which the four columns apply.
So the obvious way to represent a matrix in C++ is as a 2D array. But 2D arrays aren't identical to matrices. You might have a 2D array that is not a matrix (missing values that are uninitialised or NaN), or a matrix that is not a 2D array (we could represent it as a 1D array and do the index calculations manually, or as a "sparse matrix" where most values are expected to be zero and we only keep a list of the non-zero values).
Matrix is an abstract mathematical concept that can be modeled in C++ in a number of ways:
A two-dimensional array,
An array of pointers to arrays of identical size
A std::vector<std::vector<T>>
A std::array<std::array<T,M>,N>
A library-specific opaque implementation
The actual implementation is always specific to the drawing library that you have in mind.
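The "1D array with manual index calculations" option mentioned above is easy to sketch. This is a minimal illustration of the idea (here in Python, row-major indexing), not any particular drawing library's API:

```python
# A "matrix" stored as a flat 1-D list with manual index arithmetic.
# Row-major here: consecutive elements of a row are adjacent in storage.

class Matrix:
    def __init__(self, rows, cols, fill=0.0):
        self.rows, self.cols = rows, cols
        self.data = [fill] * (rows * cols)   # one flat buffer, no nesting

    def _idx(self, r, c):
        # Map 2-D position (r, c) onto the flat buffer.
        return r * self.cols + c

    def get(self, r, c):
        return self.data[self._idx(r, c)]

    def set(self, r, c, value):
        self.data[self._idx(r, c)] = value

m = Matrix(2, 3)
m.set(1, 2, 7.0)
print(m.get(1, 2))   # -> 7.0
```

Swapping the index formula to c * rows + r gives the column-major layout discussed in the Fortran answer above; the stored data is identical either way, only the mapping changes.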
I wrote a derived data type to store banded matrices in Compressed Diagonal Storage format; in particular I store each diagonal of the banded matrix in a column of the 2D array cds(1:N,-L:U), where N is the number of rows of the full matrix and L and U are the number of lower and upper diagonals (this question includes the definition of the type).
I also wrote a function to perform the product between a matrix in this CDS format and a full vector. To obtain each element of the product vector, the elements of the corresponding row of cds are used, which are not contiguous in memory, since the language is Fortran. Because of this I was wondering if a better solution would be to store the diagonals in the rows of a 2D array cds2(-L:U,1:N), which seems pretty reasonable to me.
On the contrary, here I read:
we can allocate for the matrix A an array val(1:n,-p:q). The declaration with reversed dimensions (-p:q,n) corresponds to the LINPACK band format [132], which, unlike compressed diagonal storage (CDS), does not allow for an efficiently vectorizable matrix-vector multiplication if p + q is small.
which is just what seems appropriate for C, in my opinion. What am I missing?
EDIT
The core of the routine performing matrix vector products is the following
DO i = A%lb(1), A%ub(1)
    CDS_mat_x_full_vec(i) = DOT_PRODUCT(A%matrix(i, max(-lband,lv-i):min(uband,uv-i)), &
                                      & v(max(i-lband,lv):min(i+uband,uv)))
END DO
(Where lv and uv are used to take into account the case of the vector indexed from an index other than 1.)
The matrix A is then accessed by rows.
I implemented the derived type which stores the diagonals in an array val(-p:q,1:n), and it is faster, as I supposed. So I think that the link I referenced refers to a row-major storage language such as C, and not a column-major one such as Fortran. (Or it implements the matrix product in a way I can't even imagine.)
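The layout question becomes clearer when the product is written diagonal by diagonal instead of row by row: for each stored diagonal d, y(i) += val(i, d) * x(i + d), so each pass streams through one whole column of val(1:n, -p:q), which is contiguous in Fortran's column-major layout. A pure-Python sketch of that formulation (0-based indices; list-of-diagonals stands in for the 2D array):

```python
# CDS matrix-vector product done diagonal by diagonal.
# diagonals[d + p][i] holds A[i, i + d] for d in -p..q (0 outside the band),
# i.e. each diagonal is one contiguous array, mimicking a column of val(1:n,-p:q).

def cds_matvec(diagonals, x, p, q):
    n = len(x)
    y = [0.0] * n
    for d in range(-p, q + 1):
        col = diagonals[d + p]
        # Valid rows for diagonal d: both i and i + d must be in 0..n-1.
        for i in range(max(0, -d), min(n, n - d)):
            y[i] += col[i] * x[i + d]
    return y

# Tridiagonal example: A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
p = q = 1
diags = [
    [0.0, 1.0, 1.0],  # subdiagonal   A[i, i-1], stored at slot i
    [2.0, 2.0, 2.0],  # main diagonal A[i, i]
    [1.0, 1.0, 0.0],  # superdiagonal A[i, i+1]
]
print(cds_matvec(diags, [1.0, 1.0, 1.0], p, q))  # -> [3.0, 4.0, 3.0]
```

The inner loop over i touches col and x with unit stride, which is the access pattern a compiler can vectorize; the row-wise DOT_PRODUCT version above instead strides across columns of the Fortran array.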
I'm working with large sparse matrices that are not exactly very sparse, and I'm always wondering how much sparsity is required for storing a matrix as sparse to be beneficial. We know that the sparse representation of a reasonably dense matrix can be larger than the original one. So is there a threshold for the density of a matrix below which it is better to store it as sparse? I know that the answer usually depends on the structure of the sparsity, etc., but I was wondering if there are some guidelines. For example, I have a very large matrix with density around 42%; should I store this matrix as dense or sparse?
The scipy.sparse.coo_matrix format stores the matrix as 3 np.arrays: row and col are integer indices, and data has the same data type as the equivalent dense matrix. So it should be straightforward to calculate the memory it will take as a function of overall shape and sparsity (as well as the data type).
csr_matrix may be more compact. data and indices are the same as with coo, but indptr has one value per row, plus 1. I was thinking that indptr would be shorter than the others, but I just constructed a small matrix where it was longer: an empty row, for example, requires a value in indptr but none in data or indices. The emphasis of this format is computational efficiency.
csc is similar, but works with columns. Again, you should be able to do the math to calculate this size.
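Doing that math for the formats just described gives a quick answer to the 42% question. This is a back-of-the-envelope sketch assuming 8-byte float64 data and 4-byte int32 indices (scipy's actual index width depends on matrix size, so treat the sizes as assumptions):

```python
# Rough per-format memory estimates, assuming 8-byte values, 4-byte indices.

def dense_bytes(nrows, ncols, itemsize=8):
    return nrows * ncols * itemsize

def coo_bytes(nnz, itemsize=8, indexsize=4):
    # data + row + col: one entry of each per nonzero
    return nnz * (itemsize + 2 * indexsize)

def csr_bytes(nnz, nrows, itemsize=8, indexsize=4):
    # data + indices per nonzero, plus indptr with nrows + 1 entries
    return nnz * (itemsize + indexsize) + (nrows + 1) * indexsize

# The 42%-density case from the question, at a hypothetical 10000 x 10000:
n = 10_000
nnz = int(0.42 * n * n)
print(dense_bytes(n, n))   # 800_000_000
print(coo_bytes(nnz))      # 672_000_000
print(csr_bytes(nnz, n))   # 504_040_004
```

At 42% density CSR still beats dense on raw bytes here, but only by about a third, and that ignores the slower element access; the usual break-even density for memory alone is around itemsize / (itemsize + indexsize), i.e. 2/3 with these sizes.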
Brief discussion of memory advantages from MATLAB (using similar storage options)
http://www.mathworks.com/help/matlab/math/computational-advantages.html#brbrfxy
background paper from MATLAB designers
http://www.mathworks.com/help/pdf_doc/otherdocs/simax.pdf
SPARSE MATRICES IN MATLAB: DESIGN AND IMPLEMENTATION
I'm thinking of using Boost's Sparse Matrix for a computation where minimal memory usage is the goal. Unfortunately, the documentation page didn't include a discussion of the sparse matrix implementation's memory usage when I looked through it. Nor am I sure how to determine how much memory the sparse matrix is using at any given time.
How much memory will the sparse matrix use? Can you quote a source?
How can I find out how much memory the matrix is using at a given time t?
I cannot give you an exact answer. But generally speaking a sparse matrix uses an amount of memory that is a multiple of the number of nonzero entries of the matrix. A common format stores all nonzero entries in an array 'A' (row by row), then a second array 'B' which gives the column index of the corresponding nonzero entry from 'A', and a third array telling where in array 'A' each row begins.
Assuming data types type_nnz and type_index, an N×N sparse matrix with nnz nonzero elements has a memory requirement of
sizeof(type_nnz)*nnz + sizeof(type_index)*(nnz+N)
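Plugging numbers into that formula gives a quick estimate (an assumption-laden calculation of the format described, not a measurement of Boost's actual containers): with 8-byte values and 4-byte indices,

```python
# Memory estimate for the three-array sparse format described above:
# value array (nnz entries) + column-index array (nnz) + row-start array (N).

def sparse_memory(nnz, n, sizeof_value=8, sizeof_index=4):
    return sizeof_value * nnz + sizeof_index * (nnz + n)

print(sparse_memory(nnz=1_000, n=100))  # -> 12_400 bytes
```

To answer the "at time t" part of the question: nnz is the only quantity that changes as the matrix fills in, so tracking the current nonzero count and re-evaluating the formula gives the usage at any moment.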