I am writing a smoothing spline in C++ using OpenCV.
I need to use a sparse matrix (as in MATLAB), i.e. a large matrix consisting of zeros and a few non-zero diagonals. I use Mat matrices for this purpose, because I want to be able to multiply them, transpose them, etc.
Is there an elegant way to initialize such a matrix, without processing it element by element?
There is a function called Mat::diag, but it creates a column matrix, which is not what I need. Is it possible to convert this to a normal matrix? The closest thing to what I need is Mat::eye, but I need to initialize more than one diagonal; in addition, I have different values on the same diagonal, so I cannot use Mat::eye.
Thank you!
I solved it myself: :)
Mat B = Mat::zeros(3, 3, CV_8UC1);
Mat C = B.diag(0);            // view of B's main diagonal, not a copy
C.at<unsigned char>(0) = 64;  // writes go through to B
C.at<unsigned char>(1) = 64;
C.at<unsigned char>(2) = 64;
Mat::diag returns a view into the original matrix rather than a copy, so assigning through C modifies B, and it works.
You can initialize with Mat::eye and multiply by a 1-by-N matrix containing the diagonal values you want (or just set them manually). If your matrix is large enough that these operations take a significant amount of time, you should not be using Mat, which is not optimized for sparse matrices.
If your matrices are large enough that the above operations are slow, look here.
I'm exclusively using Armadillo matrices in my C++ code for both 2D and 1D arrays for the sake of simplicity and ease of maintenance. For instance, I make use of the vector initializer list
but immediately convert it to a matrix before it is actually used:
mat bArray = mat(colvec({0.1, 0.2, 0.3}));
Most of my code consists of 1D arrays and there are only a few places where I truly need the 2D Armadillo matrix.
Would I gain a significant performance increase if I converted all (nx1) Armadillo matrices to (nx1) Armadillo column vectors? Are there any other nuances between these two data structures that I should know about?
An arma::vec is the same as arma::colvec; both are just typedefs for Col<double>. You can see this in the typedef_mat.hpp file in the Armadillo source code. Also, arma::Col<T> inherits from Mat<T>, as can be seen in the Col_bones.hpp file. Essentially, a colvec is just a matrix with a single column. You wouldn't gain anything, except that you would avoid the unnecessary copy from the temporary vector to the matrix; and since the colvec is a temporary, that copy may already be a move, so even that gain could be marginal.
Nevertheless, if you want a 1D array, use a colvec instead of mat, at least for the sake of being clear about it.
How do I construct a sparse tridiagonal matrix in Eigen? The matrix that I want to construct looks like this in Python:
alpha = 0.5j/dx**2
off_diag = alpha*np.ones(N-1)
A_fixed = sp.sparse.diags([-off_diag,(1/dt+2*alpha)*np.ones(N),-off_diag],[-1,0,1],format='csc')
How do I do it in C++ using the Eigen package? It looks like I need to use the 'triplet' as documented here, but are there easier ways to do this, considering that this should be a fairly common operation?
Another side question is whether I should use row-major or column major. I want to solve the matrix equation Ax=b, where A is a tridiagonal matrix. When we do matrix-vector multiplication by hand, we usually multiply each row of the matrix by the column vector, so storing the matrix in row-major seems to make more sense. But what about a computer? Which one is preferred if I want to solve Ax=b?
Thanks
The triplets are the designated method of setting up a sparse matrix.
You could go the even more straightforward way and use A.coeffRef(row, col) = val or A.insert(row, col) = val, i.e. fill the matrix element by element.
Since you have a tridiagonal system you know the number of non-zeros of the matrix beforehand and can reserve the space using A.reserve(Nnz).
A dumb way, which nevertheless works, is:
typedef Eigen::SparseMatrix<double, Eigen::RowMajor> CSRMat;  // assumed typedef

unsigned int N(1000);
CSRMat U(N, N);
U.reserve(N - 1);  // the superdiagonal has N-1 nonzeros
for (unsigned int j(0); j < N - 1; ++j)
    U.insert(j, j + 1) = -1;

CSRMat D(N, N);
D.setIdentity();
D *= 2;

CSRMat A = U + CSRMat(U.transpose()) + D;
As to the solvers and preferred storage order: that is, as I recall, of minor importance. While C and C++ store contiguous data in row-major format, it is up to the algorithm whether the data is accessed in an optimal way (row by row for row-major storage). The correctness of an algorithm does not, as a rule, depend on the storage order of the data; its performance depends on whether the storage order matches the actual access patterns.
If you intend to use Eigen's own solvers stick with its default choice (col-major). If you intend to interface with other libraries (e.g. ARPACK) choose the storage order the library prefers/requires.
Is there a way to compute the determinant of a given matrix in C++ using only one variable (the first loaded matrix), with each recursive call taking only a reference to that matrix?
How can the coordinates of elements in the matrix be used to compute determinants of submatrices of the given matrix without creating them as separate matrices, i.e. using only the elements of the first matrix and their coordinates? Can that be done with recursion, or should recursion not be used?
If you're trying to calculate a determinant for any matrix of size larger than 3x3 using Cramer's Rule, you're certainly doing something wrong. Performance will be terrible.
Probably the easiest approach to think your way through is to use row reduction to turn it into an upper triangular matrix. Finding the determinant of an upper triangular matrix is easy: just multiply down the diagonal. As for the rest, multiply by the constant factors you used along the way, and remember that every row swap flips the sign of the determinant.
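The row-reduction approach can be sketched in plain C++ as follows; partial pivoting is added for numerical stability, and this is an illustration rather than a tuned implementation:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Determinant by Gaussian elimination with partial pivoting, O(n^3).
double determinant(std::vector<std::vector<double>> a) {
    const size_t n = a.size();
    double det = 1.0;
    for (size_t col = 0; col < n; ++col) {
        // Pick the row with the largest pivot for stability.
        size_t pivot = col;
        for (size_t r = col + 1; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col])) pivot = r;
        if (a[pivot][col] == 0.0) return 0.0;  // singular matrix
        if (pivot != col) {
            std::swap(a[pivot], a[col]);
            det = -det;  // every row swap flips the sign
        }
        det *= a[col][col];  // multiply down the diagonal as we go
        for (size_t r = col + 1; r < n; ++r) {
            double f = a[r][col] / a[col][col];
            for (size_t c = col; c < n; ++c) a[r][c] -= f * a[col][c];
        }
    }
    return det;
}
```

Note that the elimination divides rows rather than scaling them by arbitrary constants, so no extra correction factors are needed beyond the swap signs.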
I am writing a program to remove noise from an image. In it, I need to compute a lot of sums of pointwise multiplications. Right now I do it with the direct approach, and it has a huge computational cost:
int ret = 0;
int arr1[n][n], arr2[n][n];
for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j)
        ret += arr1[i][j] * arr2[i][j];
I was told that to compute this convolution between two arrays, I should do the following (more details here):
Calculate the DFT of array 1 (via FFT).
Calculate the DFT of array 2 (via FFT).
Multiply the two DFTs element-wise. It should be a complex multiplication.
Calculate the inverse DFT (via FFT) of the multiplied DFTs. That'll be your convolution result.
The algorithmic part seems more or less clear, but I ran into a new problem:
I selected FFTW for this task, but after a long time spent reading its docs, I still don't see any function for a 2D inverse FFT that returns a single value, as in the direct approach, rather than a whole 2D array. What am I missing?
I am trying to implement a Kalman filter for data fusion in C++. As part of the project, I need to implement a function to calculate the inverse of a 3x3 matrix whose elements are each 3x3 matrices themselves. Could you help me solve this problem? I would prefer a solution that requires the least amount of computation (most CPU-efficient).
Another question: since the Kalman filter depends on the inverse matrix, how should I handle the case where the matrix is not invertible?
Thanks for your help.
You can define a "small matrix" type for each element of the "big matrix", and have the big matrix hold pointers to the small matrices; inverting the big matrix then proceeds much like inverting an ordinary matrix of scalars.
This is probably the fastest approach you can take, but does it fit your implementation? How are your matrices declared?