I'm using the Armadillo C++ linear algebra library, and I'm trying to figure out how to convert an sp_mat sparse matrix object to a standard mat dense matrix.
Looking at the internal code documentation, sp_mat and mat don't share a common parent class, which leads me to believe there isn't a way to cast an sp_mat to a mat. For what it's worth, conv_to<mat>::from(sp_mat x) doesn't work either.
Perhaps there's a tricky way to do this using one of the advanced mat constructors? For example, somehow create a mat of zeros and pass the locations and values of non-zero elements in the sp_mat.
Does anyone know of an efficient method to do this? Thanks in advance.
Casting works perfectly fine:
sp_mat X(2,2);
mat Y(X);
Y.print("Y:");
I'm exclusively using Armadillo matrices in my C++ code for both 2D and 1D arrays for the sake of simplicity and ease of maintenance. For instance, I make use of the vector initializer list
but immediately convert it to a matrix before it is actually used:
mat bArray = mat(colvec({0.1, 0.2, 0.3}));
Most of my code consists of 1D arrays and there are only a few places where I truly need the 2D Armadillo matrix.
Would I gain a significant performance increase if I converted all (nx1) Armadillo matrices to (nx1) Armadillo column vectors? Are there any other nuances between these two data structures that I should know about?
An arma::vec is the same as arma::colvec; both are just typedefs for Col<double>. You can see this in the typedef_mat.hpp file in the Armadillo source code. Also, arma::Col<T> inherits from Mat<T>, as can be seen in the Col_bones.hpp file. Essentially, a colvec is just a matrix with a single column. You wouldn't gain anything, except that you would avoid the unnecessary copy from the temporary vector to the matrix. Then again, that copy may actually be a move, since the colvec is a temporary, in which case even that gain could be marginal.
Nevertheless, if you want a 1D array, use a colvec instead of a mat, if only for the sake of being clear about it.
As the title says, I need to perform element-wise matrix multiplication on CUDA using GpuMat. My desired outcome is what the cv::Mat mul() function gives for non-GPU Mats. I can use a built-in function, or I can write a kernel for the operation, but I need a little help as I am new to CUDA.
I have already tried to write kernels for this, but with no success so far. I also tried mulSpectrums, which is available for GpuMat, but that function requires the matrix to be of type CV_32FC2, while I need mine to be CV_32F. If there is literally no way to perform the operation on a matrix that is not CV_32FC2, then please show me an efficient way to convert a matrix from CV_32F to CV_32FC2 and back to CV_32F.
If anyone has the time, I would also love an additional explanation of how to perform operations on GpuMat matrices inside a CUDA kernel.
I need this to speed up my SSIM algorithm as much as possible; 0.01 s is way too much for me at the moment.
Any help performing that mul operation on a CV_32F GpuMat inside CUDA would be great.
An element-wise multiplication can be performed with cv::cuda::multiply.
https://docs.opencv.org/master/d8/d34/group__cudaarithm__elem.html
You can also study the NPP libraries:
https://docs.nvidia.com/cuda/npp/group__image__mul.html
I am trying to convert some methods implemented in Eigen C++ dense matrix class (MatrixXd from <Eigen/Dense>) to methods with Eigen C++ sparse matrix (like SparseMatrix<double> from <Eigen/Sparse>).
Many methods can be converted directly by simply changing MatrixXd to SparseMatrix<double>. However, some cannot.
One problem I ran into is converting the following element-wise division into a sparse-matrix method:
(beta.array() / beta.cwiseAbs().array()).sum()
Originally, beta is declared as MatrixXd beta. Now, if I declare beta as SparseMatrix<double> beta, there is no corresponding array() method that would let me do the above.
How should I still perform element-wise operations with sparse matrix?
Is there any efficient way that I can convert dense matrix to sparse matrix and vice versa?
This is not supported because, strictly speaking, you would be computing 0/0 for every explicit zero. You can work around this if the matrix is in compressed mode; to be sure, call:
beta.makeCompressed();
then map the nonzeros as a dense array:
Map<ArrayXd> a(beta.valuePtr(), beta.nonZeros());
(a / a.abs()).sum();
I am writing a smoothing spline in C++ using OpenCV.
I need to use sparse matrix (like in MATLAB), i.e. large matrix which consists of zeros and a few non-zero diagonals. I use Mat matrices for this purpose, because I want to be able to multiply them, transpose them, etc.
Is there an elegant way to initialize such a matrix without processing it element by element?
There is a function called Mat::diag, but it creates a column matrix, which is not what I need. Is it possible to convert that to a normal matrix? The closest thing to what I need is Mat::eye, but I need to initialize more than one diagonal; in addition, I have different numbers on the same diagonal, so I cannot use Mat::eye.
Thank you!
I solved it myself. :)
Mat B = Mat::zeros(3, 3, CV_8UC1);
Mat C = B.diag(0);   // C is a header referencing B's main diagonal, not a copy
C.at<unsigned char>(0) = 64;
C.at<unsigned char>(1) = 64;
C.at<unsigned char>(2) = 64;
Mat::diag returns a view into the original matrix rather than a copy, so writing to C modifies B in place.
You can initialize with Mat::eye and then overwrite the diagonal entries with the values you want (or just set them manually). If your matrix is large enough that these operations take a significant amount of time, you should not be using Mat, which is not optimized for sparse matrices.
If your matrices are large enough that the above operations are slow, look here.
In my algorithm I need a sparse matrix inverse, which I compute by solving A*x = b using QR decomposition. In Matlab the QR solve runs fine.
However, when I converted the code to C++ using the Eigen library, I didn't get the same answer.
In some cases there is a shift in each element of the vector x compared to the Matlab result; the value causing the shift is, however, constant across all elements of the vector.
A glimpse of what I do:
Eigen::SparseMatrix<float> A(m, n);
Eigen::VectorXf b;
Eigen::SparseQR<Eigen::SparseMatrix<float>, Eigen::COLAMDOrdering<int>> solver;
solver.compute(A);
Eigen::VectorXf x = solver.solve(b);
x is my final vector, which contains the result of A.inverse()*b, isn't it?
Additionally, I tried solving it as a full (dense) matrix, but it still produced different answers in C++ compared to Matlab.
Did anyone here face a similar problem? If so, any help or pointers are welcome.
On the other hand, if there is something wrong with my understanding, any correction is also appreciated.
Thanks.