Is there a good library to do NMF fast?

I have a sparse matrix whose shape is 570000x3000. I tried nimfa to do NMF (using the default nmf method, with max_iter set to 65). However, I found nimfa very slow. Has anyone used a faster library (usable from Python/R) or software to do NMF?

This question (Sparse matrix factorization with Nimfa is very slow with implicit zeros) suggests that you might want to try the .todense() version of your matrix, perhaps.

Related

Different least square errors with armadillo functions

Hello stackoverflow community,
I have trouble understanding a least-square-error problem with the C++ Armadillo package.
I have a matrix A with many more rows than columns (5000 by 100, for example), so the system is overdetermined.
I want to find the x for which A*x = b has the least square error.
If I use Armadillo's solve function on my data, as in "x = solve(A,b)", the error "(A*x-b)^2" is sometimes way too high.
If, on the other hand, I solve for x with the analytical form "x = (A^T * A)^-1 * A^T * b", the results are always right.
The results for x in the two cases can differ by 10 orders of magnitude.
I had thought that Armadillo would use this analytical form in the background when the system is overdetermined.
Now I would like to understand why these two methods give such different results.
I wanted to give a short example program, but I can't reproduce this behavior with a short program.
I thought about posting the matrix here, but at 5000 by 100 it's also very big. I can provide the values for which this happens, though, if needed.
As a short background:
The matrix comes from the numerically solved response of a nonlinear oscillator, into which I feed information by wiggling one of the system's parameters.
Because the influence of this parameter on the system is small, the values in my different rows are very similar, but never identical; otherwise Armadillo should throw an error.
I still think this is the problem, but the solve function never threw any error.
Another thing that confuses me is that in a short example program with a random matrix, the analytical form is way slower than the solve function.
In my program, however, both are nearly identical in speed.
I guess this has something to do with the numerical conditioning of the pseudo-inverse and the special structure of my matrix, but I don't know enough about how Armadillo works internally.
I hope someone can help me with this problem; thanks a lot in advance.
Thanks for the replies. I think I figured the problem out and want to give some feedback for everybody who runs into the same problem.
The Armadillo solve function gives me the x that minimizes (A*x-b)^2.
I looked at the values of x, and they are sometimes on the order of 10^13.
This comes from the fact that the rows of my matrix change only slightly, so they are nearly (but not exactly) linearly dependent.
Because of that I was at the limit of the numerical precision of my doubles, and as a result my error sometimes jumped around.
If I instead use the rearranged analytical form (A^T * A) * x = A^T * b with the solve function, this problem no longer occurs, because the fitted values of x are on the order of 10^4. The least square error is a little higher, but that is okay, as I want to avoid overfitting.
I have now additionally added Tikhonov regularization, solving (A^T * A + lambda * Identity_Matrix) * x = A^T * b with Armadillo's solve function.
Now the weight vectors are on the order of 1, and the error barely changes compared to the form without regularization.
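For anyone who runs into the same thing, a minimal Armadillo sketch of the regularized normal equations described above (the function name and the choice of lambda are illustrative, not from my actual program):

#include <armadillo>

// Tikhonov-regularized least squares: solve (A^T A + lambda I) x = A^T b.
// No explicit inverse is formed; solve() picks a suitable decomposition.
arma::vec solve_ridge(const arma::mat& A, const arma::vec& b, double lambda)
{
    const arma::uword p = A.n_cols;
    arma::mat lhs = A.t() * A + lambda * arma::eye<arma::mat>(p, p);
    arma::vec rhs = A.t() * b;
    return arma::solve(lhs, rhs);
}

With lambda = 0 this reduces to the plain normal equations from the previous paragraph.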

Matlab Hilbert Transform in C++

First, please excuse my ignorance in this field, I'm a programmer by trade but have been stuck in a situation a little beyond my expertise (in math and signals processing).
I have a Matlab script that I need to port to a C++ program (without compiling the matlab code into a DLL). It uses the hilbert() function with one argument. I'm trying to find a way to implement the same thing in C++ (i.e. have a function that also takes only one argument, and returns the same values).
I have read up on ways of using FFT and IFFT to build it, but can't seem to get anything as simple as the Matlab version. The main thing is that I need it to work on a 128x2000 matrix, and nothing I've found in my search has shown me how to do that.
I would be OK with either a complex value returned, or just the absolute value. The simpler it is to integrate into the code, the better.
Thank you.
The MATLAB function hilbert() does not actually compute the Hilbert transform directly; instead it computes the analytic signal, which is what one needs in most cases.
It does this by taking the FFT, deleting the negative frequencies (setting the upper half of the array to zero), and applying the inverse FFT. It is straightforward in C/C++ (a few lines of code) if you've got a decent FFT implementation.
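To make that concrete, here is a rough single-row sketch using FFTW (a sketch of the recipe, not drop-in code: for your 128x2000 matrix you would call it once per 2000-sample row, and you should verify the scaling against MATLAB's hilbert()):

#include <complex>
#include <vector>
#include <fftw3.h>

// Analytic signal of one real row: FFT, zero the negative-frequency bins
// (doubling the positive ones), inverse FFT. std::abs of the result gives
// the envelope, matching abs(hilbert(x)) in MATLAB.
std::vector<std::complex<double>> analytic_signal(const std::vector<double>& x)
{
    const int n = static_cast<int>(x.size());
    std::vector<std::complex<double>> buf(x.begin(), x.end());
    fftw_complex* p = reinterpret_cast<fftw_complex*>(buf.data());

    fftw_plan fwd = fftw_plan_dft_1d(n, p, p, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(fwd);
    fftw_destroy_plan(fwd);

    // Keep DC (and Nyquist for even n) as-is, double the positive
    // frequencies, zero out the negative half of the spectrum.
    for (int i = 1; i < (n + 1) / 2; ++i) buf[i] *= 2.0;
    for (int i = n / 2 + 1; i < n; ++i)  buf[i] = 0.0;

    fftw_plan inv = fftw_plan_dft_1d(n, p, p, FFTW_BACKWARD, FFTW_ESTIMATE);
    fftw_execute(inv);
    fftw_destroy_plan(inv);

    for (auto& v : buf) v /= n;  // FFTW's inverse transform is unnormalized
    return buf;
}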
This looks pretty good, as long as you can deal with the GPL license. Part of a much larger numerical computing resource.
Simple code below. (Note: this was part of a bigger project.) The value for L is based on your choice of the filter order N, with N = 2L-1; round N to an odd number. xbar below is the signal you define as the input to your designed system. This was implemented in MATLAB.
L = 40;
n = -L:L; % index n from [-40,-39,....,-1,0,1,...,39,40];
h = (1 - (-1).^n)./(pi*n); %impulse response of Hilbert Transform
h(41) = 0; %Corresponds to the 0/0 term (for 41st term, 0, in n vector above)
xhat = conv(h,xbar); %resultant from Hilbert Transform H(w);
plot(abs(xhat))
Not a true answer to your question, but maybe a way of making you sleep better: I believe you won't be able to do much better than MATLAB in this particular case, which is basically FFTs on a matrix. That is where MATLAB excels!
MATLAB's FFTs are computed using FFTW, the de facto fastest FFT library, written in C, and MATLAB also seems to parallelize them. On top of that, quoting from http://www.mathworks.com/help/matlab/ref/fftw.html:
For FFT dimensions that are powers of 2, between 2^14 and 2^22, MATLAB software uses special preloaded information in its internal database to optimize the FFT computation.
So don't feel bad if your code is slightly slower...

Worse performance using Eigen than using my own class

A couple of weeks ago I asked a question about the performance of matrix multiplication.
I was told that in order to enhance the performance of my program I should use some specialised matrix classes rather than my own class.
StackOverflow users recommended:
uBLAS
EIGEN
BLAS
At first I wanted to use uBLAS; however, reading the documentation, it turned out that this library doesn't support matrix-matrix multiplication.
In the end I decided to use the Eigen library, so I replaced my matrix class with Eigen::MatrixXd - however, it turned out that my application now runs even slower than before.
The runtime before using Eigen was 68 seconds; after switching my matrix class to the Eigen matrix, the program runs for 87 seconds.
The parts of the program which take the most time look like this:
TemplateClusterBase* TemplateClusterBase::TransformTemplateOne( vector<Eigen::MatrixXd*>& pointVector, Eigen::MatrixXd& rotation, Eigen::MatrixXd& scale, Eigen::MatrixXd& translation )
{
    for (int i = 0; i < pointVector.size(); i++)
    {
        Eigen::MatrixXd outcome = (rotation*scale) * (*pointVector[i]) + translation;
        MatrixHelper::SetX(*prototypePointVector[i], MatrixHelper::GetX(outcome));
        MatrixHelper::SetY(*prototypePointVector[i], MatrixHelper::GetY(outcome));
    }
    return this;
}
and
Eigen::MatrixXd AlgorithmPointBased::UpdateTranslationMatrix( int clusterIndex )
{
    double membershipSum = 0;
    double currentPower = 0;
    Eigen::MatrixXd outcomePoint = Eigen::MatrixXd(2,1);
    outcomePoint << 0, 0;
    for (int i = 0; i < imageDataVector.size(); i++)
    {
        currentPower = pow(membershipMatrix[clusterIndex][i], m);
        membershipSum += currentPower;
        outcomePoint.noalias() += (*imageDataVector[i]
            - (prototypeVector[clusterIndex]->rotationMatrix
               * prototypeVector[clusterIndex]->scalingMatrix
               * (*templateCluster->templatePointVector[prototypeVector[clusterIndex]->assosiatedPointIndexVector[i]])))
            * currentPower;
    }
    outcomePoint /= membershipSum;
    return outcomePoint;
}
As you can see, these functions perform a lot of matrix operations, which is why I thought using Eigen would speed up my application. Unfortunately (as mentioned above), the program runs slower.
Is there any way to speed up these functions?
Maybe if I used DirectX matrix operations I would get better performance? (However, I have a laptop with an integrated graphics card.)
If you're using Eigen's MatrixXd types, those are dynamically sized. You should get much better results from using fixed-size types, e.g. Matrix4d and Vector4d.
Also, make sure you're compiling such that the code can get vectorized; see the relevant Eigen documentation.
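For the 2x1 points and 2x2 transforms in the posted code, that might look like this (a sketch reusing the question's setup, not a drop-in replacement):

#include <Eigen/Dense>

// Fixed-size types live on the stack: no heap allocation per point, and
// the compiler can fully unroll and vectorize the 2x2 arithmetic.
Eigen::Vector2d transformPoint(const Eigen::Matrix2d& rotation,
                               const Eigen::Matrix2d& scale,
                               const Eigen::Vector2d& translation,
                               const Eigen::Vector2d& point)
{
    return (rotation * scale) * point + translation;
}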
Re your thought on using the Direct3D extensions library stuff (D3DXMATRIX etc.): it's OK (if a bit old-fashioned) for graphics geometry (4x4 transforms etc.), but it's certainly not GPU accelerated (just good old SSE, I think). Also, note that it's single-precision (float) only, and you seem to be set on using doubles. Personally I'd much prefer to use Eigen unless I was actually coding a Direct3D app.
Make sure to have compiler optimization switched on (e.g. at least -O2 on gcc). Eigen is heavily templated and will not perform very well if you don't turn on optimization.
Which version of Eigen are you using? They recently released 3.0.1, which is supposed to be faster than 2.x. Also, make sure you play a bit with the compiler options. For example, make sure SSE is being used in Visual Studio:
C/C++ --> Code Generation --> Enable Enhanced Instruction Set
You should profile and then optimize: first the algorithm, then the implementation. In particular, the posted code is quite inefficient:
for (int i = 0; i < pointVector.size(); i++)
{
    Eigen::MatrixXd outcome = (rotation*scale) * (*pointVector[i]) + translation;
I don't know the library, so I won't even try to guess the number of unnecessary temporaries that you are creating, but a simple refactor:
Eigen::MatrixXd tmp = rotation * scale;
for (int i = 0; i < pointVector.size(); i++)
{
    Eigen::MatrixXd outcome = tmp * (*pointVector[i]) + translation;
can save you a good number of expensive multiplications (and, again, probably some new temporary matrices that get discarded right away).
A couple of points.
Why are you multiplying rotation*scale inside the loop when that product has the same value on each iteration? That is a lot of wasted effort.
You are using dynamically sized matrices rather than fixed sized matrices. Someone else mentioned this already, and you said you shaved off 2 sec.
You are passing arguments as a vector of pointers to matrices. This adds an extra pointer indirection and destroys any guarantee of data locality, which will give poor cache performance.
I hope this isn't insulting, but are you compiling in Release or Debug? Eigen is very slow in debug builds, because it uses lots of trivial templated functions that are optimized out of release but remain in debug.
Looking at your code, I am hesitant to blame Eigen for performance problems. However, most linear algebra libraries (including Eigen) are not really designed for your use case of lots of tiny matrices. In general, Eigen will be better optimized for 100x100 or larger matrices. You very well may be better off using your own matrix class or the DirectX math helper classes. The DirectX math classes are completely independent from your video card.
Looking back at your previous post and the code in there, my suggestion would be to use your old code, but improve its efficiency by moving things around. I'm posting on that previous question to keep the answers separate.

CUBLAS - matrix addition.. how?

I am trying to use CUBLAS to sum two big matrices of unknown size. I need fully optimized code (if possible), so I chose not to rewrite the (simple) matrix-addition code myself but to use CUBLAS, in particular the cublasSgemm function, which allows summing A and C (if B is an identity matrix): C = alpha*op(A)*op(B) + beta*C
The problem is: C and C++ store matrices in row-major format, while cublasSgemm is intended (for Fortran compatibility) to work in column-major format. You can specify whether A and B are to be transposed first, but you can NOT indicate that C should be transposed. So I'm unable to complete my matrix addition.
I can't transpose the C matrix myself, because it can be as large as something like 20000x20000.
Any idea how to solve this, please?
cublas<t>geam (e.g. cublasSgeam) was added in CUBLAS 5.0.
It computes the weighted sum of two optionally transposed matrices.
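A sketch of what that looks like (assuming device pointers to column-major m x n matrices and a valid CUBLAS handle; error checking omitted):

#include <cublas_v2.h>

// C = 1*A + 1*B via cublasSgeam; neither input is transposed.
void add_matrices(cublasHandle_t handle, int m, int n,
                  const float* d_A, const float* d_B, float* d_C)
{
    const float alpha = 1.0f, beta = 1.0f;
    cublasSgeam(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n,
                &alpha, d_A, m,
                &beta,  d_B, m,
                d_C, m);
}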
If you're just adding the matrices, it doesn't actually matter. You give it alpha, Aij, beta, and Cij. It thinks you're giving it alpha, Aji, beta, and Cji, and gives you what it thinks is Cji = beta Cji + alpha Aji. But that's the correct Cij as far as you're concerned. My worry is when you start going to things which do matter -- like matrix products. There, there's likely no working around it.
But more to the point, you don't want to be using GEMM to do matrix addition -- you're doing a completely pointless matrix multiplication, which takes ~20,000^3 operations and many passes through memory, for an operation which should only require ~20,000^2 operations and a single pass! Treat the matrices as 20,000^2-long vectors and use SAXPY.
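Something along these lines (a sketch assuming device pointers and an existing handle; the sum accumulates in place in C):

#include <cublas_v2.h>

// C += A, treating both m x n matrices as length m*n vectors: a single
// pass through memory instead of a full matrix product.
void add_in_place(cublasHandle_t handle, int m, int n,
                  const float* d_A, float* d_C)
{
    const float alpha = 1.0f;
    cublasSaxpy(handle, m * n, &alpha, d_A, 1, d_C, 1);
}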
Matrix multiplication is memory-bandwidth intensive, so there is a huge (factors of 10x or 100x) difference in performance between coding it yourself and using a tuned version. Ideally, you'd change the structures in your code to match the library. If you can't, in this case you can manage just by using linear-algebra identities. The C-vs-Fortran ordering means that when you pass in A, CUBLAS "sees" A^T (A transpose). That's fine; we can work around it. If what you want is C = A.B, pass in the matrices in the opposite order, B.A. Then the library sees (B^T . A^T), and calculates C^T = (A.B)^T; when it passes back C^T, you get (in your ordering) C. Test it and see.
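Here is that ordering trick spelled out for cublasSgemm (a sketch: A is MxK, B is KxN, C is MxN, all row-major device arrays):

#include <cublas_v2.h>

// Row-major C = A*B. CUBLAS sees our row-major arrays as their transposes,
// so we ask it for B*A in its column-major view; the C^T it writes back is
// exactly our row-major C.
void matmul_row_major(cublasHandle_t handle, int M, int N, int K,
                      const float* d_A, const float* d_B, float* d_C)
{
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                N, M, K,
                &alpha, d_B, N,
                        d_A, K,
                &beta,  d_C, N);
}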

Matrix Template Library matrix inversion

I'm trying to invert a matrix with Boost 1.37.0 and MTL mtl4-alpha-1-r6418. I can't seem to locate the matrix-inversion code. I've googled for examples, and they seem to reference lu.h, which seems to be missing from the above release(s). Any hints?
@Matt suggested copying lu.h, but that seems to be from MTL2 rather than MTL4, and I'm having trouble compiling MTL2 with VS05 or higher.
So, any idea how to do a matrix inversion in MTL4?
Update: I think I understand Matt better now, and I'm heading down this ITL path.
It looks like you use lu_factor and then lu_inverse. I don't remember what you have to do with the pivots, though. This is from the documentation.
And yeah, like you said, it looks like their documentation says you need lu.h somehow:
How do I invert a matrix?
The first question you should ask yourself is whether you really want to compute the inverse of a matrix, or whether you really want to solve a linear system. For solving a linear system of equations, it is not necessary to explicitly compute the matrix inverse. Rather, it is more efficient to compute triangular factors of the matrix and then perform forward and backward triangular solves with the factors. More about solving linear systems is given below. If you really want to invert a matrix, there is a function lu_inverse() in mtl/lu.h.
If nothing else, you can look at lu.h on their site.
I've never used Boost or MTL for matrix math, but I have used JAMA/TNT.
This page http://wiki.cs.princeton.edu/index.php/TNT shows how to take a matrix inverse. The basic method is library-independent (a concrete sketch follows below):
1. Factor the matrix M into X*Y, where X and Y are appropriate factors (LU would be OK, but for numerical stability I would think you'd want QR or maybe SVD).
2. Solve I = M*N = (X*Y)*N for N, reusing the factorization from the first step; the library should have a routine for this. N is then the inverse of M.
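As a concrete illustration of that recipe in Eigen (a different library, purely to show the shape of the code):

#include <Eigen/Dense>

// Factor M once (LU with partial pivoting), then solve M*N = I for N = M^-1.
Eigen::MatrixXd invert(const Eigen::MatrixXd& M)
{
    Eigen::PartialPivLU<Eigen::MatrixXd> lu(M);
    return lu.solve(Eigen::MatrixXd::Identity(M.rows(), M.cols()));
}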
In MTL4 use this:
mtl::matrix::inv(Matrix const &A, MatrixOut &Inv);
Here is a link to the api.