Boost's Linear Algebra Solution for y=Ax - c++

Does boost have one?
where A is a matrix (sparse, and can be very large) and y and x are vectors.
Either y or x can be unknown.
I can't seem to find it here:
http://www.boost.org/doc/libs/1_39_0/libs/numeric/ublas/doc/index.htm

Yes, you can solve linear equations with Boost's uBLAS library. Here is one short way, using LU factorization and back-substitution to get the inverse:
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/lu.hpp>

using namespace boost::numeric::ublas;  // note: boost::numeric::ublas, not boost::ublas

matrix<float> Ainv = identity_matrix<float>(A.size1());
permutation_matrix<std::size_t> pm(A.size1());
lu_factorize(A, pm);         // factorizes A in place
lu_substitute(A, pm, Ainv);  // back-substitutes to obtain the inverse
So to solve a linear system Ax = y, you would solve the normal equations trans(A) A x = trans(A) y by taking the inverse of trans(A) A to get x: x = (trans(A) A)^-1 trans(A) y.
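A rough sketch of that approach with uBLAS (the function is my own illustration, with float elements and placeholder names; note that forming the normal equations squares the condition number):

#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/lu.hpp>

using namespace boost::numeric::ublas;

// Solve A x = y in the least-squares sense via the normal equations,
// inverting trans(A) * A with the LU approach shown above.
vector<float> solve_normal_equations(const matrix<float>& A, const vector<float>& y)
{
    matrix<float> AtA = prod(trans(A), A);   // trans(A) * A  (m x m)
    vector<float> Aty = prod(trans(A), y);   // trans(A) * y  (m x 1)

    matrix<float> AtA_inv = identity_matrix<float>(AtA.size1());
    permutation_matrix<std::size_t> pm(AtA.size1());
    lu_factorize(AtA, pm);                   // overwrites AtA with its LU factors
    lu_substitute(AtA, pm, AtA_inv);         // AtA_inv = (trans(A) * A)^-1

    return prod(AtA_inv, Aty);               // x = (trans(A) * A)^-1 * trans(A) * y
}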

Linear solvers are generally part of the LAPACK library, which is a higher-level extension of the BLAS library. If you are on Linux, the Intel MKL has some good solvers, optimized for both dense and sparse matrices. If you are on Windows, MKL has a free one-month trial... and to be honest I haven't tried any of the other ones out there. I know the ATLAS package has a free LAPACK implementation, but I'm not sure how hard it is to get running on Windows.
Anyway, search around for a LAPACK library that works on your system.

One of the best solvers for Ax = b, when A is sparse, is Tim Davis's UMFPACK. UMFPACK computes a sparse LU decomposition of A. It is the algorithm used behind the scenes in MATLAB when you type x = A\b (and A is sparse and square). UMFPACK is free software (GPL).
Also note that if y = Ax and x is known but y is not, you compute y by performing a sparse matrix-vector multiply, not by solving a linear system.
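For example, with uBLAS's compressed_matrix (a minimal sketch; sizes and values are placeholders):

#include <boost/numeric/ublas/matrix_sparse.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/operation.hpp>

using namespace boost::numeric::ublas;

int main()
{
    compressed_matrix<double> A(1000, 1000);  // sparse storage: only nonzeros are kept
    A(0, 0) = 2.0;                            // fill in a few nonzeros
    A(1, 999) = -1.0;

    vector<double> x = scalar_vector<double>(1000, 1.0);  // known x
    vector<double> y(1000);
    axpy_prod(A, x, y, true);                 // y = A * x, exploiting sparsity
}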

Reading the Boost documentation, it does not seem like solving for x is implemented. Solving for y is only a matter of a matrix-vector product, which is implemented in uBLAS.
One thing to keep in mind is that BLAS only implements 'easy' operations like addition and multiplication of vector and matrix types. Anything more advanced (linear problem solving, like your "solve for x in y = Ax", eigenvectors and so on) is part of LAPACK, which is built on top of BLAS. I don't know what Boost provides in that respect.

Boost's linear algebra package is tuned for dense matrices.
As far as I know, Boost's package does not have any linear-system solver.
How about using the source code from "Numerical Recipes in C" (http://www.nr.com/oldverswitcher.html)?
Note: there can be subtle index bugs in that source code (some of it uses array indices starting from 1).

Take a look at JAMA/TNT. I've only used it for non-sparse matrices (you probably want the QR or LU factorizations, both of which have solver utility methods), but it apparently has some facilities for sparse matrices.

Related

Solving a Tridiagonal Matrix using Eigen package in C++

At present I have a system Ax = b such that A is a tridiagonal matrix. Using Eigen, I can already solve this system using the line:
x = A.colPivHouseholderQr().solve(b);
However, since A is a tridiagonal matrix, this works rather slowly compared to, say, MATLAB, since the program is most likely computing the solution for all values rather than just the three diagonals. Can Eigen solve this system faster? This is probably quite a dumb question, but I'm fairly new to C++ and I only started using Eigen a few days ago, so there's a lot to take in at the moment! Thanks in advance.
The best you can do is to implement the Thomas algorithm yourself. Nothing can beat the speed of that. The algorithm is so simple that neither Eigen nor BLAS will beat your hand-written code. In case you have to solve a series of matrices, the procedure vectorizes very well.
https://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm
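For reference, here is a minimal sketch of the Thomas algorithm (my own illustration; it does no pivoting, so it assumes a well-behaved, e.g. diagonally dominant, system):

#include <vector>

// Solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
// a[0] and c[n-1] are unused; the solution overwrites d.
void thomas_solve(const std::vector<double>& a,
                  std::vector<double> b,          // copied: modified in place
                  const std::vector<double>& c,
                  std::vector<double>& d)
{
    const std::size_t n = d.size();
    // Forward sweep: eliminate the sub-diagonal.
    for (std::size_t i = 1; i < n; ++i) {
        const double w = a[i] / b[i - 1];
        b[i] -= w * c[i - 1];
        d[i] -= w * d[i - 1];
    }
    // Back substitution.
    d[n - 1] /= b[n - 1];
    for (std::size_t i = n - 1; i-- > 0; )
        d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
}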
If you want to stick with standard libraries, the LAPACK routine DGTSV (double precision) or SGTSV (single precision) is probably the best solution.
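If you go that route, the call looks roughly like this (a sketch; the trailing-underscore prototype follows the common Fortran naming convention on Linux and may differ on your platform):

#include <vector>

// LAPACK's tridiagonal solver.
extern "C" void dgtsv_(int* n, int* nrhs, double* dl, double* d,
                       double* du, double* b, int* ldb, int* info);

int main()
{
    int n = 4, nrhs = 1, info = 0;
    std::vector<double> dl(n - 1, -1.0);  // sub-diagonal
    std::vector<double> d(n, 4.0);        // main diagonal
    std::vector<double> du(n - 1, -1.0);  // super-diagonal
    std::vector<double> b(n, 1.0);        // right-hand side; overwritten with x
    dgtsv_(&n, &nrhs, dl.data(), d.data(), du.data(), b.data(), &n, &info);
    // info == 0 on success
}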

Most efficient way to solve a system of linear equations

I have an (n x n) symmetric matrix A and an (n x 1) vector b. Basically, I just need to solve Ax = b for x. The issue is that A is potentially going to be massive, so I'm looking for the most efficient algorithm for solving linear equations in C++. I looked over the Eigen library. Apparently it has an SVD method, but I've been told it's slow. Solving x = inverse(A)*b also seems like it would be suboptimal. Is uBLAS faster? Are there any more efficient methods? Thanks.
Edit: matrix A is positive definite and not sparse.
The best way to solve a system of linear equations of the form Ax = b is to do the following:
decompose A into the format A = M1 * M2 (where M1 is lower triangular and M2 is upper triangular)
Solve M1 * y = b for y using forward substitution
Solve M2 * x = y for x using back substitution
For square matrices, step 1 would use LU decomposition.
For non-square matrices, step 1 would use QR decomposition.
If matrix A is positive definite and not sparse, you'd use Cholesky decomposition for the first step.
If you want to use Eigen, you will have to first decompose the matrix and then triangular-solve it, as in the sketch below.
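With Eigen that amounts to something like this (a minimal sketch for the positive definite case from the edit; LLT does the Cholesky factorization and both triangular solves):

#include <Eigen/Dense>

int main()
{
    Eigen::MatrixXd A(3, 3);          // stand-in for your large SPD matrix
    A << 4, 1, 0,
         1, 3, 1,
         0, 1, 2;
    Eigen::VectorXd b(3);
    b << 1, 2, 3;

    // Cholesky factorization A = L * L^T, then forward and back substitution.
    Eigen::LLT<Eigen::MatrixXd> llt(A);
    Eigen::VectorXd x = llt.solve(b);
}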
If this is still slow, thankfully there are numerous linear algebra libraries available that can help improve the performance. The routines you should be looking for are dpotrf (the Cholesky factorization) and dpotrs (the triangular solves). Some libraries that have these implemented are as follows:
Netlib's LAPACK: Documentation and Download (free)
Intel's MKL: Documentation and Download (free for non-commercial use)
AMD's ACML: Download (free)
PLASMA: Download (free, multi-core optimized)
MAGMA: Download (free, implemented in CUDA and OpenCL)
CULA: Download (freemium, implemented in CUDA)
If you are using Eigen in the overall project, you can interface the LAPACK routine you need as described here.

Can I solve a system of linear equations in the form xA=b using Eigen, with A being sparse?

I need to convert my MATLAB code to C++, which includes linear equations of the form xA = 0.
I know that Eigen can deal with linear equations Ax = b. I am asking: is there a way to solve a system of linear equations xA = b using Eigen for C++ (Visual Studio 2010), with A being a sparse matrix? If not, what library can I use?
Thank you for any help.
x*A = b
is equivalent to
A.transpose() * z = b.transpose();
x = z.transpose();
which can be solved for z to obtain x.
Note that storage operations are cheap in comparison to solving the linear system.
A is sparse, and the sparsity remains the same under transposition.
Often, transposition is just a flag and a change in the addressing of the elements. From my first glance into the docs this does not apply to Eigen, but as noted above, this does not matter much.
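Here is a minimal sketch of that recipe using Eigen's sparse module (SparseLU is one of Eigen's sparse direct solvers; the matrix contents are placeholders):

#include <vector>
#include <Eigen/Sparse>

int main()
{
    // Build a small sparse A from triplets (values are placeholders).
    std::vector<Eigen::Triplet<double>> trips = {
        {0, 0, 4.0}, {0, 1, 1.0}, {1, 1, 3.0}, {2, 2, 2.0}};
    Eigen::SparseMatrix<double> A(3, 3);
    A.setFromTriplets(trips.begin(), trips.end());

    Eigen::VectorXd bT(3);   // b^T stored as a column vector
    bT << 1, 2, 3;

    // Solve A^T z = b^T; then x = z^T solves x A = b.
    Eigen::SparseMatrix<double> At = A.transpose();
    At.makeCompressed();
    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver;
    solver.compute(At);
    Eigen::VectorXd z = solver.solve(bT);
    Eigen::RowVectorXd x = z.transpose();
}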

Matrix logarithm algorithm

Is there any way to compute the matrix logarithm in OpenCV? I understand that it's not available as a library function, but pointers to a good source (paper, textbook, etc.) would be appreciated.
As a matter of fact, I'm in the process of programming the matrix logarithm in the Eigen library, which is apparently used in some Willow Garage libraries; I'm not sure about OpenCV. Higham's book (see the answer by aix) is the best reference in my opinion, and I'm implementing Algorithm 11.11 from it. That is a rather complicated algorithm, though.
Diagonalization (as in Alexandre's comment) is an easy-to-program method which works very well for symmetric positive definite matrices. It also works well for many general matrices. However, it is not accurate for matrices whose eigenvalues are close together, and it fails for matrices that are not diagonalizable.
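For symmetric positive definite matrices, the diagonalization route is only a few lines, e.g. in Eigen (a sketch; log_spd is just an illustrative name):

#include <Eigen/Dense>

// Matrix logarithm of a symmetric positive definite A by diagonalization:
// A = V * diag(lambda) * V^T, so log(A) = V * diag(log lambda) * V^T.
Eigen::MatrixXd log_spd(const Eigen::MatrixXd& A)
{
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(A);
    Eigen::VectorXd log_eigs = es.eigenvalues().array().log().matrix();
    return es.eigenvectors() * log_eigs.asDiagonal() * es.eigenvectors().transpose();
}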
If you want something more robust than diagonalization but less complicated than Higham's Algorithm 11.11, then I'd recommend doing a Schur decomposition followed by inverse scaling and squaring. This is Algorithm 11.10 in Higham's book, and it is described in the paper "Approximating the Logarithm of a Matrix to Specified Accuracy" (http://dx.doi.org/10.1137/S0895479899364015, preprint at http://eprints.ma.man.ac.uk/318/).
If you use OpenCV matrices, you can easily map them to Eigen3 matrices. See this post:
OpenCV CV::Mat and Eigen::Matrix
Then, the Eigen3 library has a matrix logarithm function that you can use:
http://eigen.tuxfamily.org/dox/unsupported/group__MatrixFunctions__Module.html
It is under the unsupported module, but this is not an issue; it just means:
These modules are contributions from various users. They are provided
"as is", without any support.
-- http://eigen.tuxfamily.org/dox/unsupported/
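Usage is then just a matter of including the unsupported header, e.g. (a minimal sketch):

#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/MatrixFunctions>

int main()
{
    Eigen::MatrixXd A(2, 2);
    A << 2, 0,
         0, 3;
    Eigen::MatrixXd L = A.log();  // matrix logarithm from the MatrixFunctions module
    std::cout << L << "\n";       // for this diagonal A: diag(ln 2, ln 3)
}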
Hope this is of help.

Solving normal equation system in C++

I would like to solve the system of linear equations:
Ax = b
A is an n x m matrix (not square), b is an (n x 1) vector and x is an (m x 1) vector, where A and b are known, n is of the order of 50-100 and m is about 2 (in other words, A could be at most [100x2]).
I know the solution for x: $x = (A^T A)^{-1} A^T b$
I found a few ways to solve it: uBLAS (Boost), LAPACK, Eigen, etc., but I don't know how fast the CPU computation of x is with those packages, and I don't know whether this is numerically a fast way to solve for x.
What is important for me is that the CPU computation time is as short as possible, together with good documentation, since I am a newbie.
After solving the normal equations Ax = b, I would like to improve my approximation using regression and maybe later apply a Kalman filter.
My question is: which C++ library is the most robust and fastest for the needs I described above?
This is a least-squares problem, because you have more equations than unknowns. If m is indeed equal to 2, that tells me that a simple linear least squares will be sufficient for you. The formulas can be written out in closed form; you don't need a library (see the sketch below).
If m is in the single digits, I'd still say that you can easily solve this using A(transpose)*A*x = A(transpose)*b. A simple LU decomposition to solve for the coefficients would be sufficient. It should be a much more straightforward problem than you're making it out to be.
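For the m = 2 case (fitting y ≈ c0 + c1*t), the closed form is short enough to write out directly (a sketch; fit_line is an illustrative name, and it assumes n >= 2 with not all t values equal):

#include <cstddef>
#include <utility>

// Closed-form least-squares fit of y ≈ c0 + c1 * t.
std::pair<double, double> fit_line(const double* t, const double* y, std::size_t n)
{
    double st = 0, sy = 0, stt = 0, sty = 0;
    for (std::size_t i = 0; i < n; ++i) {
        st  += t[i];
        sy  += y[i];
        stt += t[i] * t[i];
        sty += t[i] * y[i];
    }
    const double denom = n * stt - st * st;
    const double c1 = (n * sty - st * sy) / denom;  // slope
    const double c0 = (sy - c1 * st) / n;           // intercept
    return {c0, c1};
}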
uBLAS is not optimized unless you use it with optimized BLAS bindings.
The following are optimized for multi-threading and SIMD:
Intel MKL. FORTRAN library with C interface. Not free but very good.
Eigen. True C++ library. Free and open source. Easy to use and good.
ATLAS. FORTRAN and C. Free and open source. Not Windows-friendly, but otherwise good.
By the way, I don't know exactly what you are doing, but as a rule the normal equations are not a proper way to do linear regression. Unless your matrix is well conditioned, QR or SVD should be preferred (a sketch follows).
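With Eigen, for instance, the QR route is essentially a one-liner (a sketch; colPivHouseholderQr returns a least-squares solution for rectangular A):

#include <Eigen/Dense>

// Least-squares solve of A x = b via column-pivoted Householder QR,
// avoiding the conditioning problems of forming trans(A) * A.
Eigen::VectorXd lsq_qr(const Eigen::MatrixXd& A, const Eigen::VectorXd& b)
{
    return A.colPivHouseholderQr().solve(b);
}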
If licensing is not a problem, you might try the GNU Scientific Library:
http://www.gnu.org/software/gsl/
It comes with a BLAS library that you can swap for an optimized one if you need to later (for example the Intel MKL, ATLAS, or ACML (AMD chip) libraries).
If you have access to MATLAB, I would recommend using its C libraries.
If you really need to specialize, you can approximate matrix inversion (to arbitrary precision) using the Skilling method. It uses only O(N^2) operations, rather than the O(N^3) of usual matrix inversion (LU decomposition etc.).
It's described in the thesis of Gibbs linked here (around page 27):
http://www.inference.phy.cam.ac.uk/mng10/GP/thesis.ps.gz