I'm doing research involving linear differential equations with complex coefficients in a 4-dimensional phase space. To check a hypothesis about the roots of the solutions, I need to solve these equations numerically with arbitrary precision. I used to use the mpmath Python module, but it is slow, so I would prefer to rewrite my program in C/C++ for maximum performance. So I have a question:
Is there any C/C++ linear algebra library that supports both arbitrary precision arithmetic and complex numbers? I need some basic functionality like dot products and so on. (Actually, I need the matrix exponential too, but I can implement that myself if necessary.)
I tried to use Eigen with MPFR C++, but failed because it doesn't support complex numbers (and a construction like complex<mpreal> doesn't work, since it assumes the base type is a standard float).
Try using an arbitrary precision number library (e.g. GMP, http://gmplib.org/) with a linear algebra library that supports complex numbers (e.g. Eigen, http://eigen.tuxfamily.org/).
Finally, it seems that zkcm does what I want. I'm not sure how good it is from a performance viewpoint (I haven't done any benchmarks), but at least it works and provides all the necessary features.
You could look into uBLAS from Boost.
We have been using GSL to solve polynomials. However, we now wish to solve polynomials with arbitrary precision. I looked into GMP and the Boost Multiprecision library, but I couldn't find any routine for solving polynomials with floating-point coefficients.

1. Does there exist any free, open-source library for solving polynomials with arbitrary precision, or at least very high precision (>200 digits after the decimal point)?

2. Is it possible to use the GSL polynomial solver routine with its data type changed to GMP arbitrary precision?

3. Or would it be easier to write a polynomial solver, using one of the standard algorithms, with GMP arbitrary-precision data types?
Please feel free to comment if it is not clear.
If you know an algorithm for solving a polynomial equation (and you'll find these in many textbooks), you can adapt it and code it to use GMP.
Since GMP has a C++ class interface with the usual operators (+, -, etc.), you could copy and paste some existing C code and then adapt it to GMP.
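For instance, here is a rough sketch of what adapting a textbook method to GMP's C++ interface can look like. It uses a plain Newton iteration for a single real root; the polynomial, working precision, and stopping criteria are illustrative placeholders, and gmpxx has no complex type, so complex roots would need extra work (compile with -lgmpxx -lgmp):

```cpp
// Hypothetical sketch: Newton's method for one real root of a polynomial,
// using GMP's C++ interface (mpf_class).
#include <gmpxx.h>
#include <vector>
#include <iostream>

// Evaluate p(x) and p'(x) with Horner's rule; coeffs[i] is the coefficient of x^i.
static void eval(const std::vector<mpf_class>& coeffs, const mpf_class& x,
                 mpf_class& p, mpf_class& dp) {
    p = 0; dp = 0;
    for (std::size_t i = coeffs.size(); i-- > 0; ) {
        dp = dp * x + p;
        p  = p * x + coeffs[i];
    }
}

int main() {
    mpf_set_default_prec(1024);             // roughly 300 decimal digits of working precision
    // p(x) = x^3 - 2x - 5 (the classic Newton example)
    std::vector<mpf_class> coeffs = { -5, -2, 0, 1 };
    mpf_class x = 2, p, dp;
    for (int it = 0; it < 200; ++it) {      // fixed iteration cap for the sketch
        eval(coeffs, x, p, dp);
        if (dp == 0) break;                 // avoid division by zero
        mpf_class step = p / dp;
        x -= step;
        mpf_class astep = abs(step);
        if (astep < 1e-300) break;          // crude convergence test, placeholder tolerance
    }
    std::cout.precision(60);
    std::cout << x << std::endl;            // root near 2.0945514815...
    return 0;
}
```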
MPSolve provides a library to solve polynomials using multi-precision arithmetic. Internally it uses GMP.
The following can be observed:
The computations can be done in integer, rational, and floating-point arbitrary precision.
The coefficients of the polynomial and various other options are given as input through a file. One can modify the original code to call the function directly from one's own program.
The solutions can be reported in various formats, such as exponential, only real, etc.
The solver has been verified for several standard polynomial test cases and checks out.
The solver internally uses random numbers seeded from /dev/random on a Linux machine. This makes the solver slow on subsequent runs, because not enough entropy accumulates before the next run starts. This can be bypassed by replacing it with a standard pseudo-random generator.

An attempt was made to integrate the solver as a library. However, serious segmentation faults occurred that were difficult to debug, so the solver was used by calling its executable instead. Note: this is just my experience, and it can hopefully be done in a better way.

A new C++ version is being developed and will hopefully resolve these issues.

It is tedious to fork the GSL polynomial solver to use a GMP data type. It is easier, and the code stays more under one's own control and understanding, if the solver is written oneself (see question 3 above).
As suggested in the answer above, the GMP and MPFR multi-precision libraries can be used, and a polynomial solver can be written using standard polynomial-solving techniques such as the Jenkins-Traub algorithm or QR-based techniques.
The Boost C++ Libraries provide wrappers for using GMP and MPFR, which can be very handy.
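For illustration, a minimal sketch of those wrappers (Boost.Multiprecision's GMP and MPFR backends; the 100-digit typedefs and the sample computation are just placeholders):

```cpp
// Minimal sketch of Boost.Multiprecision's GMP/MPFR backends.
#include <boost/multiprecision/mpfr.hpp>   // wraps MPFR
#include <boost/multiprecision/gmp.hpp>    // wraps GMP
#include <iomanip>
#include <iostream>

int main() {
    using boost::multiprecision::mpfr_float_100;  // 100 decimal digits, MPFR backend
    using boost::multiprecision::mpf_float_100;   // 100 decimal digits, GMP backend

    mpfr_float_100 a = 1;
    a /= 3;                                        // 0.333... carried to ~100 digits
    std::cout << std::setprecision(100) << a << "\n";

    mpf_float_100 b = sqrt(mpf_float_100(2));      // elementary functions found via ADL
    std::cout << b << "\n";
    return 0;
}
```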
The Arb library has routines for solving real and complex polynomials using arbitrary precision and interval arithmetic.
What library computes the rank of a matrix the fastest? Or, is there any code out in the open that does this fairly rapidly?
I am using Eigen3 and it seems to be slower than NumPy's matrix rank function. I just need this one function to be fast; absolutely nothing else matters. If you suggest a package, everything else is irrelevant, including ease of use.
The matrices I am looking at tend to be n by (n choose 3) in size; the entries are 1 or 0, mostly 0s.
Thanks.
Edit 1: the rank is over R.
In general, BLAS/LAPACK functions are frighteningly fast. This link suggests using the GESVD or GESDD functions to compute singular values. The number of non-zero singular values will be the matrix's rank.
LAPACK is what numpy uses.
In short, you can use the same LAPACK library calls. It will be difficult to outperform BLAS/LAPACK functions, unless sparsity and special structure allow more efficient approaches. If that's true, you may want to check around for alternative libraries implementing sparse SVD solvers.
Note also there are multiple BLAS/LAPACK implementations.
Update
This post seems to argue that LU decomposition is unreliable for calculating rank; it is better to do SVD. You may want to see how fast that Eigen call is before going through all the hassle of using BLAS/LAPACK directly (I've just never used Eigen).
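As a rough sketch of the SVD-based approach in Eigen (the tolerance rule below mimics the usual max(m,n)·eps·σ_max convention, which is my assumption, not something mandated here):

```cpp
// Sketch: numerical rank of a dense matrix via singular values in Eigen.
#include <Eigen/Dense>
#include <algorithm>
#include <limits>
#include <iostream>

int rankViaSVD(const Eigen::MatrixXd& A) {
    Eigen::BDCSVD<Eigen::MatrixXd> svd(A);        // singular values only, no U/V
    const auto& s = svd.singularValues();         // sorted in decreasing order
    if (s.size() == 0) return 0;
    double tol = std::max(A.rows(), A.cols())
               * std::numeric_limits<double>::epsilon() * s(0);
    return static_cast<int>((s.array() > tol).count());
}

int main() {
    Eigen::MatrixXd A(3, 4);
    A << 1, 0, 1, 0,
         0, 1, 1, 0,
         1, 1, 2, 0;                               // third row = row0 + row1
    std::cout << rankViaSVD(A) << "\n";            // prints 2
}
```

For 0/1 matrices that are mostly zeros, a rank-revealing QR (e.g. Eigen's ColPivHouseholderQR, which exposes a rank() method) is usually cheaper than a full SVD and may be worth benchmarking first.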
Using the double type, I implemented a cubic spline interpolation algorithm.

The work seemed successful, but there was a relative error of around 6% when very small values were calculated.

Is the double data type enough for accurate scientific numerical analysis?
Double has plenty of precision for most applications. Of course it is finite, but it's always possible to squander any amount of precision by using a bad algorithm. In fact, that should be your first suspect. Look hard at your code and see if you're doing something that lets rounding errors accumulate quicker than necessary, or risky things like subtracting values that are very close to each other.
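A tiny example of that last point (the numbers are arbitrary, just to show the effect of subtracting nearly equal values):

```cpp
// Catastrophic cancellation: subtracting two nearly equal doubles wipes out
// most of the significant digits of the small difference.
#include <cstdio>

int main() {
    double a = 1.0 + 1e-15;   // rounds to the nearest representable double
    double b = 1.0;
    // The true difference is 1e-15, but the computed one is about 1.11e-15,
    // i.e. roughly an 11% relative error from a single subtraction.
    std::printf("a - b = %.17g (intended 1e-15)\n", a - b);
}
```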
Scientific numerical analysis is difficult to get right, which is why I leave it to the professionals. Have you considered using a numeric library instead of writing your own? Eigen is my current favorite here: http://eigen.tuxfamily.org/index.php?title=Main_Page

I always have close at hand the latest copy of Numerical Recipes (nr.com), which has an excellent chapter on interpolation. NR has a restrictive license, but the writers know what they are doing and provide a succinct write-up of each numerical technique. Other libraries to look at include ATLAS and the GNU Scientific Library.

To answer your question: double should be more than enough for most scientific applications. I agree with the previous posters that it sounds like an algorithm problem. Have you considered posting the code for the algorithm you are using?
Whether double is enough for your needs depends on the type of numbers you are working with. As Henning suggests, it is probably best to take a look at the algorithms you are using and make sure they are numerically stable.
For starters, here's a good algorithm for addition: Kahan summation algorithm.
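For illustration, a minimal C++ version of compensated summation looks like this:

```cpp
// Kahan (compensated) summation: keeps a running correction term so that
// low-order bits lost in each addition are fed back into the next one.
#include <vector>

double kahanSum(const std::vector<double>& values) {
    double sum = 0.0;
    double c = 0.0;                 // running compensation for lost low-order bits
    for (double v : values) {
        double y = v - c;           // apply the correction from the previous step
        double t = sum + y;         // big + small: low-order bits of y may be lost
        c = (t - sum) - y;          // recover exactly what was just lost
        sum = t;
    }
    return sum;
}
```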
Double precision will be suitable for most problems, but a cubic spline will not work well if the polynomial or function is quickly oscillating or repeating, or of quite high dimension.

In that case it can be better to use Legendre polynomials, since they handle exponential-like behaviour more readily.
By way of a simple example: if you use Euler, trapezoidal, or Simpson's rule to integrate a 3rd order polynomial, you won't need a huge sample rate to get the interpolant (the area under the curve). However, if you apply these to an exponential function, the sample rate may need to increase greatly to avoid losing a lot of precision. Legendre polynomials can cater for this case much more readily.
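A small sketch of that point, assuming composite Simpson's rule as the method (the functions, intervals, and panel counts are arbitrary): Simpson's rule is exact for cubics with a single panel pair, but needs a much finer grid for a rapidly growing exponential.

```cpp
// Sketch: composite Simpson's rule on a cubic (exact) vs. an exponential
// (accuracy depends strongly on the sample rate).
#include <cmath>
#include <cstdio>
#include <functional>

double simpson(const std::function<double(double)>& f, double a, double b, int n) {
    // n is the (even) number of subintervals
    double h = (b - a) / n, s = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        s += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    return s * h / 3.0;
}

int main() {
    // Cubic: exact even with n = 2, since Simpson integrates degree <= 3 exactly.
    double cubic = simpson([](double x) { return x * x * x; }, 0.0, 1.0, 2);
    std::printf("cubic:     %.15f (exact 0.25)\n", cubic);

    // Exponential over [0, 10]: the error shrinks as O(h^4), so coarse grids lose precision.
    double coarse = simpson([](double x) { return std::exp(x); }, 0.0, 10.0, 4);
    double fine   = simpson([](double x) { return std::exp(x); }, 0.0, 10.0, 256);
    std::printf("exp n=4:   %.6f\nexp n=256: %.6f\nexact:     %.6f\n",
                coarse, fine, std::exp(10.0) - 1.0);
}
```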
I would like to solve the system of linear equations:
Ax = b
A is an n x m matrix (not square), b is an n x 1 vector, and x is m x 1. A and b are known; n is of the order 50-100 and m is about 2 (in other words, A could be at most 100 x 2).
I know the solution for x: $x = (A^T A)^{-1} A^T b$
I found a few ways to solve it: uBLAS (Boost), LAPACK, Eigen, etc., but I don't know how fast the CPU computation of x is with those packages. I also don't know whether this is a numerically fast way to solve for x.

What is important for me is that the CPU computation time be as short as possible, along with good documentation, since I am a newbie.

After solving the normal equations for Ax = b, I would like to improve my approximation using regression and maybe later apply a Kalman filter.

My question is: which C++ library is the most robust and the fastest for the needs described above?
This is a least squares problem, because you have more equations than unknowns. If m is indeed equal to 2, that tells me that a simple linear least squares fit will be sufficient for you. The formulas can be written out in closed form. You don't need a library.
If m is in the single digits, I'd still say that you can easily solve this using A^T A x = A^T b. A simple LU decomposition to solve for the coefficients would be sufficient. It should be a much more straightforward problem than you're making it out to be.
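For example, a minimal sketch of the closed-form m = 2 case (the data here are made up; it just forms the 2x2 normal equations and solves them by Cramer's rule, assuming A has full column rank):

```cpp
// Sketch: closed-form least squares for m = 2 unknowns via the 2x2 normal equations.
#include <array>
#include <vector>
#include <cstdio>

std::array<double, 2> lsq2(const std::vector<std::array<double, 2>>& A,
                           const std::vector<double>& b) {
    double s11 = 0, s12 = 0, s22 = 0, t1 = 0, t2 = 0;
    for (std::size_t i = 0; i < A.size(); ++i) {
        s11 += A[i][0] * A[i][0];          // accumulate A^T A
        s12 += A[i][0] * A[i][1];
        s22 += A[i][1] * A[i][1];
        t1  += A[i][0] * b[i];             // accumulate A^T b
        t2  += A[i][1] * b[i];
    }
    double det = s11 * s22 - s12 * s12;    // nonzero when A has full column rank
    return { (t1 * s22 - s12 * t2) / det,
             (s11 * t2 - s12 * t1) / det };
}

int main() {
    // Toy data: fit b ~ x0 * 1 + x1 * t at t = 0..4.
    std::vector<std::array<double, 2>> A;
    std::vector<double> b;
    for (int t = 0; t < 5; ++t) { A.push_back({1.0, double(t)}); b.push_back(2.0 + 3.0 * t); }
    auto x = lsq2(A, b);
    std::printf("x0 = %f, x1 = %f\n", x[0], x[1]);   // recovers 2 and 3
}
```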
uBLAS is not optimized unless you use it with optimized BLAS bindings.
The following are optimized for multi-threading and SIMD:
Intel MKL. FORTRAN library with C interface. Not free but very good.
Eigen. True C++ library. Free and open source. Easy to use and good.
Atlas. FORTRAN and C. Free and open source. Not Windows friendly, but otherwise good.
Btw, I don't know exactly what you are doing, but as a rule the normal equations are not the proper way to do linear regression. Unless your matrix is well conditioned, QR or SVD should be preferred.
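For example, a minimal Eigen sketch of the QR route (matrix sizes and contents are placeholders):

```cpp
// Sketch: least squares with Eigen using a rank-revealing QR instead of the
// normal equations (better conditioned for nearly rank-deficient A).
#include <Eigen/Dense>
#include <iostream>

int main() {
    const int n = 100, m = 2;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, m);
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    // QR-based least squares solve; an SVD-based alternative is
    // A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b).
    Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);
    std::cout << "residual norm: " << (A * x - b).norm() << "\n";
}
```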
If licensing is not a problem, you might try the GNU Scientific Library:
http://www.gnu.org/software/gsl/
It comes with a BLAS library that you can swap for an optimised library later if you need to (for example Intel MKL, ATLAS, or ACML for AMD chips).
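If you go the GSL route, something along these lines should work for the n x 2 least-squares problem; I'm sketching it from memory, so treat the details as an assumption and check the GSL manual (link with -lgsl -lgslcblas):

```cpp
// Sketch: an n x 2 least-squares fit via GSL's multifit interface.
// Matrix sizes and the toy data are placeholders.
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multifit.h>
#include <cstdio>

int main() {
    const size_t n = 100, p = 2;
    gsl_matrix* X   = gsl_matrix_alloc(n, p);
    gsl_vector* y   = gsl_vector_alloc(n);
    gsl_vector* c   = gsl_vector_alloc(p);
    gsl_matrix* cov = gsl_matrix_alloc(p, p);
    double chisq;

    for (size_t i = 0; i < n; ++i) {          // toy data: y = 2 + 3*i
        gsl_matrix_set(X, i, 0, 1.0);
        gsl_matrix_set(X, i, 1, (double)i);
        gsl_vector_set(y, i, 2.0 + 3.0 * i);
    }

    gsl_multifit_linear_workspace* w = gsl_multifit_linear_alloc(n, p);
    gsl_multifit_linear(X, y, c, cov, &chisq, w);   // solves the least-squares problem
    std::printf("c0 = %g, c1 = %g\n", gsl_vector_get(c, 0), gsl_vector_get(c, 1));

    gsl_multifit_linear_free(w);
    gsl_matrix_free(X); gsl_matrix_free(cov);
    gsl_vector_free(y); gsl_vector_free(c);
}
```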
If you have access to MATLAB, I would recommend using its C libraries.
If you really need to specialize, you can approximate matrix inversion (to arbitrary precision) using the Skilling method. It uses only O(N^2) operations (rather than the O(N^3) of usual matrix inversion, LU decomposition, etc.).

It's described in Gibbs's thesis, linked here (around page 27):
http://www.inference.phy.cam.ac.uk/mng10/GP/thesis.ps.gz
I am currently working on a C++-based library for large, sparse linear algebra problems (yes, I know many such libraries exist, but I'm rolling my own mostly to learn about iterative solvers, sparse storage containers, etc.).

I am at the point where I am using my solvers within other programming projects of mine, and I would like to test them against problems that are not my own. Primarily, I am looking to test against symmetric sparse systems that are positive definite. I have found several sources of such system matrices, such as:
Matrix Market
UF Sparse Matrix Collection
That being said, I have not yet found any sources of good test matrices that include the entire system: the system matrix and the RHS. This would be great to have in order to check results. Any tips on where I can find such full systems, or alternatively, what I might do to generate a "good" RHS for the system matrices I can get online? I am currently just filling the RHS with random values, or all ones, but I suspect this is not necessarily the best approach.
I would suggest using a right-hand-side vector obtained from a predefined 'goal' solution x:
b = A*x
Then you have the goal solution x and the solution returned by the solver.

This means you can compare the error (the difference between the goal and the computed solution) as well as the residual (A*x - b).
Note that for careful evaluation of an iterative solver you'll also need to consider what to use for the initial x.
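To make this concrete, here is a minimal Eigen sketch of the idea; the 1D Poisson (tridiagonal SPD) matrix and the conjugate gradient solver are stand-ins for your own test matrices and solvers:

```cpp
// Sketch: manufacture a RHS from a chosen 'goal' solution so that both the
// error and the residual of an iterative solver can be measured.
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main() {
    const int n = 1000;
    std::vector<Eigen::Triplet<double>> trip;
    for (int i = 0; i < n; ++i) {                  // toy SPD matrix: 1D Poisson stencil
        trip.emplace_back(i, i, 2.0);
        if (i > 0)     trip.emplace_back(i, i - 1, -1.0);
        if (i + 1 < n) trip.emplace_back(i, i + 1, -1.0);
    }
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(trip.begin(), trip.end());

    Eigen::VectorXd x_goal = Eigen::VectorXd::Random(n);  // predefined goal solution
    Eigen::VectorXd b = A * x_goal;                        // manufactured RHS

    Eigen::ConjugateGradient<Eigen::SparseMatrix<double>> cg;
    cg.compute(A);
    Eigen::VectorXd x = cg.solve(b);                       // starts from the zero vector

    std::cout << "error    : " << (x - x_goal).norm() << "\n"
              << "residual : " << (A * x - b).norm() << "\n";
}
```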
The online collections of matrices primarily contain the left-hand-side matrix, but some do include right-hand sides, and some also have solution vectors:
http://www.cise.ufl.edu/research/sparse/matrices/rhs.txt
By the way, for the UF sparse matrix collection I'd suggest this link instead:
http://www.cise.ufl.edu/research/sparse/matrices/
I haven't used it yet (I'm about to), but GiNaC seems like the best thing I've found for C++ symbolic (CAS-style) computation. I don't know what its performance is like.
http://www.ginac.de/
It would help to specify which kind of problems you are solving; different problems will require different RHS vectors to be of any use for checking validity.

What I'd suggest is to get some example code from projects like DUNE Numerics (which I'm working with right now), FEniCS, or deal.II, which already use such solvers to solve matrices. They generally have some functionality to write the matrix out to a file (DUNE Numerics, for instance, can output matrices and RHS vectors in MATLAB-compatible files). You can then feed these to your solvers, and afterwards use the libraries' functionality again to create output data (DUNE Numerics uses the VTK format, for example). That way, you get to analyse the data using powerful tools.

You may have to learn a little bit about compiling and using those libraries, but it is not much, and I believe the functionality you'll get is worth the time invested.

I guess even a single well-defined and reasonably complex problem should be good enough for testing your libraries; well, actually two: one for Ax = b problems and another for Ax = λBx (generalized eigenvalue problems). A sketch of one such problem follows.
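As a concrete suggestion of my own (not taken from those projects), the 2D Poisson / 5-point-stencil matrix is the classic well-defined SPD test problem; here is a sketch of assembling it in Eigen-style C++ (the grid size is a placeholder, and B = I would serve for the eigenvalue variant):

```cpp
// Sketch: assemble the 2D Poisson matrix on a k x k grid (5-point stencil).
// The result is a sparse, symmetric positive definite matrix of size k^2 x k^2.
#include <Eigen/Sparse>
#include <vector>

Eigen::SparseMatrix<double> poisson2D(int k) {
    const int n = k * k;
    std::vector<Eigen::Triplet<double>> trip;
    auto idx = [k](int i, int j) { return i * k + j; };   // grid point -> matrix index
    for (int i = 0; i < k; ++i) {
        for (int j = 0; j < k; ++j) {
            trip.emplace_back(idx(i, j), idx(i, j), 4.0);              // centre of the stencil
            if (i > 0)     trip.emplace_back(idx(i, j), idx(i - 1, j), -1.0);
            if (i + 1 < k) trip.emplace_back(idx(i, j), idx(i + 1, j), -1.0);
            if (j > 0)     trip.emplace_back(idx(i, j), idx(i, j - 1), -1.0);
            if (j + 1 < k) trip.emplace_back(idx(i, j), idx(i, j + 1), -1.0);
        }
    }
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(trip.begin(), trip.end());
    return A;
}
```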