We have been using GSL to solve polynomials, but we now need arbitrary precision. I looked into GMP and the Boost Multiprecision library; however, I couldn't find a routine in either for solving polynomials with floating-point coefficients.
Is there a free, open-source library for solving polynomials with arbitrary precision, or at least very high precision (more than 200 digits after the decimal point)?
Is it possible to use the GSL polynomial solver routines with the data type changed to GMP arbitrary-precision numbers?
Alternatively, would it be easier to write a polynomial solver from scratch, using one of the standard algorithms, with GMP arbitrary-precision data types?
Please feel free to comment if anything is unclear.
If you know an algorithm for solving polynomial equations (and you'll find these in many textbooks), you can adapt it and code it to use GMP.
Since GMP has a C++ class interface with the usual overloaded operators (+, -, *, ...), you could copy and paste some existing C code and then adapt it to GMP with little effort.
MPSolve provides a library to solve polynomials using multi-precision. Internally it uses GMP.
The following can be observed:
The computations can be done in integer, rational, and floating-point arbitrary precision.
The coefficients of the polynomial and various other options are given as input through a file. One can modify the original code to call the solver function directly from one's own program.
The solutions can be reported in various formats, such as exponential, only real, etc.
The solver has been verified for several standard polynomial test cases and checks out.
The solver internally uses random numbers seeded from /dev/random on Linux. This makes the solver slow on subsequent runs, because not enough entropy accumulates between runs. This can be bypassed by replacing the seeding with a standard pseudo-random generator.
An attempt was made to integrate the solver as a library. However, serious segmentation faults occurred that were difficult to debug, so the solver was used by calling its executable instead. Note: this is just my experience, and hopefully it can be done in a better way.
A new C++ version is being developed and will hopefully resolve these issues.
Forking the GSL polynomial solver to use a GMP data type is tedious. It is easier, and leaves the code more under one's own control and understanding, to write the solver oneself (see question 3 above).
As suggested in the answer, the GMP and MPFR multiprecision libraries can be used, and a polynomial solver can be written using standard techniques such as the Jenkins-Traub algorithm or QR-based methods.
Boost C++ Libraries provide wrappers for GMP and MPFR, which can be very handy to use.
The Arb library has routines for solving real and complex polynomials using arbitrary precision and interval arithmetic.
Related
I'm doing research involving linear differential equations with complex coefficients in a 4-dimensional phase space. To check some hypotheses about the roots of the solutions, I need to solve these equations numerically with arbitrary precision. I used to use the mpmath Python module, but it is slow, so I would prefer to rewrite my program in C/C++ to achieve maximum performance. So I have a question:
Is there any C/C++ linear algebra library that supports both arbitrary-precision arithmetic and complex numbers? I need some basic functionality like dot products and so on. (Actually, I also need the matrix exponential, but I can implement that myself if necessary.)
I tried to use Eigen with MPFR C++, but failed because it doesn't support complex numbers (a construction like std::complex<mpreal> doesn't work, since std::complex assumes the base type is a standard float).
Try using an arbitrary-precision number library (e.g. GMP, http://gmplib.org/) with a linear algebra library that supports complex numbers (e.g. Eigen, http://eigen.tuxfamily.org/).
Finally, it seems that zkcm does what I want. I'm not sure how good it is from a performance viewpoint (I didn't run any benchmarks), but at least it works and provides all the necessary features.
You could look into uBLAS from boost.
I'm in the middle of a code translation from Matlab to C++, and for some important reasons I must obtain the cumulative distribution function of a normal distribution (in Matlab, 'norm') with mean 0 and variance 1.
The implementation in Matlab is something like this:
map.c = cdf( 'norm', map.c, 0,1 );
This is supposed to perform histogram equalization of map.c.
The problem comes when translating it into C++, due to the loss of decimal precision. I tried several typical CDF implementations, such as the C++ code I found here:
Cumulative Normal Distribution Function in C/C++, but I got a significant loss of decimals, so I tried the Boost implementation:
#include "boost/math/distributions.hpp"
boost::math::normal_distribution<> d(0,1);
but it is still not the same implementation as Matlab's (I suspect Boost's is even more precise!).
Does anyone know where I could find the original Matlab source for such a process, or which is the correct amount of decimals I should consider?
Thanks in advance!
The Gaussian CDF is an interesting function. I do not know whether my answer will interest you, but it is likely to interest others who look up your question later, so here it is.
One can compute the CDF by integrating the Taylor series of the PDF term by term. This approach works well in the body of the Gaussian bell curve, but it fails numerically in the tails, where special-function techniques are needed. The best source I have read for this is N. N. Lebedev's Special Functions and Their Applications, Ch. 2, Dover, 1972.
Octave is an open-source Matlab clone. Here's the source code for Octave's implementation of normcdf: http://octave-nan.sourcearchive.com/documentation/1.0.6/normcdf_8m-source.html
It should be (almost) the same as Matlab's, if it helps you.
C and C++ support long double for a more precise floating point type. You could try using that in your implementation. You can check your compiler documentation to see if it provides an even higher precision floating point type. GCC 4.3 and higher provide __float128 which has even more precision.
I wrote a code that solves large systems of PDEs using discretization methods, which basically involves solving a large, sparse system Ax=b many times at each time step.
I currently use the PARDISO solver (from the Intel MKL library), which performs a direct LU factorization of A to solve the system. I would like to compare this method with iterative solvers, which, with preconditioning, might perform better, since I could reuse the same preconditioner over many time steps if my Jacobian matrix does not vary too much.
My question is then: what library do you suggest for sparse iterative solvers in Fortran? I found one (SLATEC) written in 1993, so I wonder whether there is something more performant written more recently.
Thanks :)
I would also add:
LIS
AGMG
Oh, well ... the complete list of Linear Algebra Solvers
Thanks for the comments. PETSc seems to be exactly what I was looking for; I just need to learn how to link the C calls into Fortran now :)
I wrote a research project in Matlab that uses quite a few functions which I do not want to re-implement in C++, so I'm looking for libraries to handle these for me. The functions I need are (in order of importance):
Hilbert transform
Matrix functions (determinant, inverse, multiplication, ...)
Finding roots of polynomials (for degrees greater than 5)
FFT
Convolutions
Correlation (xcorr in Matlab)
I don't know about most of those, but FFTW is the 'fastest Fourier transform in the West'. It is used in the MATLAB implementation of fft().
Once you've got an FFT, you can knock off everything except numbers 2 and 3.
The linear algebra requirement can be met with PETSc (www.mcs.anl.gov/petsc/), which supports FFTW.
I don't know how you're going to go about the root finding. You'll probably have to code that yourself (bisection, Newton's method, etc.), but it's by far the easiest thing on the list to implement.
I am not sure about the libraries that are available for use, but if you already have the functions written in matlab there is another option.
If you compile the Matlab functions into a DLL, they can be called from a C++ program. This would let you use the Matlab functions you already have without rewriting them.
I am modelling a physical system with heat conduction, and to do the numerical calculations I need to solve a system of linear equations with a tridiagonal matrix. I am using this algorithm to get results: http://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm But I am afraid my approach is naive and not optimal. Which C++ library should be used to solve this system in the fastest way? I should also mention that the matrix does not change often (only the right-hand side of the equation changes). Thanks!
Check out Eigen.
The performance of this algorithm is likely dominated by floating-point division. Use SSE2 to perform two divisions (of ci and di) at once, and you will get close to optimal performance.
It is worth looking at the LAPACK and BLAS interfaces, for which there are several implementations: originally the open-source Netlib versions, and later others such as MKL that you have to pay for. The function dgtsv does exactly what you are looking for. The open-source Netlib versions don't use any explicit SIMD instructions, but MKL does and will perform best on Intel chips.