Does anyone know whether the NTL library supports polynomials with real-number coefficients, such as the NTL classes RR or xdouble, or plain C++ floating-point types?
I want to do polynomial multiplication for polynomials with real coefficients, and I would like a class "RR_X" (or something similar), analogous to the ZZX and ZZ_pX classes, that supports FFT polynomial multiplication.
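For illustration, here is the kind of operation I mean, hand-rolled over plain doubles with std::complex (a minimal sketch, not NTL code; the fft and multiply names are mine):

#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;

// Recursive radix-2 Cooley-Tukey FFT; a.size() must be a power of two.
void fft(std::vector<cd>& a, bool invert) {
    const std::size_t n = a.size();
    if (n == 1) return;
    std::vector<cd> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i] = a[2 * i + 1];
    }
    fft(even, invert);
    fft(odd, invert);
    const double ang = 2.0 * std::acos(-1.0) / n * (invert ? -1.0 : 1.0);
    cd w(1.0), wn(std::cos(ang), std::sin(ang));
    for (std::size_t i = 0; i < n / 2; ++i) {
        a[i] = even[i] + w * odd[i];
        a[i + n / 2] = even[i] - w * odd[i];
        if (invert) { a[i] /= 2.0; a[i + n / 2] /= 2.0; }
        w *= wn;
    }
}

// Multiply two non-empty polynomials with double coefficients
// (lowest-degree coefficient first).
std::vector<double> multiply(const std::vector<double>& p, const std::vector<double>& q) {
    std::vector<cd> fa(p.begin(), p.end()), fb(q.begin(), q.end());
    std::size_t n = 1;
    while (n < p.size() + q.size()) n <<= 1;
    fa.resize(n);
    fb.resize(n);
    fft(fa, false);
    fft(fb, false);
    for (std::size_t i = 0; i < n; ++i) fa[i] *= fb[i];
    fft(fa, true);
    std::vector<double> r(p.size() + q.size() - 1);
    for (std::size_t i = 0; i < r.size(); ++i) r[i] = fa[i].real();
    return r;
}

Going through double loses precision for large or ill-conditioned coefficients, which is exactly why an RR-based class with extended precision would be welcome.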
Related
I want to perform some calculations with polynomials in SymPy, such as multiplication. The coefficients of these polynomials are noncommutative; they may be matrices, for example. How can I set an appropriate domain in the `sympy.Poly` class for this problem?
In Java there is the BigInteger class for big-integer addition, subtraction, multiplication, and division, but I want to do this in C++ without any built-in class. Can anyone give me an idea?
For big-integer arithmetic in C++, there is no standard-library support, but you can use GMP.
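If you really want to roll your own, schoolbook arithmetic on digit vectors is the usual starting point. Here is a minimal sketch of addition (the toDigits and add names are mine; subtraction, multiplication, and division follow the same digit-by-digit pattern):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Digits stored least-significant first, e.g. "123" -> {3, 2, 1}.
std::vector<int> toDigits(const std::string& s) {
    std::vector<int> d(s.rbegin(), s.rend());
    for (int& c : d) c -= '0';
    return d;
}

// Schoolbook addition of two non-negative big integers.
std::vector<int> add(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> sum;
    int carry = 0;
    for (std::size_t i = 0; i < std::max(a.size(), b.size()) || carry; ++i) {
        int digit = carry;
        if (i < a.size()) digit += a[i];
        if (i < b.size()) digit += b[i];
        sum.push_back(digit % 10);
        carry = digit / 10;
    }
    return sum;
}

int main() {
    std::vector<int> r = add(toDigits("99999999999999999999"), toDigits("1"));
    for (auto it = r.rbegin(); it != r.rend(); ++it) std::cout << *it;
    std::cout << '\n'; // prints 100000000000000000000
}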
I multiply two matrices using both Eigen and plain C++, but the results are not the same, though very similar. For example, matrices a and b are plain C++, and matrices c and d are Eigen. The sums of the entries of a*b and c*d agree, but the individual coefficients do not. For later derivative calculations, the differing entries will produce different results. I do not know the reason; I have checked all the details but cannot find a problem.
I'm using the dgesv and dgemm Fortran subroutines from C++ to do some simple matrix multiplication and left division.
For random matrices A and B, I do:
A\(A\(A*B));
where * is defined using dgemm and \ using dgesv. Obviously, this expression should simplify to the identity matrix. I'm testing my answers against MATLAB, and I'm getting more or less 1's on the diagonal, but the other entries are very slightly off (on the order of 1e-15, so they're already close to 0).
I'm just wondering if this result is to be expected or not? Because if I do something like this:
C = A+B;
D = A*B;
D\(C\(C*C));
the result should come out to D\C. Basically, C\(C*C) is very accurate (it matches MATLAB perfectly), but the moment I do D\C I get something that's off by 1e-1 or even 1e+00. I'm guessing that's not supposed to happen?
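For reference, my * and \ wrap the Fortran symbols roughly as below (a sketch of my setup, assuming column-major storage and the common trailing-underscore convention with no hidden string-length arguments; the wrapper names are mine):

#include <vector>

// Fortran BLAS/LAPACK entry points.
extern "C" {
void dgemm_(const char* transa, const char* transb,
            const int* m, const int* n, const int* k,
            const double* alpha, const double* a, const int* lda,
            const double* b, const int* ldb,
            const double* beta, double* c, const int* ldc);
void dgesv_(const int* n, const int* nrhs, double* a, const int* lda,
            int* ipiv, double* b, const int* ldb, int* info);
}

// C = A * B for n-by-n column-major matrices; C must hold n*n entries.
void mul(const std::vector<double>& A, const std::vector<double>& B,
         std::vector<double>& C, int n) {
    const double one = 1.0, zero = 0.0;
    dgemm_("N", "N", &n, &n, &n, &one, A.data(), &n, B.data(), &n,
           &zero, C.data(), &n);
}

// X = A \ B; dgesv factorizes in place, so A and B are taken by copy.
void solve(std::vector<double> A, std::vector<double> B,
           std::vector<double>& X, int n) {
    std::vector<int> ipiv(n);
    int info = 0;
    dgesv_(&n, &n, A.data(), &n, ipiv.data(), B.data(), &n, &info);
    X = B; // valid only if info == 0
}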
Your problem seems to be related to the finite precision of floating-point variables in C/C++. You can read more about it here. There are techniques for minimizing the effect (some are described in the Wikipedia article), but there will always be some loss of accuracy after a few operations. You might want to use a third-party mathematical library that supports arbitrary-precision numbers (e.g. GMP). Still, as long as you stick to a numerical approach, the accuracy of your calculations will always be limited.
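Here is a two-line demonstration of the effect: floating-point addition is not associative, so the same sum evaluated in a different order gives different results:

#include <cstdio>

int main() {
    double a = 1e16, b = -1e16, c = 1.0;
    // Mathematically both lines are 1.0, but the rounding differs.
    std::printf("%.17g\n", (a + b) + c); // prints 1
    std::printf("%.17g\n", a + (b + c)); // prints 0: b + c rounds back to -1e16
}

dgemm, dgesv, Eigen, and MATLAB all order and block their operations differently, which is why results from different libraries agree only up to roundoff.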
I need to implement a function that performs a Cholesky factorization of a positive semidefinite matrix in C++, and I am wondering whether there is a library, or anything out there, that is already optimized. It should work as, or similarly to, what is described in this paper:
http://www.sciencedirect.com/science/article/pii/S0096300310012713
This is an example for positive definite matrices, but it doesn't work for positive semidefinite ones: http://en.wikipedia.org/wiki/Cholesky_decomposition#The_Cholesky.E2.80.93Banachiewicz_and_Cholesky.E2.80.93Crout_algorithms
The program must be in C++ with no C/Fortran libraries (think pointy-haired boss giving instructions), which means ATLAS, LAPACK, etc. are out. I have looked through MTL and Boost, but their routines only work for positive definite matrices. Are there any libraries I haven't found, or even single functions that have been written?
The problem with the Cholesky decomposition of semidefinite matrices is that (1) it is not unique, and (2) Crout's algorithm fails.
The existence of a decomposition is usually proven non-constructively, via a limiting argument: if M_n -> M and M_n = U_n^T U_n, then ||U_n|| = ||M_n||^(1/2), where ||.|| is the Hilbert-Schmidt norm, so (U_n) is a bounded sequence; extract a convergent subsequence to get a limit U satisfying U^T U = M, with U triangular.
I have found that, in the cases I was interested in, it was satisfactory to multiply the diagonal elements by 1 + epsilon, with epsilon small (a few thousand times the machine epsilon), to get a perfectly acceptable decomposition.
Indeed, if M is positive semidefinite, then for every epsilon > 0, M + epsilon*I is positive definite.
Since the scheme converges as epsilon goes to zero, you can also compute the decomposition for several values of epsilon and perform a Richardson extrapolation.
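To make this concrete, here is a sketch of that regularization on top of a plain Cholesky-Banachiewicz loop (pure C++ with no external libraries, which fits the constraint above; the row-major dense layout and the default epsilon are my choices):

#include <cmath>
#include <limits>
#include <vector>

// In-place lower-triangular Cholesky factorization of the n-by-n
// row-major matrix m, after scaling its diagonal by (1 + eps) to push
// a semidefinite matrix into the positive definite region.
// Returns false if a pivot still comes out non-positive.
bool cholesky_regularized(std::vector<double>& m, int n,
                          double eps = 4096.0 * std::numeric_limits<double>::epsilon()) {
    for (int i = 0; i < n; ++i)
        m[i * n + i] *= 1.0 + eps;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= i; ++j) {
            double s = m[i * n + j];
            for (int k = 0; k < j; ++k) s -= m[i * n + k] * m[j * n + k];
            if (i == j) {
                if (s <= 0.0) return false;
                m[i * n + i] = std::sqrt(s);
            } else {
                m[i * n + j] = s / m[j * n + j];
            }
        }
        for (int j = i + 1; j < n; ++j) m[i * n + j] = 0.0; // clear upper part
    }
    return true;
}

Running it for, say, eps, eps/2, and eps/4 gives the data points for the Richardson extrapolation mentioned above.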
As for the positive definite case, you could implement Crout's algorithm yourself (there is sample code in Numerical Recipes), but I would strongly recommend against that and advise using LAPACK instead.
This may involve having your boss pay for Intel MKL if he is concerned about potentially poor implementations of LAPACK. Most of the time I have heard that speech, the rationale was "but we can't control the code; we want to write it ourselves so that we can debug it in case of a problem." That is a dumb argument. LAPACK is 40 years old and thoroughly tested.
Requiring that you not use LAPACK is as silly as requiring that you not use the standard library for sine, cosine, and logarithms.