Find exact solutions to a Linear Program

I need to find an exact real solution to a linear program (where all inputs are integers). It is important that the solver also outputs the solutions as rational numbers, ideally without doing any intermediate steps with floating point numbers.
GLPK can do exact arithmetic, but cannot display the solutions as rational numbers (e.g. I get 0.3333 instead of 1/3). I could probably try to guess which rational number is meant, but that seems very fragile.
I was unable to find an LP solver that can do this kind of thing. Is there one? Performance is not a huge issue; my problems are very small. (I did look into using an SMT solver like Z3; these can solve such problems and provide exact rational solutions, but they resort to quantifier elimination instead of an algorithm better suited to linear programs, such as the Simplex method.)

SoPlex can use rational arithmetic to solve LPs exactly. Use it like this:
soplex -X -Y -o0 -f0 problem.lp
The options -X and -Y print the primal and dual solutions as rational numbers, while -o0 and -f0 set the optimality and feasibility tolerances to zero, so the LP is solved exactly.
You need GMP installed (or MPIR on Windows) to use the rational arithmetic features. One advantage over QSopt_exact is that SoPlex uses a hybrid technique (iterative refinement) that combines the speed of double-precision computation with the exactness of rational arithmetic.
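For a concrete (made-up) test case, here is a tiny problem in the CPLEX LP format that SoPlex reads. Its exact optimum is x = 1/3, which a floating-point solver would print as 0.333333:

    \ problem.lp: maximize x subject to 3x <= 1 (x >= 0 is the default bound)
    Maximize
     obj: x
    Subject To
     c1: 3 x <= 1
    End

With the flags above, the primal value should be reported as the exact rational 1/3 rather than a rounded decimal (the exact output formatting depends on the SoPlex version).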

Related

Python 2.7: Can I set the step length for the forward-difference approximation of the Jacobian in ODE solvers?

I have a system of coupled ordinary differential equations. The independent variable is time t.
In order to evaluate some of the derivative functions, I need to employ root finding, solving other ODEs and more. It is only feasible (from a computational cost perspective) to evaluate these functions with a certain precision. Thus, during each time step, I am left with some numerical noise.
I try to solve this system of ODEs with scipy.integrate.odeint or scipy.integrate.solve_ivp (using the 'LSODA' method since the problem is stiff and the other solvers for stiff problems fail). However, it seems as if the solver has trouble calculating the Jacobian (which I let the solver approximate by finite differences) due to the numerical noise. I had a similar problem when using scipy.optimize.fsolve to calculate roots and I fixed this by setting epsfcn (see below) to a higher value.
I wonder whether I can do something similar for an ODE solver in python. It seems that odeint and solve_ivp do not have such an optional parameter but maybe there is a way around this?
Any help is appreciated!
(From the scipy documentation of fsolve:
epsfcn : float, optional
A suitable step length for the forward-difference approximation of the Jacobian (for fprime=None). If epsfcn is less than the machine precision, it is assumed that the relative errors in the functions are of the order of the machine precision.)

Polynomial solving with arbitrary precision

We have been using GSL to solve polynomials, but we now wish to solve them with arbitrary precision. I looked into GMP and the Boost Multiprecision library; however, I couldn't find any routine for solving polynomials with floating-point coefficients.
1. Does there exist any library, free and open-source, for solving polynomials with arbitrary precision or at least very high precision (more than 200 digits after the decimal point)?
2. Is it possible to use the GSL polynomial solver routines with the data type changed to a GMP arbitrary-precision type?
3. Or would it be easier to write a polynomial solver, using one of the standard algorithms, with GMP arbitrary-precision data types?
Please feel free to comment if it is not clear.
If you know an algorithm for solving a polynomial equation (and you'll find these in many textbooks), you can adapt and code it to use GMP.
Since GMP has a C++ class interface with the usual-looking operators (+, -, *, ...), you could copy and paste some existing C code and then adapt it to GMP.
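As a rough illustration of that adaptation (not production code, and assuming the gmpxx C++ interface is installed), here is the plain textbook Newton iteration for p(x) = x^3 - 2 carried out with 256-bit GMP floats; a more serious algorithm such as Jenkins-Traub would be ported the same way:

    // Newton's method on p(x) = x^3 - 2 using GMP's C++ interface.
    // Build with: g++ newton_gmp.cpp -lgmpxx -lgmp
    #include <gmpxx.h>
    #include <iostream>

    int main() {
        const int prec = 256;                    // working precision in bits
        mpf_class x(1.0, prec);                  // initial guess
        for (int i = 0; i < 200; ++i) {
            mpf_class p  = x * x * x - 2;        // p(x)
            mpf_class dp = 3 * x * x;            // p'(x)
            mpf_class step = p / dp;
            x -= step;
            if (abs(step) < mpf_class(1e-70, prec))  // crude stopping criterion
                break;
        }
        std::cout.precision(70);
        std::cout << x << std::endl;             // cube root of 2 to ~70 digits
    }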
MPSolve provides a library to solve polynomials using multi-precision. Internally it uses GMP.
The following can be observed:
The computations can be done in integer, rational, and floating-point arbitrary precision.
The coefficients of the polynomial and various other options are given as input through a file. One can modify the original code to call the solver functions directly from one's own program.
The solutions can be reported in various formats, such as exponential, only real, etc.
The solver has been verified for several standard polynomial test cases and checks out.
The solver internally uses random numbers seeded from /dev/random on Linux. This makes subsequent runs slow, as not enough entropy has been generated before the start of future runs. This can be bypassed by replacing it with a standard pseudo-random generator.
An attempt was made to integrate the solver as a library. However, serious segmentation faults occurred which were difficult to debug, so the solver was used by calling its executable instead. Note: this is just from my experience, and it can hopefully be done in a better way.
A new C++ version is being developed and will hopefully resolve these issues.
It is tedious to fork the GSL polynomial solver to use GMP data types. It is easier, and the code stays more under one's own control and understanding, to write the solver oneself (see question 3).
As suggested in the answer above, the GMP and MPFR multi-precision libraries can be used, and a polynomial solver can be written using standard root-finding techniques such as the Jenkins-Traub algorithm or QR-based (companion matrix) techniques; see the sketch below.
The Boost C++ Libraries provide wrappers for GMP and MPFR which can be very handy to use.
The Arb library has routines for solving real and complex polynomials using arbitrary precision and interval arithmetic.
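To illustrate the QR-based route mentioned above (shown here in plain double precision; the same idea works with a multi-precision scalar type plugged into Eigen): build the companion matrix of the polynomial and take its eigenvalues, which the QR algorithm computes internally.

    // Roots of c[0] + c[1] x + ... + c[n] x^n as eigenvalues of the companion matrix.
    #include <Eigen/Dense>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<double> c = {-2.0, 0.0, 0.0, 1.0};      // x^3 - 2
        const int n = static_cast<int>(c.size()) - 1;
        Eigen::MatrixXd C = Eigen::MatrixXd::Zero(n, n);
        C.bottomLeftCorner(n - 1, n - 1).setIdentity();     // ones on the subdiagonal
        for (int i = 0; i < n; ++i)
            C(i, n - 1) = -c[i] / c[n];                     // last column: -c_i / c_n
        Eigen::EigenSolver<Eigen::MatrixXd> es(C);
        std::cout << es.eigenvalues() << std::endl;         // one real root, two complex
    }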

arbitrary precision linear algebra c/c++ library with complex numbers

I'm doing research involving linear differential equations with complex coefficients in a 4-dimensional phase space. To be able to check some hypotheses about the roots of the solutions, I need to solve these equations numerically with arbitrary precision. I used to use the mpmath Python module, but it is slow, so I would prefer to rewrite my program in C/C++ to achieve maximum performance. So I have a question:
Is there any C/C++ linear algebra library that supports both arbitrary-precision arithmetic and complex numbers? I need some basic functionality like dot products and so on. (Actually, I need the matrix exponential too, but I can implement that myself if necessary.)
I tried to use Eigen with MPFR C++, but failed because it doesn't support complex numbers (a construction like complex<mpreal> doesn't work, since std::complex assumes the base type is a standard floating-point type).
Try using an arbitrary-precision number library (e.g. GMP, http://gmplib.org/) with a linear algebra library that supports complex numbers (e.g. Eigen, http://eigen.tuxfamily.org/).
Finally, it seems that zkcm does what I want. I'm not sure if it is good from a performance viewpoint (I didn't do any benchmarks), but at least it works and provides all the necessary features.
You could also look into uBLAS from Boost.

c++ numerical analysis Accurate data structure?

Using the double type, I implemented a cubic spline interpolation algorithm.
The work seemed successful, but there was a relative error of around 6% when very small values were calculated.
Is double data type enough for accurate scientific numerical analysis?
Double has plenty of precision for most applications. Of course it is finite, but it's always possible to squander any amount of precision by using a bad algorithm. In fact, that should be your first suspect. Look hard at your code and see if you're doing something that lets rounding errors accumulate quicker than necessary, or risky things like subtracting values that are very close to each other.
Scientific numerical analysis is difficult to get right, which is why I leave it to the professionals. Have you considered using a numeric library instead of writing your own? Eigen is my current favorite here: http://eigen.tuxfamily.org/index.php?title=Main_Page
I always have close at hand the latest copy of Numerical Recipes (nr.com) which does have an excellent chapter on interpolation. NR has a restrictive license but the writers know what they are doing and provide a succinct writeup on each numerical technique. Other libraries to look at include: ATLAS and GNU Scientific Library.
To answer your question: double should be more than enough for most scientific applications. I agree with the previous posters that it sounds like an algorithm problem. Have you considered posting the code for the algorithm you are using?
Whether double is enough for your needs depends on the type of numbers you are working with. As Henning suggests, it is probably best to take a look at the algorithms you are using and make sure they are numerically stable.
For starters, here's a good algorithm for addition: Kahan summation algorithm.
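A minimal sketch of that algorithm in C++ (the sum of a million 0.1s is a classic case where the compensation term noticeably reduces the accumulated rounding error):

    #include <cstdio>
    #include <vector>

    // Kahan (compensated) summation: c carries the low-order bits lost at each step.
    // Note: do not compile with -ffast-math, or the compensation may be optimized away.
    double kahan_sum(const std::vector<double>& xs) {
        double sum = 0.0, c = 0.0;
        for (double x : xs) {
            double y = x - c;        // apply the correction from the previous step
            double t = sum + y;      // low-order bits of y may be lost here...
            c = (t - sum) - y;       // ...recover them for the next step
            sum = t;
        }
        return sum;
    }

    int main() {
        std::vector<double> xs(1000000, 0.1);
        double naive = 0.0;
        for (double x : xs) naive += x;
        std::printf("naive: %.12f  kahan: %.12f\n", naive, kahan_sum(xs));
    }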
Double precision will be suitable for most problems, but a cubic spline will not work well if the function is rapidly oscillating or repeating, or of quite high order.
In this case it can be better to use Legendre polynomials, since they handle variants of exponentials.
By way of a simple example: if you use Euler, the trapezoidal rule, or Simpson's rule on a 3rd-order polynomial, you won't need a huge sample rate to get the result (the area under the curve). However, if you apply these to an exponential function, the sample rate may need to increase greatly to avoid losing a lot of precision. Legendre polynomials can cater for this case much more readily.

How to optimize solution of nonlinear equations?

I have nonlinear equations such as:
Y = f1(X)
Y = f2(X)
...
Y = fn(X)
In general, they don't have an exact solution, so I use Newton's method to solve them. The method is iterative, and I'm looking for ways to optimize the calculations.
What are some ways to minimize calculation time? Should I avoid calculating square roots or other math functions?
Maybe I should use assembly inside the C++ code (the solution is written in C++)?
A popular approach for nonlinear least-squares problems is the Levenberg-Marquardt algorithm. It's a blend between Gauss-Newton and a gradient-descent method, combining the best of both worlds (it navigates the search space well for ill-posed problems and converges quickly). But there's lots of wiggle room in terms of the implementation: for example, if the square matrix J^T J (where J is the Jacobian matrix containing all derivatives for all equations) is sparse, you could use the iterative CG algorithm to solve the equation systems quickly instead of a direct method like a Cholesky factorization of J^T J or a QR decomposition of J.
But don't just assume that some part is slow and needs to be written in assembler. Assembler is the last thing to consider. Before you go that route you should always use a profiler to check where the bottlenecks are.
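For a concrete (made-up) illustration of the linear system that appears in each Levenberg-Marquardt iteration, here is a single damped step with Eigen; the residuals and Jacobian below are placeholders for whatever your actual equations and derivatives are:

    // One damped Gauss-Newton (LM) step: solve (J^T J + lambda I) delta = -J^T r.
    #include <Eigen/Dense>
    #include <iostream>

    int main() {
        // Hypothetical current iterate, residuals and Jacobian (2 unknowns, 3 equations).
        Eigen::Vector2d x(1.0, 1.0);
        Eigen::Vector3d r;
        Eigen::Matrix<double, 3, 2> J;
        r << x(0) * x(0) - 2.0, x(1) * x(1) - 3.0, x(0) * x(1) - 2.5;  // r_i(x)
        J << 2 * x(0), 0.0,
             0.0,      2 * x(1),
             x(1),     x(0);                                           // dr_i/dx_j

        double lambda = 1e-3;                                          // damping factor
        Eigen::Matrix2d A = J.transpose() * J
                          + lambda * Eigen::Matrix2d::Identity();
        Eigen::Vector2d delta = A.ldlt().solve(-J.transpose() * r);    // direct solve;
                                                                       // use CG if A is large and sparse
        x += delta;
        std::cout << "updated x = " << x.transpose() << std::endl;
    }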
Are you talking about a number of single parameter functions to solve one at a time or a system of multi-parameter equations to solve together?
If the former, then I've often found that finding a better initial approximation (from where the Newton-Raphson loop starts) can save more execution time than polishing the loop itself, because convergence in the loop can be slow initially but is fast later. If you know nothing about the functions, then finding a decent initial approximation is hard, but it might be worth trying a few secant iterations first (a sketch follows below). You might also want to look at Brent's method.
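A small single-variable sketch of that idea (the function here is just an example): a few derivative-free secant steps to land near the root, then Newton-Raphson to polish it.

    #include <cmath>
    #include <cstdio>

    double f(double x)  { return std::cos(x) - x; }     // example equation f(x) = 0
    double df(double x) { return -std::sin(x) - 1.0; }   // its derivative

    int main() {
        // Secant phase: no derivative needed, cheap per iteration.
        double x0 = 0.0, x1 = 1.0;
        for (int i = 0; i < 3; ++i) {
            double x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0));
            x0 = x1; x1 = x2;
        }
        // Newton phase: quadratic convergence once we are close.
        double x = x1;
        for (int i = 0; i < 20; ++i) {
            double step = f(x) / df(x);
            x -= step;
            if (std::fabs(step) < 1e-14) break;
        }
        std::printf("root ~= %.15f\n", x);   // ~0.739085133215161
    }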
Consider using the rational root test in parallel. If values of absolute precision cannot be used, then take the results closest to zero as the best fit and continue with Newton's method.
Once a single root is found, you can reduce the degree of the equation by dividing it by the monomial (x - root), as sketched below.
Polynomial division and the rational root test are implemented here: https://github.com/ohhmm/openmind/blob/sh/omnn/math/test/Sum_test.cpp#L260
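A sketch of that deflation step (not the linked implementation, just textbook synthetic division): dividing the coefficient array by (x - root) yields the lower-degree quotient whose roots are the remaining roots.

    #include <cstdio>
    #include <vector>

    // Divide p(x) = c[n] x^n + ... + c[0] by (x - root) using synthetic division.
    // Returns the quotient's coefficients (degree n-1); the remainder is p(root).
    std::vector<double> deflate(const std::vector<double>& c, double root) {
        const int n = static_cast<int>(c.size()) - 1;
        std::vector<double> q(n);                // quotient coefficients q[0..n-1]
        double carry = c[n];
        for (int i = n - 1; i >= 0; --i) {
            q[i] = carry;                        // coefficient of x^i in the quotient
            carry = c[i] + carry * root;         // Horner-style accumulation
        }
        // carry now holds the remainder p(root); ~0 if the root is (numerically) exact.
        return q;
    }

    int main() {
        std::vector<double> c = {-6.0, 11.0, -6.0, 1.0};   // (x-1)(x-2)(x-3)
        std::vector<double> q = deflate(c, 1.0);           // -> x^2 - 5x + 6
        for (double v : q) std::printf("%g ", v);          // prints: 6 -5 1
        std::printf("\n");
    }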