Solving a system of 4 second-order polynomial equations in C++

I'm trying to solve a system of 4 second-order polynomial equations in C++. What is the fastest method for solving the system, and if possible, could you link to or write a little pseudocode explaining it? I'm aware of approaches based on Gröbner bases or QR decomposition, but I can't find a clear description of how they work or how to implement them. Some possibly helpful information about the polynomials:
A solution may or may not exist, but I am only interested in solutions in a certain range (e.g. x, y, z, t in [0, 1]).
The polynomials are of the form a + b*x + c*y + d*x*y = e + f*z + g*t + h*z*t (solving for x, y, z, t). All coefficients are distinct.
The polynomial equations come from bilinear interpolations.
I've tried finding an exact analytic solution, but as others have posted, solving large systems of polynomials analytically in Mathematica and similar tools is time-consuming.

I would simply use the general-purpose solver IPOPT, which is written in C++. You can feed it the [0, 1] bound constraints; they actually help IPOPT and make the solution procedure faster.
Does the sparsity pattern of the system change between solves? If not, you can probably save an initialization step, though I am not 100% sure. Either way, IPOPT is blazingly fast compared to the analytic solution in Mathematica.

You can take a look at the Numerical Recipes book (Chapter 9 in the C version), which describes solutions of nonlinear systems of equations. An online version is viewable on their website: http://www.nr.com/.
As their licensing is very restrictive, you can probably study the method and then adapt it using a library such as GSL. I have not tried it, but this page http://na-inet.jp/na/gslsample/nonlinear_system.html gives an example of how to do that with GSL.
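Since both of these answers point at general nonlinear solvers, it may help to see how little machinery the basic approach needs for this particular system. Below is a minimal sketch of a plain Newton iteration for the bilinear form in the question, written against no library at all. All names (`System`, `newton`, etc.) and the 4x4 elimination are my own illustration, not IPOPT's or GSL's API, and there is no damping, bounds handling, or convergence check, so treat it only as a starting point:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <utility>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Coefficients of the 4 bilinear equations
// a[i] + b[i]*x + c[i]*y + d[i]*x*y = e[i] + f[i]*z + g[i]*t + h[i]*z*t
struct System {
    Vec4 a, b, c, d, e, f, g, h;
};

// Residual F(v) with v = (x, y, z, t); roots of F solve the system.
Vec4 residual(const System& s, const Vec4& v) {
    Vec4 r;
    double x = v[0], y = v[1], z = v[2], t = v[3];
    for (int i = 0; i < 4; ++i)
        r[i] = s.a[i] + s.b[i]*x + s.c[i]*y + s.d[i]*x*y
             - (s.e[i] + s.f[i]*z + s.g[i]*t + s.h[i]*z*t);
    return r;
}

// Jacobian of F: row i holds partial derivatives w.r.t. x, y, z, t.
Mat4 jacobian(const System& s, const Vec4& v) {
    Mat4 J;
    double x = v[0], y = v[1], z = v[2], t = v[3];
    for (int i = 0; i < 4; ++i) {
        J[i][0] = s.b[i] + s.d[i]*y;
        J[i][1] = s.c[i] + s.d[i]*x;
        J[i][2] = -(s.f[i] + s.h[i]*t);
        J[i][3] = -(s.g[i] + s.h[i]*z);
    }
    return J;
}

// Solve J * dv = r by Gaussian elimination with partial pivoting.
Vec4 solve4(Mat4 J, Vec4 r) {
    for (int k = 0; k < 4; ++k) {
        int p = k;
        for (int i = k + 1; i < 4; ++i)
            if (std::fabs(J[i][k]) > std::fabs(J[p][k])) p = i;
        std::swap(J[k], J[p]);
        std::swap(r[k], r[p]);
        for (int i = k + 1; i < 4; ++i) {
            double m = J[i][k] / J[k][k];
            for (int j = k; j < 4; ++j) J[i][j] -= m * J[k][j];
            r[i] -= m * r[k];
        }
    }
    Vec4 x{};
    for (int k = 3; k >= 0; --k) {
        double s = r[k];
        for (int j = k + 1; j < 4; ++j) s -= J[k][j] * x[j];
        x[k] = s / J[k][k];
    }
    return x;
}

// Newton iteration: v <- v - J(v)^{-1} F(v)
Vec4 newton(const System& s, Vec4 v, int iters = 50) {
    for (int it = 0; it < iters; ++it) {
        Vec4 dv = solve4(jacobian(s, v), residual(s, v));
        for (int i = 0; i < 4; ++i) v[i] -= dv[i];
    }
    return v;
}
```

Newton converges only locally, so for the [0, 1] range of interest you would typically restart from several initial guesses inside the box and discard converged roots that land outside it; IPOPT handles the bounds for you, which is one reason to prefer it.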

Related

Is there a BLAS/LAPACK function for calculating Cholesky factor updates?

Let A be a positive definite matrix, and let A = L*L' be its Cholesky factorization, where L is lower triangular.
Let A2 = A + alpha*x*x' be a rank-1 update of A, where x is a vector of appropriate dimension and alpha is a scalar.
The Cholesky factor update is a procedure for obtaining the factorization A2 = L2*L2' without computing A2 first, which is useful to speed up computations in the case of such low-rank matrix updates.
I am using the BLAS/LAPACK libraries for elementary linear algebra. I can compute the Cholesky factorization of a positive definite matrix with the routine spptrf. However, I have looked around and have not been able to find a BLAS/LAPACK function that performs Cholesky factor updates. Could it be that no such function exists?
Additionally: in this old post, the addition of such a routine was discussed. However, that post is from 2013, and I have not been able to find anything more recent.
There is no such function. You can look at this discussion we had on SciPy's issue tracker; I put together a Python script that performs the update, together with the relevant paper. You can make use of that information.
https://github.com/scipy/scipy/issues/8188
If you feel ambitious and actually write the Fortran code for this, I would really appreciate it if you submitted it to the LAPACK repo as a PR: https://github.com/Reference-LAPACK/lapack
The reference BLAS is on Netlib, as you pointed out, but I doubt this routine is on the site. If you are simply looking for code, there is code here.
If you want it fast, I'd just port that code to Julia. There is a book I've never checked out which may have these routines in it. Also, note that you cited a paper whose author wrote code for it; you could simply have contacted the author. His website appears to be here, though there is a problem with that link.
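For reference, the update itself is short. The sketch below is my own C++ rendering of the classic Givens-style rank-1 update sweep (the algorithm behind LINPACK's dchud and MATLAB's cholupdate), not a BLAS/LAPACK routine. It handles only the alpha > 0 update case; the downdate (alpha < 0) needs extra care to keep the matrix positive definite and is omitted here:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// In-place rank-1 Cholesky update: given lower-triangular L with A = L*L',
// overwrite L so that L*L' = A + alpha*x*x' (alpha > 0; the local copy of x
// is used as workspace). Cost is O(n^2) instead of the O(n^3) of a full
// refactorization. Sketch only: no error handling, no downdate support.
void chol_update(std::vector<std::vector<double>>& L,
                 std::vector<double> x, double alpha) {
    const std::size_t n = L.size();
    const double s0 = std::sqrt(alpha);
    for (auto& xi : x) xi *= s0;              // fold alpha into x
    for (std::size_t k = 0; k < n; ++k) {
        double r = std::hypot(L[k][k], x[k]); // new diagonal entry
        double c = r / L[k][k];               // Givens cosine
        double s = x[k] / L[k][k];            // Givens sine
        L[k][k] = r;
        for (std::size_t i = k + 1; i < n; ++i) {
            L[i][k] = (L[i][k] + s * x[i]) / c;
            x[i]    = c * x[i] - s * L[i][k]; // uses the updated L[i][k]
        }
    }
}
```

The inner loop rotates column k of L against the remaining part of x, so after the k-th sweep the first k+1 columns already form the updated factor.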

How to improve z3 linear programming performance

I am trying to use z3 to solve a linear programming problem and am observing very poor performance relative to GLPK. There are reasons why I would prefer z3, so I'm wondering if there is something I'm missing.
The problem is essentially bin packing. I have on the order of 500 weighted items and 5 bins. Every item must be placed in a bin. The objective is to minimize the total weight in the largest bin.
The problem was solved within 1-2 minutes by GLPK. I have yet to see z3 terminate.
The z3 optimization tutorial suggests both a Boolean encoding and an integer encoding for a similar type of problem. Is one of these preferred for performance reasons? Basically, I'm wondering if you need to follow a certain pattern for z3 to recognize the problem as linear programming.
How do I know what method it's applying to solve the problem?
Is there a way to configure logging in z3 so you can see what it's doing?
By the way, I'm using the C++ API.

C++ Libraries for solving Complex Linear systems Ax=b

I am interested in solving a sparse complex linear system Ax = b, where A is a square matrix of complex numbers and b is a vector of complex numbers.
If possible, I would like such a library to be templated (for ease of installation and use),
something in the spirit of Eigen.
I checked out Eigen, but I don't think it supports solving linear systems with complex sparse matrices (although one can create complex matrices and do elementary operations on them).
Another trick someone suggested: one can work around this by solving an extended real-valued system of twice the dimension, using the fact that (A1 + i*A2)(x1 + i*x2) = (b1 + i*b2),
but I would prefer a simple black box that gets the job done.
Any suggestions?
Transforming it into a real-valued system of twice the dimension is probably the most immediate way. You could write an adapter to encapsulate the transformation logic. You might also try this package: http://trilinos.sandia.gov/packages/docs/r4.0/packages/komplex/doc/html/
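To make the doubling trick concrete: separating real and imaginary parts of (A1 + i*A2)(x1 + i*x2) = b1 + i*b2 gives the real block system [A1, -A2; A2, A1] * [x1; x2] = [b1; b2]. The sketch below is my own dense illustration of building and solving that system; a real application with sparse matrices would assemble the same block pattern into a sparse solver rather than using the naive Gaussian elimination shown here:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using Mat = std::vector<std::vector<double>>;
using Vec = std::vector<double>;

// Plain dense Gaussian elimination with partial pivoting (illustration only).
Vec gauss_solve(Mat M, Vec b) {
    const std::size_t n = M.size();
    for (std::size_t k = 0; k < n; ++k) {
        std::size_t p = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::fabs(M[i][k]) > std::fabs(M[p][k])) p = i;
        std::swap(M[k], M[p]);
        std::swap(b[k], b[p]);
        for (std::size_t i = k + 1; i < n; ++i) {
            double m = M[i][k] / M[k][k];
            for (std::size_t j = k; j < n; ++j) M[i][j] -= m * M[k][j];
            b[i] -= m * b[k];
        }
    }
    Vec x(n);
    for (std::size_t k = n; k-- > 0;) {
        double s = b[k];
        for (std::size_t j = k + 1; j < n; ++j) s -= M[k][j] * x[j];
        x[k] = s / M[k][k];
    }
    return x;
}

// Solve (A1 + i*A2)(x1 + i*x2) = b1 + i*b2 via the equivalent real system
//   [ A1 -A2 ] [x1]   [b1]
//   [ A2  A1 ] [x2] = [b2]
std::vector<std::complex<double>>
solve_complex(const std::vector<std::vector<std::complex<double>>>& A,
              const std::vector<std::complex<double>>& b) {
    const std::size_t n = A.size();
    Mat M(2 * n, Vec(2 * n));
    Vec rhs(2 * n);
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j) {
            M[i][j]         =  A[i][j].real();  //  A1
            M[i][j + n]     = -A[i][j].imag();  // -A2
            M[i + n][j]     =  A[i][j].imag();  //  A2
            M[i + n][j + n] =  A[i][j].real();  //  A1
        }
        rhs[i]     = b[i].real();
        rhs[i + n] = b[i].imag();
    }
    Vec y = gauss_solve(M, rhs);
    std::vector<std::complex<double>> x(n);
    for (std::size_t i = 0; i < n; ++i) x[i] = {y[i], y[i + n]};
    return x;
}
```

Note that the doubled system preserves the sparsity of A (each nonzero of A produces at most four real nonzeros), so the usual sparse direct or iterative solvers still apply.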

Solving normal equation system in C++

I would like to solve the system of linear equations:
Ax = b
A is an n x m matrix (not square), b is an n x 1 vector, and x is an m x 1 vector. A and b are known; n is on the order of 50-100, and m is about 2 (in other words, A is at most 100 x 2).
I know the solution for x: $x = (A^T A)^{-1} A^T b$
I found a few libraries that can solve it: uBLAS (Boost), LAPACK, Eigen, etc., but I don't know how fast the CPU computation of x is with those packages, nor whether this is a numerically fast way to solve for x.
What is important to me is that the CPU computation time be as short as possible, with good documentation, since I am a newbie.
After solving the normal equations for Ax = b, I would like to improve my approximation using regression, and maybe later apply a Kalman filter.
My question is: which C++ library is the most robust and fastest for the needs I describe above?
This is a least-squares problem, because you have more equations than unknowns. If m is indeed 2, a simple linear least-squares fit will be sufficient for you. The formulas can be written out in closed form; you don't need a library.
If m is in the single digits, I'd still say you can easily solve this using A^T*A*x = A^T*b. A simple LU decomposition to solve for the coefficients would be sufficient. It should be a much more straightforward problem than you're making it out to be.
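To illustrate the closed-form claim for m = 2: if each row of A is (t_i, 1), then x = (A^T A)^{-1} A^T b reduces to the familiar slope/intercept formulas. The sketch below is my own; the function name and interface are made up, and it skips the degenerate case where all t_i are equal:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Closed-form least-squares fit of y ≈ slope*t + intercept, i.e. the m = 2
// case where row i of A is (t_i, 1). This is exactly (A'A)^{-1} A'b written
// out by hand; no library needed. Returns {slope, intercept}.
std::pair<double, double> fit_line(const std::vector<double>& t,
                                   const std::vector<double>& y) {
    const double n = static_cast<double>(t.size());
    double st = 0, sy = 0, stt = 0, sty = 0;
    for (std::size_t i = 0; i < t.size(); ++i) {
        st  += t[i];          // sum of t
        sy  += y[i];          // sum of y
        stt += t[i] * t[i];   // sum of t^2
        sty += t[i] * y[i];   // sum of t*y
    }
    double slope = (n * sty - st * sy) / (n * stt - st * st);
    double intercept = (sy - slope * st) / n;
    return {slope, intercept};
}
```

For n around 100 this is a handful of microseconds, so library choice is unlikely to matter for the fit itself.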
uBLAS is not optimized unless you use it with optimized BLAS bindings.
The following are optimized for multi-threading and SIMD:
Intel MKL. Fortran library with a C interface. Not free, but very good.
Eigen. True C++ library. Free and open source. Easy to use and good.
ATLAS. Fortran and C. Free and open source. Not Windows-friendly, but otherwise good.
By the way, I don't know exactly what you are doing, but as a rule the normal equations are not the proper way to do linear regression. Unless your matrix is well-conditioned, QR or SVD should be preferred.
If licensing is not a problem, you might try the GNU Scientific Library (GSL):
http://www.gnu.org/software/gsl/
It comes with a BLAS library that you can later swap for an optimized one if you need to (for example Intel MKL, ATLAS, or ACML for AMD chips).
If you have access to MATLAB, I would recommend using its C libraries.
If you really need to specialize, you can approximate matrix inversion (to arbitrary precision) using the Skilling method. It uses only O(N^2) operations, rather than the O(N^3) of the usual matrix inversion via LU decomposition and the like.
It's described in Gibbs's thesis, linked here (around page 27):
http://www.inference.phy.cam.ac.uk/mng10/GP/thesis.ps.gz
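I have not reproduced the Skilling method itself here, but its O(N^2)-per-pass, never-form-the-inverse flavor is shared by the better-known Krylov iterative methods. As a rough illustration of that family (and one that applies directly to the SPD matrix A^T A from the normal equations), here is a plain conjugate-gradient solve, written from the textbook recurrence rather than any particular library:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Conjugate gradients for A*x = b with A symmetric positive definite.
// Each iteration costs one matrix-vector product (O(N^2) work), and in
// exact arithmetic CG terminates in at most N iterations.
Vec conjugate_gradient(const Mat& A, const Vec& b, int max_iter = 100,
                       double tol = 1e-12) {
    const std::size_t n = b.size();
    Vec x(n, 0.0), r = b, p = b;   // start from x = 0, so r = b - A*x = b
    double rs = 0;
    for (std::size_t i = 0; i < n; ++i) rs += r[i] * r[i];
    for (int it = 0; it < max_iter && rs > tol * tol; ++it) {
        Vec Ap(n, 0.0);            // Ap = A * p
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                Ap[i] += A[i][j] * p[j];
        double pAp = 0;
        for (std::size_t i = 0; i < n; ++i) pAp += p[i] * Ap[i];
        double alpha = rs / pAp;   // step length along search direction p
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rs_new = 0;
        for (std::size_t i = 0; i < n; ++i) rs_new += r[i] * r[i];
        double beta = rs_new / rs; // conjugation coefficient
        for (std::size_t i = 0; i < n; ++i) p[i] = r[i] + beta * p[i];
        rs = rs_new;
    }
    return x;
}
```

For the sizes in the question (at most 100 x 2) the closed-form solution above is still simpler; iterative methods pay off only when N is large.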

Equation Solvers for linear mathematical equations

I need to solve a few mathematical equations in my application. Here's a typical example of such an equation:
a + b * c - d / e = a
Additional rules:
b % 10 = 0
b >= 0
b <= 100
Each number must be an integer
...
I would like to get the possible solution sets for a, b, c, d, and e.
Are there any libraries out there, either open source or commercial, that I can use to solve such equations? If yes, what kind of result do they provide?
Linear systems like this can generally be handled with linear programming. I'd recommend taking a look at Boost uBLAS for starters; it has a simple triangular solver. Then you might check out libraries targeting more domain-specific approaches, perhaps QSopt.
You're venturing into the world of numerical analysis, and here be dragons. Seemingly small differences in specification can make a huge difference in what is the right approach.
I hesitate to make specific suggestions without a fairly precise description of the problem domain. Superficially, it sounds like you are solving constrained linear problems simple enough that there are many ways to do it, but the "..." could be a problem.
A good resource for general solvers and the like is GAMS. Much of the software there may be a bit heavyweight for what you are asking.
You want a computer algebra system.
See https://stackoverflow.com/questions/160911/symbolic-math-lib, the answers to which are mostly as relevant to C++ as to C.
I know it is not your real question, but you can simplify the given equation to:
d = b * c * e, with e != 0
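Given that simplification, one pragmatic option for small integer domains is plain brute-force enumeration. The sketch below does this; note that the question only bounds b, so the ranges for c, d, and e below are assumptions I chose purely to keep the search space small, and a (which cancels from both sides) is left free:

```cpp
#include <cassert>
#include <vector>

// One solution tuple of the simplified equation d = b*c*e.
struct Solution { int b, c, d, e; };

// Brute-force enumeration under the stated rules (b % 10 == 0,
// 0 <= b <= 100, all integers, e != 0). The bounds lo/hi for c and e
// and d_bound for d are NOT in the question; they are assumptions.
std::vector<Solution> enumerate_solutions(int lo = -5, int hi = 5,
                                          int d_bound = 200) {
    std::vector<Solution> out;
    for (int b = 0; b <= 100; b += 10)        // b % 10 == 0, 0 <= b <= 100
        for (int c = lo; c <= hi; ++c)
            for (int e = lo; e <= hi; ++e) {
                if (e == 0) continue;         // division by e must be defined
                int d = b * c * e;            // forces b*c - d/e == 0 exactly
                if (d >= -d_bound && d <= d_bound)
                    out.push_back({b, c, d, e});
            }
    return out;                               // a is free: it cancels
}
```

This is only viable because the domains are tiny; for larger ranges you would want one of the integer programming or CAS approaches suggested in the other answers.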
Pretty sure Numerical Recipes will have something.
You're looking for a computer algebra system, and that's not a trivial thing.
Lots of them are available, though; try this list on Wikipedia:
http://en.wikipedia.org/wiki/Comparison_of_computer_algebra_systems
This looks like linear programming. Does this list help?
In addition to the other posts: your constraint set makes this reminiscent of an integer programming problem, so you might want to check that out as well; perhaps your problem can be restated as one.
You should know, however, that integer programming problems tend to be among the harder computational problems, so you might end up spending many clock cycles to crack it.
Looking only at the "additional rules" part, it does look like linear programming, in which case LINDO or a similar program implementing the simplex algorithm should be fine.
However, if the first equation is really typical, it shows that yours is NOT a linear problem: no two variables multiplying or dividing each other should appear in a linear equation!
So I'd say you definitely need either a computer algebra system or to solve the problem using a genetic algorithm.
Since you have restrictions similar to those found in linear programming, though you're not quite there, if you just want a solution to your specific problem, I'd pick any of the libraries mentioned at the end of Wikipedia's article on genetic algorithms and develop an app to give you the result. If you want a more general approach, then you have to simulate algebraic manipulation on your computer; there is no other way around it.
The TI-89 calculator has a "solver" application.
It was built to solve problems like the one in your example.
I know it's not a library, but there are several TI-89 emulators out there.