I have nonlinear equations such as:
Y = f1(X)
Y = f2(X)
...
Y = fn(X)
In general, they don't have an exact solution, so I use Newton's method to solve them. The method is iteration based and I'm looking for ways to optimize the calculations.
What are the ways to minimize calculation time? Should I avoid calculating square roots or other math functions?
Maybe I should use assembly inside the C++ code (the solution is written in C++)?
A popular approach for nonlinear least squares problems is the Levenberg-Marquardt algorithm. It's a blend between Gauss-Newton and a gradient-descent method and combines the best of both worlds: it navigates the search space well for ill-posed problems and converges quickly. But there's lots of wiggle room in terms of the implementation. For example, if the square matrix J^T J (where J is the Jacobian matrix containing all derivatives for all equations) is sparse, you could use the iterative CG algorithm to solve the equation systems quickly instead of a direct method like a Cholesky factorization of J^T J or a QR decomposition of J.
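For illustration, here is a minimal sketch of a single damped Gauss-Newton/LM step using Eigen (the library choice and the name lm_step are just for this example): it solves (J^T J + lambda*I) * delta = -J^T r with a direct factorization; for a large sparse J^T J you would swap in an iterative CG solve at the marked line.

```cpp
#include <Eigen/Dense>

// One Levenberg-Marquardt step (sketch): solve the damped normal equations
// (J^T J + lambda*I) * delta = -J^T r for the update delta.
Eigen::VectorXd lm_step(const Eigen::MatrixXd& J,   // m x n Jacobian
                        const Eigen::VectorXd& r,   // m residuals at the current estimate
                        double lambda)              // damping parameter
{
    const int n = J.cols();
    Eigen::MatrixXd JTJ = J.transpose() * J;
    JTJ += lambda * Eigen::MatrixXd::Identity(n, n);      // damping term
    Eigen::VectorXd g = J.transpose() * r;
    return JTJ.ldlt().solve(-g);   // direct solve; replace with CG if JTJ is large and sparse
}
```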
But don't just assume that some part is slow and needs to be written in assembler. Assembler is the last thing to consider. Before you go that route you should always use a profiler to check where the bottlenecks are.
Are you talking about a number of single parameter functions to solve one at a time or a system of multi-parameter equations to solve together?
If the former, then I've often found that finding a better initial approximation (from where the Newton-Raphson loop starts) can save more execution time than polishing the loop itself, because convergence in the loop can be slow initially but is fast later. If you know nothing about the functions, finding a decent initial approximation is hard, but it might be worth trying a few secant iterations first. You might also want to look at Brent's method.
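For the single-parameter case, a rough sketch of that idea (a few derivative-free secant steps to get a decent starting point, then Newton-Raphson to polish) might look like this; the iteration counts and tolerance are arbitrary placeholders:

```cpp
#include <cmath>
#include <functional>

// f is the function, df its derivative; x0 and x1 bracket or approximate the root.
double solve1d(const std::function<double(double)>& f,
               const std::function<double(double)>& df,
               double x0, double x1)
{
    // Cheap secant steps: no derivative evaluations needed.
    for (int i = 0; i < 3; ++i) {
        double f0 = f(x0), f1 = f(x1);
        if (f1 == f0) break;
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
        x0 = x1;
        x1 = x2;
    }
    // Newton-Raphson from the improved starting point.
    double x = x1;
    for (int i = 0; i < 20; ++i) {
        double fx = f(x), dfx = df(x);
        if (dfx == 0.0) break;
        double step = fx / dfx;
        x -= step;
        if (std::fabs(step) < 1e-12 * (1.0 + std::fabs(x))) break;
    }
    return x;
}
```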
Consider using the Rational Root Test in parallel. If values of absolute precision can't be used, take the results closest to zero as the best fit and continue with Newton's method.
Once a single root is found, you can reduce the degree of the equation by dividing it by the monomial (x - root).
Division and the rational root test are implemented here: https://github.com/ohhmm/openmind/blob/sh/omnn/math/test/Sum_test.cpp#L260
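Independently of that repository, the deflation step itself is just synthetic division; a minimal sketch in C++ (coefficients stored in ascending order) could look like this:

```cpp
#include <vector>

// Divide a polynomial c[0] + c[1]*x + ... + c[n]*x^n by (x - root) via
// synthetic division and return the quotient's coefficients (degree n-1).
// Illustration only, not the implementation from the linked repository.
std::vector<double> deflate(const std::vector<double>& c, double root)
{
    const std::size_t n = c.size() - 1;   // degree of the input polynomial
    std::vector<double> q(n);             // quotient coefficients
    double carry = c[n];
    for (std::size_t i = n; i-- > 0; ) {
        q[i] = carry;
        carry = c[i] + carry * root;      // 'carry' ends up holding the remainder
    }
    return q;
}
```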
Let's put it simply: I have an under-determined linear system of equations
Ax = b
and I want to get one valid solution, no matter which one of the infinite solutions for the system. And I want to get it as efficiently as possible.
I have checked general LAPACK routines and it seems that they cannot handle the under-determined case. For example, dgesv(), whose documentation is found here, will return an integer greater than 0 in INFO if the factor U from the PLU factorization is singular, and it will not solve the system in that case.
I have also checked some routines for Linear Least Squares (LLS) problems (documentation here), which do solve my problem, just not as efficiently as I would like. If the LLS problem I provide is under-determined, the LLS routine will return the vector that minimizes
||Ax - b||
which is a valid solution. However, it is calculated as the solution to an optimization problem, and I was wondering if there is a more efficient way of obtaining a valid solution to my under-determined problem.
A similar question was asked here, but I believe that my question is more concrete than that: I am using LAPACK, and I want to solve an under-determined system of linear equations as efficiently as possible.
For an under-determined system of equations:
The correct approach is to use singular value decomposition (SVD). Lapack offers singular value decomposition in the form of dgesvd.
To perform the SVD you will have to homogenize your problem to turn it into a matrix problem of the form My = 0. This is easy to do by introducing another degree of freedom (another variable): the vector x becomes y and the matrix A becomes M. When you perform the SVD of the matrix M, the right singular vector corresponding to the smallest singular value is the solution of your under-determined least squares problem; dividing it by its last component recovers x.
I would recommend using matlab or octave to experiment before wasting time coding anything up.
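To make the homogenization concrete, here is a rough sketch using Eigen's JacobiSVD (purely for brevity; with plain LAPACK the analogous routine is dgesvd, as mentioned above). It assumes the last component of the smallest singular vector is nonzero:

```cpp
#include <Eigen/Dense>

// Build M = [A | -b], so that M * [x; 1] = A*x - b. The right singular vector
// with the smallest singular value minimizes ||M y||; de-homogenize it to get x.
Eigen::VectorXd solve_underdetermined(const Eigen::MatrixXd& A,
                                      const Eigen::VectorXd& b)
{
    const int n = A.cols();
    Eigen::MatrixXd M(A.rows(), n + 1);
    M << A, -b;
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(M, Eigen::ComputeFullV);
    Eigen::VectorXd y = svd.matrixV().col(n);   // singular values are sorted decreasing
    return y.head(n) / y(n);                    // de-homogenize (assumes y(n) != 0)
}
```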
I have a relatively simple question regarding the linear solver built into Armadillo. I am a relative newcomer to C++ but have experience coding in other languages. I am solving a fluid flow problem by successive linearization, using the Armadillo function solve(A, b) to get the solution at each iteration.
The issue that I am running into is that my matrix is very ill-conditioned. The determinant is on the order of 10^-20 and the condition number is 75000. I know these are terrible conditions but it's what I've got. Does anyone know if it is possible to specify the precision in my A matrix and in the solve function to something beyond double (long double perhaps)? I know that there are double matrix classes in Armadillo but I haven't found any documentation for higher levels of precision.
To approach this from another angle, I wrote some code in Mathematica and the LinearSolve worked very well and the program converged to the correct answer. My reasoning is that Mathematica variables have higher precision which can handle the higher levels of rounding error.
If anyone has any insight on this, please let me know. I know there are other ways to approach a poorly conditioned matrix (like preconditioning and pivoting), but my work is more in the physics than in the actual numerical solution so I'm trying to steer clear of that.
EDIT: I just limited the precision in the Mathematica version to 15 decimal places and the program still converges. This leads me to believe it is NOT a variable precision question but rather an issue with the method.
As you said your "work is more in the physics": rather than trying to increase the accuracy, I would use the Moore-Penrose pseudo-inverse, which in Armadillo can be obtained with the function pinv. You should then experiment a bit with the tolerance parameter to set it to a reasonable level.
The geometrical interpretation is as follows: bad condition numbers are due to the fact that the row/column vectors are nearly linearly dependent. In physics, such linear dependencies usually have an origin which at least needs to be interpreted. The pseudo-inverse first projects the matrix onto a lower-dimensional space in which the vectors are "less linearly dependent" by dropping all singular vectors with singular values smaller than the tolerance parameter. The resulting matrix has a better condition number, so the standard inverse can be constructed with fewer problems.
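A minimal sketch of what that looks like in Armadillo (the tolerance value is only a placeholder to start experimenting from):

```cpp
#include <armadillo>

// Replace solve(A, b) with a pseudo-inverse solve; singular values below
// the tolerance are dropped, which removes the near-linear dependencies.
arma::vec solve_ill_conditioned(const arma::mat& A, const arma::vec& b)
{
    double tol = 1e-10;                    // tune this to your problem
    arma::mat Ainv = arma::pinv(A, tol);   // Moore-Penrose pseudo-inverse
    return Ainv * b;
}
```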
I am looking for an iterative linear system solver to calculate a continuously changing field. For the simulation to work properly, I need to re-calculate the field (maybe several times) for every time step. Fortunately, I have a good initial guess for each time step, so it would be better if I could feed it into an iterative solver. And the coefficient matrix is very dense.
The problem is that I have checked several iterative solvers online, like Gmm++, IML++, ITL, DUNE/ISTL and so on. They are either for sparse systems or don't provide interfaces for supplying initial guesses (I might be wrong, since I didn't have time to go through all the documentation).
So I have two questions:
1. Is there any such C++ solver available online?
2. Since the coefficient matrix can be as large as thousands by thousands, could a direct solver be quicker than an iterative solver with a really good initial guess?
Thanks!
If you check the header for Conjugate Gradient in IML++ (http://math.nist.gov/iml++/cg.h.txt), you'll see that you can very easily provide the initial guess for the solution in the very variable where you'd expect to get the solution.
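To see why the initial guess matters, here is a hand-rolled, unpreconditioned CG sketch (written with Eigen types for brevity, and assuming A is symmetric positive definite) that follows the same pattern: x is passed in as the starting point and refined in place. Eigen's own iterative solvers expose the same idea through solveWithGuess, if you prefer a library.

```cpp
#include <Eigen/Dense>

// Unpreconditioned conjugate gradient for SPD A. The caller's initial guess
// becomes the starting iterate; a good guess means a small initial residual
// and far fewer iterations.
Eigen::VectorXd cg(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                   Eigen::VectorXd x,                 // initial guess, refined in place
                   int max_iter = 1000, double tol = 1e-10)
{
    Eigen::VectorXd r = b - A * x;
    Eigen::VectorXd p = r;
    double rs_old = r.squaredNorm();
    for (int i = 0; i < max_iter && rs_old > tol * tol; ++i) {
        Eigen::VectorXd Ap = A * p;
        double alpha = rs_old / p.dot(Ap);
        x += alpha * p;
        r -= alpha * Ap;
        double rs_new = r.squaredNorm();
        p = r + (rs_new / rs_old) * p;
        rs_old = rs_new;
    }
    return x;
}
```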
Using the double type, I implemented a cubic spline interpolation algorithm.
It seems to work, but there is a relative error of around 6% when very small values are calculated.
Is double data type enough for accurate scientific numerical analysis?
Double has plenty of precision for most applications. Of course it is finite, but it's always possible to squander any amount of precision by using a bad algorithm. In fact, that should be your first suspect. Look hard at your code and see if you're doing something that lets rounding errors accumulate quicker than necessary, or risky things like subtracting values that are very close to each other.
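A tiny example of the "subtracting values that are very close" trap mentioned above:

```cpp
#include <cstdio>

// Both operands carry ~16 significant digits, but their difference keeps
// almost none of them (catastrophic cancellation).
int main()
{
    double a = 1.0 + 1e-15;            // rounds to the nearest representable double
    double b = 1.0;
    std::printf("%.17g\n", a - b);     // prints ~1.1102230246251565e-15, not 1e-15
}
```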
Scientific numerical analysis is difficult to get right, which is why I leave it to the professionals. Have you considered using a numeric library instead of writing your own? Eigen is my current favorite here: http://eigen.tuxfamily.org/index.php?title=Main_Page
I always have close at hand the latest copy of Numerical Recipes (nr.com) which does have an excellent chapter on interpolation. NR has a restrictive license but the writers know what they are doing and provide a succinct writeup on each numerical technique. Other libraries to look at include: ATLAS and GNU Scientific Library.
To answer your question: double should be more than enough for most scientific applications. I agree with the previous posters that it sounds like an algorithm problem. Have you considered posting the code for the algorithm you are using?
Whether double is enough for your needs depends on the type of numbers you are working with. As Henning suggests, it is probably best to take a look at the algorithms you are using and make sure they are numerically stable.
For starters, here's a good algorithm for addition: Kahan summation algorithm.
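A minimal sketch of it in C++ (note that aggressive fast-math compiler flags can optimize the compensation away):

```cpp
#include <vector>

// Kahan (compensated) summation: the running compensation c recovers the
// low-order bits that a naive sum would discard.
double kahan_sum(const std::vector<double>& values)
{
    double sum = 0.0, c = 0.0;
    for (double v : values) {
        double y = v - c;          // apply the compensation
        double t = sum + y;        // low-order bits of y may be lost here...
        c = (t - sum) - y;         // ...but are recovered into c
        sum = t;
    }
    return sum;
}
```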
Double precision will be suitable for most problems, but a cubic spline will not work well if the polynomial or function is quickly oscillating or repeating, or of quite high dimension.
In that case it can be better to use Legendre polynomials, since they handle variants of exponentials.
By way of a simple example: if you use Euler's, the trapezoidal or Simpson's rule for interpolating within a 3rd-order polynomial, you won't need a huge sample rate to get the interpolant (the area under the curve). However, if you apply these to an exponential function, the sample rate may need to increase greatly to avoid losing a lot of precision. Legendre polynomials can cater for this case much more readily.
I want to calculate the angle between two vectors, but I have seen that inverse trig operations such as acos and atan use lots of CPU cycles. Is there a way to do this calculation without using these functions? Also, do they really hurt you when you are optimizing?
There are no such functions in the STL; those are in the math library.
Also, are you sure it's important to be efficient here? Have you profiled to see if there are calls to functions like these in the hot spots? Do you know that the performance is bad when using these functions? You should always answer these questions before diving into such micro-optimizations.
In order to give advice, what are you going to do with it? How accurate does it have to be?
If you need the actual angle to a high precision, you probably can't do better. If you need it for some comparison, you can use the dot product and the vectors' magnitudes to get the cosine of the angle. If you don't need precision, you can do that and use an acos lookup table. If you're using it as input for another calculation, you might be able to use a little geometry or maybe a trig identity to avoid having to find an arccosine or arctangent.
In any case, once you've done what optimization you're going to do, do before and after timing runs to see if you've made any significant difference.
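To make the "compare via the dot product" suggestion above concrete, here is a small sketch (2D vectors, illustrative names): since cos is monotonically decreasing on [0, pi], comparing the cosines is equivalent to comparing the angles, and acos never gets called.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Returns cos(theta) for the angle theta between a and b.
double cos_angle(const Vec2& a, const Vec2& b)
{
    double dot = a.x * b.x + a.y * b.y;
    double len = std::sqrt((a.x * a.x + a.y * a.y) * (b.x * b.x + b.y * b.y));
    return dot / len;   // compare these values instead of calling acos on them
}

// angle(a,b) < angle(c,d)  <=>  cos_angle(a,b) > cos_angle(c,d)
```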
This is totally implementation defined. Of course, you could use a third-party implementation, or an approximation, but first you should profile and determine what your bottlenecks are.
If these functions are indeed the bottleneck, and you only need an approximation, you can try using the few first terms of the Taylor series expansion of those functions. The magnitude of the next unused term represents the error in your approximation.
Arccos Taylor series
Arctan Taylor series
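As a concrete (and deliberately crude) example, here is a four-term expansion of atan for |x| <= 1. The first neglected term, x^9/9, bounds the error; near |x| = 1 that is about 0.11 rad, so this is only useful for small arguments or very loose accuracy requirements.

```cpp
// atan(x) ≈ x - x^3/3 + x^5/5 - x^7/7, evaluated in Horner form. Valid for |x| <= 1.
double atan_taylor(double x)
{
    double x2 = x * x;
    return x * (1.0 - x2 * (1.0 / 3.0 - x2 * (1.0 / 5.0 - x2 * (1.0 / 7.0))));
}
```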
The implementations of atan and acos depend on the compiler and the optimization settings. Many implementations will use a table and interpolate to get the nearest value.
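If a table turns out to be appropriate for your case too, here is a rough sketch of the idea (the table size and clamping are arbitrary choices for illustration; accuracy degrades near ±1, where acos has infinite slope):

```cpp
#include <array>
#include <cmath>

// acos via a precomputed table over [-1, 1] plus linear interpolation.
constexpr int N = 1024;   // example table resolution

double acos_lut(double x)
{
    static const std::array<double, N + 1> table = [] {
        std::array<double, N + 1> t{};
        for (int i = 0; i <= N; ++i)
            t[i] = std::acos(-1.0 + 2.0 * i / N);
        return t;
    }();
    double f = (x + 1.0) * 0.5 * N;          // map [-1, 1] -> [0, N]
    int i = static_cast<int>(f);
    if (i < 0) i = 0;
    if (i >= N) i = N - 1;
    double frac = f - i;
    return table[i] + frac * (table[i + 1] - table[i]);
}
```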
Try these things first:
Profile the application to find where most of the execution time is spent.
Redesign this area for better performance.
Consider Data Driven Design techniques to speed up your program.
Change logic to reduce branches and if statements; consider using Karnaugh maps to simplify the logic.