C++ Newton-Raphson algorithm?

I have a huge problem. I need to solve a nonlinear system of 3 equations in 3 variables with a C++ function or class. I thought about using the Newton-Raphson method to compute the solution. Unluckily, I couldn't find source code that does this for me. Does anyone know of a program like that? I'm close to deciding to build it myself. Thanks.

A 3x3 system is not huge; it's actually a very small problem. People routinely solve nonlinear systems of equations with thousands (and more) of variables and constraints.
Given that your system is 3x3 and possibly nasty, a more appropriate choice of method would be a line search method. You get global convergence to a local minimum of the residual this way; it's very easy to make straight Newton's method diverge.
Steepest descent with backtracking line search is the simplest line search method possible. You might try implementing it first.
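To make that concrete, here is a minimal sketch of steepest descent with a backtracking (Armijo) line search on the scalar residual f(x) = 0.5*||F(x)||^2 of a 3x3 system. The example equations, initial guess, and tolerances are placeholders, and Eigen is used for the vector algebra; substitute your own F and its Jacobian.

    // Sketch: steepest descent + backtracking line search on f(x) = 0.5*||F(x)||^2.
    // The system F, its Jacobian J, the starting point and the tolerances are made up.
    #include <Eigen/Dense>
    #include <cstdio>

    using Eigen::Vector3d;
    using Eigen::Matrix3d;

    Vector3d F(const Vector3d& x) {              // placeholder 3x3 nonlinear system
        return Vector3d(x[0]*x[0] + x[1] - 1.0,
                        x[1]*x[1] + x[2] - 2.0,
                        x[0] + x[2]*x[2] - 3.0);
    }

    Matrix3d J(const Vector3d& x) {              // Jacobian of the placeholder system
        Matrix3d j;
        j << 2*x[0], 1.0,    0.0,
             0.0,    2*x[1], 1.0,
             1.0,    0.0,    2*x[2];
        return j;
    }

    int main() {
        Vector3d x(0.5, 0.5, 0.5);               // initial guess
        for (int it = 0; it < 500; ++it) {
            Vector3d r = F(x);
            double f = 0.5 * r.squaredNorm();    // scalar residual
            Vector3d g = J(x).transpose() * r;   // gradient of f
            if (g.norm() < 1e-10) break;

            double t = 1.0;                      // backtracking: halve the step until
            while (t > 1e-12 &&                  // the Armijo decrease condition holds
                   0.5 * F(x - t * g).squaredNorm() > f - 1e-4 * t * g.squaredNorm())
                t *= 0.5;
            x -= t * g;
        }
        std::printf("x = (%g, %g, %g), |F(x)| = %g\n", x[0], x[1], x[2], F(x).norm());
        return 0;
    }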

First, see the related questions What good libraries are there for solving a system of non-linear equations in C++? and https://stackoverflow.com/questions/4914967/could-you-explain-how-newton-raphson-for-a-set-of-equations-works-code-inside. Also, consider using Boost.

Consider this cozy C++ library


Equivalent of wavedec (MATLAB function) in OpenCV

I am trying to rewrite some MATLAB code in C++ and I am still blocked on this line:
[c, l]=wavedec(S,4,'Dmey');
Is there something like that in OpenCV?
If someone has an idea about it, please share it; thanks in advance.
If you can integrate your code with Python, then PyWavelets might be an option.
The function you're looking for is in there; the wavelet is called Discrete Meyer (Dmey). I'm not sure what you're planning to do with it (maybe you're processing images), but Dmey is fine, just not very widely used. You might want to find some GitHub code first and integrate it into whatever you're doing to see if it works, and based on that you can also adjust the details of your currently posted call (you might find something more efficient).
1-D wavelet decomposition (wavedec)
In your code, c is the decomposition (coefficient) vector and l is the bookkeeping vector of lengths. You're passing level four with the Dmey wavelet. If you have one-dimensional data, the map below is roughly how your decomposition would look:
There are usually two decomposition models used with wavelets. One is called packet decomposition, which from an architectural standpoint is similar to a full binary tree:
The other one, which is most likely what you're using, is less computationally expensive because it does not decompose both branches of the tree; it only keeps decomposing the approximation branch. Maybe these images will shed some light:
[Figures: 1-D and 2-D wavelet decomposition trees]
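To make the one-branch model concrete, here is a rough C++ sketch of a wavedec-style multilevel decomposition. It uses the Haar wavelet instead of Discrete Meyer (the Dmey filters are long and omitted here), so treat it purely as an illustration of the tree structure, not as a drop-in replacement for the MATLAB call.

    // Sketch: wavedec-style multilevel 1-D decomposition with the Haar wavelet.
    // Only the approximation branch is decomposed further (the non-packet model).
    #include <cstdio>
    #include <vector>

    // One Haar analysis step: split a signal into approximation and detail halves.
    static void haar_step(const std::vector<double>& in,
                          std::vector<double>& approx, std::vector<double>& detail) {
        for (size_t i = 0; i + 1 < in.size(); i += 2) {
            approx.push_back((in[i] + in[i + 1]) / 2.0);
            detail.push_back((in[i] - in[i + 1]) / 2.0);
        }
    }

    int main() {
        std::vector<double> s = {4, 6, 10, 12, 8, 6, 5, 5};  // placeholder signal S
        int levels = 3;                                      // wavedec's second argument

        std::vector<std::vector<double>> details;            // rough analogue of c
        std::vector<double> approx = s;
        for (int l = 0; l < levels && approx.size() >= 2; ++l) {
            std::vector<double> a, d;
            haar_step(approx, a, d);
            details.push_back(d);                            // keep this level's detail
            approx = a;                                      // recurse on the approximation only
        }
        std::printf("levels kept: %zu, final approximation length: %zu\n",
                    details.size(), approx.size());
        return 0;
    }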
Notes:
If you have a working model in MATLAB, you might want to look at MATLAB's C/C++ Code Generation, which can automatically convert MATLAB code to C++.
References:
Images are from Wikipedia or mathworks.com
Wiki
mathworks
Wavelet 2D

How to improve z3 linear programming performance

I am trying to use z3 to solve a linear programming problem and am observing very poor performance relative to GLPK. There are reasons why I would prefer to use z3, so I'm wondering if there is something I'm missing.
The problem is essentially bin packing. I have on the order of 500 weighted items, and 5 bins. Every item must be placed in a bin. Objective is to minimize the total weight in the largest bin.
The problem was solved within 1-2 minutes by GLPK. I have yet to see z3 terminate.
The z3 optimization tutorial suggests both a Boolean encoding and an integer encoding for a similar type of problem. Is one of these preferred for performance reasons? Basically, I'm wondering whether you need to follow a certain pattern for z3 to recognize the problem as linear programming.
How do I know what method it's applying to solve the problem?
Is there a way to configure logging in z3 so you can see what it's doing?
By the way, I'm using the C++ API.
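For what it's worth, below is a rough sketch of the integer encoding with the z3 C++ API: one integer variable per item giving its bin, per-bin loads built from ite terms, and an auxiliary max_load variable that is minimized. The weights and bin count are placeholders, and whether z3 handles this pattern efficiently is exactly the open question, so take it only as one possible encoding.

    // Sketch: integer encoding of "minimize the heaviest bin" with z3's C++ API.
    // Weights and the number of bins below are made up.
    #include <z3++.h>
    #include <string>
    #include <vector>

    int main() {
        z3::context ctx;
        z3::optimize opt(ctx);

        const int num_bins = 5;
        std::vector<int> weights = {7, 3, 9, 4, 12};   // placeholder item weights

        // x[i] = index of the bin that item i is placed in.
        std::vector<z3::expr> x;
        for (size_t i = 0; i < weights.size(); ++i) {
            x.push_back(ctx.int_const(("x_" + std::to_string(i)).c_str()));
            opt.add(x[i] >= 0 && x[i] < num_bins);
        }

        // max_load must bound the total weight of every bin.
        z3::expr max_load = ctx.int_const("max_load");
        for (int b = 0; b < num_bins; ++b) {
            z3::expr load = ctx.int_val(0);
            for (size_t i = 0; i < weights.size(); ++i)
                load = load + z3::ite(x[i] == b, ctx.int_val(weights[i]), ctx.int_val(0));
            opt.add(max_load >= load);
        }

        opt.minimize(max_load);                        // objective: smallest possible largest bin
        if (opt.check() == z3::sat) {
            z3::model m = opt.get_model();
            // Inspect m.eval(x[i]) here to read off the assignment.
        }
        return 0;
    }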

Least Median of Squares robust regression C++

I have a set of data z(0), z(1), ..., z(n) that I am currently fitting with a two-variable polynomial of the form p(x,y) = a(1)*x^2 + a(2)*y^2 + a(3)*x*y + a(4). For i = 1, ..., n I have coordinates (x(i), y(i)) on which I impose p(x(i), y(i)) = z(i). This gives an overdetermined system that I can solve using Eigen's SVD. I am looking for a more sophisticated method that can handle outliers, like Least Median of Squares (LMedS) robust regression (as described here), but I haven't found a C++ implementation for two variables. I looked in GSL, but it seems there is nothing for two-variable functions. The only other solution I can think of is using a TGraph2D in ROOT. Do you know of any other solution? Numerical Recipes maybe? Since I am writing C++ code, I would prefer C or C++ implementations.
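For context, here is a minimal sketch of the ordinary least-squares baseline described in the question: build the design matrix for p(x,y) = a(1)*x^2 + a(2)*y^2 + a(3)*x*y + a(4) and solve the overdetermined system with Eigen's SVD. The sample points are made up; a robust method would replace the plain solve at the end.

    // Sketch: least-squares fit of a1*x^2 + a2*y^2 + a3*x*y + a4 with Eigen's SVD.
    // The sample points are placeholders.
    #include <Eigen/Dense>
    #include <array>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<std::array<double, 3>> pts = {       // (x_i, y_i, z_i) samples
            {0, 0, 1.0}, {1, 0, 2.1}, {0, 1, 1.9}, {1, 1, 3.4}, {2, 1, 6.2}
        };

        Eigen::MatrixXd A(pts.size(), 4);
        Eigen::VectorXd z(pts.size());
        for (size_t i = 0; i < pts.size(); ++i) {
            double x = pts[i][0], y = pts[i][1];
            A.row(i) << x * x, y * y, x * y, 1.0;        // one row per constraint p(x_i,y_i)=z_i
            z(i) = pts[i][2];
        }

        // Least-squares solution of the overdetermined system A*a = z.
        Eigen::VectorXd a = A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(z);
        std::printf("a = [%g, %g, %g, %g]\n", a[0], a[1], a[2], a[3]);
        return 0;
    }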
Since no answer has been given yet, but I am still working on this problem, I will share my progress here.
The class TLinearFitter has a fit method that allows you to select Robust fitting - Least Trimmed Squares regression (LTS):
https://root.cern.ch/root/html532/TLinearFitter.html
Another possible solution, maybe more time consuming but perhaps more efficient in the long run, is to write my own function to be minimized and then use:
https://projects.coin-or.org/Ipopt to minimize it. This approach involves a bigger "step", though: I don't know how to use the library and I haven't (yet?) found a good tutorial to understand it.
Here: https://wis.kuleuven.be/stat/robust/software there is a Fortran implementation of the LMedS algorithm called PROGRESS. So another possible solution would be to port this software to C/C++ and make a library out of it.
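In the meantime, here is a rough sketch of the basic LMedS random-sampling scheme for this 4-parameter model (it is not the PROGRESS algorithm): fit the polynomial exactly to random minimal subsets of 4 points and keep the coefficients with the smallest median squared residual over all points. The number of trials and the sample data are placeholders.

    // Sketch: Least Median of Squares by random minimal subsets, for
    // p(x,y) = a1*x^2 + a2*y^2 + a3*x*y + a4. Data and trial count are made up.
    #include <Eigen/Dense>
    #include <algorithm>
    #include <array>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    static double residual(const Eigen::Vector4d& a, double x, double y, double z) {
        return a[0]*x*x + a[1]*y*y + a[2]*x*y + a[3] - z;
    }

    int main() {
        std::vector<std::array<double, 3>> pts = {       // last point is an outlier
            {0,0,1.0}, {1,0,2.1}, {0,1,1.9}, {1,1,3.4}, {2,1,6.2}, {2,2,25.0}
        };
        std::mt19937 rng(42);
        std::vector<size_t> idx(pts.size());
        std::iota(idx.begin(), idx.end(), 0);

        Eigen::Vector4d best = Eigen::Vector4d::Zero();
        double best_med = 1e300;
        for (int trial = 0; trial < 200; ++trial) {       // number of random subsets (assumed)
            std::shuffle(idx.begin(), idx.end(), rng);    // pick 4 distinct points
            Eigen::Matrix4d A;
            Eigen::Vector4d z;
            for (int r = 0; r < 4; ++r) {
                const auto& p = pts[idx[r]];
                A.row(r) << p[0]*p[0], p[1]*p[1], p[0]*p[1], 1.0;
                z(r) = p[2];
            }
            Eigen::Vector4d a = A.fullPivLu().solve(z);   // exact fit to the minimal subset

            std::vector<double> sq;                       // squared residuals on all data
            for (const auto& p : pts) {
                double r = residual(a, p[0], p[1], p[2]);
                sq.push_back(r * r);
            }
            std::nth_element(sq.begin(), sq.begin() + sq.size() / 2, sq.end());
            double med = sq[sq.size() / 2];
            if (med < best_med) { best_med = med; best = a; }
        }
        std::printf("a = [%g, %g, %g, %g], median squared residual = %g\n",
                    best[0], best[1], best[2], best[3], best_med);
        return 0;
    }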

Least Squares Regression in C/C++

How would one go about implementing least squares regression for factor analysis in C/C++?
The gold standard for this is LAPACK. You want, in particular, xGELS.
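A minimal sketch of that route, assuming the LAPACKE C interface to LAPACK is available: dGELS solves an overdetermined system in the least-squares sense. The toy problem below fits a straight line; replace the design matrix with whatever your model needs.

    // Sketch: least-squares via LAPACK's dGELS through LAPACKE. Data is made up.
    #include <lapacke.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Fit y ~= c0 + c1*x through four points; columns of A are [1, x].
        const lapack_int m = 4, n = 2, nrhs = 1;
        std::vector<double> A = {1, 0,               // row-major, one row per observation
                                 1, 1,
                                 1, 2,
                                 1, 3};
        std::vector<double> b = {1.1, 1.9, 3.2, 3.9}; // observed y values

        // On success, the first n entries of b hold the least-squares coefficients.
        lapack_int info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs,
                                        A.data(), n, b.data(), nrhs);
        if (info == 0)
            std::printf("intercept %.3f, slope %.3f\n", b[0], b[1]);
        return 0;
    }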
When I've had to deal with large datasets and large parameter sets for non-linear parameter fitting, I used a combination of RANSAC and Levenberg-Marquardt. I'm talking thousands of parameters with tens of thousands of data points.
RANSAC is a robust algorithm for minimizing noise due to outliers by using a reduced data set. It's not strictly least squares, but it can be applied to many fitting methods.
Levenberg-Marquardt is an efficient way to solve non-linear least-squares numerically.
The convergence rate in most cases is between that of steepest-descent and Newton's method, without requiring the calculation of second derivatives. I've found it to be faster than Conjugate gradient in the cases I've examined.
The way I did this was to set up RANSAC as an outer loop around the LM method. This is very robust but slow. If you don't need the additional robustness you can just use LM.
Get ROOT and use TGraph::Fit() (or TGraphErrors::Fit())?
It's a big, heavy piece of software to install just for the fitter, though. It works for me because I already have it installed.
Or use GSL.
If you want to implement an optimization algorithm yourself, Levenberg-Marquardt seems quite difficult to implement. If really fast convergence is not needed, take a look at the Nelder-Mead simplex optimization algorithm. It can be implemented from scratch in a few hours.
http://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
Have a look at
http://www.alglib.net/optimization/
They have C++ implementations for L-BFGS and Levenberg-Marquardt.
You only need to work out the first derivative of your objective function to use these two algorithms.
I've used TNT/JAMA for linear least-squares estimation. It's not very sophisticated but is fairly quick + easy.
Let's talk first about factor analysis, since most of the discussion above is about regression. Most of my experience is with software like SAS, Minitab, or SPSS that solves the factor analysis equations, so I have limited experience solving them directly. That said, the most common implementations do not use linear regression to solve the equations. According to this, the most common methods used are principal component analysis and principal factor analysis. In a text on Applied Multivariate Analysis (Dallas Johnson), no fewer than seven methods are documented, each with its own pros and cons. I would strongly recommend finding an implementation that gives you factor scores rather than programming a solution from scratch.
The reason there are different methods is that you can choose exactly what you're trying to minimize. There's a pretty comprehensive discussion of the breadth of methods here.

Equation Solvers for linear mathematical equations

I need to solve a few mathematical equations in my application. Here's a typical example of such an equation:
a + b * c - d / e = a
Additional rules:
b % 10 = 0
b >= 0
b <= 100
Each number must be an integer
...
I would like to get the possible solution sets for a, b, c, d and e.
Are there any libraries out there, either open source or commercial, which I can use to solve such an equation? If yes, what kind of result do they provide?
Linear systems can generally be solved using linear programming. I'd recommend taking a look at Boost uBLAS for starters - it has a simple triangular solver. Then you might check out libraries targeting more domain-specific approaches, perhaps QSopt.
You're venturing into the world of numerical analysis, and here be dragons. Seemingly small differences in specification can make a huge difference in what is the right approach.
I hesitate to make specific suggestions without a fairly precise description of the problem domain. It sounds superficially like you are solving constrained linear problems that are simple enough that there are a lot of ways to do it, but the "..." could be a problem.
A good resource for general solvers etc. would be GAMS. Much of the software there may be a bit heavyweight for what you are asking.
You want a computer algebra system.
See https://stackoverflow.com/questions/160911/symbolic-math-lib, the answers to which are mostly as relevant to c++ as to c.
I know it is not your real question, but you can simplify the given equation to:
d = b * c * e with e != 0
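Given that simplification and the integer constraints, a brute-force search is often enough when the domains are small. Here is a rough sketch; note that the ranges for c and e are assumptions, since the post only bounds b.

    // Sketch: enumerate integer solution sets of a + b*c - d/e = a, i.e. d = b*c*e.
    // The bounds on c and e are assumptions; only b is bounded in the question.
    #include <cstdio>

    int main() {
        for (int b = 0; b <= 100; b += 10) {         // b % 10 == 0 and 0 <= b <= 100
            for (int c = -5; c <= 5; ++c) {          // assumed range for c
                for (int e = -5; e <= 5; ++e) {      // assumed range for e
                    if (e == 0) continue;            // the division by e must be defined
                    int d = b * c * e;               // then a + b*c - d/e == a for any a
                    std::printf("b=%d c=%d d=%d e=%d\n", b, c, d, e);
                }
            }
        }
        return 0;
    }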
Pretty sure Numerical Recipes will have something
You're looking for a computer algebra system, and that's not a trivial thing.
Lots of them are available, though; try this list at Wikipedia:
http://en.wikipedia.org/wiki/Comparison_of_computer_algebra_systems
-Adam
This looks like linear programming. Does this list help?
In addition to the other posts: your constraint set makes this reminiscent of an integer programming problem, so you might want to check that kind of thing out as well. Perhaps your problem can be (re-)stated as one.
You should know, however, that integer programming tends to be one of the harder computational problems, so you might end up using many clock cycles to crack it.
Looking only at the "additional rules" part it does look like linear programming, in which case LINDO or a similar program implementing the simplex algorithm should be fine.
However, if the first equation is really typical, it shows yours is NOT a linear algebra problem: no two variables multiplying or dividing each other should appear in a linear equation!
So I'd say you definitely need either a computer algebra system or to solve the problem using a genetic algorithm.
Since you have restrictions similar to those found in linear programming, though you're not quite there, if you just want a solution to your specific problem I'd say pick up any of the libraries mentioned at the end of Wikipedia's article on genetic algorithms and develop an app to give you the result. If you want a more general approach, then you have to simulate algebraic manipulations on your computer; there's no other way around it.
The TI-89 Calculator has a 'solver' application.
It was built to solve problems like the one in your example.
I know it's not a library, but there are several TI-89 emulators out there.