How to implement Horner's scheme for multivariate polynomials? - fortran

Background
I need to evaluate polynomials in multiple variables using Horner's scheme in Fortran 90/95. The main reason for doing this is the gain in efficiency and accuracy that comes from using Horner's scheme to evaluate polynomials.
I currently have an implementation of Horner's scheme for univariate/single variable polynomials. However, developing a function to evaluate multivariate polynomials using Horner's scheme is proving to be beyond me.
An example bivariate polynomial would be: 12x^2y^2+8x^2y+6xy^2+4xy+2x+2y, which would be factorised to x(x(y(12y+8))+y(6y+4)+2)+2y and then evaluated for particular values of x and y.
Research
I've done my research and found a number of papers such as:
staff.ustc.edu.cn/~xinmao/ISSAC05/pages/bulletins/articles/147/hornercorrected.pdf
citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.8637&rep=rep1&type=pdf
www.is.titech.ac.jp/~kojima/articles/B-433.pdf
Problem
However, I'm not a mathematician or computer scientist, so I'm having trouble with the mathematics used to convey the algorithms and ideas.
As far as I can tell the basic strategy is to turn a multivariate polynomial into separate univariate polynomials and compute it that way.
Can anyone help me? If anyone could help me turn the algorithms into pseudo-code that I can implement in Fortran myself, I would be very grateful.

For two variables one can store the polynomial coefficients in a rank-2 array K(n+1,n+1), where n is the order of the polynomial. Then observe the following pattern (in pseudo-code):
p(x,y) =        (K(1,1)   + y*(K(1,2)   + y*(K(1,3)   + ... + y*K(1,n+1))))
        + x   * (K(2,1)   + y*(K(2,2)   + y*(K(2,3)   + ... + y*K(2,n+1))))
        + x^2 * (K(3,1)   + y*(K(3,2)   + y*(K(3,3)   + ... + y*K(3,n+1))))
        + ...
        + x^n * (K(n+1,1) + y*(K(n+1,2) + y*(K(n+1,3) + ... + y*K(n+1,n+1))))
Each row is a separate Horner's scheme in terms of y, and all together the rows form a final Horner's scheme in terms of x.
To code this in Fortran (or any language), create an intermediate vector z(n+1) such that
z(i) = horner(y, K(i,1:n+1))
and
p = horner(x, z(1:n+1))
where horner(value, vector) is an implementation of the single-variable Horner evaluation with the polynomial coefficients stored in vector.
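For concreteness, here is a minimal sketch of that two-stage scheme in Python, intended as executable pseudo-code to translate into Fortran. The names horner1 and horner2 are illustrative, and K[i][j] is assumed to hold the coefficient of x^i * y^j:

def horner1(t, coeffs):
    # Univariate Horner: coeffs[0] is the constant term,
    # coeffs[-1] the highest-degree coefficient.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * t + c
    return acc

def horner2(x, y, K):
    # Collapse each row of K with Horner in y, then collapse
    # the resulting vector z with Horner in x.
    z = [horner1(y, row) for row in K]
    return horner1(x, z)

# Example: 12x^2y^2 + 8x^2y + 6xy^2 + 4xy + 2x + 2y
K = [[0, 2, 0],    # 2y
     [2, 4, 6],    # 2x + 4xy + 6xy^2
     [0, 8, 12]]   # 8x^2y + 12x^2y^2
print(horner2(2.0, 3.0, K))  # 670.0

Note that each row of K is consumed by one univariate Horner pass, so the whole evaluation uses no explicit powers of x or y.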

Related

Factorization of univariate polynomials over some number field with Sympy

I am working on factoring multivariate polynomials over some extension fields, using Sympy.
If I can factor univariate polynomials over the reals, I think I would have working code. For my code this brings me down to factoring a univariate polynomial over 'QQ' and, if needed, over some number field.
My approach now is to define these univariate polynomials over 'QQ', then look at the roots and decide for each root whether it is real or not. If it is real, I add the needed terms to 'QQ' and then ask Sympy to factor. This means that I try to automate the following steps:
f = Poly((x**2 - 3)*(x**2 - 5), x, domain='QQ')
solve(f, x)
(gives [-sqrt(3), sqrt(3), -sqrt(5), sqrt(5)])
factor(f.as_expr(), extension=[sqrt(3), sqrt(5)])
(..or some other way, but with similar steps and runtime I think)
This of course has a very long runtime, since you are in effect calculating the factors twice. And there are also a lot of exceptions I need to think about as well.
Long story short: is there a way to ask Sympy to factor a polynomial over 'QQ' and allow it to make some extensions if needed?
Is there something like f.factor(numberfield=True)?
Thank you in advance!!
This is planned, but not implemented yet (as of version 1.2). See Factoring polynomials into linear factors (emphasis mine):
Currently SymPy can factor polynomials into irreducibles over various domains, which can result in a splitting factorization (into linear factors). However, there is currently no systematic way to infer a splitting field (algebraic number field) automatically. In future the following syntax will be implemented:
factor(x**3 + x**2 - 7, split=True)
Note this is different from extension=True, because the latter only tells how expression parsing should be done, not what should be the domain of computation. One can simulate the split keyword for several classes of polynomials using the solve() function.
... where the last sentence refers to what you are doing now.
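As a stopgap, here is a minimal sketch of that solve()-based workaround, assuming the polynomial's real roots are algebraic numbers that SymPy can express symbolically (as in the example above):

from sympy import Poly, factor, solve, symbols

x = symbols('x')
f = Poly((x**2 - 3)*(x**2 - 5), x, domain='QQ')

# Use the real roots as generators of the extension field.
real_roots = [r for r in solve(f, x) if r.is_real]

# Factor over QQ extended by those roots.
print(factor(f.as_expr(), extension=real_roots))
# (x - sqrt(3))*(x + sqrt(3))*(x - sqrt(5))*(x + sqrt(5))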

Polynomial Regression with N features and degree M on C++

I'm new in ML and I was trying this problem https://www.hackerrank.com/challenges/predicting-office-space-price. One of the observations they made is that
"The prices per square foot, are (approximately) a polynomial function
of the features in the observation table. This polynomial always has
an order less than 4"
So I guess the solution would be applying polynomial regression, and I have found a lot of (confusing) info about this, but only for 2 features. In this case there can be at most 5 features, so the answer can be a polynomial like: a*x^5 + b*x^2*y^3 + c*z^2*x ...
So it seems more difficult to find a way of creating or evaluating this polynomial, in a function like:
float eval(vector<float> x, vector<float> o)
And with this I was hoping to use the same gradient descent that I use on linear regression to minimize the cost function.
Am I doing this right? Am I right to use polynomial regression? How should I go about creating and evaluating that polynomial?
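As a sketch of the last part (written in Python rather than C++; monomials and eval_poly are illustrative names, not library functions): enumerate all monomials of the features up to the stated degree bound, then the polynomial is just a weighted sum of those terms:

from itertools import combinations_with_replacement

def monomials(x, degree=3):
    # All products of the features x[0..n-1] of total degree
    # 0..degree, e.g. 1, x0, x1, x0*x1, x0**2, x0**2*x1, ...
    terms = [1.0]
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(len(x)), d):
            prod = 1.0
            for i in combo:
                prod *= x[i]
            terms.append(prod)
    return terms

def eval_poly(x, o):
    # o holds one coefficient per monomial, in the same order.
    return sum(c * t for c, t in zip(o, monomials(x)))

With 5 features and degree at most 3 this gives 56 terms (including the constant), and the model is linear in the coefficients o, so the same gradient descent used for linear regression applies unchanged to the expanded feature vector.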

Two point boundary with odeint

I am trying to solve two point boundary problem with odeint. My equation has the form of
y'' + a*y' + b*y + c = 0
It is pretty trivial when I have boundary conditions of y(x_1) = y_1, y'(x_1) = y_2, but when the boundary conditions are y(x_1) = y_1, y(x_2) = y_2 I am lost. Does anybody know a way to deal with problems like this with odeint or another scientific library?
In this case you need a shooting method. odeint does not have one; it solves the initial value problem (IVP), which is your first case. I think this method is explained in Numerical Recipes, and you can use Boost.Odeint to do the time stepping.
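To illustrate the shooting idea (here with SciPy rather than Boost.Odeint; the coefficients a, b, c and the boundary data are made up for the example): guess the missing initial slope s, integrate the IVP from x1, and adjust s until y(x2) hits y2. Because this ODE is linear, y(x2) depends affinely on s, so two trial shots determine the correct slope exactly:

from scipy.integrate import solve_ivp

a, b, c = 1.0, -2.0, 0.5         # example coefficients
x1, x2 = 0.0, 1.0                # example boundary points
y1, y2 = 0.0, 1.0                # example boundary values

def rhs(x, u):
    # First-order system for y'' + a*y' + b*y + c = 0,
    # with u[0] = y and u[1] = y'.
    return [u[1], -a * u[1] - b * u[0] - c]

def shoot(s):
    # Integrate the IVP y(x1) = y1, y'(x1) = s and return y(x2).
    sol = solve_ivp(rhs, (x1, x2), [y1, s], rtol=1e-8)
    return sol.y[0, -1]

f0, f1 = shoot(0.0), shoot(1.0)
s = (y2 - f0) / (f1 - f0)        # linear interpolation in s
print(shoot(s))                  # approximately y2

For a nonlinear ODE the same loop works, but the slope must be found with a root finder (e.g. bisection on shoot(s) - y2) instead of one interpolation.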
An alternative and more efficient way to solve this type of problem is a finite difference or finite element method. For finite differences you can check Numerical Recipes. For finite elements I recommend the deal.II library.
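A minimal finite-difference sketch of the same problem (again in Python, with the same made-up a, b, c and boundary data): replace y'' and y' with central differences on a uniform grid and solve the resulting tridiagonal system:

import numpy as np

a, b, c = 1.0, -2.0, 0.5
x1, x2, y1, y2 = 0.0, 1.0, 0.0, 1.0
n = 100                      # number of interior grid points
h = (x2 - x1) / (n + 1)

# (y[i-1] - 2y[i] + y[i+1])/h^2 + a*(y[i+1] - y[i-1])/(2h) + b*y[i] = -c
lower = 1.0/h**2 - a/(2*h)
diag  = -2.0/h**2 + b
upper = 1.0/h**2 + a/(2*h)

A = (np.diag([diag]*n) + np.diag([lower]*(n-1), -1)
     + np.diag([upper]*(n-1), 1))
rhs = np.full(n, -c)
rhs[0]  -= lower * y1        # fold the known boundary values
rhs[-1] -= upper * y2        # into the right-hand side
y_interior = np.linalg.solve(A, rhs)

(For large n one would use a banded or sparse solver instead of a dense one; the dense solve keeps the sketch short.)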
Another approach is to use B-splines: assuming you know the initial point x0 and final point xfinal of the integration, you can expand the solution y(x) in a B-spline basis defined over (x0, xfinal), i.e.
y(x)= \sum_{i=1}^n A_i*B_i(x),
where the A_i are constant coefficients to be determined, and the B_i(x) are B-spline basis functions (well-defined polynomial functions that can be differentiated numerically). For scientific applications you can find an implementation of B-splines in GSL.
With this substitution the boundary value problem is reduced to a linear problem, since (using Einstein summation for repeated indices):
A_i*[ B_i''(x) + a*B_i'(x) + b*B_i(x)] + c =0
You can choose a set of points x and create a linear system from the above equation. You can find information on this type of method in the review paper "Applications of B-splines in Atomic and Molecular Physics" by H. Bachau, E. Cormier, P. Decleva, J. E. Hansen and F. Martín:
http://iopscience.iop.org/0034-4885/64/12/205/
I do not know of any library that solves this problem directly, but there are several libraries for B-splines (I recommend GSL for your needs) that will allow you to form the linear system. See this stackoverflow question:
Spline, B-Spline and NURBS C++ library
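For a rough picture of what that linear system looks like, here is a hedged collocation sketch using SciPy's B-splines instead of GSL (cubic basis; a, b, c and the boundary data are made up, and the knot and collocation point placement is deliberately simplistic):

import numpy as np
from scipy.interpolate import BSpline

a, b, c = 1.0, -2.0, 0.5
x0, xf, y0, yf = 0.0, 1.0, 0.0, 1.0
k = 3                                # cubic B-splines

# Clamped knot vector over (x0, xf).
interior = np.linspace(x0, xf, 10)[1:-1]
t = np.concatenate(([x0]*(k+1), interior, [xf]*(k+1)))
n = len(t) - k - 1                   # number of basis functions B_i

xs = np.linspace(x0, xf, n)[1:-1]    # interior collocation points
A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(n):
    coef = np.zeros(n); coef[i] = 1.0
    Bi = BSpline(t, coef, k)
    # First and last rows impose the boundary values; the rest
    # impose B_i'' + a*B_i' + b*B_i = -c at the points xs.
    A[0, i], A[-1, i] = Bi(x0), Bi(xf)
    A[1:-1, i] = (Bi.derivative(2)(xs) + a*Bi.derivative(1)(xs)
                  + b*Bi(xs))
rhs[0], rhs[-1] = y0, yf
rhs[1:-1] = -c
y = BSpline(t, np.linalg.solve(A, rhs), k)   # callable approximation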

Weighted linear least square for 2D data point sets

My question is an extension of the discussion How to fit the 2D scatter data with a line with C++. Now I want to extend the question further: when estimating the line that fits 2D scatter data, it would be better if we could treat each scatter point differently. That is to say, if a point is far away from the line, we can give it a low weight, and vice versa. The question then becomes: given an array of 2D scatter points as well as their weighting factors, how can we estimate the line that passes through them? A good implementation of this method can be found in this article (weighted least squares regression). However, the implementation of the algorithm in that article is too complicated, as it involves matrix calculation. I am therefore trying to find a method without matrix calculation. The algorithm is an extension of simple linear regression; to illustrate it, I wrote the following MATLAB code:
function line_params = weighted_least_square_for_line(x, y, weighting)
% Weighted simple linear regression: fit y = beta*x + alpha,
% minimising sum(weighting .* (y - beta*x - alpha).^2).
part1 = sum(weighting .* x .* y) * sum(weighting(:));
part2 = sum(weighting .* x) * sum(weighting .* y);
part3 = sum(x.^2 .* weighting) * sum(weighting(:));
part4 = sum(weighting .* x)^2;
beta  = (part1 - part2) / (part3 - part4);  % slope
alpha = (sum(weighting .* y) - beta * sum(weighting .* x)) / sum(weighting);  % intercept
% Return the line as a*x + b*y + c = 0 (the output is renamed from
% "line", which shadows a MATLAB built-in function).
a = beta;
c = alpha;
b = -1;
line_params = [a b c];
end
In the above code, x, y and weighting represent the x-coordinates, the y-coordinates and the weighting factors respectively. I have tested the algorithm with several examples, but I am still not sure whether it is right, as this method gives a different result from polyfit, which relies on matrix calculation. I am posting the implementation here for your advice. Do you think it is a correct implementation? Thanks!
If you think it is a good idea to downweight points that are far from the line, you might be attracted by least absolute deviations (http://en.wikipedia.org/wiki/Least_absolute_deviations), because one way of calculating it is via iteratively re-weighted least squares (http://en.wikipedia.org/wiki/Iteratively_re-weighted_least_squares), which will give less weight to points far from the line.
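A compact sketch of that IRLS idea in Python (the closed-form weighted fit reuses the same formulas as the MATLAB code above; the damping constant eps and the iteration count are arbitrary choices):

import numpy as np

def weighted_line_fit(x, y, w):
    # Same closed-form weighted least squares as the MATLAB code.
    sw, sx, sy = w.sum(), (w*x).sum(), (w*y).sum()
    beta = (sw*(w*x*y).sum() - sx*sy) / (sw*(w*x*x).sum() - sx**2)
    alpha = (sy - beta*sx) / sw
    return beta, alpha

def irls_line_fit(x, y, iters=20, eps=1e-6):
    # Re-weight each point by 1/|residual| on every pass, which
    # approximates a least-absolute-deviations fit.
    w = np.ones_like(x)
    for _ in range(iters):
        beta, alpha = weighted_line_fit(x, y, w)
        w = 1.0 / np.maximum(np.abs(y - (beta*x + alpha)), eps)
    return beta, alpha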
If you think all your points are "good data", then it would be a mistake to weight them naively according to their distance from your initial fit. However, it's a fairly common practice to discard "outliers": if a few data points are implausibly far from the fit, and you have reason to believe that there's an error mechanism that could generate a small subset of "bad" datapoints, you could simply remove the implausible points from the dataset to get a better fit.
As far as the math is concerned, I would recommend biting the bullet and trying to figure out the matrix math. Perhaps you could find a different article, or a book with a better presentation. I will not comment on your MATLAB code, except to say that it looks like you will have some precision problems when subtracting part4 from part3, and probably part2 from part1 as well.
Not exactly what you are asking for, but you should look into robust regression. MATLAB has the function robustfit (requires Statistics Toolbox).
There is even an interactive demo you can play with to compare regular linear regression vs. robust regression:
>> robustdemo
This shows that in the presence of outliers, robust regression tends to give better results.

Linear form of function (a/b) for ampl/cplex

I am trying to solve a minimisation problem and I want to minimise an expression
a/b
where both a and b are variables. Hence this is not a linear problem...
How can I transform this expression into another, linear one?
There is a detailed section on how to handle ratios in Linear Programming on the lpsolve site. It should be general enough to apply to AMPL and CPLEX as well.
There are several ways to do this, but the simplest to explain requires that you solve a series of linear programs. First, remove the objective and add a constraint
a <= c * b
where c is a known upper bound on the solution. Then do a binary search on c: you can find a range [c_l, c_u] such that the problem is infeasible for
a <= c_l * b
but feasible for
a <= c_u * b
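A sketch of that bisection loop in Python, assuming a hypothetical helper feasible(c) that builds the LP with the extra constraint a <= c*b (plus the original constraints, with b > 0 assumed) and reports whether it is feasible:

def minimize_ratio(feasible, c_lo, c_hi, tol=1e-6):
    # Invariant: infeasible at c_lo, feasible at c_hi, so the
    # optimal value of a/b lies in the interval (c_lo, c_hi].
    while c_hi - c_lo > tol:
        mid = 0.5 * (c_lo + c_hi)
        if feasible(mid):
            c_hi = mid
        else:
            c_lo = mid
    return c_hi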
The general form of the objective should be a linear-fractional function, something like f_0(x) = (c^T x + d) / (e^T x + f). For your case, x = (a, b), c = (1, 0), e = (0, 1), d = f = 0.
To solve this kind of optimisation problem, something called linear-fractional programming can be used. It is the linearly constrained version of a linear-fractional objective, and the Charnes-Cooper transformation is applied to turn it into an LP. You can find the main idea on the wiki. Many OR books say more about this, e.g. pp. 53 and 165 of Boyd's "Convex Optimization" (free to download).
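For reference, here is the Charnes-Cooper transformation written out (stated for a generic feasible set Ax <= b on which e^T x + f > 0, which is an assumption beyond what the question gives):

\min_x \; \frac{c^T x + d}{e^T x + f} \quad \text{s.t.} \quad A x \le b

becomes, with the substitution y = x / (e^T x + f) and t = 1 / (e^T x + f),

\min_{y,\,t} \; c^T y + d\,t \quad \text{s.t.} \quad A y \le b\,t, \quad e^T y + f\,t = 1, \quad t \ge 0,

and x is recovered from an optimal (y, t) as x = y / t. For the a/b case above this is just: minimise y_a subject to the scaled constraints and y_b = 1.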