Problems when substituting a matrix in a polynomial - sympy

Example: let
M = Matrix([[1,2],[3,4]]) # and
p = Poly(x**3 + x + 1) # then
p.subs(x,M).expand()
gives the error :
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.One'>
which is very plausible, since the first two terms become matrices but the last term (the constant term) is not a matrix but a scalar. To remedy this I changed the polynomial to
p = Poly(x**3 + x + x**0) # then
the same error persists. Am I obliged to type the expression by hand, replacing x with M? In this example the polynomial has only three terms, but in reality I encounter (multivariate) polynomials with dozens of terms.

So I think the question mainly revolves around the concept of a matrix polynomial: for a polynomial P(x) = a_n*x^n + ... + a_1*x + a_0 and a square matrix A, the matrix polynomial is P(A) = a_n*A^n + ... + a_1*A + a_0*I (where P is a polynomial, A is a matrix, and I is the identity matrix of the same shape as A).
I think this is saying that the free term is a plain number, and it cannot be added to the rest, which is a matrix; effectively the addition operation is undefined between those two types.
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.One'>
However, this can be circumvented by defining a function that evaluates the matrix polynomial for a specific matrix. The difference here is that we're using matrix exponentiation, so we correctly compute the free term of the matrix polynomial a_0 * I where I=A^0 is the identity matrix of the required shape:
from sympy import *

x = symbols('x')
M = Matrix([[1,2],[3,4]])
p = Poly(x**3 + x + 1)

def eval_poly_matrix(P, A):
    res = zeros(*A.shape)
    for i, a_i in enumerate(P.all_coeffs()[::-1]):
        res += a_i * (A**i)
    return res

eval_poly_matrix(p, M)
Output: Matrix([[39, 56], [84, 123]])
Regarding "In this example the polynomial has only three terms but in reality I encounter (multivariate) polynomials with dozens of terms":
The function eval_poly_matrix above can be extended to work for multivariate polynomials by using the .monoms() and .coeffs() methods to extract the monomials with nonzero coefficients together with those coefficients, like so:
from sympy import *

x, y = symbols('x y')
M = Matrix([[1,2],[3,4]])
p = Poly(x**3 * y + x * y**2 + y)

def eval_poly_matrix(P, *M):
    res = zeros(*M[0].shape)
    for m, a in zip(P.monoms(), P.coeffs()):
        # start each term from its coefficient times the identity
        term = a * eye(*M[0].shape)
        for i, e in enumerate(m):
            term *= M[i]**e
        res += term
    return res

eval_poly_matrix(p, M, eye(M.rows))
Note: Some sanity checks, edge-case handling and optimizations are possible:
The number of variables present in the polynomial relates to the number of matrices passed as parameters: the former should never be greater than the latter, and if it is lower, some logic needs to be present to handle that (I've only handled the case where the two are equal).
All matrices need to be square as per the definition of the matrix polynomial
A discussion about a multivariate version of Horner's rule features in the comments of this question; this might be useful to minimize the number of matrix multiplications (a univariate sketch follows this list).
Handle the fact that in a matrix polynomial x*y is different from y*x, because matrix multiplication is non-commutative. Apparently the poly functions in sympy do not support non-commutative variables, but you can define symbols with commutative=False and there seems to be a way forward there.
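For illustration of the Horner idea in the univariate case, a minimal sketch could look like the following (eval_poly_matrix_horner is just an illustrative name, not an existing SymPy function):
from sympy import Matrix, Poly, symbols, eye, zeros

x = symbols('x')
M = Matrix([[1, 2], [3, 4]])
p = Poly(x**3 + x + 1)

def eval_poly_matrix_horner(P, A):
    # Horner's rule: one matrix multiplication per coefficient instead of
    # forming every power of A separately.
    res = zeros(*A.shape)
    for a_i in P.all_coeffs():  # highest-degree coefficient first
        res = res * A + a_i * eye(*A.shape)
    return res

eval_poly_matrix_horner(p, M)  # same result as eval_poly_matrix(p, M)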
About the 4th point above, there is support for Matrix expressions in SymPy, and that can help here:
from sympy import *
from sympy.matrices import MatrixSymbol
A = Matrix([[1,2],[3,4]])
B = Matrix([[2,3],[3,4]])
X = MatrixSymbol('X',2,2)
Y = MatrixSymbol('Y',2,2)
I = eye(X.rows)
p = X**2 * Y + Y * X ** 2 + X ** 3 - I
display(p)
p = p.subs({X: A, Y: B}).doit()
display(p)
Output: the unevaluated matrix expression first, then the evaluated result Matrix([[139, 201], [258, 368]])
For more developments on this feature, follow SymPy issue #18555.

Related

Using LAPACK's DORMQR with non-square Q

I want to use LAPACK to calculate Q * x and Q^T * x, where Q comes from the reduced QR factorization of an m by n matrix A (m > n), stored in the form of Householder reflectors and a vector tau, as obtained from DGEQRF, and x is a vector of length n in the case of Q * x and length m in the case of Q^T * x.
The documentation of DORMQR states that x is overwritten with the result, which already confuses me, since x and Q * x obviously have different dimensions if the original matrix A, and subsequently its reduced Q, are not square. Furthermore it states that
"Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'."
In my case, only the first half applies and M refers to the length of x. What do they mean by order? I have rarely ever heard the term "order" in the context of non-square matrices, and if so, it would be something like m by n, and not just a single number. Do they mean rank?
Can I even use DORMQR to calculate both Q * x and Q^T * x for a non-square Q, or is it not designed for this? Do I need to pad x with zeros?
DORMQR applies only when Q is a square matrix. Although the input A to the procedure is given in terms of elementary reflectors, such as the output of DGEQRF, which can be more general, the documentation adds the restriction that Q "is a real orthogonal matrix".
Of course, to be orthogonal, Q must be square.
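As a side note, the relation between the square Q and the reduced-Q products can be illustrated with a quick NumPy sketch (just a sanity check of the linear algebra, not a DORMQR call): applying the square Q to x padded with zeros reproduces the reduced Q * x, and the first n entries of Q^T * y reproduce the reduced Q^T * y.
import numpy as np

m, n = 6, 3
A = np.random.rand(m, n)
x = np.random.rand(n)
y = np.random.rand(m)

# Full (square, m-by-m) Q versus reduced (m-by-n) Q of the same A.
Q_full, _ = np.linalg.qr(A, mode='complete')
Q_red, _ = np.linalg.qr(A, mode='reduced')

x_pad = np.concatenate([x, np.zeros(m - n)])
print(np.allclose(Q_full @ x_pad, Q_red @ x))        # True
print(np.allclose((Q_full.T @ y)[:n], Q_red.T @ y))  # True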

Simplification of derivative of square using sympy

I'm trying to use sympy to generate equations for non-linear least squares fitting. My goal is to make this quite complex but for the moment, here's a simple case (but not too simple!). It's basically fitting a two dimensional sinusoid to data. Here's the sympy code:
from sympy import *
S, l, m = symbols('S l m', real=True)
u, v = symbols('u v', real=True)
Vobs = symbols('Vobs', complex=True)
Vres = Vobs - S * exp(- 1j * 2 * pi * (u*l+v*m))
J=Vres*conjugate(Vres)
axes = [S, l, m]
grad = derive_by_array(J, axes)
hess = derive_by_array(grad, axes)
One element of the grad term looks like:
- 2.0*I*pi*S*u*(-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs)*exp(2.0*I*pi*(l*u + m*v)) + 2.0*I*pi*S*u*(-S*exp(2.0*I*pi*(l*u + m*v)) + conjugate(Vobs))*exp(-2.0*I*pi*(l*u + m*v))
What I'd like is to replace the expanded term (-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs) by Vres and contract the two conjugate terms into the more compact equivalent:
4.0*pi*S*u*im(Vres*exp(2.0*I*pi*(l*u + m*v)))
I cannot see how to do this with sympy. This problem is bad for the first derivative (grad) but gets really out of hand with the second derivative (hess).
First of all, let's not use 1j in SymPy; it's a float, and floats are bad for symbolic math. SymPy's imaginary unit is I. So,
Vres = Vobs - S * exp(- I * 2 * pi * (u*l+v*m))
To replace the expression Vres by a symbol, we first need to create such a symbol. I'm going to call it Vres0, but its name will be Vres, so it prints as "Vres" in formulas.
Vres0 = symbols('Vres')
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
The conjugate-substitute-conjugate back is needed because subs doesn't quite recognize the possibility of replacing the conjugate of an expression with the conjugate of the symbol.
Now g1 is
-2*I*pi*S*Vres*u*exp(2*I*pi*(l*u + m*v)) + 2*I*pi*S*u*exp(-2*I*pi*(l*u + m*v))*conjugate(Vres)
and we want to fold the sum of conjugate terms. I use a custom transformation rule for this: the rule fold_conjugates applies to every sum (Add) of two terms (len(f.args) == 2) where the second is a conjugate of the first (f.args[1] == f.args[0].conjugate()). The transformation it performs: replace the sum by twice the real part of the first argument (2*re(f.args[0])). Like so:
from sympy.core.rules import Transform

fold_conjugates = Transform(
    lambda f: 2*re(f.args[0]),
    lambda f: isinstance(f, Add) and len(f.args) == 2 and f.args[1] == f.args[0].conjugate())

g = g1.xreplace(fold_conjugates)
Final result: 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v))).
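Assuming the other components of grad have the same two-term conjugate structure, one possible way to apply the same substitute-and-fold pipeline to each of them is sketched below (hess entries may need the rule to fire on inner sub-sums instead):
tidy = lambda e: (e.subs(Vres, Vres0).conjugate()
                   .subs(Vres, Vres0).conjugate()
                   .xreplace(fold_conjugates))
grad_tidy = [tidy(grad[i]) for i in range(len(grad))]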

How to solve an algebraic equation in formal power series?

Motivation. It is well known that the generating function for the Catalan numbers satisfies a quadratic equation. I would like to have the first several coefficients of a function implicitly defined by an algebraic equation (not necessarily a quadratic one!).
Example.
import sympy as sp
sp.init_printing() # math as latex
from IPython.display import display
z = sp.Symbol('z')
F = sp.Function('F')(z)
equation = 1 + z * F**2 - F
display(equation)
solution = sp.solve(equation, F)[0]
display(solution)
display(sp.series(solution))
Question. The approach where we explicitly solve the equation and then expand it as power series, works only for low-degree equations. How to obtain first coefficients of formal power series for more complicated algebraic equations?
Related.
Since the algebraic and differential frameworks may behave differently, I posted another question:
Sympy: how to solve differential equation in formal power series?
I don't know of a built-in way, but plugging in a polynomial for F and equating the coefficients works well enough. One should not try to find all coefficients at once from a large nonlinear system, though; that will give SymPy trouble. I take an iterative approach: first equating the free term to zero and solving for c0, then equating the next coefficient and solving for c1, etc.
This assumes a regular algebraic equation, in which the coefficient of z**k in the equation involves the k-th Taylor coefficient of F, and does not involve higher-order coefficients.
from sympy import *

z = Symbol('z')
d = 10  # how many coefficients to find
c = list(symbols('c:{}'.format(d)))  # undetermined coefficients

for k in range(d):
    F = sum([c[n]*z**n for n in range(k+1)])  # up to z**k inclusive
    equation = 1 + z * F**2 - F
    coeff_eqn = Poly(series(equation, z, n=k+1).removeO(), z).coeff_monomial(z**k)
    c[k] = solve(coeff_eqn, c[k])[0]

sol = sum([c[n]*z**n for n in range(d)])  # solution
print(series(sol + z**d, z, n=d))  # add z**d to get SymPy to print as a series
This prints
1 + z + 2*z**2 + 5*z**3 + 14*z**4 + 42*z**5 + 132*z**6 + 429*z**7 + 1430*z**8 + 4862*z**9 + O(z**10)

How to compute sin(2*m*Pi/n) exactly with CGAL and CORE?

Using Chebyshev polynomials, we can compute sin(2*Pi/n) exactly using the CGAL and CORE libraries, like the following piece of code:
#include <CGAL/CORE_Expr.h>
#include <CGAL/Polynomial.h>
#include <CGAL/number_utils.h>

// return sin(theta) and cos(theta) for theta = 2pi/n
static std::pair<AA, AA> sin_cos(unsigned short n) {
    // We actually use -x instead of x since root_of will give the k-th
    // smallest root but we want the second largest one without counting.
    Polynomial x(CGAL::shift(Polynomial(-1), 1));
    Polynomial twox(2*x);
    Polynomial a(1), b(x);
    for (unsigned short i = 2; i <= n; ++i) {
        Polynomial c = twox*b - a;
        a = b;
        b = c;
    }
    a = b - 1;
    AA cos = -CGAL::root_of(2, a.begin(), a.end());
    AA sin = CGAL::sqrt(AA(1) - cos*cos);
    return std::make_pair(sin, cos);
}
But if I want to compute sin(2*m*Pi/n) exactly, where m and n are integers, what is the formula of the polynomial that I should use? Thanks.
(Partial solution.)
This is essentially computing the real and imaginary part of the roots of unity as algebraic numbers. Let's denote w(m) = exp(2*pi*I*m/n). Then, w(m) itself is a complex root of En(x) = x^n-1.
You need to find a defining polynomial of Re(w(m)). Resultants are a tool to find such a polynomial: 2*Re(w(m)) is a root of Res(En(x-y), En(y); y).
For an explanation why this is the case: Note that 2*Re(w(m)) = w(m) + conj(w(m)), and that the complex roots of En come in conjugate pairs; hence, also conj(w(m)) is a root of En. Now loosely speaking, the En(y) part "constrains" y to be any (complex) root of En, and combining this with the first argument allows x to take any complex value such that x-y is a root of En as well. Hence, a possible assignment is y = conj(w(m)) and x-y = w(m), hence x = w(m)+conj(w(m)) = 2*Re(w(m)).
CGAL can compute resultants of multivariate polynomials, so you can compute this resultant, and you simply have to pick the correct real root. (The largest one will obviously be w(0) = 1, the smallest one is 2*Re(w(floor(n/2))).)
Unfortunately, the resultant has a high complexity (degree n^2), and resultant computation will not be the fastest operation you've ever seen. Also, you'll pay for dense polynomials although your instances are very sparse and structured. YMMV; I have no clue about your use case, or whether you need higher degrees.
However, I did a few tests in a computer algebra system, and I found that the resultant splits into factors of more reasonable size, and that all its real roots actually belong to a much simpler polynomial of degree floor(n/2)+1 only. (No proof, just an observation.)
I don't know of a direct formula to write down this factor, and I don't want to speculate about it. But maybe some people at mathoverflow or math.stackexchange can help?
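For reference, a small SymPy sketch (a quick way to reproduce this observation outside CGAL) that computes the resultant for a small n and prints its irreducible factors:
from sympy import symbols, resultant, factor_list

x, y = symbols('x y')
n = 5  # small example degree

def En(t):
    return t**n - 1  # E_n(t) = t^n - 1

R = resultant(En(x - y), En(y), y)  # a degree n^2 polynomial in x

# The resultant splits into much smaller irreducible factors; the real roots
# 2*cos(2*pi*m/n) all sit in factors of low degree.
const, factors = factor_list(R)
for fac, mult in factors:
    print(fac, '**', mult)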
EDIT: Here is a guess for at least a recursive formula.
I write s(n,x) for the significant factor of the resultant polynomial containing all real roots but 0. This means that s(n,x) has all values 2*Re(w(m)) for m != n/4, 3*n/4 as roots.
s(0,x) = 0
s(1,x) = x - 2
s(2,x) = x^2 - 4
s(3,x) = x^2 - x - 2
s(4,x) = x^2 - 4
s(5,x) = x^3 - x^2 - 3*x + 2
s(6,x) = x^4 - 5*x^2 + 4
s(7,x) = x^4 - x^3 - 4*x^2 + 3*x + 2
s(8,x) = x^4 - 6*x^2 + 8
s(n,x) = (x^2-2)*s(n-4,x) - s(n-8,x)
Waiting for a proof...
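A quick SymPy sanity check of the table and the conjectured recursion (checking the n = 8 case symbolically and the roots of s(7,x) numerically) could look like this:
from sympy import symbols, expand, cos, pi

x = symbols('x')

# s(0,x) .. s(8,x) as listed above
s = [0, x - 2, x**2 - 4, x**2 - x - 2, x**2 - 4,
     x**3 - x**2 - 3*x + 2, x**4 - 5*x**2 + 4,
     x**4 - x**3 - 4*x**2 + 3*x + 2, x**4 - 6*x**2 + 8]

# conjectured recursion s(n,x) = (x^2-2)*s(n-4,x) - s(n-8,x), for n = 8
print(expand((x**2 - 2)*s[4] - s[0]) == expand(s[8]))  # True

# the roots of s(7,x) should be the values 2*cos(2*pi*m/7)
print([s[7].subs(x, 2*cos(2*pi*m/7)).evalf(chop=True) for m in range(7)])  # all zero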

Lagrange approximation - C++

I updated the code.
What I am trying to do is to hold every Lagrange coefficient's values in the pointer d (for example, for L1(x), d[0] would be (x-x2)/(x1-x2), d[1] would be (x-x2)/(x1-x2)*(x-x3)/(x1-x3), etc.).
My problems are:
1) how to initialize d (I did d[0]=(z-x[i])/(x[k]-x[i]) but I don't think that "d[0]" is right), and
2) how to initialize L_coeff (I am using L_coeff=new double[0] but I am not sure if that's right).
The exercise is:
Find Lagrange's polynomial approximation for y(x) = cos(π x), x ∈ [-1,1], using 5 points
(x = -1, -0.5, 0, 0.5, and 1).
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cmath>
using namespace std;

const double pi=3.14159265358979323846264338327950288;

// my function
double f(double x){
    return (cos(pi*x));
}

//function to compute lagrange polynomial
double lagrange_polynomial(int N,double *x){
    //N = degree of polynomial
    double z,y;
    double *L_coeff=new double [0];//L_coefficients of every Lagrange L_coefficient
    double *d;//hold the polynomials values for every Lagrange coefficient
    int k,i;
    //computations for finding lagrange polynomial
    //double sum=0;
    for (k=0;k<N+1;k++){
        for ( i=0;i<N+1;i++){
            if (i==0) continue;
            d[0]=(z-x[i])/(x[k]-x[i]);//initialization
            if (i==k) L_coeff[k]=1.0;
            else if (i!=k){
                L_coeff[k]*=d[i];
            }
        }
        cout <<"\nL("<<k<<") = "<<d[i]<<"\t\t\tf(x)= "<<f(x[k])<<endl;
    }
}

int main()
{
    double deg,result;
    double *x;
    cout <<"Give the degree of the polynomial :"<<endl;
    cin >>deg;
    for (int i=0;i<deg+1;i++){
        cout <<"\nGive the points of interpolation : "<<endl;
        cin >> x[i];
    }
    cout <<"\nThe Lagrange L_coefficients are: "<<endl;
    result=lagrange_polynomial(deg,x);
    return 0;
}
int main()
{
double deg,result;
double *x;
cout <<"Give the degree of the polynomial :"<<endl;
cin >>deg;
for (int i=0;i<deg+1;i++){
cout <<"\nGive the points of interpolation : "<<endl;
cin >> x[i];
}
cout <<"\nThe Lagrange L_coefficients are: "<<endl;
result=lagrange_polynomial(deg,x);
return 0;
}
Here is an example of a Lagrange polynomial.
As this seems to be homework, I am not going to give you an exhaustive answer, but rather try to send you on the right track.
How do you represent polynomials in computer software? The intuitive form you want to achieve, a symbolic expression like 3x^3+5x^2-4, is very impractical for further computations.
A polynomial is fully defined by saving (and outputting) its coefficients.
What you are doing above is hoping that C++ does some algebraic manipulation for you and simplifies your product with a symbolic variable. That is nothing C++ can do without quite a lot of effort.
You have two options:
Either use a proper computer algebra system that can do symbolic manipulations (Maple or Mathematica are some examples)
If you are bound to C++, you have to think a bit more about how the single coefficients of the polynomial can be computed. Your program's output can only be a list of numbers (which you could, of course, format as a nice-looking string according to a symbolic expression).
Hope this gives you some ideas how to start.
Edit 1
You still have an undefined expression in your code, as you never set any value to y. This leaves prod*=(y-x[i])/(x[k]-x[i]) as an expression that will not return meaningful data. C++ can only work with numbers, and y is not a number for you right now; you think of it as a symbol.
You could evaluate the Lagrange approximation at, say, the value 1 by setting y=1 in your code. This would give you the (as far as I can see right now) correct function value, but no description of the function itself.
Maybe you should take a pen and a piece of paper first and try to write down the expression as precise Math. Try to get a real grip on what you want to compute. If you did that, maybe you come back here and tell us your thoughts. This should help you to understand what is going on in there.
And always remember: C++ needs numbers, not symbols. Whenever an expression on your piece of paper contains a symbol whose value you do not know, you either have to find a way to compute that value from the known values, or you have to eliminate the need to compute with that symbol.
P.S.: It is not considered good style to post identical questions in multiple discussion boards at once...
Edit 2
Now you evaluate the function at point y=0.3. This is the way to go if you want to evaluate the polynomial. However, as you stated, you want all coefficients of the polynomial.
Again, I still feel you did not understand the math behind the problem. Maybe I will give you a small example. I am going to use the notation as it is used in the wikipedia article.
Suppose we had k=2 and x = -1, 1. Furthermore, let me just call your cos function f, for simplicity. (The notation will get rather ugly without LaTeX...) Then the Lagrangian polynomial is defined as
f(x_0) * l_0(x) + f(x_1)*l_1(x)
where (by doing the simplifications again symbolically)
l_0(x)= (x - x_1)/(x_0 - x_1) = -1/2 * (x-1) = -1/2 *x + 1/2
l_1(x)= (x - x_0)/(x_1 - x_0) = 1/2 * (x+1) = 1/2 * x + 1/2
So, your Lagrangian polynomial is
f(x_0) * (-1/2 * x + 1/2) + f(x_1) * (1/2 * x + 1/2)
= 1/2 * (f(x_1) - f(x_0)) * x + 1/2 * (f(x_0) + f(x_1))
So, the coefficients you want to compute would be 1/2 * (f(x_1) - f(x_0)) and 1/2 * (f(x_0) + f(x_1)).
Your task is now to find an algorithm that does the simplification I did, but without using symbols. If you know how to compute the coefficients of the l_j, you are basically done, as you can then just add them up, multiplied with the corresponding values of f.
So, broken down even further, you have to find a way to multiply the quotients in the l_j with each other on a component-by-component basis. Figure out how this is done and you are nearly done.
Edit 3
Okay, let's get a little bit less vague.
We first want to compute the L_i(x). Those are just products of linear functions. As said before, we have to represent each polynomial as an array of coefficients. For good style, I will use std::vector instead of this array. Then, we could define the data structure holding the coefficients of L_1(x) like this:
std::vector<double> L1(5);
// Let's assume our polynomial would then have the form
// L1[0] + L1[1]*x^1 + L1[2]*x^2 + L1[3]*x^3 + L1[4]*x^4
Now we want to fill this polynomial with values.
// First we have to start with the polynomial 1 (which is of degree 0)
// Therefore set L1 accordingly:
L1[0] = 1;
L1[1] = 0; L1[2] = 0; L1[3] = 0; L1[4] = 0;
// Of course you could do this more elegantly (using std::vector's constructor, for example)
for (int i = 0; i < N+1; ++i) {
    if (i==0) continue; // For i=0, there will be no polynomial multiplication
    // Otherwise, we have to multiply L1 with the polynomial
    // (x - x[i]) / (x[0] - x[i])
    // First, note that (x[0] - x[i]) is just a scalar; we will save it:
    double c = (x[0] - x[i]);
    // Now we multiply L1 with (x - x[i]). How does this multiplication change our
    // coefficients? Easy enough: the new coefficient of x^1, for example, is just
    // L1[0] - L1[1] * x[i]. Other coefficients are done similarly. Furthermore, we have
    // to divide by c, which leaves the coefficient as
    // (L1[0] - L1[1] * x[i])/c. Let's apply this to the vector:
    L1[4] = (L1[3] - L1[4] * x[i])/c;
    L1[3] = (L1[2] - L1[3] * x[i])/c;
    L1[2] = (L1[1] - L1[2] * x[i])/c;
    L1[1] = (L1[0] - L1[1] * x[i])/c;
    L1[0] = (      - L1[0] * x[i])/c;
    // There we are, polynomial updated.
}
This, of course, has to be done for all L_i. Afterwards, each L_i has to be multiplied with the corresponding function value and the results added up. That is for you to figure out. (Note that I did quite a lot of inefficient stuff up there, but I hope this helps you understand the details better.)
Hopefully this gives you some idea how you could proceed.
The variable y is actually not a variable in your code but represents the variable of your Lagrange approximation P(y).
Thus, you have to understand the calculations prod*=(y-x[i])/(x[k]-x[i]) and sum+=prod*f not directly but symbolically.
You may get around this by defining your approximation by a series
c[0] * y^0 + c[1] * y^1 + ...
represented by an array c[] within the code. Then you can e.g. implement multiplication
d = c * (y-x[i])/(x[k]-x[i])
coefficient-wise (with j indexing the coefficients), like
d[j] = -c[j]*x[i]/(x[k]-x[i]) + c[j-1]/(x[k]-x[i])
In the same way, you have to implement addition and assignment on a component basis.
The result will then always be the coefficients of your series representation in the variable y.
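To make the coefficient-wise bookkeeping concrete, here is a small sketch of the scheme just described, written in Python for brevity rather than C++ (lagrange_coeffs is a made-up name); translating it to C++ arrays or std::vector is straightforward:
import math

def lagrange_coeffs(xs, f):
    # Returns the coefficients c[0..n-1] of the interpolating polynomial
    # c[0] + c[1]*y + ... + c[n-1]*y^(n-1) through the points (xs[i], f(xs[i])).
    n = len(xs)
    coeffs = [0.0] * n
    for k in range(n):
        # Build l_k(y) = prod_{i != k} (y - xs[i]) / (xs[k] - xs[i]) coefficient-wise.
        lk = [1.0] + [0.0] * (n - 1)  # start with the constant polynomial 1
        for i in range(n):
            if i == k:
                continue
            c = xs[k] - xs[i]
            # Multiply lk by (y - xs[i]) and divide by c: new[j] = (old[j-1] - old[j]*xs[i]) / c
            for j in range(n - 1, 0, -1):
                lk[j] = (lk[j - 1] - lk[j] * xs[i]) / c
            lk[0] = -lk[0] * xs[i] / c
        # Accumulate f(xs[k]) * l_k(y).
        coeffs = [a + f(xs[k]) * b for a, b in zip(coeffs, lk)]
    return coeffs

# The exercise: interpolate cos(pi*x) at x = -1, -0.5, 0, 0.5, 1.
# Prints approximately [1, 0, -4.67, 0, 2.67], i.e. 1 - (14/3)*y**2 + (8/3)*y**4.
print(lagrange_coeffs([-1.0, -0.5, 0.0, 0.5, 1.0], lambda t: math.cos(math.pi * t)))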
Just a few comments in addition to the existing responses.
The exercise is: Find Lagrange's polynomial approximation for y(x)=cos(π x), x ∈ [-1,1] using 5 points (x = -1, -0.5, 0, 0.5, and 1).
The first thing that your main() does is to ask for the degree of the polynomial. You should not be doing that. The degree of the polynomial is fully specified by the number of control points. In this case you should be constructing the unique fourth-order Lagrange polynomial that passes through the five points (xi, cos(π xi)), where the xi values are those five specified points.
const double pi=3.1415;
This value is not good for a float, let alone a double. You should be using something like const double pi=3.14159265358979323846264338327950288;
Or better yet, don't use pi at all. You should know exactly what the y values are that correspond to the given x values. What are cos(-π), cos(-π/2), cos(0), cos(π/2), and cos(π)?