Integrate Legendre polynomials in SymPy and use these integrals as coefficients

I am trying to make a simple example in SymPy to compute some coefficients and then use them in a sum of Legendre polynomials, and finally plot it. It is very simple, but I cannot make it work. I want to use it for my electromagnetism course. I get errors in both of the attempts below:
%matplotlib inline
from sympy import *
x, y, z = symbols('x y z')
k, m, n = symbols('k m n', integer=True)
f, step, potential = symbols('f step potential', cls=Function)
var('n x')
A=SeqFormula(2*(2*m+1)*Integral(legendre(2*m+1,x),(x,0,1)).doit(),(m,0,oo)).doit()
Sum(A.coeff(m).doit()*legendre(2*m+1,x),(m,0,10)).doit()
B=Tuple.fromiter(2*(2*n+1)*Integral(legendre(2*n+1,x),(x,0,1)).doit() for n in range(50))
Sum(B[m]*legendre(2*m+1,x),(m,0,10)).doit()
Here is part of a Mathematica script showing what I would like to replicate:
Nn = 50;
Array[A, Nn]
For[i = 0, i <= Nn, i++, A[i + 1] = Integrate[LegendreP[2*i + 1, x]*(2*(2*i + 1) + 1), {x, 0, 1}]];
Step = Sum[A[n + 1]*LegendreP[2*n + 1, #], {n, 0, Nn}] &
Plot[Step[x], {x, -1, 1}]

I think the structure you were searching for with A is Python's lambda.
A = lambda m: 2*(2*m+1)*Integral(legendre(2*m+1, x), (x, 0, 1))
f = Sum(A(m)*legendre(2*m+1, x), (m, 0, 10)).doit()
plot(f, (x, -1, 1))
The key point is that m has to be explicit in order for integration to happen; SymPy does not know a general formula for integrating legendre(n, x). So, the integration here is attempted only when A is called with a concrete value of m, like A(0), A(1), etc.
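As a small check (an illustration, using the lambda A defined above):
A(0)          # 2*Integral(legendre(1, x), (x, 0, 1)), still unevaluated
A(0).doit()   # legendre(1, x) == x, so this evaluates to 2*(1/2) = 1
A(m).doit()   # stays an unevaluated Integral because m is symbolic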

Related

Getting sympy to simplify infinite sums containing piecewise functions

I am running the following in my Jupyter notebook.
from sympy import *
B = IndexedBase('B')
x, L = symbols('x L', real=True)
n = Symbol('n', integer=True)
n = Idx(n, (0, oo))
Bn = Indexed('B', n)
m = Symbol('m', integer=True)
Sum(Piecewise((0, Ne(m, n)), (L, True))*B[n], (n, 0, oo)).doit()
The last expression should evaluate to $L B_m$. I have tried using the simplify, doit, limit and evalf methods on it without success. I also found similar issues on GitHub but couldn't tailor those to my specific problem.
I also tried fiddling around with the underlying assumptions for integer m but couldn't find anything suitable.
Is there any direct or indirect way to get sympy to simplify infinite sums containing piecewise functions?
Just as an aside, the code below works:
p = Piecewise((1, n<5), (0, True))
Sum(p, (n, 1, oo)).evalf()
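One workaround you could try (a sketch, not from the original thread): write the selector as a KroneckerDelta instead of a Piecewise, since Sum has dedicated handling for deltas:
from sympy import IndexedBase, KroneckerDelta, Sum, Symbol, oo
B = IndexedBase('B')
L = Symbol('L', real=True)
m = Symbol('m', integer=True)
n = Symbol('n', integer=True)
# The delta collapses the sum to the n == m term; the result may still be
# wrapped in a Piecewise carrying the condition that m lies in the summation range.
Sum(L*KroneckerDelta(m, n)*B[n], (n, 0, oo)).doit()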

Check calculation with summation

How can I use SymPy to check calculations step by step, for instance that the two formulas below are equal?
from sympy import IndexedBase, Sum, symbols
i, n = symbols('i n', integer=True)
a = IndexedBase('a')
Sum(a[i], (i, 0, n - 1)) + a[n]
Sum(a[i], (i, 0, n))
The simplify function is used to check whether two expressions are equal.
from sympy import IndexedBase, Sum, symbols, simplify
i, n = symbols('i n', integer=True)
a = IndexedBase('a')
# To check the calculation, pick a concrete upper limit
n = 5
x = Sum(a[i], (i, 0, n - 1)).doit() + a[n]
y = Sum(a[i], (i, 0, n)).doit()
print(x, y)
# To check that the two expressions are equal
if simplify(x - y) == 0:
    print("The two functions are equal")

Can auto differentiation handle separate functions of array slices?

Given a vector v of length, say, 30, can auto-differentiation tools such as Theano or TensorFlow take the gradient of something like this:
x = np.random.rand(5, 1)
v = f(x, z)
w = v[0:25].reshape(5, 5)
y = g(np.matmul(w, x) + v[25:30])
minimize ( || y - x || )
Would this even make sense? The way I picture it in my mind, I would have to do some multiplications by identity vectors/matrices with trailing 0's to convert v --> w.
Slice and reshape operations fit into the standard reverse-mode AD framework in the same way as any other op. Below is a simple TensorFlow program similar to the example you gave (I had to change a couple of things to make the dimensions match), together with the gradient computation:
import numpy as np
import tensorflow as tf

def f(x, z):
    """Adds values together, reshapes into vector"""
    return tf.reshape(x + z, (5,))

x = tf.Variable(np.random.rand(5, 1))
z = tf.Variable(np.random.rand(5, 1))
v = f(x, z)
w = tf.slice(v, [0], [5])        # first 5 entries of v
w = tf.reshape(w, (5, 1))        # ...as a column vector
y = tf.matmul(w, tf.transpose(x)) + tf.slice(v, [0], [5])
cost = tf.square(tf.reduce_sum(y - x))
print(tf.gradients(cost, [x, z]))  # TF 1.x graph-mode API
Let us take a look at the source code:
@ops.RegisterGradient("Reshape")
def _ReshapeGrad(op, grad):
    return [array_ops.reshape(grad, array_ops.shape(op.inputs[0])), None]
This is how TensorFlow automatically differentiates reshape; the slice op has an analogous registered gradient.
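The same thing works in eager mode; here is a minimal sketch with TensorFlow 2's GradientTape that mirrors the shapes in the question (the variable names are illustrative, not from the answer above):
import numpy as np
import tensorflow as tf

x = tf.Variable(np.random.rand(5, 1))
v = tf.Variable(np.random.rand(30))

with tf.GradientTape() as tape:
    w = tf.reshape(v[0:25], (5, 5))                      # first 25 entries as a 5x5 matrix
    y = tf.matmul(w, x) + tf.reshape(v[25:30], (5, 1))   # last 5 entries as a bias
    cost = tf.reduce_sum(tf.square(y - x))

print(tape.gradient(cost, [v, x]))  # gradients flow back through the slice and reshape ops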

Extracting all square submatrices from matrix using Numpy

Say I have an NxN NumPy matrix. I am looking for the fastest way to extract all square chunks (sub-matrices) from it, meaning all CxC parts of the original matrix for 0 < C < N+1. The sub-matrices should correspond to contiguous row/column indexes of the original matrix. I want to achieve this in as little time as possible.
You could use NumPy slicing:
import numpy as np
n = 20
x = np.random.rand(n, n)
slice_list = [slice(k, l) for k in range(0, n) for l in range(k, n)]
results = [x[sl,sl] for sl in slice_list]
Avoiding loops in NumPy is not a goal in itself. As long as you are mindful about it, there shouldn't be much overhead.
Tricky enough, but here is an example of extracting all MxM submatrices from an NxN matrix.
import numpy as NP
import numpy.random as RNG

N, M = 10, 3                          # example sizes: NxN matrix, MxM windows
P = N - M + 1                         # number of window positions along each axis
x = NP.arange(P).repeat(M)            # window start index, repeated M times
y = NP.tile(NP.arange(M), P) + x      # row/column indices covered by each window
a = RNG.randn(N, N)
b = a[NP.newaxis].repeat(P, axis=0)
c = b[x, y]
d = c.reshape(P, M, N)                # all M-row horizontal strips of a
e = d[:, NP.newaxis].repeat(P, axis=1)
f = e[:, x, :, y]
g = f.reshape(P, M, P, M)
h = g.transpose(2, 0, 3, 1)           # h[i, j] is the MxM block starting at (i, j)
for i in range(0, P):
    for j in range(0, P):
        assert NP.equal(h[i, j], a[i:i+M, j:j+M]).all()
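On NumPy 1.20 or newer there is also sliding_window_view, which returns all MxM windows directly as a (read-only) view; a minimal sketch:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

N, M = 10, 3
a = np.random.randn(N, N)
h = sliding_window_view(a, (M, M))   # shape (N-M+1, N-M+1, M, M)
assert np.array_equal(h[2, 4], a[2:2+M, 4:4+M])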

CGAL Quadratic Programming Solver, how to enter "x^4" in objective function? and in the constraints?

I am trying to minimize a function like the following:
a*x^4+b*y
and constraints like:
x^2 <= a
in CGAL::Quadratic_program.
To input "x^2" in the objective function I can do the following:
qp.set_d(X, X, 2);
but what about "x^4" ?
To add a constraint like "x<=a":
hp.set_a(X, 0, 1);
hp.set_b(0, a);
but what about "x^2 <= a" ?
The way to solve this kind of problem is to rewrite the objective function and the constraints, in this case by substituting a fresh variable for the square: wherever z^2 appears it is replaced by a new variable (written as z^2 = z in the comments below), so z^4 becomes the square of the new variable and z^2 enters the constraints linearly.
//>=
Program hp(CGAL::LARGER, false, 0, false, 0);
//x+y >= -4
hp.set_a(X, 0, 1); hp.set_a(Y, 0, 1);
hp.set_b(0, -4);
//4x+2y+z^2 >= -a*b
//z^2 = z
hp.set_a(X, 1, 4); hp.set_a(Y, 1, 2); hp.set_a(Z, 1, 1);
hp.set_b(1, -a * b);
//-x + y >= -1
hp.set_a(X, 2, -1); hp.set_a(Y, 2, 1);
hp.set_b(2, -1);
//x <= 0
hp.set_a(X,3,1);
hp.set_b(3,0);
hp.set_r(3,CGAL::SMALLER);
//y <= 0
hp.set_a(Y,4,1);
hp.set_b(4,0);
hp.set_r(4,CGAL::SMALLER);
//objective function
//min a*x^2 + b*y + z^4
//z^2 = z
//min a*x^2 + b*y + z^2
hp.set_d(X, X, 2 * a); //2D
hp.set_c(Y, b);
hp.set_d(Z, Z, 2); //2D
// solve the program
Solution s = CGAL::solve_quadratic_program(hp, ET());
assert(s.solves_quadratic_program(hp));
From the given link:
This package lets you solve convex quadratic programs of the general
form ...
Why did you decide you could use a quadratic solver for your power-of-four polynomial? "Quadratic" does not stand for "quadra" as in "four"; it comes from the Latin word for a square (quadratum) and means power-of-two.
In short: you cannot use this tool to solve your problem.
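For reference (paraphrasing the linked manual, so check the documentation for the exact statement), the general form alluded to above is: minimize $x^T D x + c^T x + c_0$ subject to $A x \sim b$, where each row relation is $\le$, $=$, or $\ge$, and $l \le x \le u$, with $D$ positive semidefinite. The objective can carry at most quadratic terms and every constraint is linear, which is why neither $x^4$ nor $x^2 \le a$ fits the model directly.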