Partial derivatives and substitutions with SymPy - sympy

I have the following code, where I define a composite function f(r(x,y), phi(x,y)), and I'm trying to simplify f.diff(x) assuming that I know an expression for f.diff(r):
import sympy as sp

x, y = sp.symbols("x y", real=True)
r = sp.Function('r')(x, y)
phi = sp.Function('phi')(x, y)
f = sp.Function('f')(r, phi)
df_dr = sp.Function('df_dr')(r, phi)
df_phi = sp.Function('df_phi')(r, phi)
Next, I compute the derivatives of f(r,phi) with respect to x:
f.diff(x)
And I get the following:
Derivative(phi(x, y), x)*Subs(Derivative(f(r(x, y), _xi_2), _xi_2), _xi_2, phi(x, y)) + Derivative(r(x, y), x)*Subs(Derivative(f(_xi_1, phi(x, y)), _xi_1), _xi_1, r(x, y))
My question is: how can I replace Subs(Derivative(f(_xi_1, phi(x, y)), _xi_1), _xi_1, r(x, y)) with df_dr?
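No answer is recorded here, but one approach that should work is to substitute `f.diff(r)` and `f.diff(phi)` directly, since `f.diff(r)` produces exactly the `Subs(...)` object that appears in `f.diff(x)` (a sketch; `df_dphi` mirrors the `df_phi` symbol above):

```python
import sympy as sp

# Same setup as in the question
x, y = sp.symbols("x y", real=True)
r = sp.Function('r')(x, y)
phi = sp.Function('phi')(x, y)
f = sp.Function('f')(r, phi)
df_dr = sp.Function('df_dr')(r, phi)
df_dphi = sp.Function('df_dphi')(r, phi)

# f.diff(r) is the same Subs(Derivative(f(_xi_1, ...), _xi_1), ...) object
# that shows up in the chain rule, so a plain .subs() replaces it.
expr = f.diff(x).subs({f.diff(r): df_dr, f.diff(phi): df_dphi})
```

After the substitution, `expr` should be `Derivative(r(x, y), x)*df_dr(...) + Derivative(phi(x, y), x)*df_dphi(...)`, with no `Subs` objects left.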

Related

get coefficient of a monomial in a sympy expression

I have a sympy expression like so:
exp_str = '3 * x**2*y + 4*a**2 * x*y + 9*b * x'
my_expr = sp.parsing.sympy_parser.parse_expr(exp_str)
and I want to get the coefficient of x*y, which should be 4*a**2.
Is there a function that I can pass my_expr to along with a list of variables I want my polynomial to be over? For example, I would need to pass this function x and y so that it knows x and y are variables and that a and b are coefficients.
If there is no such function, any recommendations on how to write code to do this would be appreciated. Thanks!
There is a coeff method of sympy expressions:
In [28]: x, y, a, b = symbols('x, y, a, b')
In [29]: expr = 3 * x**2*y + 4*a**2 * x*y + 9*b * x
In [30]: expr.coeff(x*y)
Out[30]: 4⋅a²
https://docs.sympy.org/latest/modules/core.html?highlight=coeff#sympy.core.expr.Expr.coeff
You might find it useful though to work with expressions as structured polynomials e.g.:
In [31]: p = Poly(expr, [x, y])
In [32]: p
Out[32]: Poly(3*x**2*y + 4*a**2*x*y + 9*b*x, x, y, domain='ZZ[a,b]')
In [33]: p.coeff_monomial(x**2 * y)
Out[33]: 3
In [34]: p.coeff_monomial(x * y)
Out[34]: 4⋅a²
https://docs.sympy.org/latest/modules/polys/basics.html
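The same transcript as a plain script, so it can be run outside IPython:

```python
import sympy as sp

x, y, a, b = sp.symbols('x, y, a, b')
expr = 3*x**2*y + 4*a**2*x*y + 9*b*x

# coeff works directly on the expression
c = expr.coeff(x*y)            # 4*a**2

# Poly makes the generators (x, y) explicit; a and b land in the domain
p = sp.Poly(expr, x, y)
c2 = p.coeff_monomial(x*y)     # 4*a**2
c3 = p.coeff_monomial(x**2*y)  # 3
```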

Bicubic interpolation of a surface

I'm trying to interpolate a surface with a cubic spline. I've dug through the entire Internet, but apart from the standard interpolation on the unit square [0,1] x [0,1] I found nothing; I need to interpolate a value inside an arbitrary region (given 16 points, or 4 points together with all their derivatives, etc.). Here is an example:
We take points that satisfy the equation of an elliptic paraboloid, with x = {100..102}, y = {100..102}:
x = 100, y = 100, f (x, y) = 1805.56
x = 100, y = 101, f (x, y) = 1816.72
x = 100, y = 102, f (x, y) = 1828
x = 101, y = 100, f (x, y) = 1830.68
x = 101, y = 101, f (x, y) = 1841.85
x = 101, y = 102, f (x, y) = 1853.12
x = 102, y = 100, f (x, y) = 1856.06
x = 102, y = 101, f (x, y) = 1867.22
x = 102, y = 102, f (x, y) = 1878.5
How can I calculate the value at the point {101.5, 101.5}?
I read the article on Wikipedia, but it says nothing about how this works at arbitrary points outside the square [0,1] x [0,1].
A good implementation that works on a single unit cell (a Catmull-Rom-style cubic) is this one:
double cubicInterpolate (double p[4], double x) {
    return p[1] + 0.5 * x*(p[2] - p[0] + x*(2.0*p[0] - 5.0*p[1] + 4.0*p[2] -
        p[3] + x*(3.0*(p[1] - p[2]) + p[3] - p[0])));
}

double bicubicInterpolate (double p[4][4], double x, double y) {
    double arr[4];
    arr[0] = cubicInterpolate(p[0], y);
    arr[1] = cubicInterpolate(p[1], y);
    arr[2] = cubicInterpolate(p[2], y);
    arr[3] = cubicInterpolate(p[3], y);
    return cubicInterpolate(arr, x);
}
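The routines above only evaluate inside one cell, with x, y in [0, 1] and the four samples per axis taken at -1, 0, 1, 2. To use them at an arbitrary point, pick the 4×4 block of grid samples surrounding the point and pass the fractional position within the central cell. Here is a Python sketch of that mapping (function names are mine; it assumes a uniform grid with spacing h and needs one extra ring of samples around the target cell):

```python
import math

def cubic_interpolate(p, t):
    # Same Catmull-Rom cubic as the C version: p holds samples at
    # t = -1, 0, 1, 2; evaluates at t in [0, 1].
    return p[1] + 0.5*t*(p[2] - p[0] + t*(2*p[0] - 5*p[1] + 4*p[2] - p[3]
                         + t*(3*(p[1] - p[2]) + p[3] - p[0])))

def bicubic_interpolate(p, tx, ty):
    # p is a 4x4 block of samples; interpolate each row in y, then in x.
    return cubic_interpolate([cubic_interpolate(row, ty) for row in p], tx)

def bicubic_at(grid, x0, y0, h, x, y):
    # grid[i][j] = f(x0 + i*h, y0 + j*h); (x, y) is an arbitrary point
    # whose surrounding 4x4 block lies inside the grid.
    i = math.floor((x - x0) / h)
    j = math.floor((y - y0) / h)
    tx = (x - x0) / h - i
    ty = (y - y0) / h - j
    block = [[grid[i + di][j + dj] for dj in range(-1, 3)]
             for di in range(-1, 3)]
    return bicubic_interpolate(block, tx, ty)
```

Since the central-difference cubic reproduces quadratics exactly, this returns the exact elliptic-paraboloid value at {101.5, 101.5} — provided one extra row and column of samples around the 3×3 block in the question are available.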

Integrate Legendre polynomials in SymPy and use these integrals as coefficients

I am trying a simple example in SymPy: compute some coefficients, use them in a sum of Legendre polynomials, and finally plot the result. It should be very simple, but I cannot make it work. I want to use it for my electromagnetism course. I get errors in both of the attempts below:
%matplotlib inline
from sympy import *
x, y, z = symbols('x y z')
k, m, n = symbols('k m n', integer=True)
f, step, potential = symbols('f step potential', cls=Function)
var('n x')
A=SeqFormula(2*(2*m+1)*Integral(legendre(2*m+1,x),(x,0,1)).doit(),(m,0,oo)).doit()
Sum(A.coeff(m).doit()*legendre(2*m+1,x),(m,0,10)).doit()
B=Tuple.fromiter(2*(2*n+1)*Integral(legendre(2*n+1,x),(x,0,1)).doit() for n in range(50))
Sum(B[m]*legendre(2*m+1,x),(m,0,10)).doit()
Here is part of a Mathematica script showing what I would like to replicate:
Nn = 50;
Array[A, Nn]
For[i = 0, i <= Nn, i++, A[i + 1] = Integrate[LegendreP[2*i + 1, x]*(2*(2*i + 1) + 1), {x, 0, 1}]];
Step = Sum[A[n + 1]*LegendreP[2*n + 1, #], {n, 0, Nn}] & Plot[Step[x], {x, -1, 1}]
I think the structure you were searching for with A is Python's lambda.
A = lambda m: 2*(2*m+1)*Integral(legendre(2*m+1, x), (x, 0, 1))
f = Sum(A(m)*legendre(2*m+1, x), (m, 0, 10)).doit()
plot(f, (x, -1, 1))
The key point is that m has to be explicit in order for integration to happen; SymPy does not know a general formula for integrating legendre(n, x). So, the integration here is attempted only when A is called with a concrete value of m, like A(0), A(1), etc.
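Putting the answer together as a self-contained script:

```python
from sympy import symbols, legendre, Integral, Sum

x, m = symbols('x m')

# Coefficient as a function of the index m, as in the answer; the
# Integral stays unevaluated until m takes a concrete value.
A = lambda m: 2*(2*m + 1)*Integral(legendre(2*m + 1, x), (x, 0, 1))

# doit() expands the sum over m = 0..10 and evaluates each integral,
# leaving a polynomial of degree 21 in x.
f = Sum(A(m)*legendre(2*m + 1, x), (m, 0, 10)).doit()
```

After this, `plot(f, (x, -1, 1))` draws the truncated series as in the answer.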

Can auto differentiation handle separate functions of array slices?

Given a vector v of length, say, 30, can auto-differentiation tools such as Theano or TensorFlow take the gradient of something like this:
x = np.random.rand(5, 1)
v = f(x, z)
w = v[0:25].reshape(5, 5)
y = g(np.matmul(w, x) + v[25:30])
minimize ( || y - x || )
Would this even make sense? The way I picture it, I would have to multiply by identity vectors/matrices with trailing 0's to convert v --> w.
Slice and reshape operations fit into the standard reverse-mode AD framework in the same way as any other op. Below is a simple TensorFlow program similar to your example (I had to change a couple of things to make the dimensions match), along with the resulting computation graph for the gradient:
def f(x, z):
    """Adds values together, reshapes into a vector."""
    return tf.reshape(x + z, (5,))

x = tf.Variable(np.random.rand(5, 1))
z = tf.Variable(np.random.rand(5, 1))
v = f(x, z)
w = tf.slice(v, [0], [5])   # begin and size are lists for tf.slice
w = tf.reshape(w, (5, 1))
y = tf.matmul(w, tf.transpose(x)) + tf.slice(v, [0], [5])
cost = tf.square(tf.reduce_sum(y - x))
print(tf.gradients(cost, [x, z]))
Let us take a look at the source code:
@ops.RegisterGradient("Reshape")
def _ReshapeGrad(op, grad):
    return [array_ops.reshape(grad, array_ops.shape(op.inputs[0])), None]
This is how TensorFlow automatically differentiates a reshape.
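To make the mechanics concrete, here is a small NumPy sketch (not TensorFlow; the function names are mine) of the reverse-mode rules for reshape and slice. Reshape just reshapes the incoming gradient back, exactly as `_ReshapeGrad` does; a slice scatters its gradient into a zero array of the input's shape, which is the "identity with trailing 0's" picture from the question:

```python
import numpy as np

def reshape_vjp(grad_out, in_shape):
    # Gradient of y = x.reshape(out_shape): reshape the incoming
    # gradient back to the input's shape.
    return grad_out.reshape(in_shape)

def slice_vjp(grad_out, in_shape, start):
    # Gradient of y = x[start:start+k] for 1-D x: place grad_out in a
    # zero array at the sliced positions; everything else gets 0.
    g = np.zeros(in_shape)
    g[start:start + grad_out.shape[0]] = grad_out
    return g

# Example: v has 30 entries, loss = sum(v[0:25]**2)
v = np.arange(30.0)
w = v[0:25]
grad_w = 2 * w                        # d(loss)/dw
grad_v = slice_vjp(grad_w, (30,), 0)  # d(loss)/dv: zeros past index 24
```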

OCaml weird recursion

I am trying to calculate the square root of a number in OCaml. This is my code:
let isclose x y = abs_float(x -. y)<0.001;;
let average x y = (0.5*.x)+.(0.5*.y);;
let rec guess x y = if isclose y (x /. y) then y else guess x (average y x/.y);;
let sqr x = guess x 1.;;
then, typing
sqr 1.;;
gives 1 as expected, but typing
sqr 2.;;
runs indefinitely.
Can anyone help with my error? (I tested the algorithm in Python and it works as expected.)
Thanks for the help!
You want this:
let rec guess x y =
if isclose y (x /. y) then y else guess x (average y (x/.y))
Note the extra parentheses.
The meaning of
average y x /. y
is
(average y x) /. y
Whereas you want:
average y (x /. y)