Sympy diffgeom: Metric dependent on function

I'm having problems defining a metric using sympy's diffgeom package, where the metric depends on a function f(x,y).
I get the error ValueError: Can't calculate 1st derivative wrt x.
import sympy as sym
from sympy.diffgeom import Manifold, Patch, CoordSystem
m = Manifold("M",2)
patch = Patch("P",m)
cartesian = CoordSystem("cartesian", patch, ["x", "y"])
x, y = cartesian.coord_functions()
f = sym.Function('f')
xi = sym.Symbol('xi')
g = sym.Matrix([
    [xi + sym.diff(f, x)**2,        sym.diff(f, x)*sym.diff(f, y)],
    [sym.diff(f, x)*sym.diff(f, y), xi + sym.diff(f, y)**2]
])
I have a feeling it's because of how x and y are defined, but I haven't been able to figure it out.

Indeed, differentiation with respect to these x and y (objects of class sympy.diffgeom.diffgeom.BaseScalarField) is not supported. You can see this by checking the internal property _diff_wrt, which indicates whether an object can be used as a variable of differentiation.
>>> x._diff_wrt
False
Do those derivatives (with respect to a scalar field) make mathematical sense here? I'm not sure.
An additional issue is that SymPy does not differentiate functions, so
f = Function('f')
diff(f, x)
is always an error. SymPy can differentiate expressions, e.g., diff(f(x, y), x).
Aside: diff can be used as a method of an expression, which in your case would result in shorter code, like f(x, y).diff(x).
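For illustration, here is a minimal sketch of the same metric built from an expression f(x, y) using plain Symbols instead of the diffgeom coordinate functions (this sidesteps the BaseScalarField issue rather than solving it inside diffgeom):
import sympy as sym

x, y, xi = sym.symbols('x y xi')
f = sym.Function('f')

# Differentiate the expression f(x, y), not the bare Function f
fx = f(x, y).diff(x)
fy = f(x, y).diff(y)

g = sym.Matrix([
    [xi + fx**2, fx*fy],
    [fx*fy, xi + fy**2]
])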

Related

Problems when substituting a matrix in a polynomial

Example: let
M = Matrix([[1, 2], [3, 4]])
and
p = Poly(x**3 + x + 1)
Then
p.subs(x, M).expand()
gives the error:
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.One'>
which is very plausible, since the first two terms become matrices but the last term (the constant term) is not a matrix but a scalar. To remedy this situation I changed the polynomial to
p = Poly(x**3 + x + x**0)
but the same error persists. Am I obliged to type the expression by hand, replacing x by M? In this example the polynomial has only three terms, but in reality I encounter (multivariate) polynomials with dozens of terms.
So I think the question mainly revolves around the concept of a matrix polynomial
P(A) = a_n*A**n + ... + a_1*A + a_0*I
(where P is a polynomial and A is a matrix).
I think this is saying that the free term is a number and cannot be added to the rest, which is a matrix; effectively, the addition operation is undefined between those two types:
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.One'>
However, this can be circumvented by defining a function that evaluates the matrix polynomial for a specific matrix. The difference here is that we use matrix exponentiation, so the free term of the matrix polynomial is correctly computed as a_0 * I, where I = A**0 is the identity matrix of the required shape:
from sympy import *

x = symbols('x')
M = Matrix([[1, 2], [3, 4]])
p = Poly(x**3 + x + 1)

def eval_poly_matrix(P, A):
    res = zeros(*A.shape)
    # all_coeffs() returns the coefficients highest degree first, so reverse
    # to get a_0, a_1, ...; a_0*A**0 is a_0*I, so the free term is a matrix
    for i, a_i in enumerate(P.all_coeffs()[::-1]):
        res += a_i * (A**i)
    return res

eval_poly_matrix(p, M)
Output:
Matrix([[39, 56], [84, 123]])
In this example the polynomial has only three terms but in reality I encounter (multivariate polynomials with) dozens of terms.
The function eval_poly_matrix above can be extended to work for multivariate polynomials by using the .terms() method, which yields each monomial's exponents together with its nonzero coefficient, like so:
from sympy import *

x, y = symbols('x y')
M = Matrix([[1, 2], [3, 4]])
p = poly(x**3 * y + x * y**2 + y)

def eval_poly_matrix(P, *M):
    res = zeros(*M[0].shape)
    # each term is a tuple of exponents (one per variable) and its coefficient
    for monom, coeff in P.terms():
        term = eye(*M[0].shape)
        for i, e in enumerate(monom):
            term *= M[i]**e  # the i-th variable is replaced by the i-th matrix
        res += coeff * term
    return res

eval_poly_matrix(p, M, eye(M.rows))
Note: some sanity checks, edge-case handling and optimizations are possible:
- The number of variables present in the polynomial relates to the number of matrices passed as parameters (the former should never be greater than the latter; if it is lower, some logic is needed to handle that. I've only handled the case where the two are equal).
- All matrices need to be square, as per the definition of the matrix polynomial.
- A discussion about a multivariate version of Horner's rule features in the comments of this question. This might be useful to minimize the number of matrix multiplications.
- Handle the fact that in a matrix polynomial x*y is different from y*x, because matrix multiplication is non-commutative. Apparently the poly functions in sympy do not support non-commutative variables, but you can define symbols with commutative=False, and there seems to be a way forward there (see the short snippet below).
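For reference, a quick illustration of the commutative=False behaviour mentioned in that last point (just a sketch; the Poly limitation itself is as described above):
from sympy import symbols

# Noncommutative symbols keep the order of their products
x, y = symbols('x y', commutative=False)
print(x*y == y*x)  # False: x*y and y*x are distinct expressions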
About the 4th point above, there is support for Matrix expressions in SymPy, and that can help here:
from sympy import *
from sympy.matrices import MatrixSymbol
A = Matrix([[1,2],[3,4]])
B = Matrix([[2,3],[3,4]])
X = MatrixSymbol('X',2,2)
Y = MatrixSymbol('Y',2,2)
I = eye(X.rows)
p = X**2 * Y + Y * X ** 2 + X ** 3 - I
display(p)
p = p.subs({X: A, Y: B}).doit()
display(p)
Output:
Matrix([[139, 201], [258, 368]])
For more developments on this feature follow #18555

Plot a cube root function with SymPy, including negative arguments

I'm trying to plot a cube root function with SymPy. I know what this should look like, but I'm only seeing values for x >= 0, not for negative numbers. I've tried two approaches.
cbrt:
from sympy import symbols, plot
from sympy.functions.elementary.miscellaneous import cbrt
x = symbols('x')
eqn = cbrt(x)
p = plot(eqn)
nthroot:
from sympy import symbols, plot
from sympy.simplify.simplify import nthroot
x = symbols('x')
eqn = nthroot(x, 3)
p = plot(eqn)
SymPy's functions cbrt and root use the principal branch of the root. The principal branch of the multivalued function z -> z**(1/3) is equal to 1/2 + sqrt(3)*I/2 at -1. It is not a real number, so you don't see it on the plot.
But it is often desired to get the real-valued root for all real inputs, which is possible for odd degrees. This is provided by the function real_root. So, in principle your code should be
from sympy import symbols, plot, real_root
x = symbols('x')
eqn = real_root(x, 3)
p = plot(eqn)
However, the implementation of real_root does not fit the expectations of the SymPy plotting routine, so the above throws an error as of now. (Different errors in different versions of SymPy). Instead, plot the mathematically equivalent function |x|**(1/3) * sign(x):
from sympy import symbols, plot, root, sign, Abs
x = symbols('x')
eqn = root(Abs(x), 3)*sign(x)
p = plot(eqn)
Remark: the function nthroot from the simplify module is not for computing the nth root; it is for simplifying expressions with radicals.
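As a quick numeric illustration of the difference between the principal branch and the real root (a small check, independent of plotting):
from sympy import root, real_root, N

print(N(root(-8, 3)))    # approximately 1.0 + 1.732*I, the principal cube root
print(real_root(-8, 3))  # -2, the real-valued cube root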

sympy differential equality

I'm trying to set up sympy to calculate derivatives. When I test it with a simple equation, I find the same answer (equality is true between sympy's calculation and my own calculation). However, when I try more complicated ones, it doesn't work (I checked the answers with Wolfram Alpha too).
Here is my code:
from __future__ import division
from sympy import simplify, cos, sin, expand
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
equation = (x**3*y-x*y**3)/(x**2+y**2)
equation2 = (x**4*y+4*x**2*y**3-y**5)/((x**2+y**2)**2)
pprint(equation)
print ""
pprint(equation2)
print diff(equation,x) == equation2
This is a common "gotcha" in Sympy. For creating symbolic equalities, you should use sympy.Eq and not = or == (see the tutorial). For your example,
Eq(equation.diff(x), equation2).simplify()
True
Note, as above, that you may have to call simplify() in order to see whether the Eq object corresponds to True or False.
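For completeness, the whole check as a self-contained snippet (Python 3 print syntax; equation and equation2 exactly as in the question):
from sympy import symbols, Eq, diff

x, y = symbols('x y')
equation = (x**3*y - x*y**3)/(x**2 + y**2)
equation2 = (x**4*y + 4*x**2*y**3 - y**5)/((x**2 + y**2)**2)

print(diff(equation, x) == equation2)               # False: == tests structural equality
print(Eq(diff(equation, x), equation2).simplify())  # True: the mathematical equality holds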

symbolic derivation of a larger function

I want to take the derivative of the following function:
y = (np.log(x))/(1+x)
If I use sympy as below, I get the following error:
from sympy import *
y1=Derivative((np.log(x))/(1+x), x)
print y1
sequence too large; cannot be greater than 32
Do it like this:
>>> from sympy import *
>>> var('x')
x
>>> y1 = diff(log(x)/(1+x))
>>> y1
-log(x)/(x + 1)**2 + 1/(x*(x + 1))
As Sanjeev mentioned in a comment, you need to define the variables in one way or another.
np.log in your code is a numerical function: it accepts a numerical value and returns a numerical value. SymPy needs to see function names that it knows in symbolic terms, such as its own log.
In this context you need to use sympy's diff function rather than Derivative, which only builds an unevaluated derivative (see the sketch below).
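For completeness, Derivative itself is not wrong; it only builds an unevaluated derivative object that you evaluate with .doit():
from sympy import symbols, log, Derivative

x = symbols('x')
y1 = Derivative(log(x)/(1 + x), x)  # unevaluated derivative
print(y1.doit())                    # -log(x)/(x + 1)**2 + 1/(x*(x + 1))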

Parameter optimization using a genetic algorithm?

I am trying to optimize parameters for a known function to fit an experimental data plot. The function is fairly involved:
y = x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)
where x sweeps along a known set of numbers and p, g and c are the independent parameters to be optimized. Any ideas or resources that could be of assistance?
I would not recommend genetic algorithms. Instead, go for straightforward optimization.
Scipy has some resources.
You haven't provided any data, so I'll just go for something that should run. Below is something to get you started. I can't know if it works without seeing the data. Also, there is probably a way to dynamically feed objectivefunc your x and y data; that should be in the docs for scipy.optimize.minimize.
What I've done: create a function to minimise, here called objectivefunc. For that I've taken your function y = x**2 * p**2 * g / (...) and rearranged it into the form x**2 * p**2 * g - y*((c**2 - x**2)**2 + x**2 * g**2) = 0. Then square the left-hand side and try to minimise it. Because you will have multiple (x, y) data samples, I'd minimise the sum of the squares. Put it all in a function and pass it to minimize from scipy.
import numpy as np
from scipy.optimize import minimize

def objectivefunc(pgq):
    """Your function transformed so that it can be minimised.
    I've renamed the input pgq, so that pgq[0] is p, pgq[1] is g, etc.
    """
    p = pgq[0]
    g = pgq[1]
    c = pgq[2]
    x = [10, 9.4, 17]  # Some input data.
    y = [12, 42, 0.8]
    sum_ = 0
    for i in range(len(x)):
        # squared residual of x**2*p**2*g - y*((c**2 - x**2)**2 + x**2*g**2) = 0
        sum_ += (x[i]**2 * p**2 * g - y[i] * ((c**2 - x[i]**2)**2 + x[i]**2 * g**2))**2
    return sum_

pgq = np.array([1.3, 0.7, 0.5])  # Supply sensible initial values
res = minimize(objectivefunc, pgq, method='nelder-mead',
               options={'xatol': 1e-8, 'disp': True})  # 'xatol' is the Nelder-Mead tolerance option
Have you tried good old Levenberg-Marquardt, as implemented in Levenberg-Marquardt.vi? If it does not suit your needs, you can try the Waptia library for LabVIEW, with one of the genetic algorithms implemented.
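For reference on the Python side, SciPy's curve_fit uses Levenberg-Marquardt by default when no bounds are given; here is a minimal sketch assuming the model reconstructed above, y = x**2*p**2*g / ((c**2 - x**2)**2 + x**2*g**2), and the same placeholder data as in the previous answer:
import numpy as np
from scipy.optimize import curve_fit

def model(x, p, g, c):
    # y = x**2*p**2*g / ((c**2 - x**2)**2 + x**2*g**2)
    return x**2 * p**2 * g / ((c**2 - x**2)**2 + x**2 * g**2)

xdata = np.array([10.0, 9.4, 17.0])  # placeholder data only
ydata = np.array([12.0, 42.0, 0.8])

# Levenberg-Marquardt is the default method when no bounds are given
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.3, 0.7, 0.5])
print(popt)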