sympy differential equality - python-2.7

I'm trying to set up SymPy to calculate derivatives. When I test it with a simple equation, I get the same answer (the equality between SymPy's calculation and my own is true). However, with more complicated ones it doesn't work (I checked the answers with Wolfram Alpha too).
Here is my code:
from __future__ import division
from sympy import simplify, cos, sin, expand
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
equation = (x**3*y-x*y**3)/(x**2+y**2)
equation2 = (x**4*y+4*x**2*y**3-y**5)/((x**2+y**2)**2)
pprint(equation)
print ""
pprint(equation2)
print diff(equation,x) == equation2

This is a common "gotcha" in SymPy. To create symbolic equalities you should use sympy.Eq, not = or == (see the tutorial). For your example,
Eq(equation.diff(x), equation2).simplify()
True
Note, as above, that you may have to call simplify() in order to see whether the Eq object corresponds to True or False.
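For reference, a minimal self-contained version of the check, reusing only the expressions from the question:
from sympy import symbols, diff, simplify, Eq

x, y = symbols('x y')
equation = (x**3*y - x*y**3)/(x**2 + y**2)
equation2 = (x**4*y + 4*x**2*y**3 - y**5)/((x**2 + y**2)**2)

print(diff(equation, x) == equation2)               # False: == only tests structural equality
print(Eq(diff(equation, x), equation2).simplify())  # True
print(simplify(diff(equation, x) - equation2))      # 0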

Related

In SymPy, is there a way to get a representation of a Groebner basis in terms of its defining polynomials?

In SymPy one can obtain the representation of a polynomial's reduction by a Groebner basis:
from sympy import groebner, expand
from sympy.abc import x, y
f = 2*x**4 - x**2 + y**3 + y**2
G = groebner([x**3 - x, y**3 - y])
Q, r = G.reduce(f)
assert f == expand(sum(q*g for q, g in zip(Q, G)) + r)
But what I'm looking for is a way to get the expression of the elements of a Groebner basis in terms of the polynomials defining it, essentially just storing the computations performed in the Buchberger algorithm used in producing the basis.
For example
groebner([2*x+3*y+5, 3*x+5*y+2, 5*x+2*y+3]) # GroebnerBasis([1], x, y, domain='ZZ', order='lex')
This indicates that the three polynomials generate the unit ideal, but I would like an explicit combination of these polynomials that equals 1. In the example given it is a linear combination, but I would like a method that works for nonlinear generators as well.
In the first example I obtained Q and r. In the second example I obtained the analogue of the remainder r, but I would like the polynomials Q realizing it.
Similarly, the method G.contains() will indicate if the ideal contains the polynomial, but it won't tell you how to produce it. Is there a way to do this too?
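For the linear example above, here is a sketch of what such an explicit combination looks like, using plain linear algebra rather than any Groebner machinery: we look for constant cofactors a, b, c (names introduced here only for illustration) with a*f1 + b*f2 + c*f3 == 1 via linsolve. This does not address the general nonlinear case or expose the Buchberger bookkeeping.
from sympy import symbols, linsolve, expand
from sympy.abc import x, y

a, b, c = symbols('a b c')
f1, f2, f3 = 2*x + 3*y + 5, 3*x + 5*y + 2, 5*x + 2*y + 3

# require a*f1 + b*f2 + c*f3 == 1 identically in x and y
combo = expand(a*f1 + b*f2 + c*f3 - 1)
eqs = [combo.coeff(x), combo.coeff(y), combo.subs({x: 0, y: 0})]
(a0, b0, c0), = linsolve(eqs, [a, b, c])

print((a0, b0, c0))                    # (19/70, -11/70, -1/70)
print(expand(a0*f1 + b0*f2 + c0*f3))   # 1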

Plot a cube root function with SymPy, including negative arguments

I'm trying to plot a cube root function with SymPy. I know what this should look like, but I'm only seeing values for x >= 0, not for negative numbers. I've tried two approaches.
cbrt:
from sympy import symbols, plot
from sympy.functions.elementary.miscellaneous import cbrt
x = symbols('x')
eqn = cbrt(x)
p = plot(eqn)
nthroot:
from sympy import symbols, plot
from sympy.simplify.simplify import nthroot
x = symbols('x')
eqn = nthroot(x, 3)
p = plot(eqn)
SymPy's functions cbrt and root use the principal branch of the root. The principal branch of the multivalued function z -> z**(1/3) is equal to 1/2 + sqrt(3)/2*I at z = -1. It is not a real number, so you don't see it on the plot.
But it is often desired to get the real-valued root for all real inputs, which is possible for odd degrees. This is provided by the function real_root. So, in principle your code should be
from sympy import symbols, plot, real_root
x = symbols('x')
eqn = real_root(x, 3)
p = plot(eqn)
However, the implementation of real_root does not fit the expectations of the SymPy plotting routine, so the above throws an error as of now. (Different errors in different versions of SymPy). Instead, plot the mathematically equivalent function |x|**(1/3) * sign(x):
from sympy import symbols, plot, root, sign, Abs
x = symbols('x')
eqn = root(Abs(x), 3)*sign(x)
p = plot(eqn)
Remark: The function nthroot from the simplify module is not for computing nth roots; it is for simplifying expressions containing radicals.
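A quick way to see the difference between the principal branch and the real-valued root at a single point (the numeric value in the comment is approximate):
from sympy import cbrt, real_root, N

print(cbrt(-1))          # (-1)**(1/3), the principal (complex) branch, left unevaluated
print(N(cbrt(-1)))       # 0.5 + 0.866...*I, not a real number
print(real_root(-1, 3))  # -1, the real-valued cube root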

Sympy Solve returning type error?

I have written a program to solve a transcendental equation using Sympy Solvers, but I keep getting a TypeError. The code I have written is the following:
from sympy.solvers import solve
from sympy import Symbol
import sympy as sp
import numpy as np
x = Symbol('x',positive=True)
def converts(d):
    M = 1.0
    res = solve(-2*M*sp.sqrt(1 + 2*M/x) - d, x)[0]
    return res
print converts(0.2)
which returns the following error:
raise TypeError('invalid input: %s' % p)
TypeError: invalid input: -2.0*sqrt(1 + 2/x)
I've solved transcendental equations this way before, but this is the first time I'm facing this error.
From what I gather, it looks like SymPy is seeing my input as a string instead of a rational number, but I'm not sure whether that is the case or why. Can someone please tell me why I'm getting this error and/or how to fix it?
Edit: I've rewritten my code to make it clearer, but the result is still the same.
This is the equation I'm trying to solve: -2*M*sqrt(1 + 2*M/x) - d = 0
Let's first recreate the actual equation.
from sympy import *
init_printing()
M, x, d = symbols("M, x, d")
eq = Eq(-2*M * sqrt(1 + 2*M/x) - d, 0)
eq
As in your code, we can substitute values: M=1, d=0.2
to_solve = eq.subs({M:1, d:0.2})
to_solve
Now, we may attempt to solve it directly
solve(to_solve, x)
Unfortunately, solve fails to find a solution in this case. If we take a closer look at the equation, the square root term would have to be negative for the equation to hold:
-2 * (-1/10) - 0.2 = 0
Since the square root of a real number cannot be negative, SymPy is unable to find a value of x such that sqrt(1 + 2/x) == -1/10.
The problem is due to our choice of values for d and M. A solution exists when M and d have opposite signs.
to_solve = eq.subs({M:-1, d:0.2})
to_solve
solve(to_solve, x)
[2.02020202020202]
Run this code on sympy live and experiment with other values.
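A sketch of the algebra behind that result: isolating the square root and squaring gives a closed-form candidate for x (the name x_candidate is introduced here only for illustration), which SymPy can check:
from sympy import symbols, simplify

M, d, x = symbols('M d x')
x_candidate = 8*M**3/(d**2 - 4*M**2)   # obtained by isolating the root and squaring

# the candidate turns the radicand into d**2/(4*M**2); it solves the original
# equation only when -d/(2*M) >= 0, i.e. when M and d have opposite signs
print(simplify((1 + 2*M/x).subs(x, x_candidate)))   # d**2/(4*M**2)

# with M = -1, d = 0.2 it reproduces the numeric solution found above
print(x_candidate.subs({M: -1, d: 0.2}))            # 2.02020202020202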

How to calculate the log-normal density (starting from the normal density) using SymPy

from sympy import *
x, y, mu, sigma, density1, density2 = symbols('x y mu sigma density1 density2')
eq1 = Eq(density1, 1/(sqrt(2*pi)*sigma)
*exp(-(x-mu)**2/(2*sigma**2))) # normal
eq2 = Eq(y, exp(x)) # substitution
eq3 = Eq(density2, 1/(y*sqrt(2*pi)*sigma)
*exp(-(ln(y)-mu)**2/(2*sigma**2))) # lognormal
[eq1, eq2, eq3]
How can I make SymPy take the normal density (eq1), apply the x to y substitution (eq2) and output the lognormal density (eq3)?
(I received no answer to this question at https://stats.stackexchange.com/q/55353/14202 .)
When we change variables in a probability density function, we must also multiply the density by the (absolute value of the) derivative of the function that performs the substitution. This is how substitution works in integrals, and a probability density must integrate to 1, so we have to respect that.
Let's call the substitution-performing function f (it's log(y) in your example). Here is how the process works, starting from your setup:
f = solve(eq2, x)[0]                          # the substitution: f = log(y)
new_density = eq1.rhs.subs(x, f) * f.diff()   # substitute, then multiply by df/dy = 1/y
# test it now
Eq(new_density, eq3.rhs)  # True
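Putting the pieces together, here is a minimal self-contained sketch of the same computation (symbol names follow the question's setup; y and sigma are declared positive so that solve picks the real logarithm):
from sympy import symbols, sqrt, exp, log, pi, Eq, solve, simplify

x, mu = symbols('x mu')
y, sigma = symbols('y sigma', positive=True)

normal = 1/(sqrt(2*pi)*sigma) * exp(-(x - mu)**2/(2*sigma**2))
lognormal = 1/(y*sqrt(2*pi)*sigma) * exp(-(log(y) - mu)**2/(2*sigma**2))

f = solve(Eq(y, exp(x)), x)[0]              # the substitution x -> log(y)
candidate = normal.subs(x, f) * f.diff(y)   # multiply by df/dy = 1/y

print(simplify(candidate - lognormal))      # 0: the substitution yields the lognormal density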

Python (Scipy): Finding the scale parameter (standard deviation) of a gaussian distribution

It is quite common to calculate the probability density of a value under a probability density function (PDF). Imagine we have a Gaussian distribution with mean 40 and a standard deviation of 5, and we would like to get the probability density at the value 32. We'd go like this:
In [1]: import scipy.stats as stats
In [2]: print stats.norm.pdf(32, loc=40, scale=5)
Out [2]: 0.022
--> The probability density is about 0.022.
But now, let's consider the inverse problem. I have the mean value, I have the value at which the probability density is 0.05, and I would like to get the standard deviation (i.e. the scale parameter).
What I could implement is a numerical approach: evaluate stats.norm.pdf several times with the scale parameter increased stepwise and take the one whose result comes closest.
In my case, I specify the value 30 as the 5% mark. So I need to solve this "equation":
stats.norm.pdf(30, loc=40, scale=X) = 0.05
There is a SciPy function called "ppf", which is the inverse of the CDF, so it will return the value for a specific probability, but I haven't found a function that returns the scale parameter.
Implementing an iteration would take too much time (both to write and to compute). My script is going to be huge, so I should save computation time. Could a lambda function help in this case? I roughly know what it does, but I haven't used one so far. Any ideas on this?
Thank you!
The normal probability density function f (here x measures the distance from the mean) is given by
f(x) = 1/(sqrt(2*pi)*sigma) * exp(-x**2/(2*sigma**2))
Given f and x we wish to solve for sigma. Let's ask SymPy if it can solve the equation:
import sympy as sy
from sympy.abc import x, y, sigma
expr = (1/(sy.sqrt(2*sy.pi)*sigma) * sy.exp(-x**2/(2*sigma**2))) - y
ans = sy.solve(expr, sigma)[0]
print(ans)
# sqrt(2)*exp(LambertW(-2*pi*x**2*y**2)/2)/(2*sqrt(pi)*y)
So it appears there is a closed-formed solution in terms of the LambertW function, W, which satisfies
z = W(z) * exp(W(z))
for all complex-valued z.
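As a quick numerical sanity check of that identity, evaluated here at z = 2 with SymPy (the printed product should equal 2 up to rounding):
from sympy import LambertW, exp, N

w = N(LambertW(2))      # W(2) is about 0.8526
print(w * N(exp(w)))    # about 2.0, i.e. W(2)*exp(W(2)) == 2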
We could use sympy to also find the numerical result for given x and y, but perhaps it would be faster to do the numerical work with scipy.special.lambertw:
import numpy as np
import scipy.special as special
def sigma_func(x, y):
    results = set([np.real_if_close(
        np.sqrt(2)*np.exp(special.lambertw(-2*np.pi*x**2*y**2, k=k)/2)
        /(2*np.sqrt(np.pi)*y)).item() for k in (0, -1)])
    results = [s for s in results if np.isreal(s)]
    return results
In general, the LambertW function returns complex values, but we are only interested in real-valued solutions for sigma. Per the docs, special.lambertw has two partially-real branches, k=0 and k=-1. So the code above checks whether the returned value (for those two branches) is real, and returns a list of any real solutions if they exist. If no real solution exists, then an empty list is returned. That happens if the pdf value y is not attained for any real value of sigma (for the given value of x).
You can use it like this:
x = 30.0
loc = 40.0
y = 0.02
s = sigma_func(loc-x, y)
print(s)
# [16.65817044316178, 6.830458938511113]
import scipy.stats as stats
for si in s:
    assert np.allclose(stats.norm.pdf(x, loc=loc, scale=si), y)
With y = 0.025 (and likewise with the y = 0.05 from your question), there is no solution for sigma:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = 30.0
loc = 40.0
y = 0.025
s = np.linspace(5, 20, 100)
plt.plot(s, stats.norm.pdf(x, loc=loc, scale=s))
plt.hlines(y, 4, 20, color='red') # the horizontal line y = 0.025
plt.ylabel('pdf')
plt.xlabel('sigma')
plt.show()
and so sigma_func(40-30, 0.025) returns an empty list:
In [93]: sigma_func(40-30, 0.025)
Out [93]: []
The plot above is typical: when y is too large there are zero solutions; at the maximum of the curve (let's call it y_max) there is exactly one solution
In [199]: y_max = np.nextafter(np.sqrt(1/(np.exp(1)*2*np.pi*(10)**2)), -np.inf)
In [200]: y_max
Out[200]: 0.024197072451914336
In [201]: sigma_func(40-30, y_max)
Out[201]: [9.9999999776424]
and for y smaller than y_max there are two solutions.
There will be two solutions, because the normal PDF is symmetric around the mean.
As it stands, you have a single-variable equation to solve. It won't have a closed-form solution, so you can use e.g. scipy.optimize.fsolve to solve it.
EDIT: see #unutbu's answer for the closed form solution in terms of Lambert W function.
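For reference, a rough sketch of that fsolve approach with the numbers used above (a target of 0.02 rather than 0.05, since the first answer shows 0.05 is not attainable at a distance of 10 from the mean; the values in the comments are approximate):
from scipy import optimize, stats

x, loc = 30.0, 40.0
target = 0.02   # the desired pdf value at x

def func(scale):
    return stats.norm.pdf(x, loc=loc, scale=scale) - target

# different starting guesses converge to the two solutions found above
print(optimize.fsolve(func, 5.0))    # ~ [6.83]
print(optimize.fsolve(func, 15.0))   # ~ [16.66]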