Conditional expression in Sympy

If I try to have an expression whose value depends on another expression:
from sympy import *
x = symbols('x')
y1 = x/cos(x)
y2 = y1 if y1 > 0 else nan
an exception is raised:
File /usr/lib/python3.10/site-packages/sympy/core/relational.py:511, in Relational.__bool__(self)
510 def __bool__(self):
--> 511 raise TypeError("cannot determine truth value of Relational")
TypeError: cannot determine truth value of Relational
Is there a way to get the effect I want?
In case my problem is an X-Y problem, what I ultimately want to do is to plot x/cos(x) only where the function is positive.
UPDATE
I used the helpful suggestion of @Oscar Benjamin (which completely answers my original question), but I have other issues when plotting the Piecewise function, which I'll describe in another question, as well as a wrong plot when I use the plot_implicit solution they suggested.

What you're looking for is Piecewise:
In [8]: p = Piecewise((x/cos(x), x/cos(x) > 0), (S.NaN, True))
In [9]: p
Out[9]:
⎧  x           x
⎪──────  for ────── > 0
⎨cos(x)      cos(x)
⎪
⎩ nan      otherwise
A more direct solution to your problem though would be something like
plot_implicit(Eq(y, x/cos(x)) & (x/cos(x) > 0))
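As an aside (not part of the answer above): if the SymPy plotting route gives trouble, the same masking idea can be applied directly with numpy and matplotlib. This is only an illustrative sketch; the plotting range and the y-limit are chosen arbitrarily.
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-10, 10, 2000)
ys = xs / np.cos(xs)
ys[ys <= 0] = np.nan       # NaN values make matplotlib leave a gap
plt.plot(xs, ys)
plt.ylim(0, 20)            # clip the spikes near the zeros of cos(x)
plt.show()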


Sympy Solve returning type error?

I have written a program to solve a transcendental equation using Sympy Solvers, but I keep getting a TypeError. The code I have written is the following:
from sympy.solvers import solve
from sympy import Symbol
import sympy as sp
import numpy as np
x = Symbol('x',positive=True)
def converts(d):
    M = 1.0
    res = solve(-2*M*sp.sqrt(1 + 2*M/x) - d, x)[0]
    return res

print converts(0.2)
which returns the following error:
raise TypeError('invalid input: %s' % p)
TypeError: invalid input: -2.0*sqrt(1 + 2/x)
I've solved transcendental equations this way before, but this is the first time I'm facing this error.
From what I gather, it looks like Sympy is seeing my input as a string instead of a rational number, but I'm not sure if or why it is so. Can someone please tell me why I'm getting this error and/or how to fix it?
Edit: I've rewritten my code to make it clearer but the result is still the same
This is the equation I'm trying to solve (for x): -2*M*sqrt(1 + 2*M/x) - d = 0
Let's first recreate the actual equation.
from sympy import *
init_printing()
M, x, d = symbols("M, x, d")
eq = Eq(-2*M * sqrt(1 + 2*M/x) - d, 0)
eq
As in your code, we can substitute values: M=1, d=0.2
to_solve = eq.subs({M:1, d:0.2})
to_solve
Now, we may attempt to solve it directly
solve(to_solve, x)
Unfortunately, solve fails to find a solution in this case. If we take a closer look at the equation, the square root term would have to be negative for the equation to hold:
-2 * (-1/10) - 0.2 = 0
Since the square root of a real number cannot be negative, sympy is unable to find a value of x such that sqrt(1 + 2/x) == -1/10.
This problem is due to our choice of values for d and M. A solution exists only if M and d have opposite signs.
to_solve = eq.subs({M:-1, d:0.2})
to_solve
solve(to_solve, x)
[2.02020202020202]
Run this code on sympy live and experiment with other values.
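For completeness, here is a small sketch (not part of the original answer) that reproduces the opposite-sign case and checks the root by substituting it back into the equation:
from sympy import symbols, sqrt, Eq, solve

M, x, d = symbols("M, x, d")
eq = Eq(-2*M*sqrt(1 + 2*M/x) - d, 0)

# A solution requires sqrt(1 + 2*M/x) == -d/(2*M) to be non-negative,
# i.e. M and d must have opposite signs.
sol = solve(eq.subs({M: -1, d: 0.2}), x)
print(sol)                                        # [2.02020202020202]
print(eq.lhs.subs({M: -1, d: 0.2, x: sol[0]}))    # ~0 (up to floating point), confirming the root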

Scipy Optimization: Conditional Constraints (two intervals for 1 variable)

I came across: conditional constraint in optimization
It was close, but not clear enough for me to translate to my specific use case.
What I need (constraint wise):
if x < M:
    x = 0
else:
    x = x
another way to put it:
X = {x: x=0 or x>M}
I want to set a minimum x but for the optimization algorithm to still be able to consider a value of x=0.
Here's the code:
cons = ({'type': 'ineq', 'fun': lambda x: np.array(____constraint - est___(x))},
        {'type': 'ineq', 'fun': lambda x: np.array(main.max____ / 100 - (x))},
        {'type': 'ineq', 'fun': lambda x: np.array((x) - 5)})
x0 = np.zeros(len(master____optimization_table))
res = minimize(___model, x0, constraints=cons, options=opts)
*The functions calculate important metrics we are constraining. The only one I need help with is the third line of constraints.
This would be extremely useful, if it is possible.
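For illustration only: one common workaround for an "x == 0 or x >= M" (semi-continuous) requirement with scipy.optimize.minimize is to solve two sub-problems and keep the better result, since the feasible set is not convex. The objective, M, and starting point below are toy placeholders, not the question's real model.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 3.0) ** 2          # toy objective for the sketch

M = 5.0
x0 = np.array([6.0])

# Case 1: force x = 0 by fixing the bounds.
res_zero = minimize(objective, np.zeros(1), bounds=[(0.0, 0.0)])

# Case 2: enforce x >= M with an inequality constraint.
cons = ({'type': 'ineq', 'fun': lambda x: x[0] - M},)
res_ge_M = minimize(objective, x0, constraints=cons)

# Keep whichever sub-problem achieved the smaller objective value.
best = min((res_zero, res_ge_M), key=lambda r: r.fun)
print(best.x, best.fun)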

Sorted() syntax difference between Python2.7 and Python 3.6.0, or erratum? [duplicate]

In Python 2, I can write:
In [5]: points = [ (1,2), (2,3)]
In [6]: min(points, key=lambda (x, y): (x*x + y*y))
Out[6]: (1, 2)
But that is not supported in 3.x:
File "<stdin>", line 1
min(points, key=lambda (x, y): (x*x + y*y))
^
SyntaxError: invalid syntax
The straightforward workaround is to index explicitly into the tuple that was passed:
>>> min(points, key=lambda p: p[0]*p[0] + p[1]*p[1])
(1, 2)
This is very ugly. If the lambda were a function, I could do
def some_name_to_think_of(p):
    x, y = p
    return x*x + y*y
But because the lambda only supports a single expression, it's not possible to put the x, y = p part into it.
How else can I work around this limitation?
No, there is no other way. You covered it all. The way to go would be to raise this issue on the Python ideas mailing list, but be prepared to argue a lot over there to gain some traction.
Actually, just not to say "there is no way out", a third way could be to implement one more level of lambda calling just to unfold the parameters - but that would be at once more inefficient and harder to read than your two suggestions:
min(points, key=lambda p: (lambda x,y: (x*x + y*y))(*p))
Python 3.8 update
Since the release of Python 3.8, PEP 572 (assignment expressions) has been available as a tool.
So, using a trick to execute multiple expressions inside a lambda (I usually do that by creating a tuple and returning its last component), it is possible to do the following:
>>> a = lambda p:(x:=p[0], y:=p[1], x ** 2 + y ** 2)[-1]
>>> a((3,4))
25
One should keep in mind that this kind of code will seldom be more readable or practical than having a full function. Still, there are possible uses: if there are various one-liners that would operate on this point, it could be worth having a namedtuple and using the assignment expression to effectively "cast" the incoming sequence to the namedtuple:
>>> from collections import namedtuple
>>> point = namedtuple("point", "x y")
>>> b = lambda s: (p:=point(*s), p.x ** 2 + p.y ** 2)[-1]
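For instance, the namedtuple version is called the same way:
>>> b((3, 4))
25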
According to http://www.python.org/dev/peps/pep-3113/ tuple parameter unpacking is gone, and 2to3 will translate such lambdas like so:
As tuple parameters are used by lambdas because of the single
expression limitation, they must also be supported. This is done by
having the expected sequence argument bound to a single parameter and
then indexing on that parameter:
lambda (x, y): x + y
will be translated into:
lambda x_y: x_y[0] + x_y[1]
Which is quite similar to your implementation.
I don't know any good general alternatives to the Python 2 argument unpacking behaviour. Here are a couple of suggestions that might be useful in some cases:
if you can't think of a name, use the name of the keyword parameter:
def key(p):  # more specific name would be better
    x, y = p
    return x**2 + y**3

result = min(points, key=key)
you could see if a namedtuple makes your code more readable if the list is used in multiple places:
from collections import namedtuple
from itertools import starmap
points = [ (1,2), (2,3)]
Point = namedtuple('Point', 'x y')
points = list(starmap(Point, points))
result = min(points, key=lambda p: p.x**2 + p.y**3)
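With the sample points above this gives:
print(result)   # Point(x=1, y=2)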
While destructuring arguments were removed in Python 3, they were not removed from comprehensions. It is possible to abuse this to obtain similar behavior in Python 3.
For example:
points = [(1,2), (2,3)]
print(min(points, key=lambda y: next(x*x + y*y for (x,y) in [y])))
In comparison with the accepted answer of using a wrapper, this solution is able to completely destructure the arguments while the wrapper only destructures the first level. That is, you can do
values = [(('A',1),'a'), (('B',0),'b')]
print(min(values, key=lambda y: next(b for ((a,b),c) in (y,))))
In comparison to the accepted answer using an unwrapper lambda:
values = [(('A',1),'a'), (('B',0),'b')]
print(min(values, key=lambda p: (lambda a,b: (lambda x,y: (y))(*a))(*p)))
Alternatively one can also use a list instead of a tuple.
values = [(('A',1),'a'), (('B',0),'b')]
print(min(values, key=lambda y: next(b for (a,b),c in [y])))
This is just to suggest that it can be done, and should not be taken as a recommendation. However, IMO, this is better than the hack of using multiple expressions in a tuple and returning the last one.
I think a better syntax would be something like x * x + y * y let x, y = point; the let keyword should be chosen more carefully.
The double lambda is the closest version.
lambda point: (lambda x, y: x * x + y * y)(*point)
A higher-order helper function would be useful, provided we give it a proper name.
def destruct_tuple(f):
    return lambda args: f(*args)

destruct_tuple(lambda x, y: x * x + y * y)
Consider whether you need to unpack the tuple in the first place:
min(points, key=lambda p: sum(x**2 for x in p))
or whether you need to supply explicit names when unpacking:
min(points, key=lambda p: abs(complex(*p)**2))
Based on Cuadue's suggestion and your comment that unpacking is still present in comprehensions, you can use numpy.argmin:
result = points[numpy.argmin([x*x + y*y for x, y in points])]
Another option is to write it into a generator producing a tuple where the key is the first element. Tuples are compared starting from beginning to end so the tuple with the smallest first element is returned. You can then index into the result to get the value.
min((x * x + y * y, (x, y)) for x, y in points)[1]
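For the sample points this expands to comparing (5, (1, 2)) and (13, (2, 3)):
points = [(1, 2), (2, 3)]
print(min((x * x + y * y, (x, y)) for x, y in points)[1])   # (1, 2)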
There may be a real solution to this, using PyFunctional!
Although not currently supported, I've submitted a tuple arg unpacking feature request to support:
(
    seq((1, 2), (3, 4))
    .map(unpack=lambda a, b: a + b)
)  # => [3, 7]
Since questions on Stack Overflow are not supposed to contain the answer in the question, nor have explicit "update" sections, I am converting OP's original "updates" to a proper answer and making it community wiki.
OP originally claimed that this solution was "extending the idea in the answer". I cannot discern which answer that meant, or which idea. The idea is functionally the same as anthony.hl's answer, but that came years later. (Considering the state of answers at the time, I think this qualifies as OP's original work.)
Make a wrapper function that generalizes the process of unpacking the arguments, like so:
def star(f):
    return lambda args: f(*args)
Now we can use this to transform the lambda we want to write, into one that will receive the argument properly:
min(points, key=star(lambda x, y: (x*x + y*y)))
We can further clean this up by using functools.wraps:
import functools

def star(f):
    @functools.wraps(f)
    def f_inner(args):
        return f(*args)
    return f_inner
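A quick usage sketch (the function name here is made up for illustration), showing what functools.wraps buys you:
def sum_of_squares(x, y):
    return x * x + y * y

key = star(sum_of_squares)
print(key((1, 2)))     # 5
print(key.__name__)    # 'sum_of_squares', preserved by functools.wraps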

Python (Scipy): Finding the scale parameter (standard deviation) of a gaussian distribution

It is quite common to calculate the probability density of a value within a probability density function (PDF). Imagine we have a Gaussian distribution with mean = 40 and a standard deviation of 5, and we would like to get the probability density at the value 32. We'd go like:
In [1]: import scipy.stats as stats
In [2]: print stats.norm.pdf(32, loc=40, scale=5)
Out [2]: 0.022
--> The probability density is 2.2%.
But now, let's consider the inverse problem: I have the mean value, I have the value at a probability density of 0.05, and I would like to get the standard deviation (i.e. the scale parameter).
What I could implement is a numerical approach: call stats.norm.pdf several times with the scale parameter increased stepwise and take the one whose result is closest.
In my case, I specify the value 30 as the 5% mark. So I need to solve this "equation":
stats.norm.pdf(30, loc=40, scale=X) = 0.05
There is a scipy function called "ppf", which is the inverse of the CDF, so it returns the value at a given probability, but I haven't found a function that returns the scale parameter.
Implementing an iteration would take too much time (both creating and calculating). My script is going to be huge, so I should save computation time. Could the lambda-function help in this case? I roughly know what it's doing, but I haven't used it so far. Any ideas on this?
Thank you!
The normal probability density function f, written here with x measured from the mean, is given by
f(x) = exp(-x**2 / (2*sigma**2)) / (sigma*sqrt(2*pi))
Given f and x we wish to solve for sigma. Let's ask sympy if it can solve the equation:
import sympy as sy
from sympy.abc import x, y, sigma
expr = (1/(sy.sqrt(2*sy.pi)*sigma) * sy.exp(-x**2/(2*sigma**2))) - y
ans = sy.solve(expr, sigma)[0]
print(ans)
# sqrt(2)*exp(LambertW(-2*pi*x**2*y**2)/2)/(2*sqrt(pi)*y)
So it appears there is a closed-form solution in terms of the LambertW function, W, which satisfies
z = W(z) * exp(W(z))
for all complex-valued z.
We could use sympy to also find the numerical result for given x and y, but
perhaps it would be faster to do the numerical work with
scipy.special.lambertw:
import numpy as np
import scipy.special as special
def sigma_func(x, y):
    results = set([np.real_if_close(
        np.sqrt(2)*np.exp(special.lambertw(-2*np.pi*x**2*y**2, k=k)/2)
        /(2*np.sqrt(np.pi)*y)).item() for k in (0, -1)])
    results = [s for s in results if np.isreal(s)]
    return results
In general, the LambertW function returns complex values, but we are only
interested in real-valued solutions for sigma. Per the
docs,
special.lambertw has two partially-real branches, when k=0 and k=-1. So the
code above checks if the returned value (for those two branches) is real, and
returns a list of any real solutions if they exist. If no real solution exists,
then an empty list is returned. That happens if the pdf value y is not
attained for any real value of sigma (for the given value of x).
You can use it like this:
x = 30.0
loc = 40.0
y = 0.02
s = sigma_func(loc-x, y)
print(s)
# [16.65817044316178, 6.830458938511113]
import scipy.stats as stats
for si in s:
    assert np.allclose(stats.norm.pdf(x, loc=loc, scale=si), y)
In the example you gave, with y = 0.025, there is no solution for sigma:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = 30.0
loc = 40.0
y = 0.025
s = np.linspace(5, 20, 100)
plt.plot(s, stats.norm.pdf(x, loc=loc, scale=s))
plt.hlines(y, 4, 20, color='red') # the horizontal line y = 0.025
plt.ylabel('pdf')
plt.xlabel('sigma')
plt.show()
and so sigma_func(40-30, 0.025) returns an empty list:
In [93]: sigma_func(40-30, 0.025)
Out [93]: []
The plot above is typical in the sense that when y is too large there are zero
solutions, at the maximum of the curve (let's call it y_max) there is one
solution
In [199]: y_max = np.nextafter(np.sqrt(1/(np.exp(1)*2*np.pi*(10)**2)), -np.inf)
In [200]: y_max
Out[200]: 0.024197072451914336
In [201]: sigma_func(40-30, y_max)
Out[201]: [9.9999999776424]
and for y smaller than y_max there are two solutions.
There will be two solutions, because the normal PDF is symmetric around the mean.
As it stands, you have a single-variable equation to solve. It won't have a closed-form solution, so you can use e.g. scipy.optimize.fsolve to solve it.
EDIT: see @unutbu's answer for the closed-form solution in terms of the Lambert W function.
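For reference, a minimal fsolve sketch along those lines (reusing the numbers from @unutbu's example; which of the two roots you get depends on the starting guess):
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

x, loc, y = 30.0, 40.0, 0.02

def f(scale):
    # We want the scale at which the pdf equals y.
    return norm.pdf(x, loc=loc, scale=scale) - y

print(fsolve(f, 5.0))    # converges to one root (about 6.83 here)
print(fsolve(f, 18.0))   # a different guess finds the other root (about 16.66)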

SymPy - substitute symbolic entries in a matrix

I have a python function which generates a sympy.Matrix with symbolic entries. It works effectively like:
import sympy as sp
M = sp.Matrix([[1,0,2],[0,1,2],[1,2,0]])
def make_symbolic_matrix(M):
    M_sym = sp.zeros(3)
    syms = sp.symbols('a0:3')
    for i in xrange(3):
        for j in xrange(3):
            if M[i, j] == 1:
                M_sym[i, j] = syms[i]
            elif M[i, j] == 2:
                M_sym[i, j] = 1 - syms[i]
    return M_sym
This works just fine. I get a matrix out, which I can use for all the symbolical calculations I need.
My issue is that now I want to evaluate my matrix at specified parameter-value. Usually I would just use the .subs attribute. However, since the symbols, that are now used as entries in my matrix, were originally defined as temporary elements in a function, I don't know how to call them.
It seems as if it should be possible, since I'm able to perform symbolic calculations.
What I want to do would look something like (following the code above):
M_sym = make_symbolic_matrix(M)
M_eval = M_sym.subs([(a0,.8),(a1,.3),(a2,.5)])
But all I get is "name 'a0' is not defined".
I'd be super happy if someone out there got a solution!
PS. I'm not just defining the symbols globally, because in the actual problem I don't know how many parameters I have from time to time.
In the general case, I assume you're looking for an n-by-m matrix of symbolic elements.
import sympy
def make_symbolic(n, m):
    rows = []
    for i in xrange(n):
        col = []
        for j in xrange(m):
            col.append(sympy.Symbol('a%d%d' % (i, j)))
        rows.append(col)
    return sympy.Matrix(rows)
which could be used in the following way:
make_symbolic(3, 4)
to give:
Matrix([
[a00, a01, a02, a03],
[a10, a11, a12, a13],
[a20, a21, a22, a23]])
once you've got that matrix you can substitute in any values required.
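For example, here is a small self-contained sketch of that substitution step: once the entries are Symbol objects, you can recover them by indexing into the matrix and pass them to subs (the 2-by-2 size and the values are arbitrary):
import sympy

A = sympy.Matrix(2, 2, lambda i, j: sympy.Symbol('a%d%d' % (i, j)))
A_eval = A.subs({A[0, 0]: 0.8, A[1, 1]: 0.3})   # A[i, j] is the Symbol a_ij
print(A_eval)                                    # the a00 and a11 entries are now numbers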
Given that the answer from Andrew was helpful, it seems like you might be interested in MatrixSymbol.
In [1]: from sympy import *
In [2]: X = MatrixSymbol('X', 3, 4)
In [3]: X # completely symbolic
Out[3]: X
In [4]: Matrix(X) # Expand to explicit matrix
Out[4]:
⎡X₀₀  X₀₁  X₀₂  X₀₃⎤
⎢                  ⎥
⎢X₁₀  X₁₁  X₁₂  X₁₃⎥
⎢                  ⎥
⎣X₂₀  X₂₁  X₂₂  X₂₃⎦
But answering your original question, perhaps you could get the symbols out of the matrix that you produce?
x12 = X[1, 2]
Symbols are defined by their name. Two symbols with the same name will be considered the same thing by Sympy. So if you know the name of the symbols you want, just create them again using symbols.
You may also find good use of the zip function of Python.
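Putting those two hints together, a sketch of the substitution from the question (assuming M_sym was built with symbols named a0, a1, a2, as in the question):
from sympy import symbols

a_syms = symbols('a0:3')   # recreates a0, a1, a2 by name
M_eval = M_sym.subs(list(zip(a_syms, [0.8, 0.3, 0.5])))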