How can I prevent SymPy from expanding brackets? For example,
8 * (x + 1) should stay in that form, not become 8*x + 8.
Using evaluate=False doesn't help.
The global evaluate flag will allow you to do this in the most natural manner:
>>> with evaluate(False):
... 8*(x+1)
...
8*(x + 1)
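For this to work, evaluate has to be imported first; depending on your SymPy version it lives in sympy.core.parameters (newer) or sympy.core.evaluate (older), and recent releases also expose it at the top level. A minimal self-contained sketch:
from sympy import Symbol, evaluate   # recent SymPy; older versions: from sympy.core.evaluate import evaluate

x = Symbol('x')
with evaluate(False):
    expr = 8*(x + 1)
print(expr)   # 8*(x + 1)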
Otherwise, Mul(8, x + 1, evaluate=False) is a lower-level way to do this. Conversion from a string (already in that form) is also possible:
>>> S('8*(x+1)',evaluate=False)
8*(x + 1)
In general, SymPy will convert an expression to its internal format, which includes some minimal simplifications. For example, sqrt(x) is represented internally as Pow(x, 1/2). Also, some reordering of terms may happen.
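For instance, srepr shows the internal form:
>>> from sympy import sqrt, srepr
>>> from sympy.abc import x
>>> srepr(sqrt(x))
"Pow(Symbol('x'), Rational(1, 2))"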
In your specific case, you could try:
from sympy import factor
from sympy.abc import x

y = x + 1
g = 8 * y        # auto-evaluates to 8*x + 8
g = factor(g)    # pulls the common constant back out
print(g)         # 8*(x + 1)
But if, for example, you have g = y * y, SymPy will either keep it as a power, (x + 1)**2, or expand it to x**2 + 2*x + 1.
PS: See also this answer by SymPy's maintainer for some possible workarounds. (It might complicate things later when you would like to evaluate or simplify this expression in other calculations.)
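One such workaround (available since SymPy 1.1, if I remember the version correctly) is UnevaluatedExpr, which shields a subexpression from automatic evaluation until you call doit():
>>> from sympy import UnevaluatedExpr
>>> from sympy.abc import x
>>> 8*UnevaluatedExpr(x + 1)
8*(x + 1)
>>> (8*UnevaluatedExpr(x + 1)).doit()
8*x + 8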
How about sympy.collect_const(sympy.S("8 * (x + 1)"), 8)?
In general you might be interested in some of these expression manipulations: https://docs.sympy.org/0.7.1/modules/simplify/simplify.html
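For reference, here is roughly what that call does: S("8 * (x + 1)") evaluates to 8*x + 8 while parsing, and collect_const pulls the constant back out. A quick sketch:
>>> from sympy import S, collect_const
>>> collect_const(S("8 * (x + 1)"), 8)
8*(x + 1)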
Related
I am new to SymPy, and I cannot understand why the following piece of code does not result in f(x) = 0
from sympy import *
f = Function('f')
x = Symbol('x')
simplify(Eq(f(x)+1,1))
When SymPy rewrites x + x as 2*x, that is automatic rewriting. Not everything is automatic, however, as you have seen. If you want to know what value of f(x) makes that equality true, you can solve for it:
>>> solve(Eq(f(x) + 1, 1), f(x))
[0]
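If you prefer the answer in the shape of an assignment, solve can also return a mapping (dict=True):
>>> solve(Eq(f(x) + 1, 1), f(x), dict=True)
[{f(x): 0}]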
Say I would like to solve a parametric limit: in the following example, alpha > 0 is the parameter.
import sympy as sp
x = sp.symbols("x", real=True)
alpha = sp.symbols("alpha", real=True, positive=True, nonzero=True)
expr = (x * sp.exp(x) - sp.exp(2 * sp.sqrt(1 + x**2))) / (sp.exp(alpha * x) + x**alpha)
sp.limit(expr, x, sp.oo)
If I execute the code I get the result -oo, which is arguably incorrect.
If I were to compute this limit by hand, I would look at the numerator and conclude that exp(2 * sqrt(1 + x**2)) is of the same order as exp(2*x), which dominates x * exp(x). Similarly, looking at the denominator I would say that exp(alpha * x) dominates the term x**alpha.
Therefore, the limit is the same as that of -exp((2 - alpha) * x). The correct result would be:
-oo for 0 < alpha < 2
-1 for alpha = 2
0 for alpha > 2
Is there an easy way to achieve this result with sympy?
This should be considered a bug in SymPy. I would suggest opening an issue about it at https://github.com/sympy/sympy/issues.
Regarding what you are asking for in general, it isn't implemented yet. See https://github.com/sympy/sympy/issues/13312.
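As a stopgap while the symbolic case is not implemented, you can substitute concrete values of alpha and take the limit case by case; by the analysis in the question this should give -oo, -1 and 0 for alpha below, equal to, and above 2. A sketch (not verified on every SymPy version):
import sympy as sp

x = sp.symbols("x", real=True)
alpha = sp.symbols("alpha", real=True, positive=True)
expr = (x * sp.exp(x) - sp.exp(2 * sp.sqrt(1 + x**2))) / (sp.exp(alpha * x) + x**alpha)

# probe representative numeric values of the parameter
for a in (sp.Rational(3, 2), 2, 3):
    print(a, sp.limit(expr.subs(alpha, a), x, sp.oo))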
When SymPy generates C code,
is there a way to enforce CSE optimizations for pow (or powf) occurrences in an expression?
For example, this code snippet
from sympy import symbols, cse, ccode
from sympy.codegen.ast import real, float64

c, s = symbols('c s')
myexpr = c**6/1800 - c**5/100 - 0.00833333333333333*c**4*s**2 + 19*c**4/200 + 0.1*c**3*s**2 - 9*c**3/20 + c**2*s**4/120 - 0.57*c**2*s**2 + 43*c**2/40 - c*s**4/20 + 1.35*c*s**2 + 23*c/50 - 0.000555555555555556*s**6 + 19*s**4/200 - 1.075*s**2 - 2107/1800

sub_exprs, final_expr = cse([myexpr])
for var, expr in sub_exprs:
    print("const real", ccode(expr, standard='C99', assign_to=var, type_aliases={real: float64}))
print("return", ccode(final_expr[0], standard='C99', type_aliases={real: float64}), ";")
produces the following disappointing output:
const real x0 = pow(c, 2);
const real x1 = pow(c, 3);
const real x2 = pow(c, 4);
const real x3 = pow(s, 2);
const real x4 = pow(s, 4);
return (1.0/1800.0)*pow(c, 6) - 1.0/100.0*pow(c, 5) + 1.3500000000000001*c*x3 - 1.0/20.0*c*x4 + (23.0/50.0)*c - 0.00055555555555555599*pow(s, 6) - 0.56999999999999995*x0*x3 + (1.0/120.0)*x0*x4 + (43.0/40.0)*x0 + 0.10000000000000001*x1*x3 - 9.0/20.0*x1 - 0.0083333333333333297*x2*x3 + (19.0/200.0)*x2 - 1.075*x3 + (19.0/200.0)*x4 - 2107.0/1800.0 ;
Pow optimizations have been completely ignored.
What is the workaround for this?
Remark: I saw that this issue is partially mentioned here:
"The code printers don’t print optimal code in many cases. An example of this is powers in C. x**2 prints as pow(x, 2) instead of x*x. Other optimizations (like mathematical simplifications) should happen before the code printers."
The CSE routine in SymPy is not perfect (improved CSE is listed as an area for improvement), e.g.:
>>> sympy.cse([x**4, x**3*y])
([], [x**4, x**3*y])
Expanding pow in the printer, or before the printer, has been discussed for some time; there is now a create_expand_pow_optimization which can help somewhat:
>>> from sympy.codegen.rewriting import create_expand_pow_optimization
>>> expand_opt = create_expand_pow_optimization(3)
>>> expand_opt(x**5 + x**3)
x**5 + x*x*x
Note however that most compilers will already generate optimal assembly (regardless of CSE in the source code) if you pass them the right optimization flags.
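Putting the pieces together, one possible pipeline is to run cse first and then apply the expand-pow optimization to each resulting expression before printing. A sketch (using a small stand-in expression; substitute your own myexpr):
from sympy import symbols, cse, ccode
from sympy.codegen.rewriting import create_expand_pow_optimization

c, s = symbols('c s')
expr = c**4*s**2 + c**4 + c**3*s**2 + s**6      # stand-in expression

expand_opt = create_expand_pow_optimization(6)  # expand pow(b, n) into products for n <= 6

sub_exprs, reduced = cse([expr])
for var, sub in sub_exprs:
    print("const double", ccode(expand_opt(sub), standard='C99', assign_to=var))
print("return", ccode(expand_opt(reduced[0]), standard='C99'), ";")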
I'm trying to use SymPy to generate equations for non-linear least-squares fitting. Eventually I want this to be quite complex, but for the moment here's a simple case (though not too simple!): fitting a two-dimensional sinusoid to data. Here's the SymPy code:
from sympy import *
S, l, m = symbols('S l m', real=True)
u, v = symbols('u v', real=True)
Vobs = symbols('Vobs', complex=True)
Vres = Vobs - S * exp(- 1j * 2 * pi * (u*l+v*m))
J=Vres*conjugate(Vres)
axes = [S, l, m]
grad = derive_by_array(J, axes)
hess = derive_by_array(grad, axes)
One element of the grad term looks like:
- 2.0*I*pi*S*u*(-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs)*exp(2.0*I*pi*(l*u + m*v)) + 2.0*I*pi*S*u*(-S*exp(2.0*I*pi*(l*u + m*v)) + conjugate(Vobs))*exp(-2.0*I*pi*(l*u + m*v))
What I'd like is to replace the expanded term (-S*exp(-2.0*I*pi*(l*u + m*v)) + Vobs) by Vres and to contract the two conjugate terms into the more compact equivalent:
4.0*pi*S*u*im(Vres*exp(2.0*I*pi*(l*u + m*v)))
I cannot see how to do this with SymPy. The problem is already awkward for the first derivative (grad) but gets really out of hand with the second derivative (hess).
First of all, let's not use 1j in SymPy: it's a Python complex number built on floats, and floats are bad for symbolic math. SymPy's imaginary unit is I. So,
Vres = Vobs - S * exp(- I * 2 * pi * (u*l+v*m))
To replace the expression Vres by a symbol, we first need to create such a symbol. I'm going to call it Vres0, but its name will be Vres, so it prints as "Vres" in formulas.
Vres0 = symbols('Vres')
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()
The conjugate-substitute-conjugate back is needed because subs doesn't quite recognize the possibility of replacing the conjugate of an expression with the conjugate of the symbol.
Now g1 is
-2*I*pi*S*Vres*u*exp(2*I*pi*(l*u + m*v)) + 2*I*pi*S*u*exp(-2*I*pi*(l*u + m*v))*conjugate(Vres)
and we want to fold the sum of conjugate terms. I use a custom transformation rule for this: the rule fold_conjugates applies to every sum (Add) of two terms (len(f.args) == 2) where the second is the conjugate of the first (f.args[1] == f.args[0].conjugate()). The transformation it performs is to replace the sum by twice the real part of the first argument (2*re(f.args[0])). Like so:
from sympy.core.rules import Transform
fold_conjugates = Transform(lambda f: 2*re(f.args[0]),
lambda f: isinstance(f, Add) and len(f.args) == 2 and f.args[1] == f.args[0].conjugate())
g = g1.xreplace(fold_conjugates)
Final result: 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v))).
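For reference, here is the whole procedure collected into one runnable sketch (the same steps as above, with the imports spelled out):
from sympy import symbols, exp, pi, I, re, conjugate, derive_by_array, Add
from sympy.core.rules import Transform

S, l, m = symbols('S l m', real=True)
u, v = symbols('u v', real=True)
Vobs = symbols('Vobs', complex=True)

Vres = Vobs - S * exp(-I * 2 * pi * (u*l + v*m))   # I instead of 1j
J = Vres * conjugate(Vres)
grad = derive_by_array(J, [S, l, m])

Vres0 = symbols('Vres')                            # a plain symbol that prints as "Vres"
g1 = grad[1].subs(Vres, Vres0).conjugate().subs(Vres, Vres0).conjugate()

fold_conjugates = Transform(
    lambda f: 2*re(f.args[0]),
    lambda f: isinstance(f, Add) and len(f.args) == 2
              and f.args[1] == f.args[0].conjugate())

print(g1.xreplace(fold_conjugates))   # 4*pi*S*u*im(Vres*exp(2*I*pi*(l*u + m*v)))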
I have two univariate functions, f(x) and g(x), and I'd like to substitute g(x) = y to rewrite f(x) as some f2(y).
Here is a simple example that works:
In [240]: x = Symbol('x')
In [241]: y = Symbol('y')
In [242]: f = abs(x)**2 + 6*abs(x) + 5
In [243]: g = abs(x)
In [244]: f.subs({g: y})
Out[244]: y**2 + 6*y + 5
But now, if I try a slightly more complex example, it fails:
In [245]: h = abs(x) + 1
In [246]: f.subs({h: y})
Out[246]: Abs(x)**2 + 6*Abs(x) + 5
Is there a general approach that works for this problem?
The expression abs(x)**2 + 6*abs(x) + 5 does not actually contain abs(x) + 1 anywhere, so there is nothing to substitute for.
One can imagine changing it to abs(x)**2 + 5*(abs(x) + 1) + abs(x), with the substitution result being abs(x)**2 + 5*y + abs(x). Or maybe changing it to abs(x)**2 + 6*(abs(x) + 1) - 1, with the result being abs(x)**2 + 6*y - 1. There are other choices too. What should the result be?
There is no general approach to this task because it's not a well-defined task to begin with.
In contrast, the substitution f.subs(abs(x), y-1) is a clear instruction to replace all occurrences of abs(x) in the expression tree with y-1. It returns 6*y + (y - 1)**2 - 1.
The substitution above of abs(x) + 1 in abs(x)**2 + 6*abs(x) + 5 is a clear instruction too: find exact occurrences of the expression abs(x) + 1 in the syntax tree of abs(x)**2 + 6*abs(x) + 5 and replace those subtrees with the syntax tree of y. There are no such occurrences, so nothing is replaced (subs does try a few heuristics for partial matches in sums and products, but they do not apply here).
Aside: in addition to subs SymPy has a method .replace which supports wildcards, but I don't expect it to help here. In my experience, it is overeager to replace:
>>> a = Wild('a')
>>> b = Wild('b')
>>> f.replace(a*(abs(x) + 1) + b, a*y + b)
5*y/(Abs(x) + 1) + 6*y*Abs(x*y)/(Abs(x) + 1)**2 + (Abs(x*y)/(Abs(x) + 1))**(2*y/(Abs(x) + 1))
Eliminate a variable
There is no "eliminate" in SymPy. One can attempt to emulate it with solve by introducing another variable, e.g.,
fn = Symbol('fn')
solve([Eq(fn, f), Eq(abs(x) + 1, y)], [fn, x])
which attempts to solve for fn and x; the solution for fn is then an expression free of x, if the system can be solved at all.
In this case it does not work, because of abs(): solving for something that sits inside an absolute value is not implemented in SymPy. Here is a workaround.
fn, ax = symbols('fn ax')
solve([Eq(fn, f.subs(abs(x), ax)), Eq(ax + 1, y)], [fn, ax])
This outputs [(y*(y + 4), y - 1)], where the first element is what you want: a solution for fn.
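A quick sanity check is to substitute y = abs(x) + 1 back into that solution and confirm that it reproduces f (continuing the session above):
>>> f2 = y*(y + 4)
>>> expand(f2.subs(y, abs(x) + 1))
Abs(x)**2 + 6*Abs(x) + 5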