I have an indexed symbol x in SymPy and an expression which is a sum of second-degree monomials, like x[1]*x[2] + x[3]**2 + x[4]*x[1]. I would like to turn such an expression into x[1,2] + x[3,3] + x[4,1], i.e. replace x[i]*x[j] -> x[i,j].
There is an upper bound on the indices which may appear, so I could construct a large table hard-coding each substitution. Is there a better way?
Responding to the comment: to create x I write
from sympy.tensor import IndexedBase
x = IndexedBase('x')
You can use ordered to put the indices in order:
>>> from sympy import *
>>> i, j = symbols('i j', cls=Wild)
>>> x = IndexedBase('x')
>>> e = x[1]*x[3] + x[2]*x[1] + x[3]**2
>>> def new(o, x):
...     if o.is_Mul:
...         i, j = list(ordered([f.args[1] for f in o.args]))
...     elif o.is_Pow:
...         i = j = o.base.args[1]
...     else:
...         raise NotImplementedError
...     return x[i, j]
...
>>> e.xreplace({o: new(o, x) for o in e.find(x[i]*x[j])})
x[1, 2] + x[1, 3] + x[3, 3]
But a simpler way to do the same thing is to use a Piecewise result in the replace call:
>>> e.replace(x[i]*x[j], Piecewise((x[i, j], i < j), (x[j, i], True)))
x[1, 2] + x[1, 3] + x[3, 3]
You can use replace with a Wild.
In [1]: i, j = symbols('i j', cls=Wild)
In [2]: x = IndexedBase('x')
In [3]: e = x[1]*x[3] + x[2]*x[1]
In [4]: e.replace(x[i]*x[j], x[i, j])
Out[4]: x[1, 2] + x[1, 3]
Function f (assume n=3 for simplicity) is defined in the code below as f = x[j]*R/Sum(x[k], (k, 1, n)) - c*x[j].
There are 3 symbols related to entities, corresponding to x[j] (j=1,2,3) respectively. R and c are other symbols, which can be treated as constants for now. I try to differentiate f w.r.t. x[j] and solve the resulting equations together to get x[j] = g(R, c). However, SymPy cannot rearrange or isolate x[j] from the equation.
from sympy import *

real_n = 3
x = IndexedBase('x')
j, k, n = symbols('j k n', cls=Idx)
R, c = symbols('R c')
f = x[j]*R/Sum(x[k], (k, 1, real_n)) - c*x[j]
equ = diff(f, x[j])
ee = solve([equ.subs(j, 1), equ.subs(j, 2), equ.subs(j, 3)], (x[1], x[2], x[3]))
simplify(ee)
Sympy's result:
{x[1]: (R*Sum(x[k], (k, 1, 3)) - c*Sum(x[k], (k, 1, 3))**2)/(R*Sum(KroneckerDelta(1, k), (k, 1, 3))),
x[2]: (R*Sum(x[k], (k, 1, 3)) - c*Sum(x[k], (k, 1, 3))**2)/(R*Sum(KroneckerDelta(2, k), (k, 1, 3))),
x[3]: (R*Sum(x[k], (k, 1, 3)) - c*Sum(x[k], (k, 1, 3))**2)/(R*Sum(KroneckerDelta(3, k), (k, 1, 3)))}
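As a hedged side note (our addition, not part of the original question): the KroneckerDelta sums appearing in this result do collapse to plain numbers once evaluated, which is what the doit-based answer below exploits:
>>> Sum(KroneckerDelta(1, k), (k, 1, 3)).doit()
1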
I tried to check whether the indexed symbol caused the error, and rewrote x[j] as 3 different symbols, but it still didn't work.
from sympy import *

a, b, c = symbols('a b c', cls=Idx)
R = symbols('R')
eq1 = diff(a/(a + b + c) - a*R, a)
eq2 = diff(b/(a + b + c) - b*R, b)
eq3 = diff(c/(a + b + c) - c*R, c)
print(eq1, "\n", eq2, "\n", eq3)
solve([eq1, eq2, eq3], [a, b, c])
Output:
-R + 1/(a + b + c) - a/(a + b + c)**2
-R + 1/(a + b + c) - b/(a + b + c)**2
-R + 1/(a + b + c) - c/(a + b + c)**2
[]
Is there something wrong with my approach? Is it possible to approach this problem in SymPy from another angle?
Any suggestions for solving the equations are also most welcome.
You can use doit to expand the summation and then solve:
In [6]: solve([equ.subs(j,1).doit(),equ.subs(j,2).doit(),equ.subs(j,3).doit()], (x[1],x[2],x[3]))
Out[6]: [((R + 3*sqrt(R**2))/(18*c), 2*R/(9*c), 2*R/(9*c))]
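A hedged follow-up sketch (our addition, not part of the original answer): declaring R and c positive lets SymPy reduce sqrt(R**2) to R, so all three components come out as the symmetric solution 2*R/(9*c):
from sympy import IndexedBase, Idx, Sum, symbols, diff, solve

x = IndexedBase('x')
j, k = symbols('j k', cls=Idx)
R, c = symbols('R c', positive=True)  # positivity is the extra assumption
f = x[j]*R/Sum(x[k], (k, 1, 3)) - c*x[j]
equ = diff(f, x[j])
solve([equ.subs(j, i).doit() for i in range(1, 4)], (x[1], x[2], x[3]))
# expected: [(2*R/(9*c), 2*R/(9*c), 2*R/(9*c))]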
I am using isympy and have the expression:
expr = x + 2 * y
And I want to substitute x with the values in [0, 1, 2, 3]. Currently I am doing:
Eq(Symbol('X_0'), expr.subs(x, 0))
Eq(Symbol('X_1'), expr.subs(x, 1))
Eq(Symbol('X_2'), expr.subs(x, 2))
Eq(Symbol('X_3'), expr.subs(x, 3))
Output:
X₀ = 2⋅y
X₁ = 2⋅y + 1
X₂ = 2⋅y + 2
X₃ = 2⋅y + 3
Is there a better way to do this? I would like Xₖ to be a function which can take a list of k values.
Use a list comprehension to return a list given a list input:
In [1]: expr = x + 2*y
In [2]: [expr.subs(x,i) for i in range(4)]
Out[2]: [2⋅y, 2⋅y + 1, 2⋅y + 2, 2⋅y + 3]
This can of course be cast as a function.
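For instance, a minimal sketch of such a function (the name make_eqs is ours, not a SymPy API):
from sympy import Eq, Symbol, symbols

x, y = symbols('x y')
expr = x + 2*y

def make_eqs(expr, sym, values):
    """Build one Eq(X_k, ...) per substituted value, as in the question."""
    return [Eq(Symbol('X_%d' % k), expr.subs(sym, v))
            for k, v in enumerate(values)]

make_eqs(expr, x, range(4))
# [Eq(X_0, 2*y), Eq(X_1, 2*y + 1), Eq(X_2, 2*y + 2), Eq(X_3, 2*y + 3)]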
Let us consider the following code:
from sympy import *
n = Symbol('n', real=True)
k = Symbol('k', real=True)
f = lambda n: summation(exp(sqrt(k)), (k, 1, n))
display(f(n))
display(f(5))
It results in:
Piecewise((n*exp(c3_), Eq(exp(c2_), 1)), ((exp(c2_) - exp(c2_)**(n + 1))*exp(c3_)/(-exp(c2_) + 1), True))
E + exp(sqrt(2)) + exp(sqrt(3)) + exp(2) + exp(sqrt(5))
Questions
What are the constants c1_, c2_ and c3_?
Why didn't the first display return a summation formula?
How did SymPy produce the second output, assuming f is represented as in the first output?
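A hedged aside on the second question (our addition): summation(...) is essentially Sum(...).doit(), i.e. it eagerly tries to find a closed form. Constructing the Sum without doit keeps it symbolic, and evaluating only for concrete limits reproduces the explicit sum:
from sympy import Sum, exp, sqrt, symbols

n = symbols('n', real=True)
k = symbols('k', real=True)

f = lambda n: Sum(exp(sqrt(k)), (k, 1, n))
f(n)         # stays an unevaluated Sum(exp(sqrt(k)), (k, 1, n))
f(5).doit()  # E + exp(sqrt(2)) + exp(sqrt(3)) + exp(2) + exp(sqrt(5))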
I have a list of expressions like 4.0*x[0] + 5.0*x[10] + 1 = 0
I would like to turn these into vectors according to the coefficients, like [4.0, 0, 0, ..., 5.0, ..., 1]. The reason is that some of my equations may be linearly dependent, and I want to run QR from the numpy library so I can find a linearly independent subset.
I can get the constant term by doing expr.replace(x[i], 0), with i a wildcard index. I can also get most of the other terms with expr.atoms(Mul), which gives me the set {4.0*x[0], 5.0*x[10]}; then for each of these expressions I can do expr.atoms(Indexed).pop() and expr.atoms(Float).pop() to split the parts.
The trouble is when I have an expression like x[0] + 5.0*x[10] + 1 = 0, where the first variable appears with an implicit coefficient of 1. That term is no longer recognized as a Mul object.
In any case, is there a better way to achieve my goal?
If you give your symbols a specific order, as in the code below, you could convert the expression to a polynomial and get its coefficients:
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> a1, a2, a3, a4 = symbols('a[1], a[2], a[3], a[4]')
>>> used_symbols = (a1, a2, a3, a4)
>>> replacements = [(n, x**(enu + 1)) for enu, n in enumerate(used_symbols)]
>>> expr = 5 + a1 + 4*a4
>>> Poly(expr.subs(replacements)).all_coeffs()
[4, 0, 0, 1, 5]
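Since all_coeffs() lists coefficients from the highest power down, you can map them back to the symbols; a small usage sketch of our own:
>>> coeffs = Poly(expr.subs(replacements)).all_coeffs()
>>> dict(zip(list(reversed(used_symbols)) + ['const'], coeffs))
{a[4]: 4, a[3]: 0, a[2]: 0, a[1]: 1, 'const': 5}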
And you could retrieve a list of the used symbols too, if they are not known beforehand, with the following recursive function:
def retrieve_used_symbols(expr):
    """Return the symbols used in `expr` in a list."""
    used_symbols = []
    for term in expr.args:
        if term.is_Symbol:
            used_symbols.append(term)
        else:
            used_symbols.extend(retrieve_used_symbols(term))
    return used_symbols
The latter comes in handy when you have mixed symbols:
>>> crazy_expr = expr + 10*y-2*z
>>> crazy_expr
a[1] + 4*a[4] + 10*y - 2*z + 5
>>> used_symbols = retrieve_used_symbols(crazy_expr)
>>> replacements = [(n, x**(enu + 1)) for enu, n in enumerate(used_symbols)]
>>> Poly(crazy_expr.subs(replacements)).all_coeffs()
[4, -2, 1, 10, 5]
>>> list(reversed(used_symbols))
[a[4], z, a[1], y]
For an IndexedBase object, it's even simpler:
coeffs = [expr.coeff(x[i]) for i in range(11)]  # range(11) covers indices 0 through 10 from the example
But you'll still need to add the constant term, which, like you said, you can obtain from a wildcard substitution:
ind = Wild('i')
constant_term = expr.replace(x[ind], 0)
(as requested by Oliver W.)
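Putting it together, a hedged sketch of the full pipeline the question describes (the names exprs, rows, A, and nmax are ours): build one coefficient row per equation, then hand the matrix to numpy's QR for the independence check:
import numpy as np
from sympy import IndexedBase, Wild

x = IndexedBase('x')
ind = Wild('i')
exprs = [4.0*x[0] + 5.0*x[10] + 1, x[0] + 5.0*x[10] + 1]
nmax = 11  # upper bound on the indices, as the question assumes one exists

# each row: [coeff of x[0], ..., coeff of x[nmax-1], constant term]
rows = [[float(e.coeff(x[i])) for i in range(nmax)] +
        [float(e.replace(x[ind], 0))]
        for e in exprs]
A = np.array(rows)  # ready for np.linalg.qr(A.T) to probe row independence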
Given
>>> x = IndexedBase('x')
>>> eqs = 4*x[0] + 5*x[5] + 1, x[1] - x[2]
>>> v = list(ordered(Tuple(*eqs).atoms(Indexed)))
One could do it like this
>>> [[eq.coeff(vi) for vi in v] + [eq.as_coeff_Add()[0]] for eq in eqs]
[[4, 0, 0, 5, 1], [0, 1, -1, 0, 0]]
Much of this is available through the matrix method jacobian, but to use it you have to replace the x[i] with symbols (since diff only works with functions or symbols, IIRC):
>>> d = [Dummy() for vi in v]
>>> z = dict(zip(d, [0]*len(d)))
>>> m = Matrix([eq.xreplace(dict(zip(v, d))) for eq in eqs])
>>> m.jacobian(d)
Matrix([
[4, 0, 0, 5],
[0, 1, -1, 0]])
>>> m.subs(z)
Matrix([
[1],
[0]])
Say I have variables x and y which are indexed:
from sympy.tensor import IndexedBase
x = IndexedBase('x')
y = IndexedBase('y')
And I have an expression like e = x[1]*y[2] + x[5]*y[10]. I want to find all indices used by each of x and y. I'm looking for a function which might work like this: e.indices(y) == [2, 10] and e.indices(x) == [1, 5].
Is there a way I can iterate through the terms x[i]*y[j]? And if so, is there a way to split a product into its factors and, for each of those, pull out which letter is being used and which index appears?
The following should get you headed in the right direction:
>>> from sympy.tensor import IndexedBase, Indexed
>>> from sympy.utilities.iterables import sift, flatten
>>> x = IndexedBase('x')
>>> y = IndexedBase('y')
>>> e = x[1]*y[2] + x[5]*y[10]
>>> e.atoms(IndexedBase)
{x, y}
>>> e.atoms(Indexed)
{x[5], y[10], x[1], y[2]}
>>> sifted = sift(_, lambda i: i.base)
>>> sifted[x]
[x[5], x[1]]
>>> sifted[y]
[y[10], y[2]]
>>> [i.indices for i in _]
[(10,), (2,)]
>>> flatten(_)
[10, 2]
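Putting those pieces together, a sketch of the helper the question asked for (the name indices is ours; it is not an existing SymPy method):
from sympy import IndexedBase, Indexed
from sympy.utilities.iterables import flatten

def indices(expr, base):
    """Return a sorted list of all indices used with `base` in `expr`."""
    return sorted(flatten(i.indices for i in expr.atoms(Indexed)
                          if i.base == base))

x = IndexedBase('x')
y = IndexedBase('y')
e = x[1]*y[2] + x[5]*y[10]
indices(e, y)  # [2, 10]
indices(e, x)  # [1, 5]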