Systems of Linear Equations - sympy

I am using the following code to find x and y in a system of linear equations.
I was wondering if there is a way to add two more constraints to it.
For example, how can I add x > 0 and y > 0 to the system (3x + 4y = 7 and 5x + 6y = 8) so that only a positive solution is returned?
from sympy import *
x, y = symbols(['x', 'y'])
system = [Eq(3*x + 4*y, 7), Eq(5*x + 6*y, 8)]
soln = solve(system, [x, y])
print(soln)

You can declare the symbols x and y to be positive:
In [4]: from sympy import *
...: x, y = symbols(['x', 'y'])
...: system = [Eq(3*x + 4*y, 7), Eq(5*x + 6*y, 8)]
...: soln = solve(system, [x, y])
...: print(soln)
{x: -5, y: 11/2}
In [5]: from sympy import *
...: x, y = symbols(['x', 'y'], positive=True)
...: system = [Eq(3*x + 4*y, 7), Eq(5*x + 6*y, 8)]
...: soln = solve(system, [x, y])
...: print(soln)
[]
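With positive=True the assumptions are inconsistent with the actual solution (x = -5), so solve returns an empty list. If you would rather get the solution whenever it happens to be positive, one alternative (a sketch, not the only way to do this) is to solve without assumptions and test the constraints afterwards:

from sympy import symbols, Eq, solve

x, y = symbols('x y')
system = [Eq(3*x + 4*y, 7), Eq(5*x + 6*y, 8)]
soln = solve(system, [x, y])          # {x: -5, y: 11/2}

# Keep the solution only if it satisfies the extra constraints x > 0, y > 0.
if soln and soln[x] > 0 and soln[y] > 0:
    print(soln)
else:
    print("no solution with x > 0 and y > 0")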

Sympy doing subs multiple times

I am using isympy and have the expression:
expr = x + 2 * y
And I want to substitute x with the values in [0, 1, 2, 3]. Currently I am doing:
Eq(Symbol('X_0'), expr.subs(x, 0))
Eq(Symbol('X_1'), expr.subs(x, 1))
Eq(Symbol('X_2'), expr.subs(x, 2))
Eq(Symbol('X_3'), expr.subs(x, 3))
Output:
X₀ = 2⋅y
X₁ = 2⋅y + 1
X₂ = 2⋅y + 2
X₃ = 2⋅y + 3
Is there a better way to do this? I would like Xₖ to be a function which can take a list of k values.
Use a list comprehension to return a list given a list input:
In [1]: expr = x + 2*y
In [2]: [expr.subs(x, i) for i in range(4)]
Out[2]: [2⋅y, 2⋅y + 1, 2⋅y + 2, 2⋅y + 3]
This can of course be cast as a function.
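For example, a small wrapper (the name X_k here is just illustrative, not a SymPy API) that builds the equations for a list of k values:

from sympy import symbols, Symbol, Eq

x, y = symbols('x y')
expr = x + 2*y

def X_k(ks):
    """Return one equation X_k = expr.subs(x, k) for each k in ks."""
    return [Eq(Symbol('X_%d' % k), expr.subs(x, k)) for k in ks]

print(X_k(range(4)))
# [Eq(X_0, 2*y), Eq(X_1, 2*y + 1), Eq(X_2, 2*y + 2), Eq(X_3, 2*y + 3)]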

How to pass two parameters into lambda functions in map

I wish to use map to do the following thing:
res = []
arr1 = [1, 2, 3]
arr2 = [5, 0, 10]
for n, m in zip(arr1, arr2):
    res.append(n - 0.5 * m)
This is equivalent to the following list comprehension:
res = [n - 0.5 * m for n, m in zip(arr1, arr2)]
But it fails using map:
res = map(lambda x, y: x - 0.5 * y, zip(arr1, arr2))
TypeError: <lambda>() takes exactly 2 arguments (1 given)
Is there a neat way to do this using map?
You zipped arr1 and arr2 into a single tuple argument, but the lambda expects two separate arguments. Pass the two iterables to map directly:
>>> res = []
>>> arr1 = [1, 2, 3]
>>> arr2 = [5, 0, 10]
>>> res = map(lambda x, y: x - 0.5 * y, arr1, arr2)
>>> res
[-1.5, 2.0, -2.0]
>>> map(lambda (x, y): x - 0.5 * y, zip(arr1, arr2))
[-1.5, 2.0, -2.0]
You could also keep the zip and have the lambda unpack the tuple, as in the second example, but I prefer what DTing suggested: passing the two iterables to map directly.
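Note that tuple parameters in a lambda (the lambda (x, y): ... form above) are Python 2 only; they were removed in Python 3, where map also returns a lazy iterator rather than a list. Assuming Python 3, a sketch of the equivalents:

from itertools import starmap

arr1 = [1, 2, 3]
arr2 = [5, 0, 10]

# Pass both iterables to map and let it do the pairing itself.
res = list(map(lambda x, y: x - 0.5 * y, arr1, arr2))

# Or keep the zip and unpack each pair with itertools.starmap.
res2 = list(starmap(lambda x, y: x - 0.5 * y, zip(arr1, arr2)))

print(res)   # [-1.5, 2.0, -2.0]
print(res2)  # [-1.5, 2.0, -2.0]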

Turning a sympy expression into a vector to find linearly independent subset

I have a list of expressions like 4.0*x[0] + 5.0*x[10] + 1 = 0
I would like to turn these into vectors according to the coefficients like [4.0, 0, 0, ..., 5.0, ... , 1]. The reason is that some of my equations may be linearly dependent and I want to run QR from the numpy library so I can find a linearly independent subset.
I can get the constant term by doing expr.replace(x[i], 0) with i a wildcard index. I can also get most of the other terms by expr.atoms(Mul) which gives me the set 4.0*x[0], 5.0*x[10] and then for each of these expressions I can do expr.atoms(Indexed).pop() and expr.atoms(Float).pop() to split the parts.
The trouble is when I have an expression like x[0] + 5.0*x[10] + 1 = 0, where the first variable appears with an implicit coefficient of 1. The term is no longer recognized as a Mul object.
In any case, is there a better way to achieve my goal?
If you give your symbols a specific order, as in the code below, you could convert the expression to a polynomial and get its coefficients:
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> a1, a2, a3, a4 = symbols('a[1], a[2], a[3], a[4]')
>>> used_symbols = (a1, a2, a3, a4)
>>> replacements = [(n, x**(enu+1)) for enu,n in enumerate(used_symbols)]
>>> expr = 5 + a1 + 4*a4
>>> Poly(expr.subs(replacements)).all_coeffs()
[4, 0, 0, 1, 5]
If the used symbols are not known beforehand, you can retrieve them with the following recursive function:
def retrieve_used_symbols(expr):
    """Return the symbols used in `expr` in a list."""
    used_symbols = []
    for term in expr.args:
        if term.is_Atom and term.is_Symbol:
            used_symbols.append(term)
        else:
            used_symbols.extend(retrieve_used_symbols(term))
    return used_symbols
The latter comes in handy when you have mixed symbols:
>>> crazy_expr = expr + 10*y-2*z
>>> crazy_expr
a[1] + 4*a[4] + 10*y - 2*z + 5
>>> used_symbols = retrieve_used_symbols(crazy_expr)
>>> replacements = [(n, x**(enu+1)) for enu,n in enumerate(used_symbols)]
>>> Poly(crazy_expr.subs(replacements)).all_coeffs()
[4, -2, 1, 10, 5]
>>> list(reversed(used_symbols))
[a[4], z, a[1], y]
For an IndexedBase object, it's even simpler:
coeffs = [expr.coeff(x[i]) for i in range(10)]
But you'll still need to add the constant term, which, like you said, you can obtain from a wildcard substitution:
ind = Wild('i')
constant_term = expr.replace(x[ind], 0)
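Putting those two pieces together, a minimal sketch (assuming x = IndexedBase('x') and that no index above 10 occurs):

from sympy import IndexedBase, Wild

x = IndexedBase('x')
expr = 4.0*x[0] + 5.0*x[10] + 1

n = 11                 # one past the highest index in use (assumption)
ind = Wild('i')

# Coefficient of each x[i], with the constant term appended at the end.
row = [expr.coeff(x[i]) for i in range(n)] + [expr.replace(x[ind], 0)]
print(row)  # 4.0 at position 0, 5.0 at position 10, constant 1 last, zeros elsewhere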
As requested by Oliver W., here is another approach. Given
>>> x = IndexedBase('x')
>>> eqs = 4*x[0] + 5*x[5] + 1, x[1] - x[2]
>>> v = list(ordered(Tuple(*eqs).atoms(Indexed)))
One could do it like this
>>> [[eq.coeff(vi) for vi in v] + [eq.as_coeff_Add()[0]] for eq in eqs]
[[4, 0, 0, 5, 1], [0, 1, -1, 0, 0]]
Much of this is also available through the matrix method jacobian, but to use it you have to replace the x[i] with symbols (since diff only works with functions or symbols, IIRC):
>>> d = [Dummy() for vi in v]
>>> z = dict(zip(d, [0]*len(d)))
>>> m = Matrix([eq.xreplace(dict(zip(v, d))) for eq in eqs])
>>> m.jacobian(d)
Matrix([
[4, 0, 0, 5],
[0, 1, -1, 0]])
>>> m.subs(z)
Matrix([
[1],
[0]])
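To tie this back to the stated goal of finding a linearly independent subset with numpy's QR, here is a minimal sketch; the coefficient rows are hypothetical (constant term last), with the third row deliberately a multiple of the first:

import numpy as np

rows = np.array([[4, 0, 0, 5, 1],
                 [0, 1, -1, 0, 0],
                 [8, 0, 0, 10, 2]], dtype=float)  # row 2 == 2 * row 0

# QR of the transpose: r[i, i] is (up to sign) the norm of the part of
# row i that is orthogonal to the earlier rows, so a tiny value flags a
# row that depends linearly on the rows before it.
q, r = np.linalg.qr(rows.T)
tol = 1e-10
independent = [i for i in range(rows.shape[0]) if abs(r[i, i]) > tol]
print(independent)  # [0, 1] -> keep rows 0 and 1, drop the dependent row 2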

Getting used indices of indexed terms in a sympy sum

Say I have variables x and y which are indexed
from sympy.tensor import IndexedBase
x = IndexedBase('x')
y = IndexedBase('y')
And I have an expression like e = x[1]*y[2] + x[5]*y[10]. I want to find all indices used by each of x and y. I'm looking for a function which might work like this: e.indices(x) = [1, 5] and e.indices(y) = [2, 10].
Is there a way I can iterate through the terms x[i]*y[j]? And if so, is there a way to split a product into its factors and, for each factor, pull out which base is being used and which index appears?
The following should get you headed in the right direction:
>>> from sympy.tensor import IndexedBase, Indexed
>>> from sympy import sift, flatten
>>> x = IndexedBase('x')
>>> y = IndexedBase('y')
>>> e = x[1]*y[2] + x[5]*y[10]
>>> e.atoms(IndexedBase)
set([y, x])
>>> e.atoms(Indexed)
set([x[5], y[10], x[1], y[2]])
>>> sifted = sift(_, lambda i: i.base)
>>> sifted[x]
[x[5], x[1]]
>>> sifted[y]
[y[10], y[2]]
>>> [i.indices for i in _]
[(10,), (2,)]
>>> flatten(_)
[10, 2]
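If you want exactly the interface described in the question, the same idea can be wrapped in a small helper (indices_of is a hypothetical name, not a SymPy method):

from sympy import IndexedBase, Indexed

def indices_of(expr, base):
    """Return a sorted list of the indices that `base` appears with in `expr`."""
    return sorted(idx
                  for term in expr.atoms(Indexed)
                  if term.base == base
                  for idx in term.indices)

x = IndexedBase('x')
y = IndexedBase('y')
e = x[1]*y[2] + x[5]*y[10]

print(indices_of(e, x))  # [1, 5]
print(indices_of(e, y))  # [2, 10]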

Sympy substitution of x[i]*x[j] with x[i,j]

I have an indexed symbol x in Sympy and an expression which is a sum of second degree monomials like x[1]*x[2] + x[3]**2 + x[4]*x[1]. I would like to turn such an expression into x[1,2] + x[3,3] + x[4,1], i.e. replacing x[i]*x[j] -> x[i,j]
There is an upper bound on the indices which may appear, so I could construct a large table hard coding each substitution. Is there a better way?
Responding to the comment - to create x I write
from sympy.tensor import IndexedBase
x = IndexedBase('x')
You can use ordered to put the indices in order:
>>> from sympy import *
>>> i, j = symbols('i j', cls=Wild)
>>> x = IndexedBase('x')
>>> e = x[1]*x[3] + x[2]*x[1] + x[3]**2
>>> def new(o, x):
...     if o.is_Mul:
...         i, j = list(ordered([a.args[1] for a in o.args]))
...     elif o.is_Pow:
...         i = j = o.base.args[1]
...     else:
...         raise NotImplementedError
...     return x[i, j]
...
>>> e.xreplace(dict([(o, new(o, x)) for o in e.find(x[i]*x[j])]))
x[1, 2] + x[1, 3] + x[3, 3]
But a simpler way to do the same thing is to use a Piecewise result in the replace call:
>>> e.replace(x[i]*x[j], Piecewise((x[i, j], i < j), (x[j, i], True)))
x[1, 2] + x[1, 3] + x[3, 3]
You can use replace with a Wild.
In [1]: i, j = symbols('i j', cls=Wild)
In [2]: x = IndexedBase('x')
In [3]: e = x[1]*x[3] + x[2]*x[1]
In [4]: e.replace(x[i]*x[j], x[i, j])
Out[4]: x[1, 2] + x[1, 3]