Vector not defined by components in sympy

Is it possible in the SymPy vector package to initialize a vector without having to declare its components? Often when we do symbolic calculations it is not necessary to make the components explicit.

The Vector objects in the sympy.vector module always have three slots for components, which have to be filled with numeric or symbolic expressions. If you don't want to provide names for the components yourself, matrix_to_vector can be used, passing a column matrix of freshly created symbols to fill the slots.
from sympy import symbols, Matrix
from sympy.vector import CoordSys3D, matrix_to_vector

N = CoordSys3D("N")
# symbols("v_1:4") creates v_1, v_2, v_3; pack them into a column matrix
v = matrix_to_vector(Matrix(symbols("v_1:4")), N)
Now v is the vector v_1*N.i + v_2*N.j + v_3*N.k, with symbolic components v_1, v_2, v_3.
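For instance, v can now be manipulated like any symbolic vector (both dot and magnitude are part of the sympy.vector API):

print(v.dot(N.i))     # v_1
print(v.magnitude())  # sqrt(v_1**2 + v_2**2 + v_3**2)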
In another direction, one can do linear algebra using matrix expressions, representing vectors as matrices with one column. This conforms better to the idea of manipulating vectors without listing their components:
from sympy import MatrixSymbol

v = MatrixSymbol("v", 3, 1)
A = MatrixSymbol("A", 3, 3)
print(A*v)                  # stays simply A*v
print((A*v).as_explicit())  # spelled out in components
prints
Matrix([
[A[0, 0]*v[0, 0] + A[0, 1]*v[1, 0] + A[0, 2]*v[2, 0]],
[A[1, 0]*v[0, 0] + A[1, 1]*v[1, 0] + A[1, 2]*v[2, 0]],
[A[2, 0]*v[0, 0] + A[2, 1]*v[1, 0] + A[2, 2]*v[2, 0]]])
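In the same vein, an inner product stays unevaluated until you expand it; continuing the snippet above:

print(v.T * v)                  # 1x1 matrix expression v.T*v
print((v.T * v).as_explicit())  # Matrix([[v[0, 0]**2 + v[1, 0]**2 + v[2, 0]**2]])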

Related

Are there any functions in BLAS that can perform skew-symmetric matrix-vector products?

I'm thinking of performing some calculations with Intel MKL, specifically the matrix-vector Sparse BLAS functions, for a program in Fortran.
I can express my calculations in matrices that happen to be sparse and skew-symmetric.
From what I can see, Sparse BLAS has sparse functions for general and symmetric matrices, so I wanted to know if there was a way to work with a sparse skew-symmetric matrix instead, because I imagine it would reduce the memory footprint.
TL;DR: MKL Sparse BLAS can do matrix-vector multiplications with a sparse matrix expressed as its upper/lower triangle, using the mkl_scsrmv subroutine and supplying 'A' as the first element of the matrix descriptor array.
OK, I managed to find the answer to my question when I started testing the general MKL Sparse BLAS matrix-vector multiplication in CSR format (mkl_?csrmv).
I learnt that there is a character array that is used to describe the input matrix (matdescra). The first character in this array can be set to 'A', which causes the subroutine to interpret the input matrix as skew-symmetric. For example (not necessarily a good one),
given a matrix A and a vector x:
A = [  0   1   2 ]      x = [ 1 ]
    [ -1   0   3 ]          [ 2 ]
    [ -2  -3   0 ]          [ 3 ]
The upper triangle of A can be represented in the (one-based) three-array CSR variation as
val      = [1, 2, 3]
col      = [2, 3, 3]
rowstart = [1, 3, 4]
rowend   = [3, 4, 4]
and with the character array matdescra = ['A', 'U', 'N', 'F'], the matrix-vector product is obtained by
call mkl_scsrmv('n', 3, 3, 1., matdescra, val, col, rowstart, rowend, x, 1., y)
where the output (a vector) is added to the vector-array y.
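If you want to sanity-check the numbers outside Fortran, here is a small SciPy sketch (my own, not MKL) that rebuilds the full skew-symmetric matrix from its upper triangle and reproduces the same product:

import numpy as np
from scipy.sparse import csr_matrix

# upper triangle of A in zero-based CSR form
val = np.array([1., 2., 3.])
col = np.array([1, 2, 2])         # zero-based column indices
rowptr = np.array([0, 2, 3, 3])   # standard CSR row pointer
U = csr_matrix((val, col, rowptr), shape=(3, 3))

A = U - U.T                       # skew-symmetric: A = U - U^T
x = np.array([1., 2., 3.])
y = A @ x                         # mirrors y := alpha*A*x + beta*y with alpha=1, beta=1, y=0
print(y)                          # [ 8.  8. -8.]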

Solving system of equations in sympy with matrix variables

I am looking for a matrix that solves a complicated system of equations; that is, the equations would be hard to flatten into vector form. Here is a toy example showing the error that I'm getting:
from sympy import nsolve, symbols, Inverse
from sympy.polys.polymatrix import PolyMatrix
import numpy as np
import itertools as itr
nnodes = 2
nodes = list(range(nnodes))
u_mat = PolyMatrix([symbols(f'u{i}{j}') for i, j in itr.product(nodes, nodes)]).reshape(2, 2)
u_mat_inv = Inverse(u_mat)
equations = [
    u_mat_inv[0, 0] - 1,
    u_mat_inv[0, 1] - 0,
    u_mat_inv[1, 0] - 0,
    u_mat_inv[1, 1] - 1
]
s = nsolve(equations, u_mat, np.ones(4))
This raises the following error:
TypeError: X must be a row or a column matrix
Is there a way around this without having to write the equations in vector form?
I think nsolve is getting confused because u_mat is a matrix. Passing list(u_mat) gives the input in the form nsolve expects. The next problem is that your initial guess, a matrix of all ones, is singular, so it sits exactly on a singularity of the system of equations.
You can use normal solve here though:
In [24]: solve(equations, list(u_mat))
Out[24]: [(1, 0, 0, 1)]
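If you specifically need nsolve, here is a minimal sketch (my own; it evaluates the inverse explicitly on a plain Matrix and starts from a non-singular guess, both assumptions rather than the original setup):

from sympy import Matrix, eye, nsolve, symbols

u = Matrix(2, 2, symbols('u00 u01 u10 u11'))  # plain Matrix instead of PolyMatrix
eqs = list(u.inv() - eye(2))                  # four scalar equations, inverse evaluated explicitly
# [1, 0.5, 0.5, 1] is an arbitrary non-singular starting matrix
s = nsolve(eqs, list(u), [1, 0.5, 0.5, 1])    # should converge to the identity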

Computation of symbolic eigenvalues with sympy

I'm trying to compute the eigenvalues of a symbolic complex matrix M of size 3x3. In some cases, eigenvals() works perfectly. For example, the following code:
import sympy as sp
kx = sp.symbols('kx')
x = 0.
M = sp.Matrix([[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]])
M[0, 0] = 1.
M[0, 1] = 2./3.
M[0, 2] = 2./3.
M[1, 0] = sp.exp(1j*kx) * 1./6. + x
M[1, 1] = sp.exp(1j*kx) * 2./3.
M[1, 2] = sp.exp(1j*kx) * -1./3.
M[2, 0] = sp.exp(-1j*kx) * 1./6.
M[2, 1] = sp.exp(-1j*kx) * -1./3.
M[2, 2] = sp.exp(-1j*kx) * 2./3.
dict_eig = M.eigenvals()
returns 3 correct complex symbolic eigenvalues of M. However, when I set x = 1., I get the following error:
raise MatrixError("Could not compute eigenvalues for {}".format(self))
I also tried to compute eigenvalues as follows:
lam = sp.symbols('lambda')
cp = sp.det(M - lam * sp.eye(3))
eigs = sp.solveset(cp, lam)
but it returns a ConditionSet in every case, even when eigenvals() can do the job.
Does anyone know how to properly solve this eigenvalue problem, for any value of x?
Your definition of M made life too hard for SymPy because it introduced floating point numbers. When you want a symbolic solution, floats are to be avoided. That means:
instead of 1./3. (Python's floating point number) use sp.Rational(1, 3) (SymPy's rational number) or sp.S(1)/3, which has the same effect but is easier to type.
instead of 1j (Python's imaginary unit) use sp.I (SymPy's imaginary unit)
instead of x = 1., write x = 1 (Python 2.7 habits and SymPy go poorly together).
With these changes either solveset or solve find the eigenvalues, although solve gets them much faster. Also, you can make a Poly object and apply roots to it, which is probably most efficient:
M = sp.Matrix([
    [1, sp.Rational(2, 3), sp.Rational(2, 3)],
    [sp.exp(sp.I*kx)*sp.Rational(1, 6) + x, sp.exp(sp.I*kx)*sp.Rational(1, 6), sp.exp(sp.I*kx)*sp.Rational(-1, 3)],
    [sp.exp(-sp.I*kx)*sp.Rational(1, 6), sp.exp(-sp.I*kx)*sp.Rational(-1, 3), sp.exp(-sp.I*kx)*sp.Rational(2, 3)],
])
lam = sp.symbols('lambda')
cp = sp.det(M - lam * sp.eye(3))
eigs = sp.roots(sp.Poly(cp, lam))
(It would be easier to do from sympy import * than type all these sp.)
I'm not quite clear on why SymPy's eigenvals method reports failure even with the above modifications. As you can see in the source, it doesn't do much more than what the above code does: call roots on the characteristic polynomial. The difference appears to be in the way this polynomial is created: M.charpoly(lam) returns
PurePoly(lambda**3 + (I*sin(kx)/2 - 5*cos(kx)/6 - 1)*lambda**2 + (-I*sin(kx)/2 + 11*cos(kx)/18 - 2/3)*lambda + 1/6 + 2*exp(-I*kx)/3, lambda, domain='EX')
with the mysterious (to me) domain='EX'. Subsequently, an application of roots returns {}, no roots found. It looks like a deficiency of the implementation.
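One way to sidestep the EX domain (my own workaround, not part of the original answer) is to replace the exponentials by a single symbol z, so the characteristic polynomial gets a rational-function domain where roots has a chance to apply the cubic formula:

z = sp.symbols('z')
# exp(I*kx) and exp(-I*kx) are structurally distinct, so substitute both
Mz = M.subs({sp.exp(sp.I*kx): z, sp.exp(-sp.I*kx): 1/z})
p = sp.Poly(sp.det(Mz - lam*sp.eye(3)), lam)   # domain becomes QQ(z) instead of EX
eigs_z = sp.roots(p)
# substitute back if desired:
eigs = {e.subs(z, sp.exp(sp.I*kx)): m for e, m in eigs_z.items()}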

Index numpy arrays columns by another numpy array

I am trying to index a 2d matrix in numpy so that I can get all rows but only particular columns, given by another numpy array. It's something like the following:
a = [0,1,1,2,0,2,1]
d = [[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]]
I want to pick one element from each row of d, with the column chosen by the corresponding entry of a. So for the above example I want
t = [1,2,2,3,1,3,2]
I tried some of the methods given in the numpy documentation but was not able to get it working.
I think this is doable in MATLAB without any iteration. Can I do this in Python without looping over something?
This can be done with advanced indexing:
>>> a = numpy.array([0, 1, 1, 2, 0, 2, 1])
>>> d = numpy.array([[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]])
>>> d[numpy.arange(d.shape[0]), a]
array([1, 2, 2, 3, 1, 3, 2])
For arrays a, b, and c where b and c have integer dtype and b.shape == c.shape, advanced indexing d = a[b, c] gives d[i] == a[b[i], c[i]].
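An equivalent spelling (assuming NumPy 1.15 or newer, where take_along_axis was added):
>>> numpy.take_along_axis(d, a[:, None], axis=1).ravel()
array([1, 2, 2, 3, 1, 3, 2])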

Method for evaluating the unit vector (or normalising a vector) in Python or in the numerical libraries: numpy, scipy [duplicate]

I would like to convert a NumPy array to a unit vector. More specifically, I am looking for an equivalent version of this normalisation function:
def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    return v / norm
This function handles the situation where the vector v has a norm of 0.
Are there any similar functions provided in sklearn or numpy?
If you're using scikit-learn you can use sklearn.preprocessing.normalize:
import numpy as np
from sklearn.preprocessing import normalize
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print(np.all(norm1 == norm2))
# True
I agree that it would be nice if such a function were part of the included libraries. But it isn't, as far as I know. So here is a version for arbitrary axes that gives optimal performance.
import numpy as np
def normalized(a, axis=-1, order=2):
    l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
    l2[l2 == 0] = 1
    return a / np.expand_dims(l2, axis)
A = np.random.randn(3,3,3)
print(normalized(A,0))
print(normalized(A,1))
print(normalized(A,2))
print(normalized(np.arange(3)[:,None]))
print(normalized(np.arange(3)))
This might also work for you:
import numpy as np
normalized_v = v / np.sqrt(np.sum(v**2))
but it fails when v has norm 0.
In that case, introducing a small constant to prevent the division by zero solves this.
As proposed in the comments, one could also use
v/np.linalg.norm(v)
To avoid division by zero I use eps, but that's maybe not great:
def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0:
        norm = np.finfo(v.dtype).eps
    return v / norm
If you have multidimensional data and want each axis normalized to its max or its sum:
def normalize(_d, to_sum=True, copy=True):
    # d is a (n x dimension) np array
    d = _d if not copy else np.copy(_d)
    d -= np.min(d, axis=0)
    d /= (np.sum(d, axis=0) if to_sum else np.ptp(d, axis=0))
    return d
This uses numpy's peak-to-peak function, np.ptp.
a = np.random.random((5, 3))
b = normalize(a, copy=False)
b.sum(axis=0)  # array([1., 1., 1.]), each column sums to 1
c = normalize(a, to_sum=False, copy=False)
c.max(axis=0)  # array([1., 1., 1.]), the max of each column is 1
If you don't need utmost precision, your function can be reduced to:
v_norm = v / (np.linalg.norm(v) + 1e-16)
You mentioned scikit-learn, so I want to share another solution.
scikit-learn MinMaxScaler
In scikit-learn, there is an API called MinMaxScaler which lets you customize the value range as you like.
It also deals with NaN issues for us:
NaNs are treated as missing values: disregarded in fit, and maintained
in transform. ... see reference [1]
Code sample
The code is simple, just type
# Let's say X_train is your input dataframe
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
# create a MinMaxScaler object
min_max_scaler = MinMaxScaler()
# feed in a numpy array
X_train_norm = min_max_scaler.fit_transform(X_train.values)
# wrap it up in a dataframe if you need one
df = pd.DataFrame(X_train_norm)
Reference
[1] sklearn.preprocessing.MinMaxScaler
There is also the function unit_vector() to normalize vectors in the popular transformations module by Christoph Gohlke:
import transformations as trafo
import numpy as np
data = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 2.0, 3.0]])
print(trafo.unit_vector(data, axis=1))
If you work with multidimensional arrays, the following fast solution is possible.
Say we have a 2D array which we want to normalize along the last axis, while some rows have zero norm:
import numpy as np
arr = np.array([
    [1, 2, 3],
    [0, 0, 0],
    [5, 6, 7]
], dtype=float)
lengths = np.linalg.norm(arr, axis=-1)
print(lengths) # [ 3.74165739 0. 10.48808848]
arr[lengths > 0] = arr[lengths > 0] / lengths[lengths > 0][:, np.newaxis]
print(arr)
# [[0.26726124 0.53452248 0.80178373]
# [0. 0. 0. ]
# [0.47673129 0.57207755 0.66742381]]
If you want to normalize n dimensional feature vectors stored in a 3D tensor, you could also use PyTorch:
import numpy as np
from torch import FloatTensor
from torch.nn.functional import normalize
vecs = np.random.rand(3, 16, 16, 16)
norm_vecs = normalize(FloatTensor(vecs), dim=0, eps=1e-16).numpy()
If you're working with 3D vectors, you can do this concisely using the toolbelt vg. It's a light layer on top of numpy and it supports single values and stacked vectors.
import numpy as np
import vg
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = vg.normalize(x)
print(np.all(norm1 == norm2))
# True
I created the library at my last startup, where it was motivated by uses like this: simple ideas which are way too verbose in NumPy.
Without sklearn and using just numpy, just define a function.
Assuming that the rows are the variables and the columns are the samples (axis=1):
import numpy as np

# Example array
X = np.array([[1, 2, 3], [4, 5, 6]])

def stdmtx(X):
    means = X.mean(axis=1)
    stds = X.std(axis=1, ddof=1)
    X = X - means[:, np.newaxis]
    X = X / stds[:, np.newaxis]
    return np.nan_to_num(X)
output:
X
array([[1, 2, 3],
[4, 5, 6]])
stdmtx(X)
array([[-1., 0., 1.],
[-1., 0., 1.]])
For a 2D array, you can use the following one-liner to normalize across rows. To normalize across columns, simply set axis=0.
a / np.linalg.norm(a, axis=1, keepdims=True)
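A quick illustration of both axes (a small made-up array):

import numpy as np

a = np.array([[3., 4.], [6., 8.]])
rows = a / np.linalg.norm(a, axis=1, keepdims=True)  # each row has unit norm
cols = a / np.linalg.norm(a, axis=0, keepdims=True)  # each column has unit norm
print(rows)  # [[0.6 0.8]
             #  [0.6 0.8]]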
If you want all values in [0; 1] for a 1d array, then just use
(a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
where a is your 1d array.
An example:
>>> a = np.array([0, 1, 2, 4, 5, 2])
>>> (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))
array([0. , 0.2, 0.4, 0.8, 1. , 0.4])
A note on this method: to preserve the proportions between values, the 1d array must contain at least one 0 and consist only of 0 and positive numbers.
A simple dot product would do the job. No need for any extra package.
x = x/np.sqrt(x.dot(x))
By the way, if the norm of x is zero, it is inherently a zero vector, and cannot be converted to a unit vector (which has norm 1). If you want to catch the case of np.array([0,0,...0]), then use
norm = np.sqrt(x.dot(x))
x = x/norm if norm != 0 else x