Python: Solving equation system (coefficients are arrays) - python-2.7

I can solve a system of equations (using NumPy) like this:
>>> a = np.array([[3,1], [1,2]])
>>> b = np.array([9,8])
>>> y = np.linalg.solve(a, b)
>>> y
array([ 2., 3.])
But, if I got something like this:
>>> x = np.linspace(1,10)
>>> a = np.array([[3*x,1-x], [1/x,2]])
>>> b = np.array([x**2,8*x])
>>> y = np.linalg.solve(a, b)
This doesn't work when the matrix coefficients are arrays, and what I want is to calculate the solution array "y" for each element of the array "x". I also can't calculate
>>> det(a)
The question is: how can I do that?

Check out the docs page. If you want to solve multiple systems of linear equations you can send in multiple arrays, but they have to have shape (N,M,M). That will be treated as a stack of N M-by-M arrays. A quote from the docs page is below:
Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array. This is indicated in the documentation via input parameter specifications such as a : (..., M, M) array_like. This means that if for instance given an input array a.shape == (N, M, M), it is interpreted as a “stack” of N matrices, each of size M-by-M. Similar specification applies to return values, for instance the determinant has det : (...) and will in this case return an array of shape det(a).shape == (N,). This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.
When I run your code I get,
>>> a.shape
(2, 2)
>>> b.shape
(2, 50)
Not sure exactly what problem you're trying to solve, but you need to rethink your inputs. You want a to have shape (N,M,M) and b to have shape (N,M). You will then get back an array of shape (N,M) (i.e. N solution vectors).
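For the arrays in the question, that means building a (50, 2, 2) stack of coefficient matrices and a (50, 2) stack of right-hand sides, one pair per element of x. A minimal sketch (assuming the same x = np.linspace(1,10) as above):
import numpy as np

x = np.linspace(1, 10)            # 50 sample points
N = x.size

# Stack of N 2x2 coefficient matrices: shape (N, 2, 2)
a = np.empty((N, 2, 2))
a[:, 0, 0] = 3 * x
a[:, 0, 1] = 1 - x
a[:, 1, 0] = 1 / x
a[:, 1, 1] = 2

# Matching stack of N right-hand sides: shape (N, 2)
b = np.stack([x**2, 8 * x], axis=-1)

y = np.linalg.solve(a, b)         # shape (N, 2): one solution vector per x
dets = np.linalg.det(a)           # shape (N,): one determinant per x
Here y[i] is the solution of the 2x2 system built from x[i], and dets[i] the corresponding determinant.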

Related

Using LAPACK's DORMQR with non-square Q

I want to use LAPACK to calculate Q * x and Q^T * x, where Q comes from the reduced QR factorization of an m by n matrix A (m > n), stored in the form of Householder reflectors and a vector tau, as obtained from DGEQRF, and x is a vector of length n in the case of Q * x and length m in the case of Q^T * x.
The documentation of DORMQR states that x is overwritten with the result, which already confuses me, since x and Q * x obviously have different dimensions if the original matrix A, and consequently its reduced Q, are not square. Furthermore it states that
"Q is of order M if SIDE = 'L' and of order N if SIDE = 'R'."
In my case, only the first half applies and M refers to the length of x. What do they mean by order? I have rarely ever heard the term "order" in the context of non-square matrices, and if so, it would be something like m by n, and not just a single number. Do they mean rank?
Can I even use DORMQR to calculate both Q * x and Q^T * x for a non-square Q, or is it not designed for this? Do I need to pad x with zeros?
DORMQR applies only when Q is a square matrix. Although the input A to the routine holds elementary reflectors, such as the output of DGEQRF, which can be more general, the documentation adds the restriction that Q "is a real orthogonal matrix".
Of course, to be orthogonal, Q must be square.
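To illustrate the point, here is a small NumPy sketch (using np.linalg.qr as a stand-in for DGEQRF/DORMQR rather than calling LAPACK directly): applying the square m-by-m Q to x padded with m - n zeros reproduces the reduced-Q product, and for Q^T * x no padding is needed at all.
import numpy as np

m, n = 6, 3
A = np.random.randn(m, n)

# Reduced (m x n) Q versus the full square (m x m) Q built from the
# same Householder reflectors.
Q_red, _ = np.linalg.qr(A, mode='reduced')
Q_full, _ = np.linalg.qr(A, mode='complete')

# Q * x for x of length n: pad x with m - n zeros, then apply the square Q.
x = np.random.randn(n)
x_padded = np.concatenate([x, np.zeros(m - n)])
assert np.allclose(Q_full.dot(x_padded), Q_red.dot(x))

# Q^T * x for x of length m: no padding; the first n entries of the result
# are what the reduced Q^T would give.
y = np.random.randn(m)
assert np.allclose(Q_full.T.dot(y)[:n], Q_red.T.dot(y))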

Solving a matrix equation containing MatrixSymbols of symbolic size in Sympy?

As an introduction I want to point out that if one has a matrix A consisting of 4 submatrices in a 2x2 pattern, where the diagonal blocks are square, then if we denote its inverse as X, the submatrix X22 = (A22 - A21(A11^-1)A12)^-1, which is quite easy to show by hand.
I was trying to do the same for a matrix of 4x4 submatrices, but it's quite tedious by hand. So I thought SymPy would be of some help, but I cannot figure out how (I have started by just trying to reproduce the 2x2 result).
I've tried:
import sympy as s
def blockmatrix(name, sizes, names=None):
    if names is None:
        names = sizes
    ll = []
    for i, (s1, n1) in enumerate(zip(sizes, names)):
        l = []
        for j, (s2, n2) in enumerate(zip(sizes, names)):
            l.append(s.MatrixSymbol(name+str(n1)+str(n2), s1, s2))
        ll.append(l)
    return ll

def eyes(*sizes):
    ll = []
    for i, s1 in enumerate(sizes):
        l = []
        for j, s2 in enumerate(sizes):
            if i == j:
                l.append(s.Identity(s1))
                continue
            l.append(s.ZeroMatrix(s1, s2))
        ll.append(l)
    return ll
n1, n2 = s.symbols("n1, n2", integer=True, positive=True, nonzero=True)
M = s.Matrix(blockmatrix("m", (n1, n2)))
X = s.Matrix(blockmatrix("x", (n1, n2)))
I = s.Matrix(eyes(n1, n2))
s.solve(M*X[:, 1:]-I[:, 1:], X[:, 1:])
but it just returns an empty list instead of the result.
I have also tried:
Using M*X==I but that just returns False (boolean, not an Expression)
Entering a list of equations
Using 'ordinary' symbols with commutative=False instead of MatrixSymbols -- this gives an exception with GeneratorsError: non-commutative generators: (x12, x22)
but all without luck.
Can you show how to derive a result with Sympy similar to the one I gave as an example for X22?
The most similar other questions on solving with MatrixSymbols seem to have been solved by working around doing exactly that, e.g. by using an array of the inner symbols instead. But since I am dealing with symbolically sized MatrixSymbols, that is not an option for me.
Is this what you mean by a matrix of 2x2 matrices?
>>> a = [MatrixSymbol(i,2,2) for i in symbols('a1:5')]
>>> A = Matrix(2,2,a)
>>> X = A.inv()
>>> print(X[1,1]) # [1,1] instead of [2,2] because indexing starts at 0
a1*(a1*a3 - a3*a1)**(-1)
[You indicated that this is not what you meant, and pointed out that the above result is not correct -- that appears to be an issue that should be resolved.]
I am not sure why this isn't implemented, but we can do the solving manually as follows:
>>> n = 2
>>> v = symbols('b:%s'%n**2,commutative=False)
>>> A = Matrix(n,n,symbols('a:%s'%n**2,commutative=False))
>>> B = Matrix(n,n,v)
>>> eqs = list(A*B - eye(n))
>>> for i in range(n**2):
...     s = solve(eqs[i],v[i])[0]
...     eqs[i+1:] = [e.subs(v[i],s) for e in eqs[i+1:]]
...
>>> s # solution for v[3] which is B22
(-a2*a0**(-1)*a1 + a3)**(-1)
You can change n to 3 and see a modestly complicated expression. Change it to 4 and check the result by hand to give a new definition to the word "tedious" ;-)
The special structure of the equations to be solved can allow for a faster solution, too: the variable of interest is the last factor in each term containing it:
>>> for i in range(n**2):
...     c,d = eqs[i].expand().as_independent(v[i])
...     assert all(j.args[-1]==v[i] for j in Add.make_args(d))
...     s = 1/d.subs(v[i], 1)*-c
...     eqs[i+1:] = [e.subs(v[i], s) for e in eqs[i+1:]]
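As a quick sanity check (not part of the original answer): after running either of the loops above with n = 2, the final s should match the hand-derived Schur-complement form from the question, written with the non-commutative symbols a0..a3:
>>> expected = (A[1, 1] - A[1, 0]*A[0, 0]**(-1)*A[0, 1])**(-1)
>>> s - expected
0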

How to do weighted sum as tensor operation in tensorflow?

I am trying to do a weighted sum of matrices in TensorFlow.
Unfortunately, my dimensions are not small and I have a problem with memory. It is also possible that I am doing something completely wrong.
I have two tensors: U with shape (B,F,M) and A with shape (C,B). I would like to do a weighted sum and stacking.
Weighted sum
For each index c from C, I have a vector of weights a from A, with shape (B,).
I want to use it for a weighted sum of U to get a matrix U_t with shape (F,M). This is pretty much the same as this, where I found some help.
Concatenation
I want to do this for each vector a in A, to get C matrices U_tc in a list; each U_tc has the shape (F,M) mentioned above. After that I concatenate all the matrices in the list to get a super matrix with shape (C*F,M).
My values are C=2500, M=500, F=80, B=300
In the beginning, I tried a very naive approach with many loops and element selections, which generated a huge number of operations.
Now, with help from this, I have the following:
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0), dtype=tf.float32)  # just for example
U_t = []
for ccc in xrange(C):
    a = A[ccc, :]
    a_broadcasted = tf.tile(tf.reshape(a, [B, 1, 1]), tf.stack([1, F, M]))
    U_t.append(tf.reduce_sum(tf.multiply(U, a_broadcasted), axis=0))  # shape (F,M)
U_tcs = tf.concat(U_t, axis=0)
Unfortunately, this fails with a memory error. I am not sure whether I did something wrong, or whether the computation simply involves too many operations, because I don't think the variables themselves are too large for memory; at least, I have had larger variables before and it was fine. (I have 16 GB of GPU memory.)
Am I doing the weighted sum correctly?
Any idea how to do it more efficiently?
I will appreciate any help. Thanks.
1. Weighted sum and Concatenation
You can use vector operations directly without loops when memory is not limited.
import tensorflow as tf
C,M,F,B=2500,500,80,300
U = tf.Variable(tf.truncated_normal([B, F, M],stddev=1.0 ,dtype=tf.float32)) #just for example
A = tf.Variable(tf.truncated_normal([C, B],stddev=1.0) ,dtype=tf.float32) #just for example
# shape=(C,B,1,1)
A_new = tf.expand_dims(tf.expand_dims(A,-1),-1)
# shape=(C,F,M)
U_t = tf.reduce_sum(tf.multiply(A_new , U),axis=1)
# shape=(C*F,M)
U_tcs = tf.reshape(U_t,(C*F,M))
2. Memory error
In fact, I also got a memory error when I ran the above code:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[2500,300,80,500]...
The broadcasted intermediate tf.multiply(A_new, U) has shape (C,B,F,M) = (2500,300,80,500), i.e. about 30 billion float32 values, roughly 120 GB, which is far more than the available GPU memory. With a small modification of the above code, it works properly on my 8 GB of GPU memory.
import tensorflow as tf
C,M,F,B=2500,500,80,300
U = tf.Variable(tf.truncated_normal([B, F, M],stddev=1.0 ,dtype=tf.float32)) #just for example
A = tf.Variable(tf.truncated_normal([C, B],stddev=1.0) ,dtype=tf.float32) #just for example
# shape=(C,B,1,1)
A_new = tf.expand_dims(tf.expand_dims(A,-1),-1)
U_t = []
for ccc in range(C):
    a = A_new[ccc,:]
    a_broadcasted = tf.reduce_sum(tf.multiply(a, U),axis=0)  # weighted sum over B: shape (F,M)
    U_t.append(a_broadcasted)
U_tcs = tf.concat(U_t,axis=0)
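A further option, not part of the original answer and only a sketch (assuming TensorFlow 1.x): the weighted sum is a contraction over the shared B axis, so tf.tensordot (or tf.einsum) computes all C weighted sums in one shot without ever materializing the (C,B,F,M) intermediate:
import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))     # just for example

# Contract the shared B axis: (C,B) x (B,F,M) -> (C,F,M).
U_t = tf.tensordot(A, U, axes=[[1], [0]])   # or: tf.einsum('cb,bfm->cfm', A, U)

# Lay the C matrices of shape (F,M) out one after another: shape (C*F, M).
U_tcs = tf.reshape(U_t, (C * F, M))
The largest tensor created here is the (C,F,M) result itself, about 100 million floats (roughly 400 MB at float32).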

ValueError: object of too small depth for desired array

I've searched and found out that this may be a problem concerning types, but trying to force the array to float using astype didn't work out. This must be a simple error, but I'm a beginner.
About the problem: I'm trying to form the spatial correlation matrix between the signals of all mics.
R_a[k][l] = np.correlate(self.mic_list[k].delayed_signal,self.mic_list[l].delayed_signal)
where this class has a mic_list, which is a list of mic objects, another class that has this method:
def add_delayed_signal(self, delayed_signal):
    self.delayed_signal = delayed_signal
Thank you in advance.
I'm guessing R_a is a 2-dimensional array. What np.correlate does is to compute the cross-correlation between two signals, and gives you a vector as a result (not a scalar).
What you're looking for is probably np.cov or np.corrcoef. These are also vectorized approaches to getting the result you want.
For example:
>>> x = np.random.randn(10)
>>> y = np.random.randn(10)
>>> X = np.vstack((x, y))
>>> X
array([[ 1.45841294, -0.16430013, -0.20782822, 0.08979425, -1.38337166,
0.36488053, -2.57135737, 0.81215918, -0.54081983, 0.30421112],
[-0.79416305, 1.14511318, -0.4962483 , -0.42647021, -0.59925241,
-0.45612051, -0.02566026, -1.7668091 , -1.63098627, 0.3761437 ]])
>>> np.cov(X)
array([[ 1.28563113, -0.20563105],
[-0.20563105, 0.74178773]])
Is this what you're looking for?
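For the mic setup in the question, a minimal sketch (inside the same class that owns mic_list, and assuming every delayed_signal is a 1-D array of the same length):
import numpy as np

# Stack all delayed signals into one (n_mics, n_samples) array and let
# np.corrcoef build the whole spatial correlation matrix in one call.
signals = np.vstack([mic.delayed_signal for mic in self.mic_list])
R_a = np.corrcoef(signals)   # shape (n_mics, n_mics)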

Scipy.optimize.minpack.leastsq with 2 dimensional args

I am trying to calculate the 3x4 camera projection matrix P based on 2D to 3D point correspondences as described in this paper http://cronos.rutgers.edu/~meer/TEACHTOO/PAPERS/zhang.pdf (section 2.3) using Python 2.7. I have been able to find an initial estimate of P, but now need to refine it using the Levenberg-Marquardt algorithm (2.3.4). It appears to me that this can be done with scipy.optimize.minpack.leastsq. However, my attempts at implementing this function have failed. Here is a simplified version of what I have (M is a numpy array of homogenized 3D points in the format (x,y,z,1) with a shape of (18,4), and m is a numpy array of homogenized 2D points in the format (u,v,1) with a shape of (18,3)):
import numpy as N
from scipy.optimize.minpack import leastsq
def e(P, M, m):
    a = P.dot(M.T)
    print a.shape
    b = m.T - a
    b1 = b[0]
    b2 = b[1]
    b3 = b[2]
    dist = N.sqrt((b1**2) + (b2**2) + (b3**2))
    return dist
P = N.array( [ [4.66135353e+01,1.24341518e+02,-9.07923056e+00,9.59292826e+02],
[-3.60062368e+01,3.56319152e+01,1.14245572e+02,2.32061401e-02],
[-4.04188199e-02,4.00793699e-02,-9.48804649e-03,1.00000e+00] ] )
m = []
M = []
#define m list and M list
for i in range(0,len(uv)):
    uv[i].append(1)   #uv is unhomogenized uv coordinate list (source left out to simplify)
    xyz[i].append(1)  #xyz is unhomogenized xyz coordinate list (source left out to simplify)
    m.append(N.array( [ [uv[i][0]],[uv[i][1]],[uv[i][2]] ] ))
m = N.array( uv )
M = N.array( xyz )
#the shape of m is (18,3) and the shape of M is (18,4)
P_new, success = leastsq(e, P, args=(M,m))
I think the problem is with the M and m variables, the arrays of vectors. I looked at an example for the scipy.optimize.lstsq function and I could get that to work but it had args with only one dimension.
Does anyone know what I am doing wrong here? I am fairly new to programming, so take it easy on me if this is idiotic haha. Thanks so much to all who read this, and let me know if I can provide any more info.
It seems that leastsq doesn't know how to optimize multidimensional variables, a problem that is easy to work around:
def e2(P, M, m):
    return np.sqrt(np.sum((m.T - np.dot(P.reshape(3,4), M.T))**2, axis=0))

P = P.reshape((12,))
P_new, success = leastsq(e2, P, args=(M, m))
This runs, although with my made-up random data it has trouble converging. The basic idea is to treat the matrix P as a 12-item-long vector, and reshape it inside the function whenever it is needed to map M to m.
I have also taken the liberty of rewriting your e function in a more numpythonic way...
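For completeness, a self-contained sketch of the flatten-then-reshape pattern with made-up data (the real uv/xyz correspondences are left out, just as in the question):
import numpy as np
from scipy.optimize import leastsq

def e2(P, M, m):
    # P arrives as a flat length-12 vector; reshape to 3x4 before projecting.
    return np.sqrt(np.sum((m.T - np.dot(P.reshape(3, 4), M.T))**2, axis=0))

M = np.random.randn(18, 4)    # fake homogenized 3D points (x, y, z, 1)
m = np.random.randn(18, 3)    # fake homogenized 2D points (u, v, 1)
P0 = np.random.randn(12)      # initial guess for P, flattened to length 12

P_flat, success = leastsq(e2, P0, args=(M, m))
P_new = P_flat.reshape(3, 4)  # back to the 3x4 projection matrix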