Scipy.optimize.minpack.leastsq with 2-dimensional args - python-2.7

I am trying to calculate the 3x4 camera projection matrix P from 2D-to-3D point correspondences, as described in this paper http://cronos.rutgers.edu/~meer/TEACHTOO/PAPERS/zhang.pdf (section 2.3), using Python 2.7. I have been able to find an initial estimate of P, but now need to refine it using the Levenberg-Marquardt algorithm (section 2.3.4). It appears to me that this can be done with scipy.optimize.minpack.leastsq. However, my attempts at implementing this function have failed. Here is a simplified version of what I have (M is a numpy array of homogeneous 3D points in the format (x,y,z,1) with a shape of (18,4), and m is a numpy array of homogeneous 2D points in the format (u,v,1) with a shape of (18,3)):
import numpy as N
from scipy.optimize.minpack import leastsq

def e(P, M, m):
    a = P.dot(M.T)
    print a.shape
    b = m.T - a
    b1 = b[0]
    b2 = b[1]
    b3 = b[2]
    dist = N.sqrt((b1**2) + (b2**2) + (b3**2))
    return dist
P = N.array([[4.66135353e+01, 1.24341518e+02, -9.07923056e+00, 9.59292826e+02],
             [-3.60062368e+01, 3.56319152e+01, 1.14245572e+02, 2.32061401e-02],
             [-4.04188199e-02, 4.00793699e-02, -9.48804649e-03, 1.00000e+00]])

m = []
M = []
# define the m list and M list
for i in range(0, len(uv)):
    uv[i].append(1)   # uv is the unhomogenized uv coordinate list (source left out to simplify)
    xyz[i].append(1)  # xyz is the unhomogenized xyz coordinate list (source left out to simplify)
    m.append(N.array([[uv[i][0]], [uv[i][1]], [uv[i][2]]]))
m = N.array(uv)
M = N.array(xyz)
# the shape of m is (18,3) and the shape of M is (18,4)

P_new, success = leastsq(e, P, args=(M, m))
I think the problem is with the M and m variables, the arrays of vectors. I looked at an example for the scipy.optimize.leastsq function and could get that to work, but it had args with only one dimension.
Does anyone know what I am doing wrong here? I am fairly new to programming, so take it easy on me if this is idiotic, haha. Thanks so much to all who read this, and let me know if I can provide any more info.

It seems that leastsq doesn't know how to optimize multidimensional variables, a problem that is easy to work around:
import numpy as np

def e2(P, M, m):
    return np.sqrt(np.sum((m.T - np.dot(P.reshape(3, 4), M.T))**2, axis=0))

P = P.reshape((12,))
P_new, success = leastsq(e2, P, args=(M, m))
This runs, although with my made-up random data it has trouble converging. The basic idea is to treat the matrix P as a 12-item-long vector, and reshape it inside the function whenever it is needed to convert M to m.
I have also taken the liberty of rewriting your e function in a more numpythonic way...
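For reference, here is a self-contained sketch of the workaround on synthetic data (the question's uv/xyz lists are left out, so the correspondences below are made up, and the public import path scipy.optimize.leastsq is used instead of the private minpack module):

import numpy as np
from scipy.optimize import leastsq

def e2(p, M, m):
    # one residual per correspondence: distance between m and the projection P*M
    P = p.reshape(3, 4)
    return np.sqrt(np.sum((m.T - P.dot(M.T))**2, axis=0))

rng = np.random.RandomState(0)
M = np.hstack([rng.rand(18, 3), np.ones((18, 1))])  # stand-in homogeneous 3D points, (18,4)
P_true = rng.rand(3, 4)                             # stand-in "true" projection matrix
m = M.dot(P_true.T)                                 # exact projections of M, (18,3)

p0 = P_true.ravel() + 0.01 * rng.randn(12)          # initial estimate near the optimum
p_new, success = leastsq(e2, p0, args=(M, m))
print np.allclose(p_new.reshape(3, 4), P_true, atol=1e-4)  # should print True on exact synthetic data

With real, noisy correspondences the fit will only approach P_true, and a reasonable initial estimate (such as the one from the linear method in section 2.3 of the paper) matters for convergence.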

Related

Sympy expression simplification

I'm solving an eigenvalue problem where the matrix and the eigenvectors are time dependent. The matrix has dimension 8x8 and is Hermitian. The time-dependent matrix has the form:
import sympy as sp
t, lbd = sp.symbols(r't,\lambda', real=True)
Had = ...
print(repr(Had))
Matrix([
[2*t,           0,           0,       0,       0,           0,           0,   0],
[  0,        -2*t, 2*t*(1 - t),       0,       0,           0,           0,   0],
[  0, 2*t*(1 - t),           0,       0, 2 - 2*t,           0,           0,   0],
[  0,           0,           0,       0,       0,     2 - 2*t,           0,   0],
[  0,           0,     2 - 2*t,       0,       0,           0,           0,   0],
[  0,           0,           0, 2 - 2*t,       0,           0, 2*t*(1 - t),   0],
[  0,           0,           0,       0,       0, 2*t*(1 - t),        -2*t,   0],
[  0,           0,           0,       0,       0,           0,           0, 2*t]])
Now the characteristic polynomial has the following form:
P = sp.simplify(sp.collect(sp.factor(Had.charpoly(lbd).as_expr()), lbd))
and I get a product of factors in lbd (the output was shown as an image in the original post).
Then I choose the second term and find the solution for lambda:
P_list = sp.factor_list(P)
a, b = P_list[1]
eq, exp = sp.simplify(b)
sol = sp.solve(eq)
With that I get the roots in a list:
r_list = []
for i in range(len(sol)):
    a = list(sol[i].values())
    r_list.append(a[0])
Solving the problem using Had.eigenvects():
val_mult_vec = Had.eigenvects()
e_vals = []
mults = []
e_vecs = []
for i in range(len(val_mult_vec)):
    val, mult, [vec_i, vec_j] = val_mult_vec[i]
    e_vals.append(val)
    e_vals.append(val)
    mults.append(mult)
    e_vecs.append(vec_i)
    e_vecs.append(vec_j)
Solving for the eigenvectors, I get complicated expressions (shown as an image in the original post). But I know that these complicated expressions can be written in terms of the solutions of the second factor of the characteristic polynomial, something like this (also an image in the original), where r1 is one of the roots of that equation. Given the solutions of the characteristic polynomial, how can I rewrite the eigenvectors in such a simplified way using sympy, i.e. rewrite e_vecs[i] in terms of r_list[j]?
Seems like you want to obtain a compact version of the eigenvectors.
Recipe:
1. Create as many symbols as there are eigenvalues; each symbol represents an eigenvalue.
2. Loop over the eigenvectors and, for each of their elements, substitute the long eigenvalue expression with the respective symbol.
r_symbols = sp.symbols("r0:%s" % len(e_vals))
final_evecs = []
for vec, val, s in zip(e_vecs, e_vals, r_symbols):
    final_evecs.append(
        vec.applyfunc(lambda el: el.subs(val, s))  # replace the eigenvalue expression with its symbol
    )
final_evecs is a list containing eigenvectors in a compact notation.
Let's test one output:
final_evecs[7]
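(In the original post the output above was shown as an image.) As a toy illustration of what the substitution does, with made-up expressions standing in for a long eigenvalue and an eigenvector:

import sympy as sp

t, r0 = sp.symbols('t r0', positive=True)
lam = sp.sqrt(t**2 + 1)               # stand-in for a long eigenvalue expression
vec = sp.Matrix([t / (1 + lam), 1])   # stand-in eigenvector containing lam
compact = vec.applyfunc(lambda el: el.subs(lam, r0))
print(compact)                        # Matrix([[t/(r0 + 1)], [1]])

Note that subs is essentially syntactic: it only replaces subexpressions that literally match the eigenvalue expression, so for eigenvalues like -sqrt(...) you may need to substitute the positive root instead.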

Solving a matrix equation containing MatrixSymbols of symbolic size in Sympy?

As an introduction, I want to point out that if one has a matrix A consisting of 4 submatrices in a 2x2 pattern, where the diagonal blocks are square, then, denoting its inverse as X, the submatrix X22 = (A22 - A21*A11^-1*A12)^-1, which is quite easy to show by hand.
I was trying to do the same for a matrix of 4x4 submatrices, but it's quite tedious by hand. So I thought Sympy would be of some help, but I cannot figure out how (I have started by just trying to reproduce the 2x2 result).
I've tried:
import sympy as s

def blockmatrix(name, sizes, names=None):
    if names is None:
        names = sizes
    ll = []
    for i, (s1, n1) in enumerate(zip(sizes, names)):
        l = []
        for j, (s2, n2) in enumerate(zip(sizes, names)):
            l.append(s.MatrixSymbol(name + str(n1) + str(n2), s1, s2))
        ll.append(l)
    return ll

def eyes(*sizes):
    ll = []
    for i, s1 in enumerate(sizes):
        l = []
        for j, s2 in enumerate(sizes):
            if i == j:
                l.append(s.Identity(s1))
                continue
            l.append(s.ZeroMatrix(s1, s2))
        ll.append(l)
    return ll

n1, n2 = s.symbols("n1, n2", integer=True, positive=True, nonzero=True)
M = s.Matrix(blockmatrix("m", (n1, n2)))
X = s.Matrix(blockmatrix("x", (n1, n2)))
I = s.Matrix(eyes(n1, n2))

s.solve(M * X[:, 1:] - I[:, 1:], X[:, 1:])
but it just returns an empty list instead of the result.
I have also tried:
- Using M*X == I, but that just returns False (a boolean, not an Expression).
- Entering a list of equations.
- Using 'ordinary' symbols with commutative=False instead of MatrixSymbols -- this gives an exception with GeneratorsError: non-commutative generators: (x12, x22).
but all without luck.
Can you show how to derive a result with Sympy similar to the one I gave as an example for X22?
The most similar other questions on solving with MatrixSymbols seem to have been solved by working around doing exactly that, e.g. by using an array of the individual inner symbols instead. But since I am dealing with symbolically sized MatrixSymbols, that is not an option for me.
Is this what you mean by a matrix of 2x2 matrices?
>>> a = [MatrixSymbol(i,2,2) for i in symbols('a1:5')]
>>> A = Matrix(2,2,a)
>>> X = A.inv()
>>> print(X[1,1]) # [1,1] instead of [2,2] because indexing starts at 0
a1*(a1*a3 - a3*a1)**(-1)
[You indicated not and pointed out that the above is not correct -- that appears to be an issue that should be resolved.]
I am not sure why this isn't implemented, but we can do the solving manually as follows:
>>> n = 2
>>> v = symbols('b:%s'%n**2,commutative=False)
>>> A = Matrix(n,n,symbols('a:%s'%n**2,commutative=False))
>>> B = Matrix(n,n,v)
>>> eqs = list(A*B - eye(n))
>>> for i in range(n**2):
... s = solve(eqs[i],v[i])[0]
... eqs[i+1:] = [e.subs(v[i],s) for e in eqs[i+1:]]
...
>>> s # solution for v[3] which is B22
(-a2*a0**(-1)*a1 + a3)**(-1)
You can change n to 3 and see a modestly complicated expression. Change it to 4 and check the result by hand to give a new definition to the word "tedious" ;-)
The special structure of the equations to be solved can allow for a faster solution, too: the variable of interest is the last factor in each term containing it:
>>> for i in range(n**2):
... c,d = eqs[i].expand().as_independent(v[i])
... assert all(j.args[-1]==v[i] for j in Add.make_args(d))
... s = 1/d.subs(v[i], 1)*-c
... eqs[i+1:] = [e.subs(v[i], s) for e in eqs[i+1:]]
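As a side note (not part of the answer above): recent SymPy versions can also manipulate symbolically sized blocks directly via BlockMatrix, which may give the X22 identity without any elementwise solving. A sketch; how far block_collapse actually simplifies the inverse depends on the SymPy version:

import sympy as s

n1, n2 = s.symbols("n1 n2", integer=True, positive=True)
A11 = s.MatrixSymbol("A11", n1, n1)
A12 = s.MatrixSymbol("A12", n1, n2)
A21 = s.MatrixSymbol("A21", n2, n1)
A22 = s.MatrixSymbol("A22", n2, n2)

A = s.BlockMatrix([[A11, A12], [A21, A22]])
X = s.block_collapse(A.inverse())  # block-wise inverse via Schur complements
print(X.blocks[1, 1])              # expected: (A22 - A21*A11**(-1)*A12)**(-1)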

How to do weighted sum as tensor operation in tensorflow?

I am trying to do a weighted sum of matrices in tensorflow.
Unfortunately, my dimensions are not small and I have a problem with memory; then again, it is also possible that I am doing something completely wrong.
I have two tensors, U with shape (B,F,M) and A with shape (C,B). I would like to do a weighted sum followed by stacking.
Weighted sum
For each index c from C, I have a vector of weights a from A, with shape (B,). I want to use it for a weighted sum of U, to get a matrix U_t with shape (F,M). This is pretty much the same as this question, where I found some help.
Concatenation
I then want to do this for each vector a in A, to get C matrices U_tc in a list; each U_tc has the aforementioned shape (F,M). After that, I concatenate all the matrices in the list to get a super-matrix with shape (C*F,M).
My values are C=2500, M=500, F=80, B=300.
In the beginning, I tried a very naive approach with many loops and element selections, which generated far too many operations.
Now, with help from this question, I have the following:
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))     # just for example

U_t = []
for ccc in xrange(C):
    a = A[ccc, :]
    a_broadcasted = tf.tile(tf.reshape(a, [B, 1, 1]), tf.stack([1, F, M]))
    U_t.append(tf.reduce_sum(tf.multiply(U, a_broadcasted), axis=0))
U_tcs = tf.concat(U_t, axis=0)
Unfortunately, this fails with a memory error. I am not sure if I did something wrong, or if the computation simply involves too many operations, because I think the variables themselves aren't too large for memory, right? At least, I have had larger variables before and it was OK. (I have 16 GB of GPU memory.)
Am I doing the weighted sum correctly?
Any idea how to do it more efficiently?
I will appreciate any help. Thanks.
1. Weighted sum and Concatenation
You can use vectorized operations directly, without loops, when memory is not limited.
import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))     # just for example

# shape=(C,B,1,1)
A_new = tf.expand_dims(tf.expand_dims(A, -1), -1)
# shape=(C,F,M)
U_t = tf.reduce_sum(tf.multiply(A_new, U), axis=1)
# shape=(C*F,M)
U_tcs = tf.reshape(U_t, (C * F, M))
2. Memory error
In fact, I also had memory errors when I ran the above code.
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[2500,300,80,500]...
With a little modification of the above code, it works properly on my GPU with 8 GB of memory.
import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))     # just for example

# shape=(C,B,1,1)
A_new = tf.expand_dims(tf.expand_dims(A, -1), -1)
U_t = []
for ccc in range(C):
    a = A_new[ccc, :]                                      # shape=(B,1,1)
    u_weighted = tf.reduce_sum(tf.multiply(a, U), axis=0)  # weighted sum, shape=(F,M)
    U_t.append(u_weighted)
U_tcs = tf.concat(U_t, axis=0)                             # shape=(C*F,M)
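As an alternative sketch (my own suggestion, not part of the answer above), the contraction over B can also be written as a single tf.einsum, which never materializes the huge (C,B,F,M) intermediate at all:

import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))

# U_t[c,f,m] = sum_b A[c,b] * U[b,f,m]; shape=(C,F,M)
U_t = tf.einsum('cb,bfm->cfm', A, U)
# stack the C weighted sums on top of each other; shape=(C*F,M)
U_tcs = tf.reshape(U_t, (C * F, M))

Internally this reduces to a matrix product of the (C,B) weights with a (B,F*M) reshape of U, so the memory cost is dominated by the (C,F,M) result (about 400 MB in float32 for these sizes) rather than by the (C,B,F,M) elementwise product (about 120 GB), which is what caused the OOM.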

Object Looking Skewed After Essential Matrix Calculation and Projection

I am trying to calculate an essential matrix and a projection matrix from two images, which I will then use to project a 3D object onto the image (the two input images were shown in the original post). I picked a few pixel correspondences and fed them to an SVD-based least-squares mechanism which, the books say, gives me the essential matrix. I used the code below for this task (the code is based mostly on Jan Erik Solem's book Programming Computer Vision with Python):
import numpy as np
import scipy.linalg as lin
import pandas as pd
import matplotlib.pyplot as plt

def skew(a):
    # skew-symmetric (cross-product) matrix of a 3-vector
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0]])

def essential(x1, x2):
    n = x1.shape[1]
    A = np.zeros((n, 9))
    for i in range(n):
        A[i] = [x1[0, i] * x2[0, i],
                x1[0, i] * x2[1, i],
                x1[0, i] * x2[2, i],
                x1[1, i] * x2[0, i],
                x1[1, i] * x2[1, i],
                x1[1, i] * x2[2, i],
                x1[2, i] * x2[0, i],
                x1[2, i] * x2[1, i],
                x1[2, i] * x2[2, i]]
    U, S, V = lin.svd(A)
    F = V[-1].reshape(3, 3)
    return F
def compute_P_from_essential(E):
    U, S, V = lin.svd(E)
    if lin.det(np.dot(U, V)) < 0:
        V = -V
    E = np.dot(U, np.dot(np.diag([1, 1, 0]), V))
    Z = skew([0, 0, -1])
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    P2 = [np.vstack((np.dot(U, np.dot(W, V)).T, U[:, 2])).T,
          np.vstack((np.dot(U, np.dot(W, V)).T, -U[:, 2])).T,
          np.vstack((np.dot(U, np.dot(W.T, V)).T, U[:, 2])).T,
          np.vstack((np.dot(U, np.dot(W.T, V)).T, -U[:, 2])).T]
    return P2
points = [[266, 163, 296, 160], [265, 237, 297, 266],
          [76, 288, 51, 340], [135, 31, 142, 4],
          [344, 167, 371, 156], [48, 165, 71, 164],
          [151, 68, 166, 56], [237, 26, 259, 19],
          [226, 147, 254, 140]]

df = pd.DataFrame(points)
df['uno'] = 1.
x1 = np.array(df[[0, 1, 'uno']].T)
x2 = np.array(df[[2, 3, 'uno']].T)
print x1
print x2

E = essential(x1, x2)
P = compute_P_from_essential(E)

x0 = 3.; y0 = 1.; z0 = 1.
print df.shape
e = 1
cube = [[x0, y0, z0], [x0 + e, y0, z0], [x0 + e, y0 + e, z0], [x0, y0 + e, z0],
        [x0, y0, z0 + e], [x0 + e, y0, z0 + e], [x0 + e, y0 + e, z0 + e], [x0, y0 + e, z0 + e]]
cube = pd.DataFrame(cube)
cube['1'] = 1.
xx = np.dot(P[1], cube.T) * 100.
xx[1, :] = 360 - xx[1, :]
#xx = xx / xx[2]
print xx[0].shape
plt.plot(xx[0], xx[1], '.')
plt.xlim(0, 640)
plt.ylim(0, 360)
I calculated the essential matrix, then the projection matrix, and then used that to project a 3D cube. The result (shown as an image in the original post) looks skewed, and I am not sure why this happened. Any ideas on how to fix this?
Thanks,
First of all, it looks like you are computing the essential matrix using exactly 9 points. You can do this using only 8, since scale is a free parameter: you can multiply the essential matrix by a scalar and it will stay the same, so you can fix one of the parameters and just use 8 points (but I digress). However, in practice this is a very bad idea, because your 8 points might have a poor spatial configuration. So what you want to do is select N matches (600, for example) and use an algorithm like RANSAC to determine the best essential matrix.
Aside from that, what I'd recommend for debugging such applications is this: compute the fundamental matrix F based on the essential matrix you just computed. Now you can select a point in image 1 and display the corresponding epipolar line in the second image. That will help you visually evaluate, and thus debug, the estimation of the essential matrix.
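For illustration, a minimal sketch of both suggestions using OpenCV; pts1/pts2 (Nx2 float arrays of matched pixel coordinates), K (the camera intrinsic matrix), and img2 are assumed to exist and are not taken from the question:

import numpy as np
import cv2

# robust fundamental matrix from many matches via RANSAC
# (pts1, pts2 are assumed Nx2 float arrays of matched pixel coordinates)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

# essential matrix from the fundamental matrix and the intrinsics K (assumed known)
E = K.T.dot(F).dot(K)

# epipolar line in image 2 for a homogeneous point p1 in image 1: l2 = F * p1
p1 = np.array([266., 163., 1.])
a, b, c = F.dot(p1)  # line coefficients: a*u + b*v + c = 0

# draw the line across img2 to visually check the estimate
u0, u1 = 0, img2.shape[1]
cv2.line(img2, (u0, int(-c / b)), (u1, int(-(c + a * u1) / b)), (0, 255, 0), 1)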

Python: Solving equation system (coefficients are arrays)

I can solve a system of equations (using NumPy) like this:
>>> a = np.array([[3,1], [1,2]])
>>> b = np.array([9,8])
>>> y = np.linalg.solve(a, b)
>>> y
array([ 2., 3.])
But if I have something like this:
>>> x = np.linspace(1,10)
>>> a = np.array([[3*x,1-x], [1/x,2]])
>>> b = np.array([x**2,8*x])
>>> y = np.linalg.solve(a, b)
It doesn't work: here the matrix coefficients are arrays, and I want to calculate the solution array y for each element of the array x. I also can't calculate
>>> det(a)
The question is: how can I do that?
Check out the docs page. If you want to solve multiple systems of linear equations, you can send in multiple arrays, but they have to have shape (N,M,M); that will be treated as a stack of N MxM arrays. A quote from the docs page below:
Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array. This is indicated in the documentation via input parameter specifications such as a : (..., M, M) array_like. This means that if for instance given an input array a.shape == (N, M, M), it is interpreted as a “stack” of N matrices, each of size M-by-M. Similar specification applies to return values, for instance the determinant has det : (...) and will in this case return an array of shape det(a).shape == (N,). This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.
When I run your code I get,
>>> a.shape
(2, 2)
>>> b.shape
(2, 50)
Not sure exactly what problem you're trying to solve, but you need to rethink your inputs. You want a to have shape (N,M,M) and b to have shape (N,M). You will then get back an array of shape (N,M) (i.e. N solution vectors).
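To make that concrete, here is one way to restack the arrays from the question so that every element of x gets its own 2x2 system (the loop-based construction is just for clarity):

import numpy as np

x = np.linspace(1, 10)  # shape (50,)

# stack of fifty 2x2 coefficient matrices: shape (50, 2, 2)
a = np.stack([np.array([[3 * xi, 1 - xi],
                        [1 / xi, 2]]) for xi in x])
# matching stack of right-hand-side vectors: shape (50, 2)
b = np.stack([np.array([xi**2, 8 * xi]) for xi in x])

y = np.linalg.solve(a, b)  # shape (50, 2): one solution vector per element of x
dets = np.linalg.det(a)    # shape (50,): one determinant per element of x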