I am trying to do a weighted sum of matrices in TensorFlow.
Unfortunately, my dimensions are not small and I am running into memory problems. It is also possible that I am doing something completely wrong.
I have two tensors: U with shape (B,F,M) and A with shape (C,B). I would like to do a weighted sum followed by stacking.
Weighted sum
For each index c from C, I have a vector of weights a from A, with shape (B,).
I want to use it for a weighted sum of U to get a matrix U_t with shape (F, M). This is quite similar to this, where I found some help.
Concatenation
Unfortunately, I want to do this for each vector a in A, to get C matrices U_tc in a list; each U_tc has the mentioned shape (F,M). After that I concatenate all the matrices in the list to get a super matrix with shape (C*F, M).
My values are C=2500, M=500, F=80, B=300
In the beginning, I tried a very naive approach with many loops and element selections, which generated a huge number of operations.
Now, with help from this, I have the following:
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))  # just for example

U_t = []
for ccc in xrange(C):
    a = A[ccc, :]
    a_broadcasted = tf.tile(tf.reshape(a, [B, 1, 1]), tf.stack([1, F, M]))
    U_t.append(tf.reduce_sum(tf.multiply(U, a_broadcasted), axis=0))

U_tcs = tf.concat(U_t, axis=0)
Unfortunately, this fails with a memory error. I am not sure whether I did something wrong, or whether the computation simply involves too many operations, because I think the variables themselves aren't too large for memory, right? At least, I have had larger variables before and it was fine. (I have 16 GB of GPU memory.)
Am I doing the weighted sum correctly?
Any idea how to do it more efficiently?
I would appreciate any help. Thanks.
1. Weighted sum and concatenation
You can use vectorized operations directly, without loops, when memory is not a constraint.
import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))  # just for example

# shape=(C,B,1,1)
A_new = tf.expand_dims(tf.expand_dims(A, -1), -1)
# broadcast against U, then sum over the B axis: shape=(C,F,M)
U_t = tf.reduce_sum(tf.multiply(A_new, U), axis=1)
# shape=(C*F,M)
U_tcs = tf.reshape(U_t, (C*F, M))
2. Memory error
In fact, I also got a memory error when I ran the code above:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[2500,300,80,500]...
With a small modification of the above code, it works properly within my 8 GB of GPU memory.
import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))  # just for example
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))  # just for example

# shape=(C,B,1,1)
A_new = tf.expand_dims(tf.expand_dims(A, -1), -1)

U_t = []
for ccc in range(C):
    a = A_new[ccc, :]                                 # shape (B,1,1)
    U_tc = tf.reduce_sum(tf.multiply(a, U), axis=0)   # weighted sum, shape (F,M)
    U_t.append(U_tc)

U_tcs = tf.concat(U_t, axis=0)                        # shape (C*F,M)
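As a further side note (my own addition, not part of the original answer): the same weighted sum can also be written as a single tensor contraction, which avoids both the Python loop and the large (C,B,F,M) broadcast intermediate. A minimal sketch, assuming the same U and A as above:

import tensorflow as tf

C, M, F, B = 2500, 500, 80, 300
U = tf.Variable(tf.truncated_normal([B, F, M], stddev=1.0, dtype=tf.float32))
A = tf.Variable(tf.truncated_normal([C, B], stddev=1.0, dtype=tf.float32))

# Contract the B axis directly: (C,B) x (B,F,M) -> (C,F,M).
# This is essentially one big matrix multiplication, so the
# (C,B,F,M) broadcast tensor is never materialized.
U_t = tf.tensordot(A, U, axes=[[1], [0]])   # shape (C,F,M)
U_tcs = tf.reshape(U_t, (C*F, M))           # shape (C*F,M)

The result itself is only about C*F*M*4 bytes ≈ 400 MB of float32, which should fit comfortably in 8-16 GB of GPU memory.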
Related
I'm writing code in Python to evolve the time-dependent Schrödinger equation using the Crank-Nicolson scheme. I didn't know how to deal with the potential, so I looked around and found a way from this question, which I have verified from a couple of other sources. According to them, for a harmonic oscillator potential, the C-N scheme gives
A Ψ_{n+1} = A* Ψ_n
where the elements on the main diagonal of A are d_j = 1 + iΔt/(2m(Δx)^2) + iΔt(x_j)^2/4, and the elements on the upper and lower diagonals are a = -iΔt/(4m(Δx)^2).
The way I understand it, I'm supposed to give an initial condition (I've chosen a coherent state) in the form of the vector Ψ_n, and I need to compute the vector Ψ_{n+1}, which is the wave function after time Δt. To obtain Ψ_{n+1} for a given step, I invert the matrix A, multiply it with the matrix A*, and then multiply the result with Ψ_n. The resulting vector then becomes Ψ_n for the next step.
But when I do this, I get an incorrect animation. The wave packet is supposed to oscillate between the boundaries, but in my animation it barely moves from its initial mean value. I just don't understand what I'm doing wrong. Is my understanding of the problem wrong, or is it a flaw in my code? Please help! I've posted my code below and the video of my animation here. I'm sorry for the length of the code and the question, but it's driving me crazy not knowing what my mistake is.
import numpy as np
import matplotlib.pyplot as plt
L = 30.0
x0 = -5.0
sig = 0.5
dx = 0.5
dt = 0.02
k = 1.0
w=2
K=w**2
a=np.power(K,0.25)
xs = np.arange(-L,L,dx)
nn = len(xs)
mu = k*dt/(dx)**2
dd = 1.0+mu
ee = 1.0-mu
ti = 0.0
tf = 100.0
t = ti
V=np.zeros(len(xs))
u=np.zeros(nn,dtype="complex")
V=K*(xs)**2/2 #harmonic oscillator potential
u=(np.sqrt(a)/1.33)*np.exp(-(a*(xs - x0))**2)+0j #initial condition for wave function
u[0]=0.0 #boundary condition
u[-1] = 0.0 #boundary condition
A = np.zeros((nn-2,nn-2),dtype="complex") #define A
for i in range(nn-3):
    A[i,i] = 1+1j*(mu/2+w*dt*xs[i]**2/4)
    A[i,i+1] = -1j*mu/4.
    A[i+1,i] = -1j*mu/4.
A[nn-3,nn-3] = 1+1j*mu/2+1j*dt*xs[nn-3]**2/4
B = np.zeros((nn-2,nn-2),dtype="complex") #define A*
for i in range(nn-3):
    B[i,i] = 1-1j*mu/2-1j*w*dt*xs[i]**2/4
    B[i,i+1] = 1j*mu/4.
    B[i+1,i] = 1j*mu/4.
B[nn-3,nn-3] = 1-1j*(mu/2)-1j*dt*xs[nn-3]**2/4
X = np.linalg.inv(A) #take inverse of A
plt.ion()
l, = plt.plot(xs,np.abs(u),lw=2,color='blue') #plot initial wave function
T=np.matmul(X,B) #multiply A inverse with A*
while t<tf:
    u[1:-1] = np.matmul(T,u[1:-1]) #updating u but leaving the boundary conditions unchanged
    l.set_ydata(abs(u)) #update plot with new u
    t += dt
    plt.pause(0.00001)
After a lot of tinkering, it came down to reducing my step size: I reduced the step size and the program worked. If anyone is facing the same problem as I was, I recommend playing around with the step sizes. Provided that the rest of the code is fine, this is the only possible area of error.
I have a simple Python script which reads two matrices in from a file; each matrix element contains two undefined variables, A and B. After being read in, A and B are assigned values and the string is evaluated using eval() (I am aware of the security issues with this, but this code will only ever be used by me). A NumPy array is formed for each of the matrices and the generalised eigenvalue problem is solved using both matrices. The code is as follows:
from __future__ import division
from scipy.linalg import eigh
import numpy as np
Mat_size = 4 ## Define matrix size
## Input the two matrices from each side of the generalised eigenvalue problem
HHfile = "Matrix_4x4HH.mat" ## left
SSfile = "Matrix_4x4SS.mat" ## right
## Read matrix files
f1 = open(HHfile, 'r')
HH_data = f1.read()
f2 = open(SSfile, 'r')
SS_data = f2.read()
# Define dictionary of variables
trans = {'A':2.1,
'B':3,
}
HHmat = eval(HH_data, trans)
SSmat = eval(SS_data, trans)
## Convert to numpy array
HHmat2 = np.array(HHmat)
SSmat2 = np.array(SSmat)
## Form correct matrix dimensions
shape = ( Mat_size, Mat_size )
HH = HHmat2.reshape( shape )
SS = SSmat2.reshape( shape )
ev, ew = eigh(HH,SS)
## solve eigenvalue problem
eigenValues,eigenVectors = eigh(HH,SS)
print("Eigenvalue:",eigenValues[0])
This works and outputs an answer. However, I wish to vary the parameters A and B to minimise the value that it gives me, something like:
from scipy.optimize import minimize

def func_min(x):
    A, B = x
    ev, ew = eigh(HH, SS)   # HH and SS would need to depend on A and B here
    return ev[0]

x0 = np.array([2.1, 3]) # preliminary guess
minimize(func_min, x0)
I am having an issue with the fact that A and B have to be defined for the eval call to work, which prevents the optimisation routine from varying them, since they already have fixed values. I have tried a few different methods of reading in the files (pandas, csv, np.genfromtxt, etc.) but always seem to be left with the same issue. Is there a better way, or a modification to the current code, to implement this?
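One possible workaround (a rough sketch, not part of the original code): keep the file contents as strings and only evaluate them inside the objective function, so each call rebuilds HH and SS for the current A and B. This assumes HH_data and SS_data have already been read in as above; the helper name build_matrices is hypothetical.

import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

Mat_size = 4

def build_matrices(A, B):
    # Hypothetical helper: re-evaluate the stored matrix strings for the given A, B.
    trans = {'A': A, 'B': B}
    HH = np.array(eval(HH_data, trans)).reshape(Mat_size, Mat_size)
    SS = np.array(eval(SS_data, trans)).reshape(Mat_size, Mat_size)
    return HH, SS

def func_min(x):
    A, B = x
    HH, SS = build_matrices(A, B)
    ev, ew = eigh(HH, SS)
    return ev[0]

x0 = np.array([2.1, 3.0])  # preliminary guess
res = minimize(func_min, x0)
print(res.x, res.fun)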
Here are the inputs from the files. I was unsure of how to include them, but I can use a file upload service if that is better.
Input from Matrix_4x4HH.mat
-.7500000000000000/B**3*A**5/(A+B)**6-4.500000000000000/B**2*A**4/(A+B)**6-12./B*A**3/(A+B)**6-19.50000000000000*A**2/(A+B)**6-118.5000000000000*B*A/(A+B)**6-19.50000000000000*B**2/(A+B)**6-12.*B**3/A/(A+B)**6-4.500000000000000*B**4/A**2/(A+B)**6-.7500000000000000*B**5/A**3/(A+B)**6+2./B**3*A**4/(A+B)**6+13./B**2*A**3/(A+B)**6+36./B*A**2/(A+B)**6+165.*A/(A+B)**6+165.*B/(A+B)**6+36.*B**2/A/(A+B)**6+13.*B**3/A**2/(A+B)**6+2.*B**4/A**3/(A+B)**6,-.3750000000000000/B**3*A**5/(A+B)**6-2.125000000000000/B**2*A**4/(A+B)**6-4.750000000000000/B*A**3/(A+B)**6-23.87500000000000*A**2/(A+B)**6-1.750000000000000*B*A/(A+B)**6-23.87500000000000*B**2/(A+B)**6-4.750000000000000*B**3/A/(A+B)**6-2.125000000000000*B**4/A**2/(A+B)**6-.3750000000000000*B**5/A**3/(A+B)**6-1.500000000000000/B**2*A**3/(A+B)**6-5.500000000000000/B*A**2/(A+B)**6-25.*A/(A+B)**6-25.*B/(A+B)**6-5.500000000000000*B**2/A/(A+B)**6-1.500000000000000*B**3/A**2/(A+B)**6,1.500000000000000/B**3*A**6/(A+B)**7+10.12500000000000/B**2*A**5/(A+B)**7+29.12500000000000/B*A**4/(A+B)**7+45.25000000000000*A**3/(A+B)**7-135.1250000000000*B*A**2/(A+B)**7+298.7500000000000*B**2*A/(A+B)**7+14.87500000000000*B**3/(A+B)**7-5.750000000000000*B**4/A/(A+B)**7-2.375000000000000*B**5/A**2/(A+B)**7-.3750000000000000*B**6/A**3/(A+B)**7-4./B**3*A**5/(A+B)**7-27./B**2*A**4/(A+B)**7-76.50000000000000/B*A**3/(A+B)**7-9.500000000000000*A**2/(A+B)**7-305.*B*A/(A+B)**7-362.*B**2/(A+B)**7-14.50000000000000*B**3/A/(A+B)**7-1.500000000000000*B**4/A**2/(A+B)**7,-.3750000000000000/B**3*A**6/(A+B)**7-2.375000000000000/B**2*A**5/(A+B)**7-5.750000000000000/B*A**4/(A+B)**7+14.87500000000000*A**3/(A+B)**7+298.7500000000000*B*A**2/(A+B)**7-135.1250000000000*B**2*A/(A+B)**7+45.25000000000000*B**3/(A+B)**7+29.12500000000000*B**4/A/(A+B)**7+10.12500000000000*B**5/A**2/(A+B)**7+1.500000000000000*B**6/A**3/(A+B)**7-1.500000000000000/B**2*A**4/(A+B)**7-14.50000000000000/B*A**3/(A+B)**7-362.*A**2/(A+B)**7-305.*B*A/(A+B)**7-9.500000000000000*B**2/(A+B)**7-76.50000000000000*B**3/A/(A+B)**7-27.*B**4/A**2/(A+B)**7-4.*B**5/A**3/(A+B)**7,-.3750000000000000/B**3*A**5/(A+B)**6-2.125000000000000/B**2*A**4/(A+B)**6-4.750000000000000/B*A**3/(A+B)**6-23.87500000000000*A**2/(A+B)**6-1.750000000000000*B*A/(A+B)**6-23.87500000000000*B**2/(A+B)**6-4.750000000000000*B**3/A/(A+B)**6-2.125000000000000*B**4/A**2/(A+B)**6-.3750000000000000*B**5/A**3/(A+B)**6-1.500000000000000/B**2*A**3/(A+B)**6-5.500000000000000/B*A**2/(A+B)**6-25.*A/(A+B)**6-25.*B/(A+B)**6-5.500000000000000*B**2/A/(A+B)**6-1.500000000000000*B**3/A**2/(A+B)**6,-1.500000000000000/B**3*A**5/(A+B)**6-10.50000000000000/B**2*A**4/(A+B)**6-35./B*A**3/(A+B)**6-101.5000000000000*A**2/(A+B)**6-343.*B*A/(A+B)**6-101.5000000000000*B**2/(A+B)**6-35.*B**3/A/(A+B)**6-10.50000000000000*B**4/A**2/(A+B)**6-1.500000000000000*B**5/A**3/(A+B)**6+2./B**3*A**4/(A+B)**6+16./B**2*A**3/(A+B)**6+45./B*A**2/(A+B)**6+201.*A/(A+B)**6+201.*B/(A+B)**6+45.*B**2/A/(A+B)**6+16.*B**3/A**2/(A+B)**6+2.*B**4/A**3/(A+B)**6,.7500000000000000/B**3*A**6/(A+B)**7+5.625000000000000/B**2*A**5/(A+B)**7+18.87500000000000/B*A**4/(A+B)**7+19.62500000000000*A**3/(A+B)**7+170.3750000000000*B*A**2/(A+B)**7+73.87500000000000*B**2*A/(A+B)**7+57.62500000000000*B**3/(A+B)**7+4.875000000000000*B**4/A/(A+B)**7+.3750000000000000*B**5/A**2/(A+B)**7+1.500000000000000/B**2*A**4/(A+B)**7+7.500000000000000/B*A**3/(A+B)**7-1.*A**2/(A+B)**7+39.*B*A/(A+B)**7+47.50000000000000*B**2/(A+B)**7+1.500000000000000*B**3/A/(A+B)**7,.3750000000000000/B**2*A**5/(A+B)**7+4.875000000000000/B*A**4/(A+B)**7+57.62500000000000*A*
*3/(A+B)**7+73.87500000000000*B*A**2/(A+B)**7+170.3750000000000*B**2*A/(A+B)**7+19.62500000000000*B**3/(A+B)**7+18.87500000000000*B**4/A/(A+B)**7+5.625000000000000*B**5/A**2/(A+B)**7+.7500000000000000*B**6/A**3/(A+B)**7+1.500000000000000/B*A**3/(A+B)**7+47.50000000000000*A**2/(A+B)**7+39.*B*A/(A+B)**7-1.*B**2/(A+B)**7+7.500000000000000*B**3/A/(A+B)**7+1.500000000000000*B**4/A**2/(A+B)**7,1.500000000000000/B**3*A**6/(A+B)**7+10.12500000000000/B**2*A**5/(A+B)**7+29.12500000000000/B*A**4/(A+B)**7+45.25000000000000*A**3/(A+B)**7-135.1250000000000*B*A**2/(A+B)**7+298.7500000000000*B**2*A/(A+B)**7+14.87500000000000*B**3/(A+B)**7-5.750000000000000*B**4/A/(A+B)**7-2.375000000000000*B**5/A**2/(A+B)**7-.3750000000000000*B**6/A**3/(A+B)**7-4./B**3*A**5/(A+B)**7-27./B**2*A**4/(A+B)**7-76.50000000000000/B*A**3/(A+B)**7-9.500000000000000*A**2/(A+B)**7-305.*B*A/(A+B)**7-362.*B**2/(A+B)**7-14.50000000000000*B**3/A/(A+B)**7-1.500000000000000*B**4/A**2/(A+B)**7,.7500000000000000/B**3*A**6/(A+B)**7+5.625000000000000/B**2*A**5/(A+B)**7+18.87500000000000/B*A**4/(A+B)**7+19.62500000000000*A**3/(A+B)**7+170.3750000000000*B*A**2/(A+B)**7+73.87500000000000*B**2*A/(A+B)**7+57.62500000000000*B**3/(A+B)**7+4.875000000000000*B**4/A/(A+B)**7+.3750000000000000*B**5/A**2/(A+B)**7+1.500000000000000/B**2*A**4/(A+B)**7+7.500000000000000/B*A**3/(A+B)**7-1.*A**2/(A+B)**7+39.*B*A/(A+B)**7+47.50000000000000*B**2/(A+B)**7+1.500000000000000*B**3/A/(A+B)**7,-5.250000000000000/B**3/(A+B)**8*A**7-40.50000000000000/B**2/(A+B)**8*A**6-138.7500000000000/B/(A+B)**8*A**5-280.7500000000000/(A+B)**8*A**4-629.7500000000000*B/(A+B)**8*A**3+770.2500000000000*B**2/(A+B)**8*A**2-836.7500000000000*B**3/(A+B)**8*A-180.2500000000000*B**4/(A+B)**8-52.*B**5/(A+B)**8/A-12.75000000000000*B**6/(A+B)**8/A**2-1.500000000000000*B**7/(A+B)**8/A**3+14./B**3/(A+B)**8*A**6+107./B**2/(A+B)**8*A**5+355./B/(A+B)**8*A**4+778./(A+B)**8*A**3+284.*B/(A+B)**8*A**2+597.*B**2/(A+B)**8*A+911.*B**3/(A+B)**8+100.*B**4/(A+B)**8/A+20.*B**5/(A+B)**8/A**2+2.*B**6/(A+B)**8/A**3,.7500000000000000/B**3/(A+B)**8*A**7+5.625000000000000/B**2/(A+B)**8*A**6+18.25000000000000/B/(A+B)**8*A**5+52.50000000000000/(A+B)**8*A**4+397.*B/(A+B)**8*A**3-2356.250000000000*B**2/(A+B)**8*A**2+397.*B**3/(A+B)**8*A+52.50000000000000*B**4/(A+B)**8+18.25000000000000*B**5/(A+B)**8/A+5.625000000000000*B**6/(A+B)**8/A**2+.7500000000000000*B**7/(A+B)**8/A**3+1.500000000000000/B**2/(A+B)**8*A**5+10.50000000000000/B/(A+B)**8*A**4-276.5000000000000/(A+B)**8*A**3+1848.500000000000*B/(A+B)**8*A**2+1848.500000000000*B**2/(A+B)**8*A-276.5000000000000*B**3/(A+B)**8+10.50000000000000*B**4/(A+B)**8/A+1.500000000000000*B**5/(A+B)**8/A**2,-.3750000000000000/B**3*A**6/(A+B)**7-2.375000000000000/B**2*A**5/(A+B)**7-5.750000000000000/B*A**4/(A+B)**7+14.87500000000000*A**3/(A+B)**7+298.7500000000000*B*A**2/(A+B)**7-135.1250000000000*B**2*A/(A+B)**7+45.25000000000000*B**3/(A+B)**7+29.12500000000000*B**4/A/(A+B)**7+10.12500000000000*B**5/A**2/(A+B)**7+1.500000000000000*B**6/A**3/(A+B)**7-1.500000000000000/B**2*A**4/(A+B)**7-14.50000000000000/B*A**3/(A+B)**7-362.*A**2/(A+B)**7-305.*B*A/(A+B)**7-9.500000000000000*B**2/(A+B)**7-76.50000000000000*B**3/A/(A+B)**7-27.*B**4/A**2/(A+B)**7-4.*B**5/A**3/(A+B)**7,.3750000000000000/B**2*A**5/(A+B)**7+4.875000000000000/B*A**4/(A+B)**7+57.62500000000000*A**3/(A+B)**7+73.87500000000000*B*A**2/(A+B)**7+170.3750000000000*B**2*A/(A+B)**7+19.62500000000000*B**3/(A+B)**7+18.87500000000000*B**4/A/(A+B)**7+5.625000000000000*B**5/A**2/(A+B)**7+.7500000000000000*B**6/A**3/(A+B)**7+1.500000000000000
/B*A**3/(A+B)**7+47.50000000000000*A**2/(A+B)**7+39.*B*A/(A+B)**7-1.*B**2/(A+B)**7+7.500000000000000*B**3/A/(A+B)**7+1.500000000000000*B**4/A**2/(A+B)**7,.7500000000000000/B**3/(A+B)**8*A**7+5.625000000000000/B**2/(A+B)**8*A**6+18.25000000000000/B/(A+B)**8*A**5+52.50000000000000/(A+B)**8*A**4+397.*B/(A+B)**8*A**3-2356.250000000000*B**2/(A+B)**8*A**2+397.*B**3/(A+B)**8*A+52.50000000000000*B**4/(A+B)**8+18.25000000000000*B**5/(A+B)**8/A+5.625000000000000*B**6/(A+B)**8/A**2+.7500000000000000*B**7/(A+B)**8/A**3+1.500000000000000/B**2/(A+B)**8*A**5+10.50000000000000/B/(A+B)**8*A**4-276.5000000000000/(A+B)**8*A**3+1848.500000000000*B/(A+B)**8*A**2+1848.500000000000*B**2/(A+B)**8*A-276.5000000000000*B**3/(A+B)**8+10.50000000000000*B**4/(A+B)**8/A+1.500000000000000*B**5/(A+B)**8/A**2,-1.500000000000000/B**3/(A+B)**8*A**7-12.75000000000000/B**2/(A+B)**8*A**6-52./B/(A+B)**8*A**5-180.2500000000000/(A+B)**8*A**4-836.7500000000000*B/(A+B)**8*A**3+770.2500000000000*B**2/(A+B)**8*A**2-629.7500000000000*B**3/(A+B)**8*A-280.7500000000000*B**4/(A+B)**8-138.7500000000000*B**5/(A+B)**8/A-40.50000000000000*B**6/(A+B)**8/A**2-5.250000000000000*B**7/(A+B)**8/A**3+2./B**3/(A+B)**8*A**6+20./B**2/(A+B)**8*A**5+100./B/(A+B)**8*A**4+911./(A+B)**8*A**3+597.*B/(A+B)**8*A**2+284.*B**2/(A+B)**8*A+778.*B**3/(A+B)**8+355.*B**4/(A+B)**8/A+107.*B**5/(A+B)**8/A**2+14.*B**6/(A+B)**8/A**3
Input from Matrix_4x4SS.mat
1./B**3*A**3/(A+B)**6+6./B**2*A**2/(A+B)**6+15./B*A/(A+B)**6+84./(A+B)**6+15.*B/A/(A+B)**6+6.*B**2/A**2/(A+B)**6+1.*B**3/A**3/(A+B)**6,-.5000000000000000/B**3*A**3/(A+B)**6-3.500000000000000/B**2*A**2/(A+B)**6-9.500000000000000/B*A/(A+B)**6-53./(A+B)**6-9.500000000000000*B/A/(A+B)**6-3.500000000000000*B**2/A**2/(A+B)**6-.5000000000000000*B**3/A**3/(A+B)**6,-2./B**3*A**4/(A+B)**7-12.50000000000000/B**2*A**3/(A+B)**7-32.50000000000000/B*A**2/(A+B)**7+18.50000000000000*A/(A+B)**7-253.*B/(A+B)**7-17.50000000000000*B**2/A/(A+B)**7-4.500000000000000*B**3/A**2/(A+B)**7-.5000000000000000*B**4/A**3/(A+B)**7,-.5000000000000000/B**3*A**4/(A+B)**7-4.500000000000000/B**2*A**3/(A+B)**7-17.50000000000000/B*A**2/(A+B)**7-253.*A/(A+B)**7+18.50000000000000*B/(A+B)**7-32.50000000000000*B**2/A/(A+B)**7-12.50000000000000*B**3/A**2/(A+B)**7-2.*B**4/A**3/(A+B)**7,-.5000000000000000/B**3*A**3/(A+B)**6-3.500000000000000/B**2*A**2/(A+B)**6-9.500000000000000/B*A/(A+B)**6-53./(A+B)**6-9.500000000000000*B/A/(A+B)**6-3.500000000000000*B**2/A**2/(A+B)**6-.5000000000000000*B**3/A**3/(A+B)**6,2./B**3*A**3/(A+B)**6+14./B**2*A**2/(A+B)**6+38./B*A/(A+B)**6+212./(A+B)**6+38.*B/A/(A+B)**6+14.*B**2/A**2/(A+B)**6+2.*B**3/A**3/(A+B)**6,1./B**3*A**4/(A+B)**7+6.500000000000000/B**2*A**3/(A+B)**7+16.50000000000000/B*A**2/(A+B)**7-19.*A/(A+B)**7+118.*B/(A+B)**7+4.500000000000000*B**2/A/(A+B)**7+.5000000000000000*B**3/A**2/(A+B)**7,.5000000000000000/B**2*A**3/(A+B)**7+4.500000000000000/B*A**2/(A+B)**7+118.*A/(A+B)**7-19.*B/(A+B)**7+16.50000000000000*B**2/A/(A+B)**7+6.500000000000000*B**3/A**2/(A+B)**7+1.*B**4/A**3/(A+B)**7,-2./B**3*A**4/(A+B)**7-12.50000000000000/B**2*A**3/(A+B)**7-32.50000000000000/B*A**2/(A+B)**7+18.50000000000000*A/(A+B)**7-253.*B/(A+B)**7-17.50000000000000*B**2/A/(A+B)**7-4.500000000000000*B**3/A**2/(A+B)**7-.5000000000000000*B**4/A**3/(A+B)**7,1./B**3*A**4/(A+B)**7+6.500000000000000/B**2*A**3/(A+B)**7+16.50000000000000/B*A**2/(A+B)**7-19.*A/(A+B)**7+118.*B/(A+B)**7+4.500000000000000*B**2/A/(A+B)**7+.5000000000000000*B**3/A**2/(A+B)**7,7./B**3/(A+B)**8*A**5+50./B**2/(A+B)**8*A**4+154./B/(A+B)**8*A**3+331./(A+B)**8*A**2-147.*B/(A+B)**8*A+848.*B**2/(A+B)**8+80.*B**3/(A+B)**8/A+19.*B**4/(A+B)**8/A**2+2.*B**5/(A+B)**8/A**3,1./B**3/(A+B)**8*A**5+8.500000000000000/B**2/(A+B)**8*A**4+31./B/(A+B)**8*A**3-152.5000000000000/(A+B)**8*A**2+1568.*B/(A+B)**8*A-152.5000000000000*B**2/(A+B)**8+31.*B**3/(A+B)**8/A+8.500000000000000*B**4/(A+B)**8/A**2+1.*B**5/(A+B)**8/A**3,-.5000000000000000/B**3*A**4/(A+B)**7-4.500000000000000/B**2*A**3/(A+B)**7-17.50000000000000/B*A**2/(A+B)**7-253.*A/(A+B)**7+18.50000000000000*B/(A+B)**7-32.50000000000000*B**2/A/(A+B)**7-12.50000000000000*B**3/A**2/(A+B)**7-2.*B**4/A**3/(A+B)**7,.5000000000000000/B**2*A**3/(A+B)**7+4.500000000000000/B*A**2/(A+B)**7+118.*A/(A+B)**7-19.*B/(A+B)**7+16.50000000000000*B**2/A/(A+B)**7+6.500000000000000*B**3/A**2/(A+B)**7+1.*B**4/A**3/(A+B)**7,1./B**3/(A+B)**8*A**5+8.500000000000000/B**2/(A+B)**8*A**4+31./B/(A+B)**8*A**3-152.5000000000000/(A+B)**8*A**2+1568.*B/(A+B)**8*A-152.5000000000000*B**2/(A+B)**8+31.*B**3/(A+B)**8/A+8.500000000000000*B**4/(A+B)**8/A**2+1.*B**5/(A+B)**8/A**3,2./B**3/(A+B)**8*A**5+19./B**2/(A+B)**8*A**4+80./B/(A+B)**8*A**3+848./(A+B)**8*A**2-147.*B/(A+B)**8*A+331.*B**2/(A+B)**8+154.*B**3/(A+B)**8/A+50.*B**4/(A+B)**8/A**2+7.*B**5/(A+B)**8/A**3
Edit
I guess the important part of this question is how to read an array of equations from a file into a NumPy array and be able to assign values to the variables without the need for eval. As an example of what can usually be done using NumPy:
import numpy as np
A = 1
B = 1
arr = np.array([[A+2*B,2],[2,B+6*(B+A)]])
print arr
This gives:
[[ 3 2]
[ 2 13]]
The NumPy array contained algebraic terms, and when it was printed the specified values of A and B were inserted. Is this possible with equations read from a file into a NumPy array? I have looked through the NumPy documentation; is the issue to do with the type of data being read in? I.e. I cannot specify the dtype to be int or float, as each array element contains ints, floats and text. I feel there is a very simple aspect of Python I am missing that can do this.
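One possible direction (a rough sketch, not a verified solution): SymPy can parse the expressions and turn them into a numerical function of A and B, avoiding eval entirely. This assumes HH_data is the comma-separated string read from the file as above and that the entries themselves contain no commas.

import numpy as np
import sympy as sp

A_sym, B_sym = sp.symbols('A B')

# Parse each comma-separated entry into a SymPy expression.
HH_exprs = [sp.sympify(entry) for entry in HH_data.split(',')]

# Turn the whole 4x4 matrix into one numerical function of (A, B).
HH_func = sp.lambdify((A_sym, B_sym), sp.Matrix(4, 4, HH_exprs), 'numpy')

HH = np.array(HH_func(2.1, 3.0), dtype=float)  # plain (4,4) float array

The resulting HH_func can be called with any values of A and B, so it would slot directly into an objective function for minimize.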
I can solve a system of equations (using NumPy) like this:
>>> a = np.array([[3,1], [1,2]])
>>> b = np.array([9,8])
>>> y = np.linalg.solve(a, b)
>>> y
array([ 2., 3.])
But if I have something like this:
>>> x = np.linspace(1,10)
>>> a = np.array([[3*x,1-x], [1/x,2]])
>>> b = np.array([x**2,8*x])
>>> y = np.linalg.solve(a, b)
It doesn't work when the matrix's coefficients are arrays, and I want to calculate the array solution y for each element of the array x. I also can't calculate
>>> det(a)
The question is: how can I do that?
Check out the docs page. If you want to solve multiple systems of linear equations you can send in multiple arrays but they have to have shape (N,M,M). That will be considered a stack of N MxM arrays. A quote from the docs page below,
Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array. This is indicated in the documentation via input parameter specifications such as a : (..., M, M) array_like. This means that if for instance given an input array a.shape == (N, M, M), it is interpreted as a “stack” of N matrices, each of size M-by-M. Similar specification applies to return values, for instance the determinant has det : (...) and will in this case return an array of shape det(a).shape == (N,). This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.
When I run your code I get,
>>> a.shape
(2, 2)
>>> b.shape
(2, 50)
Not sure exactly what problem you're trying to solve, but you need to rethink your inputs. You want a to have shape (N,M,M) and b to have shape (N,M). You will then get back an array of shape (N,M) (i.e. N solution vectors).
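For concreteness, here is a small sketch of what that reshaping might look like for the arrays in the question (a sketch under the shapes described above, not a drop-in fix):

import numpy as np

x = np.linspace(1, 10)                 # 50 sample points

# Stack of 50 coefficient matrices, shape (50, 2, 2): one 2x2 system per x.
a = np.empty((x.size, 2, 2))
a[:, 0, 0] = 3*x
a[:, 0, 1] = 1 - x
a[:, 1, 0] = 1/x
a[:, 1, 1] = 2

# Matching stack of right-hand sides, shape (50, 2).
b = np.stack([x**2, 8*x], axis=-1)

y = np.linalg.solve(a, b)              # shape (50, 2): one solution vector per x
dets = np.linalg.det(a)                # shape (50,): one determinant per x

(On newer NumPy versions, b may need an explicit trailing axis, i.e. shape (50, 2, 1), to be treated as a stack of column vectors.)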
I'm trying to implement some code from here
And I have trained the HMM with my coefficients but do not understand how the Viterbi Decoder algorithm works, for example:
viterbi_decode(MFCC, M, model, q);
where MFCC = coefficients
M = size of MFCC
model = model of the HMM trained using the MFCC coefficients
q = unknown (believed to be the output path).
But here is what I do not understand: I am attempting to compare two speech signals (training, sample) to find the closest possible match. With the DTW algorithm, for example, a single integer is returned, from which I can find the closest match; however, this algorithm returns an int* array, and therefore comparing the results is difficult.
Here is how the current program works:
vector<DIMENSIONS_2> MFCC = mfcc.transform(rawData, sample_rate);
int N = MFCC.size();
int M = 13;
double** mfcc_setup = setupHMM(MFCC, N, M);
model_t* model = hmm_init(mfcc_setup, N, M, 10);
hmm_train(mfcc_setup, N, model);
int* q = new int[N];
viterbi_decode(mfcc_setup, M, model, q);
Could anyone please tell me how the Viterbi decoder works for the problem of identifying the best path from the training data to the input? I've tried both the Euclidean distance and the Hamming distance on the decoded path (q), but had no luck.
Any help would be greatly appreciated.
In this example it seems to me that q is the hidden state sequence, i.e. a list of numbers from 0 to 9. If you have two audio samples, say test and train, and you generate two sequences q_test and q_train, then thinking about |q_test - q_train|, where the norm is a componentwise distance, is not useful, because it does not represent a meaningful notion of distance: the hidden state labels in an HMM may be arbitrary.
A more natural way to think about distance may be the following: given q_train, you are interested in the probability that your test sample took that same path, which you can compute once you have the transition matrix and emission probabilities.
Please let me know if I am misunderstanding your question.
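To make that concrete, here is a rough NumPy sketch of the score I have in mind (my own illustration; the arrays log_start, log_trans and obs_log_emis are placeholders for whatever your HMM training code exposes):

import numpy as np

def path_log_prob(q, obs_log_emis, log_trans, log_start):
    # Log-probability that the fixed state path q generated the observations.
    #   q            : state sequence of length T (e.g. q_train)
    #   obs_log_emis : (T, n_states) array, log P(obs_t | state) for the *test* sample
    #   log_trans    : (n_states, n_states) array of log transition probabilities
    #   log_start    : (n_states,) array of log initial-state probabilities
    logp = log_start[q[0]] + obs_log_emis[0, q[0]]
    for t in range(1, len(q)):
        logp += log_trans[q[t - 1], q[t]] + obs_log_emis[t, q[t]]
    return logp

A higher value means the test sample is more compatible with the path found for the training sample.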
I am trying to calculate the 3x3 calibration matrix P of a camera based on 2D-to-3D point correspondences, as described in this paper http://cronos.rutgers.edu/~meer/TEACHTOO/PAPERS/zhang.pdf (section 2.3), using Python 2.7. I have been able to find an initial estimate of P, but now need to refine it using the Levenberg-Marquardt algorithm (2.3.4). It appears to me that this can be done with scipy.optimize.minpack.leastsq. However, my attempts at implementing this function have failed. Here is a simplified version of what I have (M is a numpy array of homogenized 3D points in the format (x,y,z,1) with a shape of (18,4), and m is a numpy array of homogenized 2D points in the format (u,v,1) with a shape of (18,3)):
import numpy as N
from scipy.optimize.minpack import leastsq
def e(P, M, m):
    a = P.dot(M.T)
    print a.shape
    b = m.T - a
    b1 = b[0]
    b2 = b[1]
    b3 = b[2]
    dist = N.sqrt((b1**2) + (b2**2) + (b3**2))
    return dist
P = N.array( [ [4.66135353e+01,1.24341518e+02,-9.07923056e+00,9.59292826e+02],
[-3.60062368e+01,3.56319152e+01,1.14245572e+02,2.32061401e-02],
[-4.04188199e-02,4.00793699e-02,-9.48804649e-03,1.00000e+00] ] )
m = []
M = []
#define m list and M list
for i in range(0,len(uv)):
    uv[i].append(1) #uv is unhomogenized uv coordinate list (source left out to simplify)
    xyz[i].append(1) #xyz is unhomogenized xyz coordinate list (source left out to simplify)
    m.append(N.array( [ [uv[i][0]],[uv[i][1]],[uv[i][2]] ] ))
m = N.array( uv )
M = N.array( xyz )
#the shape of m is (18,3) and the shape of M is (18,4)
P_new, success = leastsq(e, P, args=(M,m))
I think the problem is with the M and m variables, the arrays of vectors. I looked at an example for the scipy.optimize.leastsq function and could get that to work, but it had args with only one dimension.
Does anyone know what I am doing wrong here? I am fairly new to programming, so take it easy on me if this is idiotic. Thanks so much to all who read this, and let me know if I can provide any more info.
It seems that leastsq doesn't know how to optimize multidimensional variables, a problem that is easy to work around:
import numpy as np

def e2(P, M, m):
    return np.sqrt(np.sum((m.T - np.dot(P.reshape(3, 4), M.T))**2, axis=0))

P = P.reshape((12,))
P_new, success = leastsq(e2, P, args=(M, m))
This runs, although with my made-up random data it has trouble converging. The basic idea is to treat the matrix P as a 12-element vector, and reshape it inside the function whenever it is needed to map M to m.
I have also taken the liberty of rewriting your e function in a more numpythonic way...
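For completeness, a small usage sketch (my own addition): after the optimisation, the flat 12-vector can be reshaped back into the 3x4 projection matrix, and the same residual function can be reused to inspect the fit.

P_opt = P_new.reshape(3, 4)     # recovered projection matrix
residuals = e2(P_new, M, m)     # per-point reprojection distances, shape (18,)
print(P_opt)
print(residuals)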