I need to solve a classic problem of the form Ax = b for a vector x of size 4. A is built from roughly 500 data points, so it is a dense 500x4 matrix.
Currently I can solve this using the normal equations described here, and it works fine; however, I would like to constrain one of the parameters in x so that it never exceeds a certain value.
Is there a good way to do this programmatically with Eigen?
You can try my quadratic programming solver based on Eigen there. You'll still have to form the normal equations.
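For reference, "forming the normal equations" just means building AtA = AᵀA and Atb = Aᵀb and solving AtA·x = Atb. A minimal numpy sketch with made-up data (the Eigen calls are analogous, e.g. A.transpose() * A):

import numpy as np

np.random.seed(0)
A = np.random.normal(size=(500, 4))   # dense 500x4 system, as in the question
b = np.random.normal(size=500)

AtA = A.T.dot(A)                      # 4x4 normal matrix
Atb = A.T.dot(b)                      # right-hand side of the normal equations
x = np.linalg.solve(AtA, Atb)         # unconstrained least-squares solution
print(x)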
Here is a Python demo (using numpy, which is not that far from Eigen) showing an accelerated projected-gradient algorithm for your kind of problem. Typically this approach is used for large-scale problems (where other algorithms incorporating second-order information might struggle), but it's also nice to implement.
This version, a small modification of some old code of mine, is not the simplest possible approach, as we are using:
acceleration / momentum (faster iteration)
line-search (saves us some step-size tuning trouble)
You could remove the line-search and tune the step-size. Momentum is also not needed.
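For illustration, a stripped-down sketch without line-search or momentum, using a fixed step size 1/L where L is the largest eigenvalue of AᵀA (this simplification is my own, not part of the code further below):

import numpy as np

def solve_pg_plain(A, b, lb, ub, maxiter=1000):
    # plain projected gradient: fixed step, gradient step, then clip onto the box
    AtA, Atb = A.T.dot(A), A.T.dot(b)
    step = 1.0 / np.linalg.eigvalsh(AtA)[-1]   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(maxiter):
        x = np.clip(x - step * (AtA.dot(x) - Atb), lb, ub)
    return x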
As I'm not doing much C++ right now, I don't think I will port this to Eigen. But I'm sure that if you wanted to port it, it would not be that hard; Eigen should not be too different from numpy.
I did not measure the performance, but the results appear instantly (perceptually).
Edit: some non-scientific timings (more momentum; smaller tolerance than in the code below):
A=(500,4):
Solve non-bounded with scipys lsq_linear
used: 0.004898870004675975
cost : 244.58267993
Solve bounded (0.001, 0.05) with scipys lsq_linear
used: 0.005605718416479959
cost : 246.990611225
Solve bounded (0.001, 0.05) with accelerated projected gradient
early-stopping # it: 3
used: 0.002282825315435914
cost: 246.990611225
A=(50000, 500):
Solve non-bounded with scipys lsq_linear
used: 4.118898701951786 secs
cost : 24843.7115776
Solve bounded (0.001, 0.05) with scipys lsq_linear
used: 14.727660030288007 secs
cost : 25025.0328661
Solve bounded (0.001, 0.05) with accelerated projected gradient
early-stopping # it: 14
used: 5.319953458329618 secs
cost: 25025.0330754
The basic idea is to use some gradient-descent-like algorithm and project onto our constraints after each gradient step. This approach is very powerful if that projection can be done efficiently, and box-constraint projections are simple!
Page 4 in this pdf shows you the box-constraint projection.
We just clip our solution-vector to lower_bound, upper_bound. Clipping in numpy is described as: Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.
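A tiny illustration of that projection (the numbers are made up):

import numpy as np

x = np.array([-0.3, 0.02, 0.7, 0.04])
print(np.clip(x, 0.001, 0.05))   # -> [0.001 0.02  0.05  0.04]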
It's an iterative algorithm that approximates the solution, and I think every algorithm used for this will be an iterative one.
Code
import numpy as np
from scipy.optimize import lsq_linear
np.random.seed(1)
A = np.random.normal(size=(500, 4))
b = np.random.normal(size=500)
""" Solve Ax=b
----------
"""
print('Solve non-bounded with scipys lsq_linear')
sol = lsq_linear(A, b)
print('Ax=b sol: ', sol['x'])
print('cost : ', sol['cost'])
print()
""" Solve Ax=b with box-constraints
-------------------------------
"""
print('Solve bounded (0.001, 0.05) with scipys lsq_linear')
sol = lsq_linear(A, b, bounds=(0.001, 0.05))
print('Ax=b constrained sol: ', sol['x'])
print('cost : ', sol['cost'])
print()
""" Solve Ax=b with box-constraints using a projected gradient algorithm
--------------------------------------------------------------------
"""
def solve_pg(A, b, bounds=(-np.inf, np.inf), momentum=0.9, maxiter=1000):
    """ remarks:
            algorithm: accelerated projected gradient
            projection: proj onto box constraints
            line-search: armijo-rule along projection-arc (Bertsekas book)
            stopping-criterion: naive
            gradient-calculation: precomputes AtA
    """
    lb = np.empty(A.shape[1])
    ub = np.empty(A.shape[1])
    if len(bounds) == 2:
        # apply lb & ub to all variables
        lb = bounds[0]
        ub = bounds[1]
    else:
        # assume dimensions are ok
        lb = np.array(bounds[0])
        ub = np.array(bounds[1])

    M, N = A.shape
    x = np.zeros(N)

    AtA = A.T.dot(A)
    Atb = A.T.dot(b)

    stop_count = 0

    def gradient(x):
        return AtA.dot(x) - Atb

    def obj(x):
        return 0.5 * np.linalg.norm(A.dot(x) - b)**2

    it = 0
    while True:
        grad = gradient(x)

        # line search
        alpha = 1
        beta = 0.5
        sigma = 1e-2
        old_obj = obj(x)
        while True:
            new_x = x - alpha * grad
            new_obj = obj(new_x)
            if old_obj - new_obj >= sigma * grad.dot(x - new_x):
                break
            else:
                alpha *= beta

        x_old = x[:]
        x = x - alpha*grad

        # projection onto box constraints (in-place clipping); see the text above
        np.clip(x, lb, ub, out=x)

        y = x + momentum * (x - x_old)

        if np.abs(old_obj - obj(x)) < 1e-2:
            stop_count += 1
        else:
            stop_count = 0

        if stop_count == 3:
            print('early-stopping # it: ', it)
            return x

        it += 1
        if it == maxiter:
            return x
print('Solve bounded (0.001, 0.05) with accelerated projected gradient')
sol = solve_pg(A, b, bounds=(0.001, 0.05))
print(sol)
print('cost: ', 0.5 * (np.square(np.linalg.norm(A.dot(sol) - b))))
Output
Solve non-bounded with scipys lsq_linear
Ax=b sol: [ 0.06627173 -0.06104991 -0.07010355 0.04024075]
cost : 244.58267993
Solve bounded (0.001, 0.05) with scipys lsq_linear
Ax=b constrained sol: [ 0.05 0.001 0.001 0.03902291]
cost : 246.990611225
Solve bounded (0.001, 0.05) with accelerated projected gradient
early-stopping # it: 3
[ 0.05 0.001 0.001 0.03902229]
cost: 246.990611225
Related
I am trying to write a multivariate Singular Spectrum Analysis with a Monte Carlo test. To this end I am working on a code piece that can reconstruct the input series using the lagged trajectory matrix and the projection base (ST-PCs) that result from the PCA/SSA decomposition of the input series. The attached code piece works for a lagged univariate (that is, single) time series, but I am struggling to make this reconstruction work for a lagged multivariate time series. I don't quite get the procedure mathematically and, not surprisingly, I also did not manage to program it. Useful links are attached in the function descriptions of the accompanying code. Input data should be of the form (time x number of series), so e.g. 288x3 implies 3 time series of 288 time levels.
I hope you can help me out!
import numpy as np
def lagged_covariance_matrix(data, M):
    """ Computes the lagged covariance matrix using the Broomhead & King method
        Background: Plaut, G., & Vautard, R. (1994). Spells of low-frequency oscillations and
        weather regimes in the Northern Hemisphere. Journal of the Atmospheric Sciences, 51(2), 210-236.
        Arguments:
        data : pxn time series, where p denotes the length of the time series and n the number of channels
        M : window length """
    # explicitly 'add' a spatial dimension if the input is a single time series
    if np.ndim(data) == 1:
        data = np.reshape(data, (len(data), 1))
    T = data.shape[0]
    L = data.shape[1]
    N = T - M + 1
    X = np.zeros((T, L, M))
    for i in range(M):
        X[:, :, i] = np.roll(data, -i, axis=0)
    X = X[:N]
    # X constitutes the trajectory matrix and is a stacked Hankel matrix
    X = np.reshape(X, (N, M*L), order='C')  # https://www.jstatsoft.org/article/viewFile/v067i02/v67i02.pdf
    # choose the smallest projection basis for computation of the covariance matrix
    if M*L >= N:
        return 1/(M*L) * X.dot(X.T), X
    else:
        return 1/N * X.T.dot(X), X

def sort_by_eigenvalues(eigenvalues, PCs):
    """ Sorts the PCs and eigenvalues by descending size of the eigenvalues """
    desc = np.argsort(-eigenvalues)
    return eigenvalues[desc], PCs[:, desc]

def Reconstruction(M, E, X):
    """ Reconstructs the series as the sum of M subseries.
        See: https://en.wikipedia.org/wiki/Singular_spectrum_analysis, 'Basic SSA' &
        the work of Vivien Sainte Fare Garnot on univariate time series (https://github.com/VSainteuf/mcssa)
        Arguments:
        M : window length
        E : eigenvector basis
        X : trajectory matrix """
    time = len(X) + M - 1
    RC = np.zeros((time, M))
    # step 3: grouping
    for i in range(M):
        d = np.zeros(M)
        d[i] = 1
        I = np.diag(d)
        Q = np.flipud(X @ E @ I @ E.T)
        # step 4: diagonal averaging
        for k in range(time):
            RC[k, i] = np.diagonal(Q, offset=-(time - M - k)).mean()
    return RC
#=====================================================================================================
#=====================================================================================================
#=====================================================================================================
# input data
data = None
# number of lags a.k.a. window length
M = 45 # M = 1 means no lag
covmat, X = lagged_covariance_matrix(data, M)
# get the eigenvalues and vectors of the covariance matrix
vals, vecs = np.linalg.eig(covmat)
eig_data, eigvec_data = sort_by_eigenvalues(vals, vecs)
# component reconstruction
recons_data = Reconstruction(M, eigvec_data, X)
The following works but does not make direct use of the projection base (ST-PCs). Hence the original question still stands, but this already helps a great deal and solves the problem for me. This code piece makes use of the similarity between the ST-PC projection base and the u & vt matrices obtained from the singular value decomposition of the lagged trajectory matrix. I think it gives back the same answer as one would obtain using the ST-PC projection base?
def lag_reconstruction(data, X, M, pairs=None):
    """ Reconstructs the series as the sum of M subseries using the lagged trajectory matrix.
        Based on equation 2.9 of Plaut, G., & Vautard, R. (1994). Spells of low-frequency oscillations and weather regimes in the Northern Hemisphere. Journal of Atmospheric Sciences, 51(2), 210-236.
        Inspired by work of R. van Westen and C. Wieners """
    time = data.shape[0]  # number of time levels of the original series
    L = data.shape[1]     # number of input series
    N = time - M + 1
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    rc = np.zeros((time, L, M))
    for t in range(time):
        counter = 0
        for i in range(M):
            if t-i >= 0 and t-i < N:
                counter += 1
                if pairs:
                    for k in pairs:
                        rc[t, :, i] += u[t-i, k] * s[k] * vt[k, i*L : i*L + L]
                else:
                    for k in range(len(s)):
                        rc[t, :, i] += u[t-i, k] * s[k] * vt[k, i*L : i*L + L]
        rc[t] = rc[t]/counter
    return rc
I'm writing code in Python to evolve the time-dependent Schrödinger equation using the Crank-Nicolson scheme. I didn't know how to deal with the potential, so I looked around and found a way from this question, which I have verified against a couple of other sources. According to them, for a harmonic oscillator potential, the C-N scheme gives
A Ψ^(n+1) = A* Ψ^n
where the elements on the main diagonal of A are d_j = 1 + [(iΔt)/(2m(Δx)^2)] + [(iΔt(x_j)^2)/4] and the elements on the upper and lower diagonals are a = −iΔt/[4m(Δx)^2].
The way I understand it, I'm supposed to give an initial condition (I've chosen a coherent state) in the form of the matrix Ψ^n, and I need to compute the matrix Ψ^(n+1), which is the wave function after time Δt. To obtain Ψ^(n+1) for a given step, I'm inverting the matrix A, multiplying it with the matrix A*, and then multiplying the result with Ψ^n. The resulting matrix then becomes Ψ^n for the next step.
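One step of the procedure just described, as a minimal sketch (A, B and psi are placeholders for the matrices above; np.linalg.solve is used instead of an explicit inverse, which is mathematically equivalent but numerically safer):

import numpy as np

def cn_step(A, B, psi):
    # advance one Crank-Nicolson step: solve A * psi_new = B * psi, with B playing the role of A*
    return np.linalg.solve(A, B.dot(psi))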
But when I'm doing this, I'm getting an incorrect animation. The wave packet is supposed to oscillate between the boundaries, but in my animation it is barely moving from its initial mean value. I just don't understand what I'm doing wrong. Is my understanding of the problem wrong, or is it a flaw in my code? Please help! I've posted my code below and the video of my animation here. I'm sorry for the length of the code and the question, but it's driving me crazy not knowing what my mistake is.
import numpy as np
import matplotlib.pyplot as plt
L = 30.0
x0 = -5.0
sig = 0.5
dx = 0.5
dt = 0.02
k = 1.0
w=2
K=w**2
a=np.power(K,0.25)
xs = np.arange(-L,L,dx)
nn = len(xs)
mu = k*dt/(dx)**2
dd = 1.0+mu
ee = 1.0-mu
ti = 0.0
tf = 100.0
t = ti
V=np.zeros(len(xs))
u=np.zeros(nn,dtype="complex")
V=K*(xs)**2/2 #harmonic oscillator potential
u=(np.sqrt(a)/1.33)*np.exp(-(a*(xs - x0))**2)+0j #initial condition for wave function
u[0]=0.0 #boundary condition
u[-1] = 0.0 #boundary condition
A = np.zeros((nn-2,nn-2),dtype="complex") #define A
for i in range(nn-3):
    A[i,i] = 1+1j*(mu/2+w*dt*xs[i]**2/4)
    A[i,i+1] = -1j*mu/4.
    A[i+1,i] = -1j*mu/4.
A[nn-3,nn-3] = 1+1j*mu/2+1j*dt*xs[nn-3]**2/4
B = np.zeros((nn-2,nn-2),dtype="complex") #define A*
for i in range(nn-3):
    B[i,i] = 1-1j*mu/2-1j*w*dt*xs[i]**2/4
    B[i,i+1] = 1j*mu/4.
    B[i+1,i] = 1j*mu/4.
B[nn-3,nn-3] = 1-1j*(mu/2)-1j*dt*xs[nn-3]**2/4
X = np.linalg.inv(A) #take inverse of A
plt.ion()
l, = plt.plot(xs,np.abs(u),lw=2,color='blue') #plot initial wave function
T=np.matmul(X,B) #multiply A inverse with A*
while t<tf:
    u[1:-1] = np.matmul(T,u[1:-1]) #updating u but leaving the boundary conditions unchanged
    l.set_ydata(abs(u)) #update plot with new u
    t += dt
    plt.pause(0.00001)
After a lot of tinkering, it came down to reducing my step size; that did the job for me. If anyone is facing the same problem as I was, I recommend playing around with the step sizes. Provided the rest of the code is fine, this is the most likely source of error.
I solved the following questions for a computational assignment, but I got a really bad grade on it (67%). I would like to understand how to do these questions properly, in particular Q1.b and Q3. Please be as detailed as possible; I would really like to understand my mistakes.
Generate data (sinusoidal functions). Use fft to analyze:
a) A superposition of three waves with constant, but different frequencies
b) A wave whose frequency depends on time
Plot the graphs, sample frequencies, amplitude and power spectra with appropriate axes.
Use the 3 waves from Exercise 1a), but change them to have the same frequency, phase and amplitude. Contaminate each of them with successively increasing amounts of
random, Gaussian-distributed noise.
1) Perform an FFT on the superposition of the three noise-contaminated waves.
Analyze and plot the output.
2) Filter the signal with a Gaussian function, plot the “clean” wave, and analyze the
result. Is the resultant wave 100% clean? Explain.
#1(b)
tmin = -2*pi
tmax = 2*pi
delta = 0.01
t = arange(tmin, tmax, delta)
y = sin(2.5*t*t)
plot(t, y, '-')
title('Figure 2: Plotting a wave whose frequency depends on time ')
xlabel('Time (s)')
ylabel('Y(t)')
show()
#b.2
Fs = 150.0; # sampling rate
Ts = 1.0/Fs; # sampling interval
t = np.arange(0,1,Ts) # time vector
ff = 5; # frequency of the signal
y = np.sin(2*np.pi*ff*t)
n = len(y) # length of the signal
k = np.arange(n)
T = n/Fs
frq = k/T # two sides frequency range
frq = frq[range(n/2)] # one side frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n/2)]
#Time vs. Amplitude
plot(t,y)
title('Figure 2: Time vs. Amplitude')
xlabel('Time')
ylabel('Amplitude')
plt.show()
#Amplitude Spectrum
plot(frq,abs(Y),'r')
title('Figure 2a: Amplitude Spectrum')
xlabel('Freq (Hz)')
ylabel('amplitude spectrum')
plt.show()
#Power Spectrum
plot(frq,abs(Y)**2,'r')
title('Figure 2b: Power Spectrum')
xlabel('Freq (Hz)')
ylabel('power spectrum')
plt.show()
#Exercise 3:
#part 1
t = np.linspace(-0.5*pi,0.5*pi,1000)
#contaminating our waves with successively increasing white noise
y_1 = sin(15*t) + np.random.normal(0,0.2*pi,1000)
y_2 = sin(15*t) + np.random.normal(0,0.3*pi,1000)
y_3 = sin(15*t) + np.random.normal(0,0.4*pi,1000)
y = y_1 + y_2 + y_3 # superposition of three contaminated waves
#Plotting the figure
plot(t,y,'-')
title('A superposition of three waves contaminated with Gaussian Noise')
xlabel('Time (s)')
ylabel('Y(t)')
show()
delta = pi/1000.0
n = len(y) ## calculate frequency in Hz
freq = fftfreq(n, delta) # Computing the FFT
Freq = fftfreq(len(y), delta) #Using Fast Fourier Transformation to #calculate frequencies
N = len(Freq)
fr = Freq[1:len(Freq)/2.0]
A = fft(y)
XF = A[1:len(A)/2.0]/float(len(A[1:len(A)/2.0]))
# Amplitude spectrum for contaminated waves
plt.plot(fr, abs(XF))
title('Figure 3a : Amplitude spectrum with Gaussian Noise')
xlabel('frequency')
ylabel('Amplitude')
show()
# Power spectrum for contaminated waves
plt.plot(fr,abs(XF)**2)
title('Figure 3b: Power spectrum with Gaussian Noise')
xlabel('frequency(cycles/year)')
ylabel('Power')
show()
# part 2
F_v = exp(-(abs(freq)-2)**2/2*0.5**2)
spectrum = A*F_v #Applying the Gaussian Filter to clean our waves
new_y = ifft(spectrum) #Computing the inverse FFT
plot(t,new_y,'-')
title('A superposition of three waves after Noise Filtering')
xlabel('Time (s)')
ylabel('Y(t)')
show()
Something like the code/images below would have been expected. I deviated in the plot of the sum of the three noisy waves to show off all three waves and the sum. Note that in the intensity spectrum of the noisy wave you don't see much. For those cases it can be instructive to also plot the logarithm of the spectrum (np.log) so you can see the noise better.
In the last plot I plotted both the Gaussian filter and the spectrum (different sizes) without rescaling, just to show where the filter applies. It is effectively a low-pass filter (it lets low frequencies through), removing the higher-frequency noise by multiplying it with numbers close to zero.
import numpy as np
import matplotlib.pyplot as p
%matplotlib inline
#1(b)
p.figure(figsize=(20,16))
p.subplot(431)
t = np.arange(0,10, 0.001) #units in seconds
#cleaner to show the frequency change explicitly than y = sin(2.5*t*t)
f= 1+ t*0.1 # linear up chirp, i.e. frequency goes up , frequency units in Hz (1/sec)
y = np.sin(2* np.pi* f* t)
p.plot(t, y, '-')
p.title('Figure 2: Plotting a wave whose frequency depends on time ')
p.xlabel('Time (s)')
p.ylabel('Y(t)')
#b.2
Fs = 150.0; # sampling rate
Ts = 1.0/Fs; # sampling interval
t = np.arange(0,1,Ts) # time vector
ff = 5; # frequency of the signal
y = np.sin(2*np.pi*ff*t)
n = len(y) # length of the signal
k = np.arange(n) ## ok, the FFT has as many points in frequency space, as the original in time
T = n/Fs ## correct ; T=sampling time, the total frequency range is 1/sample time
frq = k/T # two sided frequency range
frq = frq[range(n//2)] # one sided frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n//2)]
# Amplitude vs. Time
p.subplot(434)
p.plot(t,y)
p.title('y(t)') # Amplitude vs Time is commonly said, but strictly not true, the amplitude is unchanging
p.xlabel('Time')
p.ylabel('Amplitude')
#Amplitude Spectrum
p.subplot(435)
p.plot(frq,abs(Y),'r')
p.title('Figure 2a: Amplitude Spectrum')
p.xlabel('Freq (Hz)')
p.ylabel('amplitude spectrum')
#Power Spectrum
p.subplot(436)
p.plot(frq,abs(Y)**2,'r')
p.title('Figure 2b: Power Spectrum')
p.xlabel('Freq (Hz)')
p.ylabel('power spectrum')
#Exercise 3:
#part 1
t = np.linspace(-0.5*np.pi,0.5*np.pi,1000)
# #contaminating our waves with successively increasing white noise
y_1 = np.sin(15*t) + np.random.normal(0,0.1,1000) # no need to get pi involved in this amplitude
y_2 = np.sin(15*t) + np.random.normal(0,0.2,1000)
y_3 = np.sin(15*t) + np.random.normal(0,0.4,1000)
y = y_1 + y_2 + y_3 # superposition of three contaminated waves
#Plotting the figure
p.subplot(437)
p.plot(t,y_1+2,'-',lw=0.3)
p.plot(t,y_2,'-',lw=0.3)
p.plot(t,y_3-2,'-',lw=0.3)
p.plot(t,y-6 ,lw=1,color='black')
p.title('A superposition of three waves contaminated with Gaussian Noise')
p.xlabel('Time (s)')
p.ylabel('Y(t)')
delta = np.pi/1000.0
n = len(y) ## calculate frequency in Hz
# freq = np.fft(n, delta) # Computing the FFT <-- wrong, you don't calculate the FFT from a number, but from a time dep. vector/array
# Freq = np.fftfreq(len(y), delta) #Using Fast Fourier Transformation to #calculate frequencies
# N = len(Freq)
# fr = Freq[1:len(Freq)/2.0]
# A = fft(y)
# XF = A[1:len(A)/2.0]/float(len(A[1:len(A)/2.0]))
# Why not do as before?
k = np.arange(n) ## ok, the FFT has as many points in frequency space, as the original in time
T = n/Fs ## correct ; T=sampling time, the total frequency range is 1/sample time
frq = k/T # two sided frequency range
frq = frq[range(n//2)] # one sided frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n//2)]
# Amplitude spectrum for contaminated waves
p.subplot(438)
p.plot(frq, abs(Y))
p.title('Figure 3a : Amplitude spectrum with Gaussian Noise')
p.xlabel('frequency')
p.ylabel('Amplitude')
# Power spectrum for contaminated waves
p.subplot(439)
p.plot(frq,abs(Y)**2)
p.title('Figure 3b: Power spectrum with Gaussian Noise')
p.xlabel('frequency(cycles/year)')
p.ylabel('Power')
# part 2
p.subplot(4,3,11)
F_v = np.exp(-(np.abs(frq)-2)**2/2*0.5**2) ## this is a Gaussian, plot it separately to see it; play with the values
cleaned_spectrum = Y*F_v #Applying the Gaussian Filter to clean our waves ## multiplication in FreqDomain is convolution in time domain
p.plot(frq,F_v)
p.plot(frq,cleaned_spectrum)
p.subplot(4,3,10)
new_y = np.fft.ifft(cleaned_spectrum) #Computing the inverse FFT of the cleaned spectrum to see the cleaned wave
p.plot(t[range(n//2)],new_y,'-')
p.title('A superposition of three waves after Noise Filtering')
p.xlabel('Time (s)')
p.ylabel('Y(t)')
It is quite common to calculate the probability density of a value under a given probability density function (PDF). Imagine we have a Gaussian distribution with mean = 40 and a standard deviation of 5, and we would like to get the probability density at the value 32. We'd go like:
In [1]: import scipy.stats as stats
In [2]: print stats.norm.pdf(32, loc=40, scale=5)
Out [2]: 0.022
--> The probability density is about 0.022.
But now, let's consider the inverse problem: I have the mean value, I have the value at a probability density of 0.05, and I would like to get the standard deviation (i.e. the scale parameter).
What I could implement is a numerical approach: create stats.norm.pdf several times with the scale-parameter increased stepwise and take that one with the result getting as closest as possible.
In my case, I specify the value 30 as the 5% mark. So I need to solve this "equation":
stats.norm.pdf(30, loc=40, scale=X) = 0.05
There is a scipy function called "ppf", which is the inverse of the CDF, so it will return the value for a given cumulative probability, but I haven't found a function that returns the scale parameter.
Implementing an iteration would take too much time (both creating and calculating). My script is going to be huge, so I should save computation time. Could the lambda-function help in this case? I roughly know what it's doing, but I haven't used it so far. Any ideas on this?
Thank you!
The normal probability density function f is given by f(x) = exp(−(x−µ)²/(2σ²)) / (σ√(2π)); below, x stands for the deviation from the mean, x−µ.
Given f and x we wish to solve for σ. Let's ask sympy if it can solve the equation:
import sympy as sy
from sympy.abc import x, y, sigma
expr = (1/(sy.sqrt(2*sy.pi)*sigma) * sy.exp(-x**2/(2*sigma**2))) - y
ans = sy.solve(expr, sigma)[0]
print(ans)
# sqrt(2)*exp(LambertW(-2*pi*x**2*y**2)/2)/(2*sqrt(pi)*y)
So it appears there is a closed-formed solution in terms of the LambertW function, W, which satisfies
z = W(z) * exp(W(z))
for all complex-valued z.
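(A quick numerical check of that identity with scipy, purely for illustration:)

import numpy as np
from scipy.special import lambertw

z = 1.5 + 0.0j
w = lambertw(z)                       # principal branch, k=0
print(np.allclose(w * np.exp(w), z))  # True: z = W(z) * exp(W(z))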
We could use sympy to also find the numerical result for given x and y, but
perhaps it would be faster to do the numerical work with
scipy.special.lambertw:
import numpy as np
import scipy.special as special
def sigma_func(x, y):
    results = set([np.real_if_close(
        np.sqrt(2)*np.exp(special.lambertw(-2*np.pi*x**2*y**2, k=k)/2)
        /(2*np.sqrt(np.pi)*y)).item() for k in (0, -1)])
    results = [s for s in results if np.isreal(s)]
    return results
In general, the LambertW function returns complex values, but we are only
interested in real-valued solutions for sigma. Per the
docs,
special.lambertw has two partially-real branches, when k=0 and k=-1. So the
code above checks if the returned value (for those two branches) is real, and
returns a list of any real solutions if they exist. If no real solution exists,
then an empty list is returned. That happens if the pdf value y is not
attained for any real value of sigma (for the given value of x).
You can use it like this:
x = 30.0
loc = 40.0
y = 0.02
s = sigma_func(loc-x, y)
print(s)
# [16.65817044316178, 6.830458938511113]
import scipy.stats as stats
for si in s:
    assert np.allclose(stats.norm.pdf(x, loc=loc, scale=si), y)
In the example you gave, with y = 0.025, there is no solution for sigma:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = 30.0
loc = 40.0
y = 0.025
s = np.linspace(5, 20, 100)
plt.plot(s, stats.norm.pdf(x, loc=loc, scale=s))
plt.hlines(y, 4, 20, color='red') # the horizontal line y = 0.025
plt.ylabel('pdf')
plt.xlabel('sigma')
plt.show()
and so sigma_func(40-30, 0.025) returns an empty list:
In [93]: sigma_func(40-30, 0.025)
Out [93]: []
The plot above is typical in the sense that when y is too large there are zero solutions; at the maximum of the curve (let's call it y_max) there is exactly one solution
In [199]: y_max = np.nextafter(np.sqrt(1/(np.exp(1)*2*np.pi*(10)**2)), -np.inf)
In [200]: y_max
Out[200]: 0.024197072451914336
In [201]: sigma_func(40-30, y_max)
Out[201]: [9.9999999776424]
and for y smaller than y_max there are two solutions.
There will be two solutions, because the normal PDF is symmetric around the mean.
As it stands, you have a single-variable equation to solve. It won't have a closed-form solution, so you can use e.g. scipy.optimize.fsolve to solve it.
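For completeness, a minimal sketch of that route (the numbers mirror the question's example, but with y = 0.02 so that a solution actually exists):

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

x, loc, y = 30.0, 40.0, 0.02
f = lambda s: norm.pdf(x, loc=loc, scale=s) - y
sigma = fsolve(f, x0=5.0)[0]          # the initial guess determines which of the two roots is found
print(sigma, norm.pdf(x, loc=loc, scale=sigma))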
EDIT: see #unutbu's answer for the closed form solution in terms of Lambert W function.
I am running into a very interesting problem while calculating a matrix update in Python. I have to calculate the error (which is the difference between the previous and the updated matrix).
import numpy as np
import matplotlib.pyplot as plt
#from matplotlib import animation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
def update(A):
    C = A
    D = A
    D[1:-1,1:-1] = (C[0:-2,1:-1]+C[2:,1:-1]+C[1:-1,0:-2]+C[1:-1,2:])/4
    return (np.abs(D-C), D)

def error(A, B):
    C = np.zeros(np.shape(A), np.float64)
    #e = np.max(np.max(np.abs(C)))
    e = (np.abs(C))
    return (e.sum(dtype='float64'))

def initial(C):
    C[0,:] = 0    ## Top Boundary
    C[-1,:] = 0   ## Bottom Boundary
    C[:,0] = 0    ## Left Boundary
    C[:,-1] = 100 ## Right Boundary
    return (C)

def SolveLaplace(nx, ny, epsilon, imax):
    ## Initialize the mesh with some values
    U = np.zeros((nx, ny), np.float64)
    ## Set boundary conditions for the problem
    U = initial(U)
    ## Store previous grid values to check against error tolerance
    UN = np.zeros((nx, ny), np.float64)
    UN = initial(UN)
    ## Constants
    k = 1 ## Iteration counter
    ## Iterative procedure
    while k < imax:
        err, U = update(U)
        print(err.sum())
        k += 1
    return (U)
nx = 50.0
ny = 50.0
dx = 0.001
epsilon = 1e-6 ## Absolute Error tolerance
imax = 5000 ## Maximum number of iterations allowed
Z = SolveLaplace(nx, ny,epsilon,imax)
#x = np.linspace(0, nx * dx, nx)
#y = np.linspace(0, ny * dx, ny)
#X, Y = np.meshgrid(x,y)
##===================================================================
def PlotSolution(nx, ny, dx, T):
    ## Set up x and y vectors for meshgrid
    x = np.linspace(0, nx * dx, nx)
    y = np.linspace(0, ny * dx, ny)
    fig = plt.figure()
    ax = fig.gca(projection='3d')
    X, Y = np.meshgrid(x, y)
    ax.plot_surface(X, Y, T.transpose(), rstride=1, cstride=1, cmap=cm.cool, linewidth=0, antialiased=False)
    plt.xlabel("X")
    plt.ylabel("Y")
    #plt.zlabel("T(X,Y)")
    plt.figure()
    plt.contourf(X, Y, T.transpose(), 32, rstride=1, cstride=1, cmap=cm.cool)
    plt.colorbar()
    plt.xlabel("X")
    plt.ylabel("Y")
    plt.show()
##===================================================================
PlotSolution(nx, ny, dx, Z)
I am supposed to solve the Laplace equation for a 2-D sheet (temperature distribution), and when the error falls below a certain minimum value, equilibrium is reached. But while calculating the error, I always get 0, yet when I print my matrix I see it should not be zero. I think I have a conceptual problem here, so please help.
Your problem is that you use shallow copies, i.e., only copy the reference, when assigning C=A; D=A in the update function. Essentially, after the construction of D, all three variables A,C,D point to the same object. Use
def update(A):
    C = 1.0*A
    D = 1.0*A
    D[1:-1,1:-1] = (C[0:-2,1:-1]+C[2:,1:-1]+C[1:-1,0:-2]+C[1:-1,2:])/4
    return (np.abs(D-C), D)

or even shorter

def update(A):
    D = A.copy()
    D[1:-1,1:-1] = (A[0:-2,1:-1]+A[2:,1:-1]+A[1:-1,0:-2]+A[1:-1,2:])/4
    return (np.abs(D-A), D)
Performing an arithmetic operation such as 1.0*A automatically creates a new array (effectively a deep copy), so A, C and D no longer refer to the same object.
You know that the (geometric, first order) convergence rate is something like max(1-C/(nx^2), 1-C/(ny^2)), i.e., very slow for even moderately large grids? For real applications, better use conjugate gradients, other Krylov-related algorithms or multi-grid approaches (or sparse solver libraries, UMFpack ...).
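For illustration, a minimal sketch of a direct sparse solve of the same boundary-value problem (this is my own addition, not part of the original answer): build the 5-point Laplacian with scipy.sparse, fix the boundary rows, and solve once with spsolve.

import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def solve_laplace_direct(nx=50, ny=50, right_value=100.0):
    # unknowns are all grid points; boundary points get identity rows with the prescribed value
    n = nx * ny
    A = lil_matrix((n, n))
    rhs = np.zeros(n)
    idx = lambda i, j: i * ny + j          # flatten the 2-D index
    for i in range(nx):
        for j in range(ny):
            k = idx(i, j)
            if i == 0 or i == nx - 1 or j == 0:
                A[k, k] = 1.0              # top/bottom/left boundary: value 0
            elif j == ny - 1:
                A[k, k] = 1.0              # right boundary: value 100
                rhs[k] = right_value
            else:
                A[k, k] = -4.0             # 5-point Laplacian stencil
                A[k, idx(i-1, j)] = 1.0
                A[k, idx(i+1, j)] = 1.0
                A[k, idx(i, j-1)] = 1.0
                A[k, idx(i, j+1)] = 1.0
    return spsolve(csr_matrix(A), rhs).reshape((nx, ny))

U = solve_laplace_direct()
print(U[25, -2])                           # interior value next to the hot boundary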
In the (unused) error procedure, shouldn't there be something like
e = abs(A-B)
At the moment, you return the norm of the freshly generated zero matrix C.
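A corrected version of that helper, as a small sketch (not from the original post):

import numpy as np

def error(A, B):
    # total absolute difference between the previous and the updated grid
    return np.abs(A - B).sum(dtype='float64')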