Computational Physics, FFT analysis - python-2.7

I solved the following questions for a computational assignment, but I got a really bad grade on it (67%). I would like to understand how to do these questions properly, in particular Q1.b and Q3. Please be as detailed as possible; I would really like to understand my mistakes.
Exercise 1: Generate data (sinusoidal functions). Use an FFT to analyze:
a) A superposition of three waves with constant, but different frequencies
b) A wave whose frequency depends on time
Plot the graphs, sample frequencies, amplitude and power spectra with appropriate axes.
Exercise 3: Use the 3 waves from Exercise 1a), but change them to have the same frequency, phase, and amplitude. Contaminate each of them with successively increasing amounts of random, Gaussian-distributed noise.
1) Perform an FFT on the superposition of the three noise-contaminated waves.
Analyze and plot the output.
2) Filter the signal with a Gaussian function, plot the “clean” wave, and analyze the
result. Is the resultant wave 100% clean? Explain.
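For reference, Exercise 1a) is just a plain superposition of three fixed-frequency sinusoids; a minimal sketch of what it could look like (the frequencies 2, 5 and 10 Hz are arbitrary choices of mine, not given in the assignment):

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, 2, 0.001)  # 2 s sampled at 1 kHz
# three waves with constant but different frequencies
y = np.sin(2*np.pi*2*t) + np.sin(2*np.pi*5*t) + np.sin(2*np.pi*10*t)
plt.plot(t, y, '-')
plt.title('Exercise 1a: superposition of three waves')
plt.xlabel('Time (s)')
plt.ylabel('Y(t)')
plt.show()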
#1(b)
from pylab import *  # brings in pi, arange, sin, plot, title, xlabel, ylabel, show
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, ifft, fftfreq

tmin = -2*pi
tmax = 2*pi
delta = 0.01
t = arange(tmin, tmax, delta)
y = sin(2.5*t*t)
plot(t, y, '-')
title('Figure 2: Plotting a wave whose frequency depends on time ')
xlabel('Time (s)')
ylabel('Y(t)')
show()
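Because the frequency of this wave grows with time, a single FFT of the whole record smears the power over a band of frequencies instead of producing one sharp peak; that is the point Q1.b is probing. A quick check (a sketch reusing t, y and delta from above; np.fft.rfft/rfftfreq assume a reasonably recent numpy):

Y = np.fft.rfft(y)                    # one-sided FFT of the chirp
f = np.fft.rfftfreq(len(y), d=delta)  # matching one-sided frequency axis
plot(f, abs(Y)/len(y))
title('Spectrum of the chirp: power spread over a band')
xlabel('Freq (Hz)')
ylabel('amplitude spectrum')
show()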
#b.2
Fs = 150.0; # sampling rate
Ts = 1.0/Fs; # sampling interval
t = np.arange(0,1,Ts) # time vector
ff = 5; # frequency of the signal
y = np.sin(2*np.pi*ff*t)
n = len(y) # length of the signal
k = np.arange(n)
T = n/Fs
frq = k/T # two sides frequency range
frq = frq[range(n/2)] # one side frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n/2)]
#Time vs. Amplitude
plot(t,y)
title('Figure 2: Time vs. Amplitude')
xlabel('Time')
ylabel('Amplitude')
plt.show()
#Amplitude Spectrum
plot(frq,abs(Y),'r')
title('Figure 2a: Amplitude Spectrum')
xlabel('Freq (Hz)')
ylabel('amplitude spectrum')
plt.show()
#Power Spectrum
plot(frq,abs(Y)**2,'r')
title('Figure 2b: Power Spectrum')
xlabel('Freq (Hz)')
ylabel('power spectrum')
plt.show()
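An equivalent and less error-prone route to the one-sided spectrum is np.fft.rfft, which returns only the non-negative frequencies, so the manual range(n/2) slicing disappears (a sketch reusing y, n and Ts from above):

Y1 = np.fft.rfft(y)/n          # one-sided spectrum, same normalization as above
f1 = np.fft.rfftfreq(n, d=Ts)  # matching one-sided frequency axis
plot(f1, abs(Y1), 'r')
xlabel('Freq (Hz)')
ylabel('amplitude spectrum')
show()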
#Exercise 3:
#part 1
t = np.linspace(-0.5*pi,0.5*pi,1000)
#contaminating our waves with successively increasing white noise
y_1 = sin(15*t) + np.random.normal(0,0.2*pi,1000)
y_2 = sin(15*t) + np.random.normal(0,0.3*pi,1000)
y_3 = sin(15*t) + np.random.normal(0,0.4*pi,1000)
y = y_1 + y_2 + y_3 # superposition of three contaminated waves
#Plotting the figure
plot(t,y,'-')
title('A superposition of three waves contaminated with Gaussian Noise')
xlabel('Time (s)')
ylabel('Y(t)')
show()
delta = pi/1000.0
n = len(y) ## calculate frequency in Hz
freq = fftfreq(n, delta) # Computing the FFT
Freq = fftfreq(len(y), delta) #Using Fast Fourier Transformation to #calculate frequencies
N = len(Freq)
fr = Freq[1:len(Freq)/2.0]
A = fft(y)
XF = A[1:len(A)/2.0]/float(len(A[1:len(A)/2.0]))
# Amplitude spectrum for contaminated waves
plt.plot(fr, abs(XF))
title('Figure 3a : Amplitude spectrum with Gaussian Noise')
xlabel('frequency')
ylabel('Amplitude')
show()
# Power spectrum for contaminated waves
plt.plot(fr,abs(XF)**2)
title('Figure 3b: Power spectrum with Gaussian Noise')
xlabel('frequency(cycles/year)')
ylabel('Power')
show()
# part 2
F_v = exp(-(abs(freq)-2)**2/2*0.5**2)
spectrum = A*F_v #Applying the Gaussian Filter to clean our waves
new_y = ifft(spectrum) #Computing the inverse FFT
plot(t,new_y,'-')
title('A superposition of three waves after Noise Filtering')
xlabel('Time (s)')
ylabel('Y(t)')
show()

Something like the code/images below would have been expected. I deviated in the plot of the sum of the three noisy waves to show all three waves as well as the sum. Note that in the intensity spectrum of the noisy wave you don't see much; for those cases it can be instructive to also plot the logarithm of the spectrum (np.log) so you can see the noise better, as in the last subplot of the code below.
In the last plot I plotted both the Gaussian filter and the spectrum (different sizes) without rescaling, just to show where the filter applies. It is effectively a low-pass filter (it lets low frequencies through), removing the higher-frequency noise by multiplying it with numbers close to zero.
import numpy as np
import matplotlib.pyplot as p
%matplotlib inline
#1(b)
p.figure(figsize=(20,16))
p.subplot(431)
t = np.arange(0,10, 0.001) #units in seconds
#cleaner to show the frequency change explicitly than y = sin(2.5*t*t)
f = 1 + t*0.1 # linear up chirp, i.e. frequency goes up, frequency units in Hz (1/sec); strictly, the instantaneous frequency of sin(2*pi*f*t) is d(f*t)/dt = 1 + 0.2*t here
y = np.sin(2* np.pi* f* t)
p.plot(t, y, '-')
p.title('Figure 2: Plotting a wave whose frequency depends on time ')
p.xlabel('Time (s)')
p.ylabel('Y(t)')
#b.2
Fs = 150.0; # sampling rate
Ts = 1.0/Fs; # sampling interval
t = np.arange(0,1,Ts) # time vector
ff = 5; # frequency of the signal
y = np.sin(2*np.pi*ff*t)
n = len(y) # length of the signal
k = np.arange(n) ## ok, the FFT has as many points in frequency space, as the original in time
T = n/Fs ## correct; T = total sampling time, and the frequency resolution is 1/T
frq = k/T # two sided frequency range
frq = frq[range(n/2)] # one sided frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n/2)]
# Amplitude vs. Time
p.subplot(434)
p.plot(t,y)
p.title('y(t)') # Amplitude vs Time is commonly said, but strictly not true, the amplitude is unchanging
p.xlabel('Time')
p.ylabel('Amplitude')
#Amplitude Spectrum
p.subplot(435)
p.plot(frq,abs(Y),'r')
p.title('Figure 2a: Amplitude Spectrum')
p.xlabel('Freq (Hz)')
p.ylabel('amplitude spectrum')
#Power Spectrum
p.subplot(436)
p.plot(frq,abs(Y)**2,'r')
p.title('Figure 2b: Power Spectrum')
p.xlabel('Freq (Hz)')
p.ylabel('power spectrum')
#Exercise 3:
#part 1
t = np.linspace(-0.5*np.pi,0.5*np.pi,1000)
# #contaminating our waves with successively increasing white noise
y_1 = np.sin(15*t) + np.random.normal(0,0.1,1000) # no need to get pi involved in this amplitude
y_2 = np.sin(15*t) + np.random.normal(0,0.2,1000)
y_3 = np.sin(15*t) + np.random.normal(0,0.4,1000)
y = y_1 + y_2 + y_3 # superposition of three contaminated waves
#Plotting the figure
p.subplot(437)
p.plot(t,y_1+2,'-',lw=0.3)
p.plot(t,y_2,'-',lw=0.3)
p.plot(t,y_3-2,'-',lw=0.3)
p.plot(t,y-6 ,lw=1,color='black')
p.title('A superposition of three waves contaminated with Gaussian Noise')
p.xlabel('Time (s)')
p.ylabel('Y(t)')
delta = np.pi/1000.0
n = len(y) ## calculate frequency in Hz
# freq = np.fft(n, delta) # Computing the FFT <-- wrong, you don't calculate the FFT from a number, but from a time dep. vector/array
# Freq = np.fftfreq(len(y), delta) #Using Fast Fourier Transformation to #calculate frequencies
# N = len(Freq)
# fr = Freq[1:len(Freq)/2.0]
# A = fft(y)
# XF = A[1:len(A)/2.0]/float(len(A[1:len(A)/2.0]))
# Why not do as before?
k = np.arange(n) ## ok, the FFT has as many points in frequency space, as the original in time
Fs = 1.0/delta ## the sampling rate of THIS signal (delta = pi/1000 s), not the 150 Hz left over from b.2
T = n/Fs ## T = total sampling time; the frequency resolution is 1/T
frq = k/T # two sided frequency range
frq = frq[range(n/2)] # one sided frequency range
Y = np.fft.fft(y)/n # fft computing and normalization
Y = Y[range(n/2)]
# Amplitude spectrum for contaminated waves
p.subplot(438)
p.plot(frq, abs(Y))
p.title('Figure 3a : Amplitude spectrum with Gaussian Noise')
p.xlabel('frequency')
p.ylabel('Amplitude')
# Power spectrum for contaminated waves
p.subplot(439)
p.plot(frq,abs(Y)**2)
p.title('Figure 3b: Power spectrum with Gaussian Noise')
p.xlabel('Freq (Hz)')
p.ylabel('Power')
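# Aside (a sketch): on a linear scale the noise floor is hard to see; plotting the
# logarithm of the power spectrum (np.log) makes the noise much more visible.
p.subplot(4,3,12)
p.plot(frq, np.log(np.abs(Y)**2))
p.title('log power spectrum (noise floor visible)')
p.xlabel('Freq (Hz)')
p.ylabel('log power')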
# part 2
p.subplot(4,3,11)
F_v = np.exp(-(np.abs(frq)-2)**2/(2*0.5**2)) ## a Gaussian centred at 2 Hz with sigma = 0.5 (note the parentheses around 2*sigma**2); plot it separately to see it; play with the values
cleaned_spectrum = Y*F_v #Applying the Gaussian Filter to clean our waves ## multiplication in the frequency domain is convolution in the time domain
p.plot(frq,F_v)
p.plot(frq,abs(cleaned_spectrum)) # abs() since the spectrum is complex
p.subplot(4,3,10)
new_y = np.fft.ifft(cleaned_spectrum) #Computing the inverse FFT of the cleaned spectrum to see the cleaned wave
p.plot(t[range(n/2)],new_y.real,'-') # ifft of the one-sided spectrum is complex; keep the real part for plotting
p.title('A superposition of three waves after Noise Filtering')
p.xlabel('Time (s)')
p.ylabel('Y(t)')
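On Q3.2's "Is the resultant wave 100% clean?": no. The Gaussian filter strongly attenuates components far from its centre, but noise that happens to fall under the bump of the Gaussian, i.e. near the signal frequency, is only mildly attenuated, so some noise always survives in the reconstruction. A quick numerical check (a sketch reusing frq, Y and cleaned_spectrum from above; 2.4 Hz is just the 15 rad/s line converted to Hz):

in_band = np.abs(frq - 2.4) < 1.0   # bins near the signal line at ~2.4 Hz
out_band = ~in_band
print("out-of-band noise power before filtering: %.3g" % np.sum(np.abs(Y[out_band])**2))
print("out-of-band noise power after filtering:  %.3g" % np.sum(np.abs(cleaned_spectrum[out_band])**2))
print("in-band power ratio after/before (noise here survives): %.2f"
      % (np.sum(np.abs(cleaned_spectrum[in_band])**2) / np.sum(np.abs(Y[in_band])**2)))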

Related

Is Time Resolution always the best in Continuous Wavelet Transform (CWT)?

Does the wavelet move one sample point at a time in the CWT? This seems to give the best possible time resolution at all times, regardless of the scale. In the example below, the signal length is 2048, and for any scale value the calculated coef has a length of 2048.
import pywt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib notebook
# Define signal
fs = 1024.0
dt = 1 / fs
signal_frequency = 100
scale = np.arange(2, 256)
wavelet = 'morl'
t = np.linspace(0, 2, int(2 * fs))
y = np.sin(signal_frequency*2*np.pi*t)
# Calculate continuous wavelet transform
coef, freqs = pywt.cwt(y, scale, wavelet, dt)
# Show w.r.t. time and frequency
plt.figure(figsize=(6, 4))
plt.pcolor(t, freqs, coef, shading = 'flat', cmap = 'jet')
# Set yscale, ylim and labels
# plt.yscale('log')
# plt.ylim([0, 20])
plt.ylabel('Frequency (Hz)')
plt.xlabel('Time (sec)')
cbar = plt.colorbar()
cbar.set_label('Energy Intensity')
plt.show()
If my guess is right, it contradicts my previous understanding that we use an expanded wavelet for low-frequency components because we want good frequency resolution but poor time resolution in the low-frequency range, as shown in the picture.
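The output does have one coefficient per sample at every scale, but that does not mean the effective time resolution is one sample everywhere: each coefficient at scale a is an inner product with a wavelet whose time support grows proportionally to a, so at large scales neighbouring coefficients overlap in most of their input and are strongly correlated; the per-sample output is just a dense sampling of a smooth function. A sketch of the scale-to-frequency mapping with pywt, reusing the numbers from the code above:

import numpy as np
import pywt

fs = 1024.0
scales = np.arange(2, 256)
# pywt.scale2frequency returns the normalized centre frequency of the wavelet
# at each scale; dividing by the sampling period (1/fs) converts it to Hz
freqs = pywt.scale2frequency('morl', scales) / (1.0 / fs)
print(freqs[0], freqs[-1])  # small scale -> high frequency, large scale -> low frequency
# the wavelet's support grows linearly with scale, so frequency resolution improves
# and the effective time resolution worsens at low frequencies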

Component reconstruction for multivariate lagged time series

I am trying to write a multivariate Singular Spectrum Analysis with a Monte Carlo test. To this end I am working on a code piece that can reconstruct the input series using the lagged trajectory matrix and the projection base (ST-PCs) that result from the pca/ssa decomposition of the input series. The attached code piece works for a lagged univariate (that is, single) time series, but I am struggling to make this reconstruction work for a lagged multivariate time series. I don't quite get the procedure mathematically and, not surprisingly, I also did not manage to program it. Useful links are attached to the function descriptions in the accompanying code. Input data should be of the form (time x number of series), so say 288x3, implying 3 time series of 288 time levels.
I hope you can help me out!
import numpy as np
def lagged_covariance_matrix(data, M):
    """ Computes the lagged covariance matrix using the Broomhead & King method
    Background: Plaut, G., & Vautard, R. (1994). Spells of low-frequency oscillations and
    weather regimes in the Northern Hemisphere. Journal of the atmospheric sciences, 51(2), 210-236.
    Arguments:
    data : pxn time series, where p denotes the length of the time series and n the number of channels
    M : window length """
    # explicitly 'add' a spatial dimension if the input is a single time series
    if np.ndim(data) == 1:
        data = np.reshape(data, (len(data), 1))
    T = data.shape[0]
    L = data.shape[1]
    N = T - M + 1
    X = np.zeros((T, L, M))
    for i in range(M):
        X[:,:,i] = np.roll(data, -i, axis = 0)
    X = X[:N]
    # X constitutes the trajectory matrix and is a stacked Hankel matrix
    X = np.reshape(X, (N, M*L), order = 'C') # https://www.jstatsoft.org/article/viewFile/v067i02/v67i02.pdf
    # choose the smallest projection basis for computation of the covariance matrix
    if M*L >= N:
        return 1/(M*L) * X.dot(X.T), X
    else:
        return 1/N * X.T.dot(X), X
def sort_by_eigenvalues(eigenvalues, PCs):
    """ Sorts the PCs and eigenvalues by descending size of the eigenvalues """
    desc = np.argsort(-eigenvalues)
    return eigenvalues[desc], PCs[:,desc]
def Reconstruction(M, E, X):
    """ Reconstructs the series as the sum of M subseries.
    See: https://en.wikipedia.org/wiki/Singular_spectrum_analysis, 'Basic SSA' &
    the work of Vivien Sainte Fare Garnot on univariate time series (https://github.com/VSainteuf/mcssa)
    Arguments:
    M : window length
    E : eigenvector basis
    X : trajectory matrix """
    time = len(X) + M - 1
    RC = np.zeros((time, M))
    # step 3: grouping
    for i in range(M):
        d = np.zeros(M)
        d[i] = 1
        I = np.diag(d)
        Q = np.flipud(X @ E @ I @ E.T)
        # step 4: diagonal averaging
        for k in range(time):
            RC[k, i] = np.diagonal(Q, offset = -(time - M - k)).mean()
    return RC
#=====================================================================================================
#=====================================================================================================
#=====================================================================================================
# input data
data = None
# number of lags a.k.a. window length
M = 45 # M = 1 means no lag
covmat, X = lagged_covariance_matrix(data, M)
# get the eigenvalues and vectors of the covariance matrix
vals, vecs = np.linalg.eig(covmat)
eig_data, eigvec_data = sort_by_eigenvalues(vals, vecs)
# component reconstruction
recons_data = Reconstruction(M, eigvec_data, X)
The following works but does not make direct use of the projection base (ST-PCs). Hence the original question still stands, but this already helps a great deal and solves the problem for me. This code piece makes use of the similarity between the ST-PCs projection base and the u & vt matrices obtained from the singular value decomposition of the lagged trajectory matrix. I think it gives back the same answer as one would obtain using the ST-PCs projection base?
def lag_reconstruction(data, X, M, pairs = None):
    """ Reconstructs the series as the sum of M subseries using the lagged trajectory matrix.
    Based on equation 2.9 of Plaut, G., & Vautard, R. (1994). Spells of low-frequency oscillations and weather regimes in the Northern Hemisphere. Journal of Atmospheric Sciences, 51(2), 210-236.
    Inspired by work of R. van Westen and C. Wieners """
    time = data.shape[0] # number of time levels of the original series
    L = data.shape[1] # number of input series
    N = time - M + 1
    u, s, vt = np.linalg.svd(X, full_matrices = False)
    rc = np.zeros((time, L, M))
    for t in range(time):
        counter = 0
        for i in range(M):
            if t-i >= 0 and t-i < N:
                counter += 1
                if pairs:
                    for k in pairs:
                        rc[t,:,i] += u[t-i, k] * s[k] * vt[k, i*L : i*L + L]
                else:
                    for k in range(len(s)):
                        rc[t,:,i] += u[t-i, k] * s[k] * vt[k, i*L : i*L + L]
        rc[t] = rc[t]/counter
    return rc
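A quick synthetic sanity check of the multivariate reconstruction, assuming the functions above (the test data is my own construction, 288 time levels and 3 channels as in the question): summing the reconstructed components over the lag axis should return the original series.

t_ax = np.linspace(0, 8*np.pi, 288)
data = np.stack([np.sin(t_ax), np.cos(2*t_ax), np.sin(3*t_ax)], axis=1)  # 288x3
M = 45
covmat, X = lagged_covariance_matrix(data, M)
rc = lag_reconstruction(data, X, M)  # shape (288, 3, 45)
recon = rc.sum(axis=2)               # sum over the M subseries
print(np.allclose(recon, data))      # True: the full set of components reconstructs the input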

DCT based compressive sensing

Could you please help me understand the following part?
A = zeros(M,N);
for k = 1:M
    A(k,:) = dct(Phi(k,:));
end
Result2 = BSBL_BO(A,y,groupStartLoc,0,'prune_gamma',-1, 'max_iters',20);
Why does the author calculate the DCT of the sensing matrix Phi?
What I know is that
y = Phi * DCT(x)
Thus, we need to find DCT(x), not DCT(Phi).
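One way to see what the author is doing: MATLAB's dct is orthonormal, so writing it as a matrix C with C'*C = I gives y = Phi*x = Phi*C'*(C*x) = A*dct(x), where A(k,:) = dct(Phi(k,:)) is exactly the loop above. The solver therefore recovers theta = dct(x), and idct(Result2.x) maps it back to the signal. A numerical check of that identity (a sketch in Python using scipy's orthonormal DCT, which mirrors MATLAB's dct; the random Phi and x here are stand-ins for Phi.mat and the ECG segment):

import numpy as np
from scipy.fftpack import dct

M, N = 125, 250
rng = np.random.RandomState(0)
Phi = (rng.rand(M, N) < 0.1).astype(float)  # stand-in sparse binary sensing matrix
x = rng.randn(N)                            # stand-in signal

y = Phi.dot(x)
A = dct(Phi, type=2, norm='ortho', axis=1)  # DCT of every row of Phi
theta = dct(x, type=2, norm='ortho')        # DCT coefficients of x
print(np.allclose(A.dot(theta), y))         # True: A * dct(x) == Phi * x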
The complete code:
% Example showing the ability of BSBL to recover non-sparse signals.
% The signal to recover is a real-world fetal ECG data
clear; close all;
N = 250;
M = 125;
% load fetal ECG data
load ECGsegment.mat;
x = double(ecg);
% load a sparse binary matrix.
load Phi.mat;
% =========================== Compress the ECG data ====================
y = Phi * x;
% First recover the signal's coefficients in the DCT domain;
% Then recover the signal using the DCT coefficients and the DCT basis.
% Look at the coefficients when representing the fetal ECG signal in the DCT
% domain; still, the coefficients are not sparse. Recovering all the
% coefficients is not possible for most compressed sensing algorithms!
set(0, 'DefaultFigurePosition', [400 150 500 400]);
figure(2);
plot(dct(ecg)); title('\bf DCT coefficients of the fetal ECG signal; They are still not sparse!');
% Now we use BSBL-BO. Its block size is randomly set ( = 25, as in the
% above experiment).
A = zeros(M,N);
for k = 1:M
    A(k,:) = dct(Phi(k,:));
end
Result2 = BSBL_BO(A,y,groupStartLoc,0,'prune_gamma',-1, 'max_iters',20);
signal_hat = idct(Result2.x);
set(0, 'DefaultFigurePosition', [800 150 500 400]);
figure(3);
subplot(211);plot(signal_hat); title('\bf Reconstructed by BSBL-BO from DCT Domain');
subplot(212);plot(x,'r');title('\bf Original ECG Signal');

Complex cross spectral density

mlab.csd from matplotlib (http://matplotlib.org/api/mlab_api.html#matplotlib.mlab.csd) can be used to get a real-valued cross spectral density. If I want to get the phase information from the spectral density, I need a csd calculation that returns complex values. Is there one?
This is discussed e.g. in this answer: https://stackoverflow.com/a/29306730/3920342
If you use csd of the mlab library you will get complex values, so you can calculate phase angles (and the real-valued coherence). In the following code s1 and s2 contain the two signals (in time domain) to be correlated.
from matplotlib import mlab
import numpy as np
import matplotlib.pyplot as plt
# First create power spectral densities for normalization
(ps1, f) = mlab.psd(s1, Fs=1./dt, scale_by_freq=False)
(ps2, f) = mlab.psd(s2, Fs=1./dt, scale_by_freq=False)
plt.plot(f, ps1)
plt.plot(f, ps2)
# Then calculate cross spectral density
(csd, f) = mlab.csd(s1, s2, NFFT=256, Fs=1./dt,sides='default', scale_by_freq=False)
fig = plt.figure()
ax1 = fig.add_subplot(1, 2, 1)
# Normalize cross spectral absolute values by auto power spectral density
ax1.plot(f, np.absolute(csd)**2 / (ps1 * ps2))
ax2 = fig.add_subplot(1, 2, 2)
angle = np.angle(csd, deg=True)
angle[angle<-90] += 360
ax2.plot(f, angle)
# zoom in on frequency with maximum coherence
ax1.set_xlim(9, 11)
ax1.set_ylim(0, 1e-0)
ax1.set_title("Cross spectral density: Coherence")
ax2.set_xlim(9, 11)
ax2.set_ylim(0, 90)
ax2.set_title("Cross spectral density: Phase angle")
Here are the real and imaginary(!) parts of the cross spectral density:
The code to create the two signals s1 and s2 is taken from the question How to use the cross-spectral density to calculate the phase shift of two related signals:
"""
Compute the coherence of two signals
"""
import numpy as np
import matplotlib.pyplot as plt
# make a little extra space between the subplots
plt.subplots_adjust(wspace=0.5)
nfft = 256
dt = 0.01
t = np.arange(0, 30, dt)
nse1 = np.random.randn(len(t)) # white noise 1
nse2 = np.random.randn(len(t)) # white noise 2
r = np.exp(-t/0.05)
cnse1 = np.convolve(nse1, r, mode='same')*dt # colored noise 1
cnse2 = np.convolve(nse2, r, mode='same')*dt # colored noise 2
# two signals with a coherent part and a random part
s1 = 0.01*np.sin(2*np.pi*10*t) + cnse1
s2 = 0.01*np.sin(2*np.pi*10*t) + cnse2
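As an aside, scipy provides the same functionality: scipy.signal.csd also returns a complex-valued cross spectral density, so the phase comes out the same way (a sketch reusing s1, s2 and dt from above):

from scipy import signal
f, Pxy = signal.csd(s1, s2, fs=1.0/dt, nperseg=256)  # Pxy is complex
phase_deg = np.angle(Pxy, deg=True)                  # phase spectrum in degrees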

How does numpy decide the number of coefficients in a smooth bivariate spline?

I am using SmoothBivariateSpline from the scipy library to get a surface fit of some data.
By default the degree of the bivariate spline is 3. When I fit my data with degree 3, I should get 16 coefficients and 8 knot points in the x and y directions respectively. This happens for one of the spline fits (f1), but I am getting 25 coefficients and 9 knot points for the second fit (f2).
Can someone please tell me why I am getting a larger number of coefficients? If you want, I can share my data file as well.
Here is my code:
import numpy as np
from scipy import interpolate
# load the data from a text file
L = np.loadtxt('DATA_File_PT.txt')
P = L[:,0] # pressure
T = L[:,1] # temperature
H = L[:,2] # enthalpy
Rho = L[:,3] # density
# fitting Spline for each set (I have P and T as independent variable, H and Rho are dependent)
f1 = interpolate.SmoothBivariateSpline(P, T, H, kx=3, ky=3)
f2 = interpolate.SmoothBivariateSpline(P, T, Rho, kx=3, ky=3)
o1=f1.get_knots()
coeff1 = f1.get_coeffs()
o2=f2.get_knots()
coeff2 = f2.get_coeffs()
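The counts follow from the B-spline dimension formula: a tensor-product spline of degrees kx, ky with nx and ny knots in the two directions has (nx - kx - 1) * (ny - ky - 1) coefficients. With the minimal 8 knots per direction and k = 3 that is (8-4)*(8-4) = 16; if FITPACK inserts one extra interior knot per direction to satisfy the smoothing condition, you get (9-4)*(9-4) = 25, which is what happened for f2. A quick check against the fitted objects (a sketch assuming f1 and f2 from above):

for f in (f1, f2):
    tx, ty = f.get_knots()  # knot vectors in x and y
    n_expected = (len(tx) - 3 - 1) * (len(ty) - 3 - 1)
    print(len(tx), len(ty), n_expected, len(f.get_coeffs()))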