The process to get an observation signal - compression

First, a small number of active users out of K total users each transmit a frame of convolutionally encoded binary bits, optionally interleaved, then mapped to the Binary Phase-Shift Keying (BPSK) constellation. Inactive users transmit all-zero frames, which results in an augmented BPSK constellation {0, ±1}. All frames are then spread with bipolar-valued Pseudo-Noise (PN) spreading sequences before passing through their respective wireless channels (modeled as i.i.d. Rayleigh fading channels with an exponentially decaying power delay profile whose expected decay is known). The superposition of these signals after the channels is the observation y that the aggregation node or gateway receives.
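Before the complete script below, here is a minimal sketch of how one such observation frame can be formed. The names K, L, Ns and Pactive mirror the script; the random spreading matrix, the flat Rayleigh gains and the omission of convolutional coding/interleaving are simplifying assumptions, not the RayleighChannel/RVD model used later.

K = 128; L = 104; Ns = K/4; Pactive = 0.02; SNRdB = 10;
active = rand(K,1) < Pactive;                    % sparse user activity pattern
bits   = randi([0 1], K, L);                     % one frame of bits per user
s      = (1 - 2*bits) .* active;                 % augmented BPSK: 0 for inactive users, +/-1 otherwise
C      = sign(randn(Ns, K));                     % bipolar PN spreading sequences (one column per user)
h      = (randn(K,1) + 1i*randn(K,1))/sqrt(2);   % flat Rayleigh gains (stand-in for the multipath model)
Y      = C * diag(h) * s;                        % superposition of all spread signals, Ns x L
y      = awgn(Y(:), SNRdB, 'measured');          % noisy observation at the gateway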
%%%%%%%%%
clc;
clear;
format long;
K = 128;              % number of users
Ns = K/4;             % spreading factor
Lh = 6;               % number of channel taps
L = 104;              % frame length
Pactive = 0.02;       % activity probability of each user
nTx = K*Pactive;      % expected number of active users
nBlocks = 10;         % number of blocks for each SNR
SNR = 0:5:25;         % dB
for iSNR = 1:length(SNR)
    for iBlock = 1:nBlocks
        X_origin = [];                   % reset the stacked signal vector for this block
        for i = 1:K
            r_K = randperm(L);
            x = zeros(L,1);
            for j = 1:nTx
                x(r_K(j)) = 1;
            end
            X_origin = [X_origin; x];    % stacked signal vector
        end
        inSignals = pskmod(X_origin, 2);          % BPSK modulation
        cH = RayleighChannel(Ns*L+Lh-1, K*L);     % channel matrix
        rH = RVD(cH);
        cB = cH*inSignals;
        cY = awgn(cB, SNR(iSNR), 'measured');     % add noise, n ~ N(0, sigma2)
        rY = RVD(cY);
        spower = mean(abs(cY(:)).^2);
        cG = cH'*cH;
        rG = rH'*rH;
        sigma2 = spower/(10^(SNR(iSNR)/10));
        cW = cG + eye(K*L)*sigma2;
        rW = RVD(cW);
        rYBar = rH'*rY;
        rSHat = rW\rYBar;                % LMMSE-type estimate (backslash instead of explicit inv)
        A = rH*rSHat;
        S = CS_gOMP(rY, A, nTx);
        % X_reconstruction = rSHat*X_r;
        [~, vBer1(iBlock)] = biterr(X_origin, S);
    end
    % ber(iSNR) = mean(vBer);
    ber1(iSNR) = mean(vBer1);
    SNR(iSNR)
    ber1(iSNR)
end
semilogy(SNR, ber1, 'go--');
% set(P1, 'Linewidth', 2);
grid on;
title(['\fontsize{12}\bfCS-MUD: \rm', 'BPSK, N=', int2str(K*L), ', K=', int2str(K)]);
xlabel('SNR (dB)'); ylabel('BER');
legend('SMV-CS-MUD');


DCT based compressive sensing

Please, I need help understanding the following parts.
**A=zeros(M,N);**
**for k=1:M**
**A(k,:)=dct(Phi(k,:));**
**end**
Result2 = BSBL_BO(A,y,groupStartLoc,0,'prune_gamma',-1, 'max_iters',20);
Why does the author calculate the DCT of the sensing matrix Phi?
What I know is that
y = Phi * DCT(x)
Thus, we need to find DCT(x), not DCT(Phi).
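For what it's worth, a quick numerical check of this relation (assuming MATLAB's orthonormal dct/idct, and stand-in Phi and x rather than the demo's data): taking the DCT of each row of Phi builds A = Phi*D', where D is the DCT matrix, so Phi*x = A*dct(x). In other words, A maps the DCT coefficients of x to the same measurements y.

N = 250; M = 125;
x   = randn(N,1);                 % stand-in signal (the demo uses the ECG segment)
Phi = randn(M,N);                 % stand-in sensing matrix (the demo loads a sparse binary Phi)
A   = zeros(M,N);
for k = 1:M
    A(k,:) = dct(Phi(k,:));       % row-wise DCT, i.e. A = Phi*D' for the orthonormal DCT matrix D
end
theta = dct(x);                   % DCT coefficients of the signal
norm(Phi*x - A*theta)             % ~1e-12: y = Phi*x equals A*dct(x)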
The complete code:
% Example showing the ability of BSBL to recover non-sparse signals.
% The signal to recover is a real-world fetal ECG data
clear; close all;
N = 250;
M = 125;
% load fetal ECG data
load ECGsegment.mat;
x = double(ecg);
% load a sparse binary matrix.
load Phi.mat;
% =========================== Compress the ECG data ====================
y = Phi * x;
% First recover the signal's coefficients in the DCT domain;
% Then recover the signal using the DCT coefficients and the DCT basis.
% Look at the coefficients when representing the fetal ECG signal in the DCT
% domain; still, the coefficients are not sparse. Recovering all the
% coefficients is almost impossible for most compressed sensing algorithms!
set(0, 'DefaultFigurePosition', [400 150 500 400]);
figure(2);
plot(dct(ecg)); title('\bf DCT coefficients of the fetal ECG signal; They are still not sparse!');
% Now we use BSBL-BO. Its block size is arbitrarily set (= 25, as in the
% above experiment).
blkLen = 25;                    % block size used by BSBL-BO
groupStartLoc = 1:blkLen:N;     % start location of each block, required by BSBL_BO below
A = zeros(M,N);
for k = 1:M
    A(k,:) = dct(Phi(k,:));     % row-wise DCT, so that A = Phi * (DCT basis)'
end
Result2 = BSBL_BO(A,y,groupStartLoc,0,'prune_gamma',-1, 'max_iters',20);
signal_hat = idct(Result2.x);
set(0, 'DefaultFigurePosition', [800 150 500 400]);
figure(3);
subplot(211);plot(signal_hat); title('\bf Reconstructed by BSBL-BO from DCT Domain');
subplot(212);plot(x,'r');title('\bf Original ECG Signal');

Why does polyxpoly not work in GNU Octave?

I want to plot a DET curve and a ROC curve. Why does polyxpoly not work?
I plotted a DET curve based on the following steps: first, I change the threshold and count the number of false rejections and false acceptances. Second, I use MATLAB's plot function to draw the FAR and FRR.
function [TPR,FPR] = DETCurve(G,I)
  #load('G.dat');
  #load('I.dat');
  # load the data from column 4 (the F-score); Fscore must exist in scope
  i0 = find(Fscore(:,end) == 0);
  i1 = find(Fscore(:,end) == 1);
  D0 = Fscore(i0,end-1);
  D1 = Fscore(i1,end-1);
  % Preallocate: the sweep 0:0.001:1 contains 1001 threshold values
  TPR = zeros(1, 1001);
  FPR = zeros(1, 1001);
  # number of positive and negative responses in the ground truth
  P = length(i1);
  N = length(i0);
  index = 0;
  % Sweep the threshold from 0 to 1 in steps of 0.001
  for threshold = 0:0.001:1
    TP = 0;
    FP = 0;
    % Count the positive scores at or above the threshold
    for k = 1:length(i1)
      if D1(k) >= threshold
        TP = TP + 1;
      end
    end
    % Count the negative scores at or above the threshold
    for k = 1:length(i0)
      if D0(k) >= threshold
        FP = FP + 1;
      end
    end
    index = index + 1;
    % True positive rate
    TPR(index) = TP/P;
    % False positive rate
    FPR(index) = FP/N;
  end
  % False negative rate (FNR), using TPR + FNR = 1
  FNR = 1 - TPR;
  x = 0:0.1:1;
  y = x;
  # polyxpoly is a Mapping Toolbox function; the lines below depend on its output
  #[xi,yi] = polyxpoly(x,y,FPR,FNR);
  #fprintf('EER(X): %f\n', xi);
  #fprintf('EER(Y): %f\n', yi);
  plot(FPR, FNR, 'LineWidth', 2, 'color', 'g');
  hold on;
  plot(x, y, x, 1-y, 'color', 'r');
  #plot(xi, yi, 'X', 'MarkerSize', 10, 'LineWidth', 2, 'Color', 'b');
  hold off;
  title('DET CURVE');
  xlabel('False Positive Rate (FPR)');
  ylabel('False Negative Rate (FNR)');
end
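For reference, polyxpoly belongs to MATLAB's Mapping Toolbox and is not part of core Octave, which is why the call fails there. One possible workaround (a sketch, not the poster's code) is to locate the equal-error-rate point by linearly interpolating where FPR and FNR cross, using the vectors computed above and the same 0:0.001:1 threshold grid:

thresholds = 0:0.001:1;
FNR = 1 - TPR;
d = FPR - FNR;                                   % changes sign at the EER crossing
k = find(d(1:end-1).*d(2:end) <= 0, 1);          % first sign change along the sweep
t = d(k) / (d(k) - d(k+1));                      % linear interpolation weight
EER = FPR(k) + t*(FPR(k+1) - FPR(k));            % equal error rate
thrEER = thresholds(k) + t*(thresholds(k+1) - thresholds(k));
fprintf('EER = %f at threshold %f\n', EER, thrEER);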

Tensorflow RNN training won't execute?

I am currently trying to train this RNN network, but I seem to be running into weird errors which I am not able to decode.
The input to my RNN is digitally sampled audio files. Since the audio files can have different lengths, the vectors of sampled audio also have different lengths.
The output, or target, of the neural network is to recreate a 14-dimensional vector containing certain information about the audio file. I already know the targets, having calculated them manually, but I need to make this work with a neural network.
I am currently using TensorFlow as the framework.
My network setup looks like this:
def last_relevant(output):
    max_length = int(output.get_shape()[1])
    relevant = tf.reduce_sum(tf.mul(output, tf.expand_dims(tf.one_hot(length, max_length), -1)), 1)
    return relevant

def length(sequence):  ## Zero padding to fit the max length... Question whether that is a good idea.
    used = tf.sign(tf.reduce_max(tf.abs(sequence), reduction_indices=2))
    length = tf.reduce_sum(used, reduction_indices=1)
    length = tf.cast(length, tf.int32)
    return length

def cost(output, target):
    # Compute cross entropy for each frame.
    cross_entropy = target * tf.log(output)
    cross_entropy = -tf.reduce_sum(cross_entropy, reduction_indices=2)
    mask = tf.sign(tf.reduce_max(tf.abs(target), reduction_indices=2))
    cross_entropy *= mask
    # Average over actual sequence lengths.
    cross_entropy = tf.reduce_sum(cross_entropy, reduction_indices=1)
    cross_entropy /= tf.reduce_sum(mask, reduction_indices=1)
    return tf.reduce_mean(cross_entropy)
#----------------------------------------------------------------------#
#----------------------------Main--------------------------------------#
### Tensorflow neural network setup
batch_size = None
sequence_length_max = max_length
input_dimension=1
data = tf.placeholder(tf.float32,[batch_size,sequence_length_max,input_dimension])
target = tf.placeholder(tf.float32,[None,14])
num_hidden = 24 ## Hidden layer
cell = tf.nn.rnn_cell.LSTMCell(num_hidden,state_is_tuple=True) ## Long short term memory
output, state = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32,sequence_length = length(data)) ## Creates the Rnn skeleton
last = last_relevant(output)  # tf.gather(val, int(val.get_shape()[0]) - 1) ## Appending as last
weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
cross_entropy = cost(output,target)# How far am I from correct value?
optimizer = tf.train.AdamOptimizer() ## TensorflowOptimizer
minimize = optimizer.minimize(cross_entropy)
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
## Training ##
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
batch_size = 1000
no_of_batches = int(len(train_data)/batch_size)
epoch = 5000
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_data[ptr:ptr+batch_size], train_output[ptr:ptr+batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print "Epoch - ", str(i)
incorrect = sess.run(error, {data: test_data, target: test_output})
print('Epoch {:2d} error {:3.1f}%'.format(i + 1, 100 * incorrect))
sess.close()
The error seems to be in the usage of the function last_relevant, which should take the output and feed it back.
This is the error message:
TypeError: Expected binary or unicode string, got <function length at 0x7f846594dde8>
Any way to tell what could be wrong here?
I tried to build your code locally.
There is a fundamental mistake in the code: you call tf.one_hot, but what you pass doesn't really fit what is expected.
Read documentation here:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard6/tf.one_hot.md
tf.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)
However, you are passing a function reference as the first parameter ("length" is a function in your code; I recommend naming your functions meaningfully and avoiding such common names).
As a rough guide, you can put your indices as the first parameter (instead of my placeholder empty list) and it will be fixed:
relevant = tf.reduce_sum(
tf.mul(output, tf.expand_dims(tf.one_hot([], max_length), -1)), 1)

How to synchronize audio with the power spectrum and choose the frame length N (to do the FFT)?

I am writing a music visualizer program in C++. It shows the frequency spectrum of the audio input. I used Aquila-dsp to get the audio samples, Kiss-fft for the FFT, and SFML to play the audio. The input is in (.wav) format. OpenGL is used to plot the graph.
Algorithm Used:
1. *framePointer = 0, N = 10000;*
2. Load audio file and play it using SFML.
3. For *i* = *framePointer* to *framePointer* + *N*, while *framePointer* + *N* < *total_samples_count*:
Collect audio samples.
4. Apply Window Function (Hann window)
5. Apply *FFT*
6. Calculate magnitude of first N/2 *FFT* data
*Magnitude* = sqrt( re * re + im * im)
7. Convert to dB(log) scale (optional)
10*log(magnitude)
8. Plot N/2, log(magnitude) values
9. If *framePointer* >= *total_samples_count* - *N*
Exit
Else go to step 3.
#define N 10000
int framePointer = 0;

void getData()
{
    int i, j, x;
    Aquila::WaveFile wav(fileName);
    double mag[N/2];
    double roof = wav.getSamplesCount();
    // Get the next N samples starting at framePointer
    for (i = framePointer, j = 0; i < (framePointer + N)
         && framePointer < roof - N; i++, j++) {
        // Apply the Hann window function to the sample
        double multiplier = 0.5 * (1 - cos(2*M_PI*j/(N-1)));
        in[j].r = multiplier * wav.sample(i);
        in[j].i = 0; // stores N samples
    }
    if (framePointer < roof - N - 1) {
        framePointer = i;
    }
    else {
        printf("Frame pointer > roof - N \n");
        printf("Framepointer = %d\n", framePointer);
        // get total time and exit
        timestamp_t t1 = get_timestamp();
        double secs = (t1 - tmain) / 1000000.0L;
        std::cout << "Program exit.\nTotal time: " << secs << std::endl;
        exit(0);
    }
    // Apply FFT
    getFft(in, out);
    // Calculate the magnitude of the first N/2 FFT bins
    for (i = 0; i < N/2; i++) {
        mag[i] = sqrt((out[i].r * out[i].r) + (out[i].i * out[i].i));
        graph[i] = log(mag[i]) * 10; // note: a true dB scale would use 10*log10(), not the natural log
    }
}
I plot the graph using OpenGL.
Full source code
The problem I have is choosing the frame length (the value of N).
For a certain length of audio having:
Length: 237191 ms
Sample frequency: 44100 Hz
Channels: 2
Byte rate: 172 kB/s
Bits per sample: 16b
The graph is synchronized with the audio if I choose N = 10000, or at least it stops when the audio ends.
How do I choose N (the frame length) so that the audio stays synchronized with the spectrum?
The audio has two channels; will this algorithm work for that?
Start by deciding how often you want the visualizer to update. Let's say we want it to update 25 times per second (similar to TV or movie frame rates). That means every 1/25 of a second, or every 40 ms. At a sample rate of 44.1 kHz this translates to 44100 / 25 = 1764 samples. Since we typically want a power-of-2 FFT size, let's go for N = 2048.
This gives a resolution on the frequency axis of 44100 / 2048 = 21.5 Hz. If you want higher resolution you can overlap successive FFT windows, e.g. keeping the same update rate and overlapping by 50%, you can use N = 4096 for a resolution of 10.75 Hz.
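The same arithmetic as a quick sketch (written in MATLAB/Octave purely for convenience; the sample rate and update rate are the values assumed above):

fs = 44100;                 % sample rate (Hz)
updateRate = 25;            % desired visualizer updates per second
hop = fs / updateRate;      % samples between updates -> 1764
N = 2^nextpow2(hop);        % round up to a power of 2 -> 2048
df = fs / N;                % frequency resolution -> ~21.5 Hz
N50 = 2*N;                  % 50% overlap at the same update rate -> 4096
df50 = fs / N50;            % ~10.8 Hz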

How to calculate fiber length

I need help calculating fiber length. I found all the coordinate values of the center line of each fiber by using the regional maxima of the Euclidean distance transform. Here is the image that I got after applying the regional maxima of the Euclidean distance. Now I want to draw a line along each fiber using these points; how can I do that, so that I can extract each fiber's length automatically? I tried spline curve fitting, but the problem is that I was not able to determine the starting and ending points of each fiber. How can I calculate each fiber's length?
close all;
clear all;
clc
ima=imread('ecm61.png');
ima=bwareaopen(ima,50);
[rowsInImage,columnImage]=size(ima);
skel= bwmorph(ima,'skel',Inf);
figure
imshow(skel)
B = bwmorph(skel, 'branchpoints');
E = bwmorph(skel, 'endpoints');
[x,y] = find(E);
%plot(x,y,'+')
B_loc = find(B);
Dmask = false(size(skel));
for k = 1:numel(y)
    D = bwdistgeodesic(skel, y(k), x(k));
    distanceToBranchPt = min(D(B_loc));
    Dmask(D < distanceToBranchPt) = true;
end
skelD = skel - Dmask;
figure
imshow(skelD);
hold all;
[x,y] = find(B); plot(y,x,'ro')
numberOfEndpoints=length(y);
% Label the image. Gives each separate segment a unique ID label number.
[labeledImage, numberOfSegments] = bwlabel(skelD);
fprintf('There are %d endpoints on %d segments.\n', numberOfEndpoints, numberOfSegments);
% Get the label numbers (segment numbers) of every endpoint.
for k = 1 : numberOfEndpoints
    thisRow = x(k);
    thisColumn = y(k);
    %line([endPointRows(k),endPointColumns(k)],[endPointRows(k+1),endPointColumns(k+1)])
    % Get the label number of this segment
    theLabels(k) = labeledImage(thisRow, thisColumn);
    fprintf('Endpoint #%d at (%d, %d) is in segment #%d.\n', k, thisRow, thisColumn, theLabels(k));
end
% For each endpoint, find the closest other endpoint
% that is not in the same segment
for k = 1 : numberOfEndpoints
    thisRow = x(k);
    thisColumn = y(k);
    % Get the label number of this segment
    thisLabel = theLabels(k);
    otherEndpointIndexes = setdiff(1:numberOfEndpoints, k);
    %if mustBeDifferent
    %    % If they want to consider joining only end points that reside on different segments
    %    % then we need to remove the end points on the same segment from the "other" list.
    %    % Get the label numbers of the other end points.
    %    otherLabels = theLabels(otherEndpointIndexes);
    %    onSameSegment = (otherLabels == thisLabel); % List of what segments are the same as this segment
    %    otherEndpointIndexes(onSameSegment) = []; % Remove if on the same segment
    %end
    % Now get a list of only those end points that are on a different segment.
    otherCols = y(otherEndpointIndexes);
    otherRows = x(otherEndpointIndexes);
    % Compute distances
    distances = sqrt((thisColumn - otherCols).^2 + (thisRow - otherRows).^2);
    % Find the min
    [minDistance, indexOfMin] = min(distances);
    nearestX = otherCols(indexOfMin);
    nearestY = otherRows(indexOfMin);
    %if minDistance < longestGapToClose
    if minDistance < rowsInImage
        % Draw a line from this endpoint to the other endpoint.
        line([thisColumn, nearestX], [thisRow, nearestY], 'Color', 'g', 'LineWidth', 2);
        fprintf('Drawing line #%d, %.1f pixels long, from (%d, %d) on segment #%d to (%d, %d) on segment #%d.\n', ...
            k, minDistance, thisColumn, thisRow, theLabels(k), nearestX, nearestY, theLabels(indexOfMin));
    end
end
title('Endpoints Linked by Green Lines', 'FontSize', 12, 'Interpreter', 'None');
(Image: result after edge linking.)
I would do this:
Skeletonize.
Prune the skeleton.
Find the different paths at each intersection. This will give you the different segments, and you can reconnect them using their orientation; a minimal sketch of this idea follows below.
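This is not a complete solution, just a minimal sketch of the skeletonize / prune / split-at-intersections idea in MATLAB, assuming the binary fiber image ecm61.png from the question. Each fiber segment's length is approximated here by its skeleton pixel count; a geodesic distance between the segment's endpoints would be more accurate.

ima  = imread('ecm61.png');            % binary fiber image from the question
ima  = bwareaopen(ima, 50);            % remove small specks
skel = bwmorph(ima, 'skel', Inf);      % skeletonize
skel = bwmorph(skel, 'spur', 10);      % prune short spurs (10 iterations, tune as needed)
bp   = bwmorph(skel, 'branchpoints');  % intersections where fibers cross
segs = skel & ~imdilate(bp, strel('disk', 1));   % cut the skeleton at the intersections
[lbl, nSeg] = bwlabel(segs);           % label each remaining piece
stats  = regionprops(lbl, 'Area');     % pixel count of each piece ~ its length
segLen = [stats.Area];
fprintf('Segment %3d: length ~ %4d pixels\n', [1:nSeg; segLen]);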