Grouping similar values of y and getting the x values using Octave/MATLAB

I have imported a voice audio signal and I'm trying to find all the values of t (the time line), along with their indexes, where the yy2 (audio signal) values are similar.
I can group the yy2 values using histc, but how do I also get the values of t for each grouped yy2 value?
Should I be using histc for this, given that the number of bins (bars in the plot) will be more than 10?
My thoughts were to:
1. Group similar values of yy2.
2. Get the t values for each group of similar yy2 values.
In the code below, yy2 simulates the imported audio signal.
clear, clc, clf, close all
pkg load signal
fs_rate = 8000;                        % sampling rate in Hz
len_of_sig = 1.5;                      % length of signal in seconds
t = linspace(0, len_of_sig, fs_rate*len_of_sig);
yy2 = .5*sin(2*pi*3*t) + .3*sin(2*pi*2.2*t);   % simulated audio signal
subplot(2,1,1); plot(t, yy2, '-*');
subplot(2,1,2); hist(yy2)
Plot: [yy2 over t, and the default 10-bin histogram of yy2]
PS: I'm using Octave 4.2.2, which is similar to MATLAB.

If you want groups of indices and values of t wherever yy2 takes the same value:
% Get unique values of yy2
unique_vals = unique(yy2);
% Get index groups for each unique yy2 value in a cell array
index_groups = arrayfun(@(v) find(yy2 == v), unique_vals, 'UniformOutput', false);
% Get t groups for each unique yy2 value in a cell array
t_groups = cellfun(@(v) t(v), index_groups, 'UniformOutput', false);
% Get yy2 groups for each unique yy2 value in a cell array (not really required)
yy2_groups = cellfun(@(v) yy2(v), index_groups, 'UniformOutput', false);
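If "similar" means "falls in the same histogram bin" rather than "exactly equal", histc is still a reasonable choice even with more than 10 bins: its second output gives the bin index of every sample. A minimal sketch, assuming 20 bins (the count is arbitrary):

nbins = 20;                                    % any number of bins, not just 10
edges = linspace(min(yy2), max(yy2), nbins+1);
[counts, bin_idx] = histc(yy2, edges);         % bin_idx(k) = bin containing yy2(k)
% Collect the indexes and t values that fall into each bin; bin nbins+1
% holds only samples exactly equal to the top edge
idx_bins = arrayfun(@(b) find(bin_idx == b), 1:numel(edges), 'UniformOutput', false);
t_bins   = cellfun(@(v) t(v), idx_bins, 'UniformOutput', false);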


How to merge these two records into one row, removing the null values, in Informatica using a transformation

Input:
Code  value  Min   Max
A     abc    10    null
A     abc    null  20

Output:
Code  value  Min   Max
A     abc    10    20
You can use an Aggregator transformation to remove the nulls and get a single row. This solution is based on your sample data only.
Use an Aggregator with the ports below:
inout_Code (group by)
inout_value (group by)
in_Min
in_Max
out_Min = MAX(in_Min)
out_Max = MAX(in_Max)
Then attach out_Min, out_Max, Code, and value to the target.
You will get one record per combination of Code and value, and the null values will be gone.
If you have many more Code/value combinations, some of the Min/Max columns are null, and you want multiple output records, you will need more complex mapping logic. Let me know if this helps. :)

How to count the number of blank cells in one column based on the first blank row in another column

I have a spreadsheet set up with TV program titles in column B; the next 20 or so columns track different information about each title. I need to count the number of blank cells in column R relating to the range in column B that contains titles (i.e., up to the first blank row in column B).
I can easily set up a formula to count the number of empty cells in a given range in column R; the problem is that as I add more titles to the sheet, I would have to keep updating the range in the formula (a simple =COUNTIF(R3:R1108, "")). I've done a little googling of the problem but haven't quite found anything that fits the situation. I thought I would be able to get the following to work, but I didn't fully understand what was going on with them and they weren't giving the expected results.
I've tried these formulas:
=ArrayFormula(sum(MIN("B3:B"&MIN(IF((R3:R)>"",ROW(B3:B)-1)))))
=ArrayFormula(sum(INDIRECT("B3:B"&MIN(IF((R3:R)>"",ROW(B3:B)-1)))))
And
=if(SUM(B3:B)="","",SUM(R3:R))
All of the above formulas give "0" as the result. Based on the COUNTIF formula I have set up, it should be 840, which is the number I would expect. Currently there are 1106 rows containing data, and 840 is a reasonable number in this situation.
Is this what you're looking for?
=COUNTBLANK(INDIRECT(CONCATENATE("R",3,":R",(3+COUNTA(B3:B)))))
This counts the number of non-blank rows in column B (starting at B3) and uses that count to determine the range over which to perform COUNTBLANK in column R (starting at R3). CONCATENATE builds the range reference by joining strings, and INDIRECT allows that string to be used as a range reference.
A proper way would be:
=ARRAYFORMULA(COUNTBLANK(INDIRECT(ADDRESS(3, 18, 4)&":"&
ADDRESS(MAX(IF(B3:B<>"", ROW(B3:B), )), 18, 4))))
or shorter:
=ARRAYFORMULA(COUNTBLANK(INDIRECT("R3:"&
ADDRESS(MAX(IF(B3:B<>"", ROW(B3:B), )), 18, 4))))
or shorter:
=ARRAYFORMULA(COUNTBLANK(INDIRECT("R3:R"&MAX(IF(B3:B<>"", ROW(B3:B), ))))

Adding values from multiple .rrd files

Problem:
Basically there are three .rrd files, generated for three departments. From those we fetch three values (MIN, MAX, CURRENT) and print them in a 3x3 format. A Python script does that.
e.g.:
Dept1: Min=10 Max=20 Cur=15
Dept2: Min=0 Max=10 Cur=5
Dept3: Min=10 Max=30 Cur=25
Now I want to add the values together (Min, Max, Cur) and print in one line.
e.g.:
Dept: Min=20 Max=60 Cur=45
Issue I am facing:
No matter what CDEF I write, I am breaking the graph. :(
This is the part I hate, as I do not get any error message.
As far as I understand (please correct me if I am wrong), I definitely cannot store the value anywhere in my program, as a graph is returned.
What would be a proper way to add the values in this situation?
Please let me know if my description of the problem lacks detail.
You can do this with a VDEF over a CDEF'd sum:
DEF:a=dept1.rrd:ds0:AVERAGE
DEF:b=dept2.rrd:ds0:AVERAGE
DEF:maxa=dept1.rrd:ds0:MAXIMUM
DEF:maxb=dept2.rrd:ds0:MAXIMUM
CDEF:maxall=maxa,maxb,+
CDEF:all=a,b,+
VDEF:maxalltime=maxall,MAXIMUM
VDEF:alltimeavg=all,AVERAGE
PRINT:maxalltime:Max=%f
PRINT:alltimeavg:Avg=%f
LINE:all#ff0000:AllDepartments
However, you should note that, apart from at the highest granularity, the Min and Max totals will be wrong! This is because max(a+b) != max(a) + max(b). If you don't calculate the min/max aggregates at time of storage, the granularity will be gone at time of display.
For example, if a = (1, 2, 3) and b = (3, 2, 1), then max(a) + max(b) = 6; however the maximum at any point in time is in fact 4. The same issue applies to using min(a) + min(b).

How to apply a mask to a DataFrame in Python?

My dataset, named ds_f, is an 840x57 matrix that contains NaN values. I want to forecast a variable with a linear regression model, but when I try to fit the model I get the message "SVD did not converge":
X = ds_f[ds_f.columns[:-1]]
y = ds_f['target_o_tempm']
model = sm.OLS(y,X) #stackmodel
f = model.fit() #ERROR
So I've been searching for a way to apply a mask to a DataFrame. I was thinking of creating a mask to "ignore" the NaN values and then converting it back into a DataFrame, but I get the same DataFrame as ds_f; nothing changes:
m = ma.masked_array(ds_f, np.isnan(ds_f))
m_ds_f = pd.DataFrame(m,columns=ds_f.columns)
EDIT: I've solved the problem by writing model=sm.OLS(X,y,missing='drop'), but a new problem appears: when I display the results, I get only NaN.
Are you using statsmodels? If so, you can specify sm.OLS(y, X, missing='drop') to drop the NaN values prior to estimation.
Alternatively, you may want to consider interpolating the missing values rather than dropping them.
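A minimal sketch of both options, assuming (as in the question) that ds_f holds the data and the target variable is the last column, 'target_o_tempm':

import statsmodels.api as sm

X = ds_f[ds_f.columns[:-1]]
y = ds_f['target_o_tempm']

# Option 1: let statsmodels drop rows containing NaN before fitting
results = sm.OLS(y, X, missing='drop').fit()
print(results.summary())

# Option 2: fill the gaps first, e.g. by linear interpolation, so that
# no rows are lost (this assumes the rows are meaningfully ordered)
ds_filled = ds_f.interpolate()
results2 = sm.OLS(ds_filled['target_o_tempm'],
                  ds_filled[ds_filled.columns[:-1]]).fit()

Note the argument order: statsmodels expects sm.OLS(endog, exog), i.e. y first.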

For loop using a t-stat function to create a list

I am using the following function to calculate the t-stat for data in data frame (x):
wilcox.test.all.genes <- function(x, s1, s2) {
  x1 <- as.numeric(x[s1])
  x2 <- as.numeric(x[s2])
  wilcox.out <- wilcox.test(x1, x2, exact=F, alternative="two.sided", correct=T)
  out <- as.numeric(wilcox.out$statistic)
  return(out)
}
I need to write a for loop that will iterate a specific number of times. For each iteration, the columns need to be shuffled, the above function performed, and the maximum t-stat value saved to a list.
I know that I can use the sample() function to shuffle the columns of the data frame and the max() function to identify the maximum t-stat value, but I can't figure out how to put them together into workable code.
You are trying to generate empirical p-values, corrected for the multiple comparisons you are making across the columns of your data. First, let's simulate an example data set:
# Simulate data
n.row = 100
n.col = 10
set.seed(12345)
group = factor(sample(2, n.row, replace=T))
data = data.frame(matrix(rnorm(n.row*n.col), nrow=n.row))
Next, calculate the Wilcoxon test statistic for each column, replicating this many times while permuting the class membership of the observations. This gives an empirical null distribution of the test statistic.
# Re-calculate columnwise test statistics many times while permuting class labels
perms = replicate(500, apply(data[sample(nrow(data)), ], 2, function(x)
  wilcox.test(x[group==1], x[group==2], exact=F,
              alternative="two.sided", correct=T)$stat))
Calculate the null distribution of the maximum test statistic by collapsing across the multiple comparisons.
# For each permuted replication, calculate the max test statistic across the multiple comparisons
perms.max = apply(perms, 2, max)
By simply sorting the results, we can now determine the p=0.05 critical value.
# Identify critical value
crit = sort(perms.max)[round((1-0.05)*length(perms.max))]
We can also plot our distribution along with the critical value.
# Plot
dev.new(width=4, height=4)
hist(perms.max)
abline(v=crit, col='red')
Finally, comparing a real test statistic to this distribution gives you an empirical p-value, corrected for multiple comparisons by controlling the family-wise error rate at p<0.05. For example, let's pretend a real test statistic was 1600. We could then calculate the p-value like this:
> length(which(perms.max>1600))/length(perms.max)
[1] 0.074
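If you prefer the explicit for loop the question asks for, a minimal equivalent of the replicate() call above would be (same shuffling, same statistic, 500 iterations assumed):

# Explicit for loop version of the permutation step
n.perms <- 500
max.stats <- numeric(n.perms)
for (i in 1:n.perms) {
  shuffled <- data[sample(nrow(data)), ]  # shuffle rows = permute class labels
  stats <- apply(shuffled, 2, function(x)
    wilcox.test(x[group==1], x[group==2], exact=F,
                alternative="two.sided", correct=T)$stat)
  max.stats[i] <- max(stats)              # save the maximum statistic per iteration
}

max.stats then plays the role of perms.max in the answer above.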