I have about 2,330,000 observations that I want to assign to 10,000 evenly spaced buckets. The bucket size would be (max(var) - min(var)) / 10,000. For example, my maximum value is 3000 and my minimum value is -200, so my bucket size will be (3000 + 200) / 10,000 = 0.32. Any value between -200 and (-200 + 0.32) should go to bucket 1, any value between (-200 + 0.32) and (-200 + 0.32*2) should go to bucket 2, and so on. The dataset will be something like this:
Var_value   bucket
-200        1
-53         ?
-5          ?
-46         ?
5
8
4
56
7542
242
....
How should the code be written? I am thinking of a DO loop but am not sure how to do it. Can anyone help?
Not sure what you would do with the proposed loop but this is what I'd do:
/* get some data to play with */
data a(keep=val);
  do i=1 to 1000000;
    val = 3200*ranuni(0)-200;
    output;
  end;
run;
/* groups=xxx specifies the number of buckets
var yyy is the name of the variable whose values we'd like to classify
ranks zzz specifies the name of the variable containing the assigned rank
*/
proc rank data=a out=b groups=10000;
var val;
ranks bucket;
run;
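If you want a quick sanity check on the result, something like the step below (just a sketch against the B dataset created above) shows the range of bucket numbers that were assigned. Keep in mind that PROC RANK numbers the groups 0 through 9999 rather than 1 through 10000.
proc means data=b n min max;
  var bucket;   /* assigned group number, 0 to 9999 */
run;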
Below is another approach you could use:
Generate random simulated data
data have;
  do i=1 to 250000;
    /* seed for ranuni: only the seed from the first call matters, so the data are reproducible */
    var_value = (ranuni(_N_)-0.5)*8000;
    output;
  end;
  drop i;
run;
Solution(s)
/*Desired number of buckets.*/
%let num_buckets = 10000;
/*Determine bucket size and minimum var_value*/
proc sql noprint;
select (max(var_value)-min(var_value))/&num_buckets.,
min(var_value)
into : bucket_size,
: min_var_value
from have;
quit;
%put bucketsize: &bucket_size.;
%put min var_value: &min_var_value.;
/* 1 - Assign buckets using data step */
data want;
set have;
bucket = max(ceil((var_value-&min_var_value.)/&bucket_size.),1);
run;
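With the example numbers from the question (minimum -200, bucket size 0.32), a var_value of -53 would get bucket = ceil((-53 + 200)/0.32) = ceil(459.375) = 460.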
proc sort data=want;
by bucket;
run;
/* or 2 - Assign buckets using proc sql*/
proc sql;
create table want as
select var_value,
max(ceil((var_value-&min_var_value.)/&bucket_size.),1) as bucket
from have
order by CALCULATED bucket;
quit;
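As a rough check (a sketch, run against the WANT table from either variant), the smallest and largest assigned bucket numbers should come out as 1 and 10,000 (or very close, allowing for floating-point rounding at the maximum):
proc sql;
  select min(bucket) as first_bucket, max(bucket) as last_bucket
  from want;
quit;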
SAS - I'd like to count the number of times a record appears within a variable (Ref) and record a running count in a new variable (Count), e.g.:
Ref    Count
1000   1
1001   1
2000   1
3000   1
1000   2
1000   3
What is the best way to do this?
That is what PROC FREQ is for. It will count the number of OBSERVATIONS for each value of a variable (or combination of variables).
proc freq data=have;
tables REF ;
run;
If you want the result in a dataset then use the OUT= option of the TABLES statement.
proc freq data=have;
tables REF / out=want;
run;
Managed to achieve the results with the code below.
Please note: the data needs to be sorted by the BY variable with a PROC SORT before running this.
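For example, a minimal sketch using the dataset name have from the answers above and the same placeholder name Variable as in the step below:
proc sort data=have;
  by Variable;
run;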
DATA want;
  set have;
  BY Variable;
  IF FIRST.Variable then counter = 1;
  ELSE counter + 1;
RUN;
If you use the SAS hash object, you can do it while keeping the original order intact:
data have;
input Ref $;
datalines;
1000
1001
2000
3000
1000
1000
;
data want;
  if _N_ = 1 then do;
    /* declare a hash keyed on Ref that stores the running Count */
    dcl hash h();
    h.definekey('Ref');
    h.definedata('Count');
    h.definedone();
  end;
  set have;
  /* first time a Ref is seen, start at 1; otherwise increment the stored count */
  if h.find() ne 0 then Count = 1;
  else Count + 1;
  h.replace();
run;
I am looking to create an optimal bucketing macro. My first obstacle is to create equidistant buckets. I am using the sashelp.baseball dataset as an example.
I take the range of logsalary and divide it by 100 to get the distance between each bucket. Then I would like to assign the logsalary column a bucket value whenever the logsalary is smaller than the bucket's upper limit.
The code I have tried is below. I am hoping to be able to join or merge on the bucket limit values and use a greater-than or smaller-than clause to append a bucket value.
/*Sort the baseball dataset by smallest to largest, removing any missing data*/
PROC SORT
DATA = sashelp.baseball
(KEEP = logsalary
WHERE = (NOT MISSING(logsalary)))
OUT = baseball;
BY logsalary;
RUN;
/*Identify the size of each bucket by splitting the range into 100 equidistant buckets*/
DATA _NULL_;
  RETAIN bin_size;
  SET baseball END = EOF;
  IF _N_ = 1 THEN DO;
    bin_size = logsalary;
    CALL SYMPUT("min_bin",logsalary);
  END;
  IF EOF THEN DO;
    bin_size = ((logsalary - bin_size) / 100);
    CALL SYMPUT("bin_size",bin_size);
  END;
RUN;
/*Create a vector to identify each bucket range*/
DATA bin_levels;
  DO bin = 1 TO 100;
    IF bin = 1 THEN DO;
      bin_level = &min_bin.;
      OUTPUT;
    END;
    ELSE DO;
      bin_level = &min_bin. + &bin_size. * bin;
      OUTPUT;
    END;
  END;
RUN;
/*Append a bucket number based on the logsalary being smaller than the next bucket value*/
PROC SQL;
CREATE TABLE binned_data AS
SELECT
a.*
, b.bin
, b.bin_level
FROM
baseball a
LEFT JOIN
bin_levels b ON b.bin_level > a.logsalary
;
QUIT;
I would like to see the first ten rows look like this
logSalary bin
4.2121275979 1
4.2195077052 1
4.248495242 1
4.248495242 1
4.248495242 1
4.248495242 1
4.248495242 1
4.3174881135 2
4.3174881135 2
4.3174881135 2
...
Thanks in advance
EDIT: for now, I am going to go with this solution
DATA bucketed_data;
  RETAIN bin bin_limit;
  SET baseball;
  IF _n_ = 1 THEN DO;
    bin_limit = logsalary;
    bin = 1;
  END;
  IF logsalary > bin_limit THEN DO;
    bin_limit + &bin_size.;
    bin + 1;
  END;
RUN;
No need for macro variables; put the values into a dataset and combine it with the one you want to bin. Let's use 10 bins instead of 100 to make it easier to examine the results.
First find the minimum and range:
proc means n min max data=sashelp.baseball;
var logsalary;
output out=stats(keep=min range) min=min range=range;
run;
Then use those to bin the data:
DATA bucketed_data;
SET sashelp.baseball (keep=logsalary);
if _n_=1 then set stats;
if not missing(logsalary) then do bin=1 to 10 while(logsalary > min+bin*(range/10));
  * empty loop body: BIN just keeps incrementing until logsalary no longer ;
  * exceeds the bin's upper bound, min + bin*(range/10) ;
end;
run;
Let's use PROC MEANS to see how it worked.
proc means n min max ;
class bin / missing;
var logsalary;
run;
I am using PROC HPBIN to split my data into equally spaced buckets, i.e. each bucket covers an equal share of the total range of the variable.
My issue is when I have extremely skewed data with a large range. Almost all of my data points lie in one bucket, while a couple of observations are scattered around the extremes.
I'm wondering if there is a way to force PROC HPBIN to consider the proportion of values in each bin and make sure there is at least, e.g., 5% of observations in each bin, grouping the others together?
DATA var1;
  DO VAR1 = 1 TO 100;
    OUTPUT;
  END;
  DO VAR1 = 500 TO 505;
    OUTPUT;
  END;
  DO VAR1 = 7000 TO 7015;
    OUTPUT;
  END;
  DO VAR1 = 1000000 TO 1000010;
    OUTPUT;
  END;
RUN;
/*Use proc hpbin to generate bins of equal width*/
ODS EXCLUDE ALL;
ODS OUTPUT
Mapping = bin_width_results;
PROC HPBIN
DATA=var1
numbin = 15
bucket;
input VAR1 / numbin = 15;
RUN;
ODS EXCLUDE NONE;
I'd like to see a way for PROC HPBIN (or another method) to group together the bins which are empty and ensure a proportion of at least 5% per bucket. However, I am not looking to use percentiles in this case (that is another plot on my PDF) because I'd like to see the spread.
Have you tried using the WINSOR method (winsorised binning)? From the documentation:
Winsorized binning is similar to bucket binning except that both tails are cut off to obtain a smooth binning result. This technique is often used to remove outliers during the data preparation stage.
You can specify the WINSORRATE= option to control how it adjusts these tails.
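For example, a minimal sketch against the VAR1 dataset from the question (the 0.05 rate is only illustrative and worth tuning for your data):
PROC HPBIN DATA=var1 winsor winsorrate=0.05;
  input VAR1 / numbin = 15;
RUN;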
The QUANTILE option with 20 bins should give you ~5% per bin:
PROC HPBIN DATA=var1 quantile;
input VAR1 / numbin = 20;
RUN;
When the values in a bin need to be dynamically rebinned because the bin's proportion is too high (a problem bin), you need to run HPBIN again on only the values in the problem bins. A macro can be written to loop around the HPBIN process, zooming in on problem areas.
For example:
DATA have;
  DO VAR1 = 1 TO 100;
    OUTPUT;
  END;
  DO VAR1 = 500 TO 505;
    OUTPUT;
  END;
  DO VAR1 = 7000 TO 7015;
    OUTPUT;
  END;
  DO VAR1 = 1000000 TO 1000010;
    OUTPUT;
  END;
RUN;
%macro bin_zoomer (data=, var=, nbins=, rezoom=0.25, zoomlimit=8, out=);
%local data_view step nextstep outbins zoomers;
proc sql;
create view data_zoom1 as
select 1 as step, &var from &data;
quit;
%let step = 1;
%let data_view = data_zoom&step;
%let outbins = bins_step&step;
%bin:
%if &step > &zoomlimit %then %goto done;
ODS EXCLUDE ALL;
ODS OUTPUT Mapping = &outbins;
PROC HPBIN DATA=&data_view bucket ;
id step;
input &var / numbin = &nbins;
RUN;
ODS EXCLUDE NONE;
proc sql noprint;
select count(*) into :zoomers trimmed
from &outbins
where proportion >= &rezoom
;
%put NOTE: &=zoomers;
%if &zoomers = 0 %then %goto done;
%let step = %eval(&step+1);
proc sql;
create view data_zoom&step as
select &step as step, *
from &data_view data
join &outbins bins
on data.&var between bins.LB and bins.UB
and bins.proportion >= &rezoom
;
quit;
%let outbins = bins_step&step;
%let data_view = data_zoom&step;
%goto bin;
%done:
%put NOTE: done # &=step;
* stack the bins that are non-problem or of final zoom;
* the LB to UB domains from step2+ will discretely cover the bounds
* of the original step1 bins;
data &out;
set
bins_step1-bins_step&step
indsname = source
;
if proportion < &rezoom or source = "bins_step&step";
step = source;
run;
%mend;
options mprint;
%bin_zoomer(data=have, var=var1, nbins=15, out=bins);
I have a dataset with some variables named sx for x = 1 to n.
Is it possible to write a PROC FREQ that gives the same result as:
proc freq data=prova;
table s1 * s2 * s3 * ... * sn /list missing;
run;
but without listing all the names of the variables?
I would like an output like this:
S1  S2  S3  S4  Frequency
A                      10
A   E                 100
A   E   J   F         300
B                      10
B   E                 100
B   E   J   F         300
but with an instruction like this (which, of course, is invented):
proc freq data=prova;
table s1:sn /list missing;
run;
Why not just use PROC SUMMARY instead?
Here is an example using two variables from SASHELP.CARS.
So this is PROC FREQ code.
proc freq data=sashelp.cars;
where make in: ('A','B');
tables make*type / list;
run;
Here is way to get counts using PROC SUMMARY
proc summary missing nway data=sashelp.cars ;
where make in: ('A','B');
class make type ;
output out=want;
run;
proc print data=want ;
run;
If you need to calculate the percentages you can instead use the WAYS statement to get both the overall and the individual cell counts. And then add a data step to calculate the percentages.
proc summary missing data=sashelp.cars ;
where make in: ('A','B');
class make type ;
ways 0 2 ;
output out=want;
run;
data want ;
set want ;
retain total;
if _type_=0 then total=_freq_;
percent=100*_freq_/total;
run;
So if you have 10 variables you would use
ways 0 10 ;
class s1-s10 ;
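Putting that together, here is a sketch assuming the ten variables live in the questioner's dataset prova:
proc summary missing data=prova;
  class s1-s10;
  ways 0 10;
  output out=want;
run;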
If you just want to build up the string "S1*S2*..." then you could use a DO loop or a macro %DO loop and put the result into a macro variable.
data _null_;
length namelist $200;
do i=1 to 10;
namelist=catx('*',namelist,cats('S',i));
end;
call symputx('namelist',namelist);
run;
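A quick check (just a sketch) of the string that was built:
%put NOTE: namelist=&namelist;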
But here is an easy way to make such a macro variable from ANY variable list not just those with numeric suffixes.
First get the variables names into a dataset. PROC TRANSPOSE is a good way if you use the OBS=0 dataset option so that you only get the _NAME_ column.
proc transpose data=have(obs=0) ;
var s1-s10 ;
run;
Then use PROC SQL to stuff the names into a macro variable.
proc sql noprint;
select _name_
into :namelist separated by '*'
from &syslast
;
quit;
Then you can use the macro variable in your TABLES statement.
proc freq data=have ;
tables &namelist / list missing ;
run;
In short, no. There is no shortcut syntax for specifying a variable list that crosses dimension.
In long, yes -- if you create a surrogate variable that is an equivalent crossing.
Discussion
Sample data generator:
%macro have(top=5);
%local index;
data have;
%do index = 1 %to &top;
do s&index = 1 to 2+ceil(3*ranuni(123));
%end;
array V s:;
do _n_ = 1 to 5*ranuni(123);
x = ceil(100*ranuni(123));
if ranuni(123) < 0.1 then do;
ix = ceil(&top*ranuni(123));
h = V(ix);
V(ix) = .;
output;
V(ix) = h;
end;
else
output;
end;
%do index = 1 %to &top;
end;
%end;
run;
%mend;
%have;
As you probably noticed, tables s: creates one frequency table per s* variable.
For example:
title "One table per variable";
proc freq data=have;
tables s: / list missing ;
run;
There is no shortcut syntax for specifying a variable list that crosses dimension.
NOTE: If you specify OUT=, the column names in the output data set come from the last variable in the level. So for the above, the OUT= table will have a column "s5" but contain counts corresponding to the combinations of s1 through s5.
At each dimensional level you can use a variable list, as in level1 * (sublev:) * leaf. The same caveat for out= data applies.
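For instance, against the sample data above, a sketch like this crosses s1 and s5 with each of s2-s4 in turn, producing one crossing table per combination:
title "Variable list inside one dimension";
proc freq data=have;
  tables s1 * (s2 s3 s4) * s5 / list missing;
run;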
Now, reconsider the original request discretely (no-shortcut) crossing all the s* variables:
title "1 table - 5 columns of crossings";
proc freq data=have;
tables s1*s2*s3*s4*s5 / list missing out=outEach;
run;
And, compare to what happens when a data step view uses a variable list to compute a surrogate value corresponding to the discrete combinations reported above.
data haveV / view=haveV;
set have;
crossing = catx(' * ', of s:); * concatenation of all the s variables;
keep crossing;
run;
title "1 table - 1 column of concatenated crossings";
proc freq data=haveV;
tables crossing / list missing out=outCat;
run;
Reality check with PROC COMPARE; I don't trust eyeballs. If there are zero rows with differences (per OUTNOEQUAL), then the OUT= data sets have identical counts.
proc compare noprint base=outEach compare=outCat out=diffs outnoequal;
var count;
run;
----- Log -----
NOTE: There were 31 observations read from the data set WORK.OUTEACH.
NOTE: There were 31 observations read from the data set WORK.OUTCAT.
NOTE: The data set WORK.DIFFS has 0 observations and 3 variables.
NOTE: PROCEDURE COMPARE used (Total process time)
I want to find a way to build another variable (it's OK even in the same dataset) that is a categorization of the old variable. I would choose the number of buckets (for example, using percentiles as cutoffs: p10, p20, p30, etc.).
Right now I do this by extracting the percentiles of the variable with PROC UNIVARIATE. But that gives me only the percentiles (my cutoffs), and then I have to build the new variable manually using them.
How can I create this new variable giving the cutoffs and the number of buckets as input?
Thanks in advance.
Assuming you want buckets of equal percentage size, then PROC RANK might just get you what you are looking for.
data test;
do i=1 to 100;
output;
end;
run;
proc rank data=test out=test2 groups=5;
var i;
ranks grp;
run;
That will give you 5 groups (named 0 .. 4), which should be equivalent to P20, P40, ..., P80 cutoffs.
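A quick check (just a sketch) that each group really holds about 20% of the rows:
proc freq data=test2;
  tables grp;
run;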
If you wanted unequal buckets, i.e. P10, P40, P60, and P90, then you would have to rank at the finest common level and combine groups. Using the approach above:
%let groups=10;
proc rank data=test out=test2 groups=&groups;
var i;
ranks grp;
run;
/*
P = (grp+1)*&groups
Cutoffs 10, 40, 60, 90
implicit 5 new groups
*/
%let n_cutoff=4;
%let cutoffs=10, 40, 60, 90;
data test3(drop=_i cutoffs:);
  set test2;
  array cutoffs[&n_cutoff] (&cutoffs);
  P = (grp+1)*&groups;
  do _i=1 to &n_cutoff;
    if P <= cutoffs[_i] then do;
      new_grp = _i-1;
      leave;
    end;
    if _i = &n_cutoff then
      new_grp = _i;
  end;
run;
10 is the greatest common divisor of the cutoff values (10, 40, 60, 90). 100/10 = 10, so we need 10 groups from PROC RANK.
The Data Step at the end combines the groups using the cutoffs you are looking for.
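For example, an observation that PROC RANK puts in group grp=4 gets P = (4+1)*10 = 50; the first cutoff it falls under is 60, so new_grp = 2 (the P40-P60 bucket). An observation with grp=9 gets P = 100, which exceeds every cutoff and ends up in new_grp = 4.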