I want to compute a cumulative max grouped by another column.
Say I have this data:
data have;
input grp $ number;
datalines;
a 3
b 4
a 5
b 2
a 1
b 8
;
My desired output would be:
data want;
input grp $ cummax;
a 3
b 4
a 5
b 4
a 5
b 8
;
My real case will involve several grouping columns plus filters, and ideally this cumulative max would be computed on several columns at the same time.
My main concern is computational efficiency, as I'll be running this on tables of tens to hundreds of millions of rows. PROC SQL or native SAS solutions are both welcome.
Rows may be shuffled if necessary.
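For readers more familiar with dataframe tools, the operation being asked for is a grouped cumulative max. A minimal pandas sketch of the desired result (illustrative only, using the same grp/number columns as the sample data):

```python
import pandas as pd

have = pd.DataFrame({
    "grp":    ["a", "b", "a", "b", "a", "b"],
    "number": [3, 4, 5, 2, 1, 8],
})

# Cumulative max within each group, preserving the original row order
have["cummax"] = have.groupby("grp")["number"].cummax()

print(have["cummax"].tolist())  # [3, 4, 5, 4, 5, 8]
```

The result matches the desired output table above: group a runs 3, 5, 5 and group b runs 4, 4, 8.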
System Info
proc product_status;run;
For Base SAS Software ...
Custom version information: 9.3_M2
Image version information: 9.03.01M2P080112
For SAS/STAT ...
Custom version information: 12.1
Image version information: 9.03.01M0P081512
For SAS/GRAPH ...
Custom version information: 9.3_M2
For SAS/CONNECT ...
Custom version information: 9.3_M2
For SAS OLAP Server ...
Custom version information: 9.3_M1
For SAS Enterprise Miner ...
Custom version information: 12.1
Image version information: 9.03.01M0P081512
For SAS Integration Technologies ...
Custom version information: 9.3_M2
For SAS/ACCESS Interface to Oracle ...
Custom version information: 9.3_M1
For SAS/ACCESS Interface to PC Files ...
Custom version information: 9.3_M2
proc setinit;run;
Product expiration dates:
---Base SAS Software 31JUL2018
---SAS/STAT 31JUL2018
---SAS/GRAPH 31JUL2018
---SAS/CONNECT 31JUL2018
---SAS OLAP Server 31JUL2018
---SAS Enterprise Miner 31JUL2018
---MDDB Server common products 31JUL2018
---SAS Integration Technologies 31JUL2018
---SAS Enterprise Miner Server 31JUL2018
---SAS Enterprise Miner Client 31JUL2018
---Unused OLAP Slot 31JUL2018
---SAS Enterprise Guide 31JUL2018
---SAS/ACCESS Interface to Oracle 31JUL2018
---SAS/ACCESS Interface to PC Files 31JUL2018
---SAS Metadata Bridges for Informatica 31JUL2018
---SAS Metadata Bridges for Microsoft SQL Server 31JUL2018
---SAS Metadata Bridge for Oracle 31JUL2018
---SAS Workspace Server for Local Access 31JUL2018
---SAS Workspace Server for Enterprise Access 31JUL2018
---SAS Table Server 31JUL2018
---DataFlux Trans DB Driver 31JUL2018
---SAS Framework Data Server 31JUL2018
---SAS Add-in for Microsoft Excel 31JUL2018
---SAS Add-in for Microsoft Outlook 31JUL2018
---SAS Add-in for Microsoft PowerPoint 31JUL2018
---SAS Add-in for Microsoft Word 31JUL2018
Use a HASH object to store the max for each variable and group combination. This lets you make a single pass through your data set and write code that scales with the number of groups and variables.
This approach does not require a sort, which can be costly on a large data set.
Test Data
data example;
format grp1-grp5 $1.;
array grp[5];
array val[5];
do rows=1 to 1000000;
do i=1 to 5;
r = ceil(ranuni(1)*5);
grp[i] = substr("ABCDE",r,1);
end;
do j=1 to 5;
val[j] = 10*rannor(1);
end;
output;
end;
keep grp: val:;
run;
Data Step to compute the cumulative max
data want;
set example;
array val[5];
array max[5];
if _n_ = 1 then do;
declare hash mx();
rc = mx.defineKey('grp1','grp2','grp3','grp4','grp5');
rc = mx.definedata('max1','max2','max3','max4','max5');
rc = mx.definedone();
end;
rc = mx.find();
/*No Max for this combination -- add it*/
if rc then do;
do i=1 to 5;
max[i] = val[i];
end;
end;
/*Update Max Values*/
do i=1 to 5;
if val[i] > max[i] then
max[i] = val[i];
end;
/*Update Hash*/
rc = mx.replace();
drop rc i;
n = _n_; /*This is for testing*/
run;
Using that testing variable n, we can sort the groups keeping the original order and see if it worked. (hint, it did).
proc sort data=want;
by grp: n;
run;
proc sort data=have;
by grp;
run;
data want;
set have;
by grp;
retain max;
max=ifn(first.grp,number,max(number,max));
run;
Use Hash without sort
data want;
if _n_=1 then do;
declare hash h();
h.definekey('grp');
h.definedata('value');
h.definedone();
end;
set have;
if h.find()^=0 then do;
h.add(key:grp,data:number);
max=number;
end;
else do;
max=max(number,value);
h.replace(key:grp,data:max);
end;
drop value number;
run;
Something like the following will work. If you want to keep the original order, add a row counter and re-sort on that:
proc sort data=have;
by grp;
run;
data new;
drop newnum;
set have;
by grp;
retain newnum;
if first.grp then newnum = number;
if number > newnum then newnum=number;
else number=newnum;
run;
I built a macro function wrapped around #DomPazz's solution; one can choose which columns to group by, which columns to compute on, and which columns to drop or keep in the end.
I think the included examples are straightforward.
I've attached at the bottom the short convenience macros that I use in cummax.
*------------------------------------------------------------;
* CUMMAX ;
* Compute a cumulative max on 1 or several variables grouped ;
* by one or several variables ;
*------------------------------------------------------------;
/* EXAMPLE:
data have;
format grp1-grp2 $1.;
array grp[2];
array val[3];
do rows=1 to 20;
do i=1 to 2;
r = ceil(ranuni(1)*2);
grp[i] = substr("AB",r,1);
end;
do j=1 to 3;
val[j] = 10*rannor(1);
end;
output;
end;
keep grp: val:;
run;
%cummax(have,grp=grp1 grp2,val=val1 val2,out= want1)
%cummax(have,grp=grp1,val=val1,drop=grp2 val3,out= want2)
%cummax(have,grp=grp1,val=val1,keep= val2,out= want3)
*/
%macro cummax
(data /* source table */
,grp= /* variables to group on */
,val= /* variables to compute on */
,keep= /* variables to keep additionally to grp and computed columns, don't use with drop */
,drop= /* variables to drop, don't use with keep */
,out= /* output table */
);
/* default output */
%if not %length(&out) %then %let out = &data;
/* rework keep and drop */
%local n_val max_val;
%let n_val = %list_length(&val);
%let max_val = %list_fix(&val,suffix=_cmax);
%if %length(&keep) %then %let keep = (keep= &keep &grp &max_val );
%if %length(&drop) %then %let drop = (drop= &drop);
/* data step */
data &out&keep&drop;
set &data;
array val[&n_val] &val;
array max[&n_val] &max_val;
if _n_ = 1 then do;
declare hash mx();
rc = mx.defineKey(%list_quote_comma(&grp));
rc = mx.definedata(%list_quote_comma(&max_val));
rc = mx.definedone();
end;
rc = mx.find();
/*No Max for this combination -- add it*/
if rc then do;
do i=1 to &n_val; /* %list_length(&val) */
max[i] = val[i];
end;
end;
/*Update Max Values*/
do i=1 to &n_val;
if val[i] > max[i] then
max[i] = val[i];
end;
/*Update Hash*/
rc = mx.replace();
drop rc i;
run;
%mend;
*---------------------------------------------------------------;
* LIST_LENGTH ;
* Length of space separated list ;
*---------------------------------------------------------------;
/* EXAMPLES :
%put %list_length(item1 item2 item3);
*/
%macro list_length
(data
);
%sysfunc(countw(&data,%str( )))
%mend;
*---------------------------------------------------------------;
* LIST_QUOTE_COMMA ;
* create comma separated list with quoted items, from ;
* unquoted space separated list. ;
*---------------------------------------------------------------;
/* EXAMPLE
%put %list_quote_comma(a b c);
*/
%macro list_quote_comma
(data /* space separated list to quote */
);
%unquote(%str(%')%qsysfunc(tranwrd(&data,%str( ),%str(%',%')))%str(%'))
%mend;
*---------------------------------------------------------------;
* LIST_FIX ;
* Add prefix and/or suffix to items of space separated list ;
*---------------------------------------------------------------;
/* EXAMPLES :
%put %list_fix(item1 item2 item3,pref_,_suf);
%put %list_fix(item1 item2 item3,pref_);
%put %list_fix(item1 item2 item3,suffix=_suf);
*/
%macro list_fix
(data
,prefix
,suffix
);
%local output i;
%do i=1 %to %sysfunc(countw(&data,%str( ))) ;
%let output= &output &prefix.%scan(&data,&i,%str( ))&suffix;
%end;
&output
%mend;
Related
I am a SAS developer. I am starting a project that requires me to assign an RK number to each unique record. Each extraction will contain some data that already exists in the target table and some that does not.
For example.
Source Data:
Name
A
B
C
D
E
Target Table:
Name RK
A 1
B 2
C 3
When I load, I want it to insert D and E into the target table with RK 4 and 5 respectively. Currently, I can think of doing a hash lookup from source against the target table. For data that is not mapped using the hash object, the RK field will be blank. I will then take the max RK number from the target table and increment it by 1 as I append D and E.
I am not sure if this is the most efficient way of doing so. Is there a more efficient way?
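The approach described (look up each name, then hand out max(RK)+1 onward to the unmapped rows) can be sketched compactly in Python; the names and structure here are illustrative, not SAS API:

```python
def assign_rk(target, source):
    """Assign surrogate keys: names already in target keep their RK;
    new names get max(RK)+1 onward, in order of appearance."""
    next_rk = max(target.values(), default=0) + 1
    for name in source:
        if name not in target:      # lookup miss -> new key
            target[name] = next_rk
            next_rk += 1
    return target

target = {"A": 1, "B": 2, "C": 3}
print(assign_rk(target, ["A", "B", "C", "D", "E"]))
# {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}
```

This matches the example: D and E receive RK 4 and 5, while A, B, C keep their existing keys.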
You could use a hash to determine if some name (I'll call it value) already exists in the target table. However, new keys would have to be tracked, output at the end of the step, and then PROC APPEND'd to the target table (I'll call it master).
For the case of just updating the master table with new RK values, a traditional SAS approach is to use a DATA step to MODIFY a unique keyed master table. The coding pattern is:
SET <source>
MODIFY <master> KEY=<value> / UNIQUE;
... _IORC_ logic ...
Example:
%* Create some source data and the master table;
data have1 have2 have3 have4 have5;
call streaminit(123);
value = 2020; output; output; output;
do _n_ = 1 to 2500;
value = ceil(rand('uniform', 5000));
select;
when (rand('uniform') < 0.20) output have1;
when (rand('uniform') < 0.20) output have2;
when (rand('uniform') < 0.20) output have3;
when (rand('uniform') < 0.20) output have4;
otherwise output have5;
end;
end;
run;
data have6;
do _n_ = 1 to 20;
value = 2020;
output;
end;
run;
* Create the unique keyed master table;
* Typically done once and stored in a permanent library.;
proc sql;
create table keys (value integer, RK integer);
create distinct index value on work.keys;
quit;
%* A macro for adding new RK values as needed;
%macro RK_ASSIGN(master, data);
%local last;
proc sql noprint;
select max(RK) into :last trimmed from &master;
quit;
data &master;
retain newkey %sysevalf(0&last+0); %* trickery for 1st use case when max(RK) is .;
set &data;
modify &master key=value / unique;
if _iorc_ eq %sysrc(_DSENOM);
newkey + 1;
RK = newkey;
output;
_error_ = 0;
run;
%mend;
%* Use the macro to process source data;
%RK_ASSIGN(keys,have1)
%RK_ASSIGN(keys,have2)
%RK_ASSIGN(keys,have3)
%RK_ASSIGN(keys,have4)
%RK_ASSIGN(keys,have5)
%RK_ASSIGN(keys,have6)
You can see that the forced repeats of the 2020 value in the source data are only RK'd once in the master table, and there are no errors during processing.
If you want to backfill the source data with the found or assigned RK value there would be additional steps. You could update a custom format, or do a traditional left join. If you want to focus on backfill during a read over source data the HASH step + APPEND new RK's step might be preferable.
Example 2 Master table is named values
HASH version with RK assignment added to source data. New RKs output and appended.
proc sql;
create table values (value integer, RK integer);
create distinct index value on work.values;
quit;
%macro RK_HASH_ASSIGN(master,data);
%local last;
proc sql noprint;
select max(RK) into :last trimmed from &master;
quit;
data &data(drop=next_RK);
set &data end=end;
if _n_ = 1 then do;
declare hash lookup (dataset:"&master");
lookup.defineKey("value");
lookup.defineData("value", "RK");
lookup.defineDone();
declare hash newlookup (dataset:"&master(obs=0)");
newlookup.defineKey("value");
newlookup.defineData("value", "RK");
newlookup.defineDone();
end;
retain next_RK %sysevalf(0&last+0); %* trick;
* either load existing RK from hash, or compute and apply next RK value;
if lookup.find() ne 0 then do;
next_RK + 1;
RK = next_RK;
lookup.add();
newlookup.add();
end;
if end then do;
newlookup.output(dataset:'work.newmasters');
end;
run;
proc append base=&master data=work.newmasters;
proc delete data=work.newmasters;
run;
%mend;
%RK_HASH_ASSIGN(values,have1)
%RK_HASH_ASSIGN(values,have2)
%RK_HASH_ASSIGN(values,have3)
%RK_HASH_ASSIGN(values,have4)
%RK_HASH_ASSIGN(values,have5)
%RK_HASH_ASSIGN(values,have6)
%* Compare the two assignment strategies, no differences!;
proc sort force data=values(index=(value));
by RK;
run;
proc compare noprint base=keys compare=values out=diffs outnoequal;
by RK;
run;
----- LOG -----
2525 proc compare noprint base=keys compare=values out=diffs
outnoequal <------------- do not output when data is identical ;
;
2526 by RK;
2527 run;
NOTE: There were 215971 observations read from the data set WORK.KEYS.
NOTE: There were 215971 observations read from the data set WORK.VALUES.
NOTE: The data set WORK.DIFFS has 0 observations and 4 variables. <--- all the same ---
NOTE: PROCEDURE COMPARE used (Total process time):
real time 0.25 seconds
cpu time 0.26 seconds
I am using PROC HPBIN to split my data into equally spaced buckets, i.e. each bucket covers an equal proportion of the total range of the variable.
My issue is when I have extremely skewed data with a large range. Almost all of my data points lie in one bucket while there are a couple of observations scattered around the extremes.
I'm wondering if there is a way to force PROC HPBIN to consider the proportion of values in each bin, ensuring that each bin contains at least e.g. 5% of observations, and to group the others?
DATA var1;
DO VAR1 = 1 TO 100;
OUTPUT;
END;
DO VAR1 = 500 TO 505;
OUTPUT;
END;
DO VAR1 = 7000 TO 7015;
OUTPUT;
END;
DO VAR1 = 1000000 TO 1000010;
OUTPUT;
END;
RUN;
/*Use proc hpbin to generate bins of equal width*/
ODS EXCLUDE ALL;
ODS OUTPUT
Mapping = bin_width_results;
PROC HPBIN
DATA=var1
bucket;
input VAR1 / numbin = 15;
RUN;
ODS EXCLUDE NONE;
I'd like to see a way for PROC HPBIN or another method to group together the bins which are empty and ensure at least a 5% proportion per bucket. However, I am not looking to use percentiles in this case (that is another plot on my PDF) because I'd like to see the spread.
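One way to frame the requested constraint: start from equal-width bins, then merge adjacent bins left to right until each surviving bin holds at least 5% of observations. A rough Python sketch of that merge step (an illustration of the idea, not a PROC HPBIN feature):

```python
def merge_small_bins(counts, min_frac=0.05):
    """Merge adjacent bins left-to-right until every surviving
    bin holds at least min_frac of the total observations."""
    total = sum(counts)
    merged = []
    acc = 0
    for c in counts:
        acc += c
        if acc >= min_frac * total:
            merged.append(acc)
            acc = 0
    if acc:                      # leftover tail joins the last bin
        if merged:
            merged[-1] += acc
        else:
            merged.append(acc)
    return merged

# Skewed example: one dominant bin, several empty or tiny ones
print(merge_small_bins([100, 0, 6, 0, 0, 16, 0, 0, 11]))
# [100, 22, 11]
```

Each empty or undersized bin is absorbed into its right neighbor, which mirrors what the answers below achieve with re-binning.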
Have you tried using the WINSOR method (winsorised binning)? From the documentation:
Winsorized binning is similar to bucket binning except that both tails are cut off to obtain a smooth binning result. This technique is often used to remove outliers during the data preparation stage.
You can specify the WINSORRATE to impact how it adjusts these tails.
Quantile option and 20 bins should give you ~5% per bin
PROC HPBIN DATA=var1 quantile;
input VAR1 / numbin = 20;
RUN;
When the values of a bin need to be dynamically rebinned due to overly high proportions in a bin (problem bins), you need to hpbin only those values in the problem bins. A macro can be written to loop around the HPBIN process, zooming in on problem areas.
For example:
DATA have;
DO VAR1 = 1 TO 100;
OUTPUT;
END;
DO VAR1 = 500 TO 505;
OUTPUT;
END;
DO VAR1 = 7000 TO 7015;
OUTPUT;
END;
DO VAR1 = 1000000 TO 1000010;
OUTPUT;
END;
RUN;
%macro bin_zoomer (data=, var=, nbins=, rezoom=0.25, zoomlimit=8, out=);
%local data_view step nextstep outbins zoomers;
proc sql;
create view data_zoom1 as
select 1 as step, &var from &data;
quit;
%let step = 1;
%let data_view = data_zoom&step;
%let outbins = bins_step&step;
%bin:
%if &step > &zoomlimit %then %goto done;
ODS EXCLUDE ALL;
ODS OUTPUT Mapping = &outbins;
PROC HPBIN DATA=&data_view bucket ;
id step;
input &var / numbin = &nbins;
RUN;
ODS EXCLUDE NONE;
proc sql noprint;
select count(*) into :zoomers trimmed
from &outbins
where proportion >= &rezoom;
quit;
%put NOTE: &=zoomers;
%if &zoomers = 0 %then %goto done;
%let step = %eval(&step+1);
proc sql;
create view data_zoom&step as
select &step as step, *
from &data_view data
join &outbins bins
on data.&var between bins.LB and bins.UB
and bins.proportion >= &rezoom
;
quit;
%let outbins = bins_step&step;
%let data_view = data_zoom&step;
%goto bin;
%done:
%put NOTE: done # &=step;
* stack the bins that are non-problem or of final zoom;
* the LB to UB domains from step2+ will discretely cover the bounds
* of the original step1 bins;
data &out;
set
bins_step1-bins_step&step
indsname = source
;
if proportion < &rezoom or source = "bins_step&step";
step = source;
run;
%mend;
options mprint;
%bin_zoomer(data=have, var=var1, nbins=15, out=bins);
I'm using SAS and I'd like to create an indicator variable.
The data I have is like this (DATA I HAVE):
and I want to change this to (DATA I WANT):
I have a fixed number of total time values that I want to use, and the starttime column has duplicate values (in this example, c1 and c2 both started at time 3). Although the example I'm using is small, with 5 names and 12 time values, the actual data is very large (about 40,000 names and 100,000 time values, so the outcome I want is a 100,000 x 40,000 matrix).
Can someone please provide any tips/solution on how to handle this?
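The desired transform is one indicator column per name that switches to 1 once time reaches that name's start time. A brief sketch of the logic (illustrative names, assuming integer times 1..T):

```python
def indicator_matrix(starts, T):
    """starts: dict of name -> starttime.
    Returns one row per time 1..T, one column per name,
    with a 1 once time reaches that name's start time."""
    names = list(starts)
    return [[1 if t >= starts[n] else 0 for n in names]
            for t in range(1, T + 1)]

m = indicator_matrix({"c1": 3, "c2": 3, "c3": 5}, 6)
print(m)
# [[0, 0, 0], [0, 0, 0], [1, 1, 0], [1, 1, 0], [1, 1, 1], [1, 1, 1]]
```

At 40,000 names by 100,000 times this dense matrix is enormous, which is why the answers below flag the scale question.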
40k variables is a lot. It will be interesting to see how well this scales. How do you determine the stop time?
data have;
input starttime name :$32.;
retain one 1;
cards;
1 varx
3 c1
3 c2
5 c3x
10 c4
11 c5
;;;;
run;
proc print;
run;
proc transpose data=have out=have2(drop=_name_ rename=(starttime=time));
by starttime;
id name;
var one;
run;
data time;
if 0 then set have2(drop=time);
array _n[*] _all_;
retain _n 0;
do time=.,1 to 12;
output;
call missing(of _n[*]);
end;
run;
data want0 / view=want0;
merge time have2;
by time;
retain dummy '1';
run;
data want;
length time 8;
update want0(obs=0) want0;
by dummy;
if not missing(time);
output;
drop dummy;
run;
proc print;
run;
This will work. There may be a simpler solution that does it all in one data step. My data step creates staggered results that have to be collapsed, which I do by summing in the sort/means steps.
data have;
input starttime name $;
datalines;
3 c1
3 c2
5 c3
10 c4
11 c5
;
run;
data want(drop=starttime name);
set have;
array cols (*) c1-c5;
do time=1 to 100;
if starttime < time then cols(_N_)=1;
else cols(_N_)=0;
output;
end;
run;
proc sort data=want;
by time;
proc means data=want noprint;
by time;
var _numeric_;
output out=want2(drop=_type_ _freq_) sum=;
run;
I am not recommending you do it this way. You didn't provide enough information to let us know why you want a matrix of that size. You may have processing issues getting it to run.
In the line do time=1 to 100 you can change that to 100000 or whatever length.
I think the code below will work:
%macro answer_macro(data_in, data_out);
/* Deduplication of initial dataset just to assure that every variable has a unique starting time*/
proc sort data=&data_in. out=data_have_nodup; by name starttime; run;
proc sort data=data_have_nodup nodupkey; by name; run;
/*Getting min and max starttime values - here I am assuming that there is only integer values form starttime*/
proc sql noprint;
select min(starttime)
,max(starttime)
into :min_starttime /*not used. Use this (and change the loop on the next dataset) to start the time variable from the value where the first variable starts*/
,:max_starttime
from data_have_nodup
;quit;
/*Getting all pairs of name/starttime*/
proc sql noprint;
select name
,starttime
into :name1 - :name1000000
,:time1 - :time1000000
from data_have_nodup
;quit;
/*Getting total number of variables*/
proc sql noprint;
select count(*) into :nvars
from data_have_nodup
;quit;
/* Creating dataset with possible start values */
/*I'm not sure this step could be done with a single datastep, but I don't have SAS
on my PC to make tests, so I used the method below*/
data &data_out.;
do i = 1 to &max_starttime. + 1;
time = i; output;
end;
drop i;
run;
data &data_out.;
set &data_out.;
%do i = 1 %to &nvars.;
if time >= &&time&i then &&name&i = 1;
else &&name&i = 0;
%end;
run;
%mend answer_macro;
Unfortunately I don't have SAS on my machine right now, so I can't confirm that the code works. But even if it doesn't, you can use the logic in it.
Suppose I have these data read into SAS:
I would like to list each unique name and the number of months it appeared in the data above to give a data set like this:
I have looked into PROC FREQ, but I think I need to do this in a DATA step, because I would like to be able to create other variables within the new data set and otherwise be able to manipulate the new data.
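What is being asked for is a group-by count of distinct months per name. A brief Python sketch of that logic (illustrative, assuming (name, month) pairs):

```python
from collections import defaultdict

def months_per_name(pairs):
    """pairs: iterable of (name, month) tuples.
    Count the distinct months each name appears in."""
    seen = defaultdict(set)
    for name, month in pairs:
        seen[name].add(month)   # a set deduplicates repeat months
    return {name: len(months) for name, months in seen.items()}

print(months_per_name([("ann", 1), ("ann", 1), ("ann", 2), ("bob", 3)]))
# {'ann': 2, 'bob': 1}
```

The answers below show the same count-distinct in a DATA step, PROC FREQ, and PROC SQL.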
Data step:
proc sort data=have;
by name month;
run;
data want;
set have;
by name month;
m=month(lag(month));
if first.name then months=1;
else if month(month)^=m then months+1;
if last.name then output;
keep name months;
run;
Pro Sql:
proc sql;
select distinct name,count(distinct(month(month))) as months from have group by name;
quit;
While it's possible to do this in a data step, you wouldn't; you'd use proc freq or similar. Almost every PROC can give you an output dataset (rather than just print to the screen).
PROC FREQ data=sashelp.class;
tables age/out=age_counts noprint;
run;
Then you can use this output dataset (age_counts) as a SET input to another data step to perform your further calculations.
You can also use proc sql to group the variable and count how many are in that group. It might be faster than proc freq depending on how large your data is.
proc sql noprint;
create table counts as
select AGE, count(*) as AGE_CT from sashelp.class
group by AGE;
quit;
If you want to do it in a data step, you can use a Hash Object to hold the counted values:
data have;
do i=1 to 100;
do V = 'a', 'b', 'c';
output;
end;
end;
run;
data _null_;
set have end=last;
if _n_ = 1 then do;
declare hash cnt();
rc = cnt.definekey('v');
rc = cnt.definedata('v','v_cnt');
rc = cnt.definedone();
call missing(v_cnt);
end;
rc = cnt.find();
if rc then do;
v_cnt = 1;
cnt.add();
end;
else do;
v_cnt = v_cnt + 1;
cnt.replace();
end;
if last then
rc = cnt.output(dataset: "want");
run;
This is very efficient as it is a single loop over the data. The WANT data set contains the key and count values.
I have 6 identical SAS data sets. They differ only in the values of the observations.
How can I create one output data set which takes the maximum value across all 6 data sets for each cell?
The UPDATE statement seems a good candidate, but it cannot apply a condition.
data1
v1 v2 v3
1 1 1
1 2 3
data2
v1 v2 v3
1 2 3
1 1 1
Result
v1 v2 v3
1 2 3
1 2 3
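The operation wanted is an element-wise (cell-wise) maximum across like-structured tables. A minimal Python sketch, using the sample data1/data2 values above:

```python
def cellwise_max(*tables):
    """Element-wise max across equally shaped tables (lists of rows)."""
    return [[max(cells) for cells in zip(*rows)]
            for rows in zip(*tables)]

data1 = [[1, 1, 1], [1, 2, 3]]
data2 = [[1, 2, 3], [1, 1, 1]]
print(cellwise_max(data1, data2))  # [[1, 2, 3], [1, 2, 3]]
```

This reproduces the Result table in the question; the SAS answers below do the same per-cell comparison with interleaved SET statements or a MERGE.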
If need be the following could be automated by "PUT" statements or variable arrays.
***ASSUMES DATA SETS ARE SORTED BY ID;
Data test;
do until(last.id);
set a b c;
by id;
if v1 > updv1 then updv1 = v1;
if v2 > updv2 then updv2 = v2;
if v3 > updv3 then updv3 = v3;
end;
drop v1-v3;
rename updv1-updv3 = v1-v3;
run;
To provide a more complete solution to Rico's question (assuming 6 datasets, e.g. d1-d6), one could do it this way:
Data test;
array v(*) v1-v3;
array updv(*) updv1-updv3;
do until(last.id);
set d1-d6;
by id;
do i = 1 to dim(v);
if v(i) > updv(i) then updv(i) = v(i);
end;
end;
drop v1-v3;
rename updv1-updv3 = v1-v3;
run;
proc print;
var id v1-v3;
run;
See below. For a SAS beginner this might be too complex; I hope the comments explain it a bit.
/* macro rename_cols_opt to generate cols_opt&n variables
- cols_opt&n contains generated code for dataset RENAME option for a given (&n) dataset
*/
%macro rename_cols_opt(n);
%global cols_opt&n max&n;
proc sql noprint;
select catt(name, '=', name, "&n") into: cols_opt&n separated by ' '
from dictionary.columns
where libname='WORK' and memname='DATA1'
and upcase(name) ne 'MY_ID_COLUMN'
;
quit;
%mend;
/* prepare macro variables = pre-generate the code */
%rename_cols_opt(1)
%rename_cols_opt(2)
%rename_cols_opt(3)
%rename_cols_opt(4)
%rename_cols_opt(5)
%rename_cols_opt(6)
/* create macro variable keep_list containing names of output variables to keep (based on DATA1 structure, the code expects those variables in other tables as well */
proc sql noprint;
select trim(name) into: keep_list separated by ' '
from dictionary.columns
where libname='WORK' and memname='DATA1'
;
quit;
%put &keep_list;
/* macro variable maxcode contains generated code for calculating all MAX values */
proc sql noprint;
select cat(trim(name), ' = max(of ', trim(name), ":)") into: maxcode separated by '; '
from dictionary.columns
where libname='WORK' and memname='DATA1'
and upcase(name) ne 'MY_ID_COLUMN'
;
quit;
%put "&maxcode";
data result1 / view =result1;
merge
data1 (in=a rename=(&cols_opt1))
data2 (in=b rename=(&cols_opt2))
data3 (in=b rename=(&cols_opt3))
data4 (in=b rename=(&cols_opt4))
data5 (in=b rename=(&cols_opt5))
data6 (in=b rename=(&cols_opt6))
;
by MY_ID_COLUMN;
&maxcode;
keep &keep_list;
run;
/* created a datastep view, now "describing" it to see the generated code */
data view=result1;
describe;
run;
Here's another attempt that is scalable against any number of datasets and variables. I've added in an ID variable this time as well. Like the answer from #vasja, there are some advanced techniques used here. The 2 solutions are in fact very similar, I've used 'call execute' instead of a macro to create the view. My solution also requires the dataset names to be stored in a dataset.
/* create dataset of required dataset names */
data datasets;
input ds_name $;
cards;
data1
data2
;
run;
/* dummy data */
data data1;
input id v1 v2 v3;
cards;
10 1 1 1
20 1 2 3
;
run;
data data2;
input id v1 v2 v3;
cards;
10 1 2 3
20 1 1 1
;
run;
/* create dataset, macro list and count of variables names */
proc sql noprint;
create table variables as
select name as v_name from dictionary.columns
where libname='WORK' and upcase(memname)='DATA1' and upcase(name) ne 'ID';
select name, count(*) into :keepvar separated by ' ',
:numvar
from dictionary.columns
where libname='WORK' and upcase(memname)='DATA1' and upcase(name) ne 'ID';
quit;
/* create view that joins all datasets, renames variables and calculates maximum value per id */
data _null_;
set datasets end=last;
if _n_=1 then call execute('data data_all / view=data_all; merge');
call execute (trim(ds_name)|| '(rename=(');
do i=1 to &numvar.;
set variables point=i;
call execute(trim(v_name)||'='||catx('_',v_name,_n_));
end;
call execute('))');
if last then do;
call execute('; by id;');
do i=1 to &numvar.;
set variables point=i;
call execute(trim(v_name)||'='||'max(of '||trim(v_name)||':);');
end;
call execute('run;');
end;
run;
/* create dataset of maximum values per id per variable */
data result (keep=id &keepvar.);
set data_all;
run;