Stata programming and putexcel-loop

I'm trying to make an automated Excel file that documents the number of observations dropped during my sample construction, using putexcel and a simple program.
I'm pretty new to programming, but the program below does the job. It stores 4 global macros for each time I drop some observations: 1) Number of observations dropped, 2) Share of observations dropped, 3) Number of observations left in the data set and 4) a string that describes why I drop the observations.
To export the results to Excel I use the putexcel command, which is working fine. The problem is that I need to drop observations many times in the do-file, and I wondered if I could somehow incorporate the putexcel part into the program to make it loop over cells.
In other words, what I want is for the program to automatically save the description ($why) in A1 the first time, in A8 the second time, and so on.
I have provided an example of my code below:
** Generate some data:
clear
input id year wage
1 1 200
1 2 250
1 3 300
2 1 152
2 2 150
2 3 140
3 1 300
3 2 320
3 3 360
end
** Define program
cap program drop dropdata
program define dropdata
count
global N = r(N)
count if `1'
global drop = r(N)
global share = ($drop/$N)
drop if `1'
count
global left = r(N)
global why = "`2'"
end
** Drop if first year
dropdata year==1 "Drop if first year"
** Export to excel
putexcel set "documentation.xlsx", modify
putexcel A1 = ("$why")
putexcel A3 = ("Obs. dropped") A4 = ("Share dropped") A5 = ("Observations left")
putexcel B3 = ($drop) B4 = ($share) B5=($left)
** Now drop if wage is < 300
dropdata wage<300 "Drop if wage<300"
putexcel A8 = ("$why")
putexcel A10 = ("Obs. dropped") A11 = ("Share dropped") A12 = ("Observations left")
putexcel B10 = ($drop) B11 = ($share) B12 = ($left)

The issue with this is that Stata does not know which cells are already filled and which are not, so I think it would probably be easiest to include another argument in your program that says the number of times you have already run it.
Here is an example:
** Generate some data:
clear
input id year wage
1 1 200
1 2 250
1 3 300
2 1 152
2 2 150
2 3 140
3 1 300
3 2 320
3 3 360
end
** Define program
cap program drop dropdata
program define dropdata
count
local N = r(N)
count if `1'
local drop = r(N)
local share = (`drop'/`N')
drop if `1'
count
local left = r(N)
local why = "`2'"
local row1 = `3'*7 + 1
local row3 = `row1' + 2
local row4 = `row1' + 3
local row5 = `row1' + 4
putexcel set "documentation.xlsx", modify
putexcel A`row1' = ("`why'")
putexcel A`row3' = ("Obs. dropped") A`row4' = ("Share dropped") A`row5' = ("Observations left")
putexcel B`row3' = (`drop') B`row4' = (`share') B`row5' = (`left')
end
** Drop if first year
dropdata year==1 "Drop if first year" 0
** Now drop if wage is < 300
dropdata wage<300 "Drop if wage<300" 1
Note that the change is to include the number of calls already done as the third argument in dropdata; the putexcel commands then write to rows based on that number.
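If you would rather not keep track of the call count yourself, a small variation (just a sketch of the same idea, not part of the original answer) is to let the program maintain its own counter in a global macro:
cap program drop dropdata2
program define dropdata2
* $dropdata_calls counts completed calls; empty means the program has not run yet
if "$dropdata_calls" == "" global dropdata_calls = 0
count
local N = r(N)
count if `1'
local drop = r(N)
local share = (`drop'/`N')
drop if `1'
count
local left = r(N)
local row1 = $dropdata_calls*7 + 1
local row3 = `row1' + 2
local row4 = `row1' + 3
local row5 = `row1' + 4
putexcel set "documentation.xlsx", modify
putexcel A`row1' = ("`2'")
putexcel A`row3' = ("Obs. dropped") A`row4' = ("Share dropped") A`row5' = ("Observations left")
putexcel B`row3' = (`drop') B`row4' = (`share') B`row5' = (`left')
global dropdata_calls = $dropdata_calls + 1
end
** No third argument needed now
dropdata2 year==1 "Drop if first year"
dropdata2 wage<300 "Drop if wage<300"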
As an aside:
I changed all of your globals to locals because they're safer. Also, in general, if you want to return macros from a program that you write, you tell Stata the program is, for example, rclass and then use statements like below:
program define return_2, rclass
return local asdf 2
end
and then you can access the local asdf (which is equal to 2) as r(asdf). You can check the values of all locals returned by the program with the command return list.
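For example (a small illustration, not part of the original answer), after defining the program above you could run:
return_2
* list everything returned by the program
return list
* use the returned value directly
display "`r(asdf)'"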

Related

Generate sum of all possible combinations of id

I have a dataset with the structure that looks something like this:
Group ID Value
1 A 10
1 B 15
1 C 20
2 D 10
2 E 25
Within each Group, I want to obtain the sum of all possible combinations of two or more IDs. For instance, within group 1, I can have the following combinations: AB, AC, BC, ABC. So, in total I have four possible combinations for group 1, of which I'd like to get the sum of the variable value.
I am using the formula for combinations of N elements in groups of size R to identify how many observations I need to add to the dataset to have enough observations.
For Group 1, the number of observations I need are:
3!/((3-2)!*2!)*2 = 6 for the two-ID combinations
3!/((3-3)!*3!)*3 = 3 for the three-ID combination.
So a total of 9 observations. Since I already have three, I can get there with the command expand 3 if Group==1 (which replaces each of the three existing observations with three copies, for 9 in total). For Group 1 I would get something like
Group ID Value
1 A 10
1 B 15
1 C 20
1 A 10
1 B 15
1 C 20
1 A 10
1 B 15
1 C 20
Now, I am stuck here on how to proceed to tell Stata to identify the combinations and create the summation. Ideally, I want to create two new variables, to identify the tuples and get the summation, so something that looks like:
Group ID Value Tuple Sum
1 A 10 AB 25
1 B 15 AB 25
1 A 10 AC 30
1 C 20 AC 30
1 B 15 BC 35
1 C 20 BC 35
1 A 10 ABC 45
1 B 15 ABC 45
1 C 20 ABC 45
In this way, I could then just drop the duplicates in terms of Group and Tuple. Once I have the Tuple variable, getting the sum is straightforward, but I can't get my head around how to create the Tuple variable itself.
Any advice on how to do this?
I tried doing this with nested loops and the tuples command.
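(A side note, not from the original answer: tuples and gtools, which supplies gegen, are community-contributed commands; if they are not installed, something along these lines should fetch them, assuming they are available from SSC.)
capture which tuples
if _rc ssc install tuples
capture which gegen
if _rc ssc install gtools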
First I create and save a tempfile to store results:
clear
tempfile group_results
save `group_results', replace emptyok
Then I input and save data, along with a local for the number of groups:
clear
input Group str1 ID Value
1 A 10
1 B 15
1 C 20
2 D 10
2 E 25
2 F 13 // added to test
2 G 2 // added to test
end
sum Group
local num_groups = r(max)
tempfile base
save `base', replace
Here's the core of the code. The outer loop here iterates over Groups. Then it makes a list of the IDs in that group, and uses the tuples command to make a list of the unique combinations of those IDs, with a minimum size of 2. The k loop iterates through the number of tuples and the m loop makes an indicator for tuple membership.
forvalues i = 1/`num_groups' {
display "Starting Group `i'"
use `base' if Group==`i', clear
* Make list of IDs to get unique combos of
forvalues j = 1/`=_N' {
local tuple_list`i' = "`tuple_list`i'' " + ID[`j']
}
* Get all unique combos in list using tuples command
tuples `tuple_list`i'', display min(2)
forvalues k = 1/`ntuples' {
display "Tuple `k': `tuple`k''"
local length = wordcount("`tuple`k''")
gen intuple=0
gen tuple`k'="`tuple`k''"
forvalues m = 1/`length' {
replace intuple=1 if ID==word("`tuple`k''",`m')
}
* Calculate sum of values in that tuple
gegen group_sum`k' = sum(Value) if intuple==1
drop intuple
list
}
* Reshape into desired format
reshape long tuple group_sum, i(Group ID Value) j(tuple_num)
drop if missing(group_sum)
sort tuple_num
list
append using `group_results'
save `group_results', replace
}
* Full results
use `group_results', clear
sort Group tuple_num
list
I hope this helps. The list commands will give you a busy results window but it shows what's all happening. Here's the output at the end of the i loop for Group 1:
+--------------------------------------------------+
| Group ID Value tuple_~m tuple group_~m |
|--------------------------------------------------|
1. | 1 C 20 1 B C 35 |
2. | 1 B 15 1 B C 35 |
3. | 1 A 10 2 A C 30 |
4. | 1 C 20 2 A C 30 |
5. | 1 A 10 3 A B 25 |
|--------------------------------------------------|
6. | 1 B 15 3 A B 25 |
7. | 1 C 20 4 A B C 45 |
8. | 1 A 10 4 A B C 45 |
9. | 1 B 15 4 A B C 45 |
+--------------------------------------------------+
This could be inefficient if your data is actually much larger!

Using a counter to find multiple occurrences on the same day from ID & date

I am trying to find when a person has multiple occurrences on the same day and when they do not.
My data looks something like this
data have;
input id date ;
datalines ;
1 nov10
1 nov15
2 nov11
2 nov11
2 nov14
3 nov12
4 nov17
4 nov19
4 nov19
etc...;
I want to create a new variable to show when an occurrence happens on the same day or not. I want my end result to look like
data want;
input id date occ;
1 nov10 1
1 nov15 1
2 nov11 2
2 nov11 2
2 nov14 1
3 nov12 1
4 nov17 1
4 nov19 2
4 nov19 2
etc...;
This is what I tried, but it is not working for each date; instead it is only doing it if the date repeats on the first. Here is my code
data want ;
set have ;
by id date;
if first.date then occ = 1;
else occ = 2;
run;
Your IF/THEN logic is just a complicated way to do
occ = 1 + not first.date;
Which is just a test of whether or not it is the first observation for this date.
Looks like you want to instead test whether or not there are multiple observations per date.
occ = 1 + not (first.date and last.date) ;
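Putting that together, a minimal complete step might look like this (a sketch assuming the input dataset is named have and still needs sorting):
proc sort data=have;
by id date;
run;
data want;
set have;
by id date;
* occ is 2 when the id/date group has more than one observation, 1 otherwise;
occ = 1 + not (first.date and last.date);
run;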

Stata Generate New Variable List By Multiplying Var Lists

I have a balanced panel with a set of dummies for 'countries' and observations for several years. I want to generate a new set of variables that assigns a number in the sequence 1:n for each year observation of country i, and 0 for any other observation that is not from country i.
As an example, suppose I have two countries and two years. Below on the left is an example of my database. I want a new set of variables as shown on the right:
*Example of Database Example of Desired Output
*country1 country2 year output1 output2
* 1 0 1 1 0
* 1 0 2 2 0
* 0 1 1 0 1
* 0 1 2 0 2
How can I get the desired output? Intuitively I need to multiply 'country*' by 'year' to get 'output*', but I have been unable to make it work in Stata.
Below is what I tried.
gen output = year * country
* country is ambiguous
gen output = year * country*
* invalid syntax
foreach var in country*{
gen output_`var' = year * `var'
}
* invalid name
Your last attempt almost solved it. The issue is that you need to tell Stata that you are passing a varlist in order to use the wildcards * and ?. To use a wildcard in foreach, do this:
* Example generated by -dataex-. For more info, type help dataex
clear
input byte(country1 country2 year)
1 0 1
1 0 2
0 1 1
0 1 2
end
foreach var of varlist country* {
gen `var'_year = year * `var'
}
The full names country1, country2, etc. are stored in `var', so I took the liberty of changing the names of the result variables to country1_year, country2_year, etc. rather than output_country1, output_country2, etc.
Note that this solution will only work if the country* variables only take the values 1 and 0, no observation has a missing value in any country* variable, and no observation has the value 1 in more than one country* variable.
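If you specifically want the output1, output2, ... names from your example, a small variation of the same loop (a sketch using a manual counter) is:
local i = 0
foreach var of varlist country* {
local ++i
gen output`i' = year * `var'
}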

How to write a foreach loop statement in SAS?

I'm working in SAS as a novice. I have two datasets:
Dataset1
Unique ID   ColumnA
1           15
1           39
2           20
3           10

Dataset2
Unique ID   ColumnB
1           40
2           55
2           10
For each UniqueID, I want to subtract each value of ColumnA from all values of ColumnB. And I would like to create a NewColumn that is 1 any time 1 < ColumnB - ColumnA < 30. For the first row of Dataset1, where UniqueID = 1, I would want SAS to go through all the rows in Dataset2 that also have UniqueID = 1 and determine whether there is any row in Dataset2 where the difference between ColumnB and ColumnA is greater than 1 and less than 30. For the first row of Dataset1 the NewColumn should be assigned a value of 1 because 40 - 15 = 25. For the second row of Dataset1 the NewColumn should be assigned a value of 0 because 40 - 39 = 1 (which is not greater than 1). For the third row of Dataset1, I again want SAS to go through every row of ColumnB in Dataset2 that has the same UniqueID as in Dataset1, so 55 - 20 = 35 (which is greater than 30), but NewColumn would still be assigned a value of 1 because (moving to row 3 of Dataset2, which has UniqueID = 2) 20 - 10 = 10, which satisfies the condition.
So I want my output to be:
Unique ID   ColumnA   NewColumn
1           15        1
1           39        0
2           20        1
I have tried concatenating Dataset1 and Dataset2 into a FullDataset. Then I tried using a DO loop statement, but I can't figure out how to do the loop for each value of UniqueID. I tried using BY, but that of course produces an error because in a DO statement BY is only used for increments.
DATA FullDataset;
set Dataset1 Dataset2; /*Concatenate datasets*/
do i=ColumnB-ColumnA by UniqueID;
if 1<ColumnB-ColumnA<30 then NewColumn=1;
output;
end;
RUN;
I know I'm probably way off but any help would be appreciated. Thank you!
So, the way that answers your question most directly is the keyed set. This isn't necessarily how I'd do this, but it is fairly simple to understand (as opposed to a hash table, which is what I'd use, or a SQL join, probably what most people would use). This does exactly what you say: grabs a row of A, says for each matching row of B check a condition. It requires having an index on the datasets (well, at least on the B dataset).
data colA(index=(id));
input ID ColumnA;
datalines;
1 15
1 39
2 20
3 10
;;;;
data colB(index=(id));
input ID ColumnB;
datalines;
1 40
2 55
2 30
;;;;
run;
data want;
*base: the colA dataset - you want to iterate through that once per row;
set colA;
*now, loop while the check variable shows 0 (match found);
do while (_iorc_ = 0);
*bring in other dataset using ID as key;
set colB key=ID ;
* check to see if it matches your requirement, and also only check when _IORC_ is 0;
if _IORC_ eq 0 and 1 lt ColumnB-ColumnA lt 30 then result=1;
* This is just to show you what is going on, can remove;
put _all_;
end;
*reset things for next pass;
_ERROR_=0;
_IORC_=0;
run;
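For comparison, here is a minimal sketch of the SQL-join route mentioned at the top of this answer (not from the original post; want_sql is a hypothetical table name, and unmatched IDs simply get NewColumn = 0):
proc sql;
create table want_sql as
select a.ID, a.ColumnA,
       /* flag 1 if any matching ColumnB gives a difference strictly between 1 and 30 */
       max(case when b.ColumnB - a.ColumnA > 1 and b.ColumnB - a.ColumnA < 30
                then 1 else 0 end) as NewColumn
from colA as a left join colB as b
on a.ID = b.ID
group by a.ID, a.ColumnA;
quit;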

Automatically replace outlying values with missing values

Suppose the data set have contains various outliers which have been identified in an outliers data set. These outliers need to be replaced with missing values, as demonstrated below.
Have
Obs group replicate height weight bp cholesterol
1 1 A 0.406 0.887 0.262 0.683
2 1 B 0.656 0.700 0.083 0.836
3 1 C 0.645 0.711 0.349 0.383
4 1 D 0.115 0.266 666.000 0.015
5 2 A 0.607 0.247 0.644 0.915
6 2 B 0.172 333.000 555.000 0.924
7 2 C 0.680 0.417 0.269 0.499
8 2 D 0.787 0.260 0.610 0.142
9 3 A 0.406 0.099 0.263 111.000
10 3 B 0.981 444.000 0.971 0.894
11 3 C 0.436 0.502 0.563 0.580
12 3 D 0.814 0.959 0.829 0.245
13 4 A 0.488 0.273 0.463 0.784
14 4 B 0.141 0.117 0.674 0.103
15 4 C 0.152 0.935 0.250 0.800
16 4 D 222.000 0.247 0.778 0.941
Want
Obs group replicate height weight bp cholesterol
1 1 A 0.4056 0.8870 0.2615 0.6827
2 1 B 0.6556 0.6995 0.0829 0.8356
3 1 C 0.6445 0.7110 0.3492 0.3826
4 1 D 0.1146 0.2655 . 0.0152
5 2 A 0.6072 0.2474 0.6444 0.9154
6 2 B 0.1720 . . 0.9241
7 2 C 0.6800 0.4166 0.2686 0.4992
8 2 D 0.7874 0.2595 0.6099 0.1418
9 3 A 0.4057 0.0988 0.2632 .
10 3 B 0.9805 . 0.9712 0.8937
11 3 C 0.4358 0.5023 0.5626 0.5799
12 3 D 0.8138 0.9588 0.8293 0.2448
13 4 A 0.4881 0.2731 0.4633 0.7839
14 4 B 0.1413 0.1166 0.6743 0.1032
15 4 C 0.1522 0.9351 0.2504 0.8003
16 4 D . 0.2465 0.7782 0.9412
The "get it done" approach is to manually enter each variable/value combination in a conditional which replaces with missing when true.
data have;
input group replicate $ height weight bp cholesterol;
datalines;
1 A 0.4056 0.8870 0.2615 0.6827
1 B 0.6556 0.6995 0.0829 0.8356
1 C 0.6445 0.7110 0.3492 0.3826
1 D 0.1146 0.2655 666 0.0152
2 A 0.6072 0.2474 0.6444 0.9154
2 B 0.1720 333 555 0.9241
2 C 0.6800 0.4166 0.2686 0.4992
2 D 0.7874 0.2595 0.6099 0.1418
3 A 0.4057 0.0988 0.2632 111
3 B 0.9805 444 0.9712 0.8937
3 C 0.4358 0.5023 0.5626 0.5799
3 D 0.8138 0.9588 0.8293 0.2448
4 A 0.4881 0.2731 0.4633 0.7839
4 B 0.1413 0.1166 0.6743 0.1032
4 C 0.1522 0.9351 0.2504 0.8003
4 D 222 0.2465 0.7782 0.9412
;
run;
data outliers;
input parameter $ 11. group replicate $ measurement;
datalines;
cholesterol 3 A 111
height 4 D 222
weight 2 B 333
weight 3 B 444
bp 2 B 555
bp 1 D 666
;
run;
EDIT: Updated outliers so that parameter avoids truncation and changed measurement to be numeric type so as to match the corresponding height, weight, bp, cholesterol. This shouldn't change the responses.
data want;
set have;
if group = 3 and replicate = 'A' and cholesterol = 111 then cholesterol = .;
if group = 4 and replicate = 'D' and height = 222 then height = .;
if group = 2 and replicate = 'B' and weight = 333 then weight = .;
if group = 3 and replicate = 'B' and weight = 444 then weight = .;
if group = 2 and replicate = 'B' and bp = 555 then bp = .;
if group = 1 and replicate = 'D' and bp = 666 then bp = .;
run;
This, however, doesn't utilize the outliers data set. How can the replacement process be made automatic?
I immediately think of the IN= operator, but that won't work. It's not the entire row which needs to be matched. Perhaps an SQL key matching approach would work? But to match the key, don't I need to use a where statement? I'd then effectively be writing everything out manually again. I could probably create macro variables which contain the various if or where statements, but that seems excessive.
I don't think generating statements is excessive in this case. The complexity arises here because your outlier dataset cannot be merged easily since the parameter values represent variable names in the have dataset. If it is possible to reorient the outliers dataset so you have a 1 to 1 merge, this logic would be simpler.
Let's assume you cannot. There are a few ways to use a variable in a dataset that corresponds to a variable in another.
You could use an array like array params{*} height -- cholesterol; and then use the vname function as you loop through the array to compare to the value in the parameter variable, but this gets complicated in your case because you have a one to many merge, so you would have to retain the replacements and only output the last record for each by group... so it gets complicated.
You could transpose the outliers data using proc transpose, but that will get lengthy because you will need a transpose for each parameter, and then you'd need to merge all the transposed datasets back to the have dataset. My main issue with this method is that code with a bunch of transposes like that gets unwieldy.
You could create the macro variable logic you are thinking might be excessive. But compared to the other ways of getting the values of the parameter variable to match up with the variable names in the have dataset, I don't think something like this is excessive:
data _null_;
set outliers;
call symput("outlierstatement"||_n_,"if group = "||group||" and replicate = '"||replicate||"' and "||parameter||" = "||measurement||" then "|| parameter ||" = .;");
call symput("outliercount",_n_);
run;
%macro makewant();
data want;
set have;
%do i = 1 %to &outliercount;
&&outlierstatement&i;
%end;
run;
%mend;
%makewant()
Lorem:
Transposition is the key to a fully automatic programmatic approach. The transposition that will occur is of the filter data, not the original data. The transposed filter data will have fewer rows than the original. As John indicated, transposition of the want data can create a very tall table and has to be transposed back after applying the filters.
As to the filter data, the presence of a filter row for a specific group, replicate and parameter should be enough to mark a cell for filtering. This is on the presumption that you have a system for automatic outlier detection and the filter values will always be in concordance with the original values.
So, what has to be done to automate the filter application process without code-generating a wall of test and assign statements?
Transpose filter data into same form as want data, call it Filter^
Merge Want and Filter^ by record key (which is the by group of Group and Replicate)
Array process the data elements, looking for filtering conditions.
For your consideration, try the following SAS code. There is an erroneous filter record added to the mix.
data have;
input group replicate $ height weight bp cholesterol;
datalines;
1 A 0.4056 0.8870 0.2615 0.6827
1 B 0.6556 0.6995 0.0829 0.8356
1 C 0.6445 0.7110 0.3492 0.3826
1 D 0.1146 0.2655 666 0.0152
2 A 0.6072 0.2474 0.6444 0.9154
2 B 0.1720 333 555 0.9241
2 C 0.6800 0.4166 0.2686 0.4992
2 D 0.7874 0.2595 0.6099 0.1418
3 A 0.4057 0.0988 0.2632 111
3 B 0.9805 444 0.9712 0.8937
3 C 0.4358 0.5023 0.5626 0.5799
3 D 0.8138 0.9588 0.8293 0.2448
4 A 0.4881 0.2731 0.4633 0.7839
4 B 0.1413 0.1166 0.6743 0.1032
4 C 0.1522 0.9351 0.2504 0.8003
4 D 222 0.2465 0.7782 0.9412
5 E 222 0.2465 0.7782 0.9412 /* test record for filter value misalignment test */
;
run;
data outliers;
length parameter $32; %* <--- widened parameter so it can transposed into column via id;
input parameter $ group replicate $ measurement ; %* <--- changed measurement to numeric variable;
datalines;
cholesterol 3 A 111
height 4 D 222
height 5 E 223 /* test record for filter value misalignment test */
weight 2 B 333
weight 3 B 444
bp 2 B 555
bp 1 D 666
;
run;
data want;
set have;
if group = 3 and replicate = 'A' and cholesterol = 111 then cholesterol = .;
if group = 4 and replicate = 'D' and height = 222 then height = .;
if group = 2 and replicate = 'B' and weight = 333 then weight = .;
if group = 3 and replicate = 'B' and weight = 444 then weight = .;
if group = 2 and replicate = 'B' and bp = 555 then bp = .;
if group = 1 and replicate = 'D' and bp = 666 then bp = .;
run;
/* Create a view with 1st row having all the filtered parameters
* This is necessary so that the first transposed filter row
* will have the parameters as columns in alphabetic order;
*/
proc sql noprint;
create view outliers_transpose_ready as
select distinct parameter from outliers
union
select * from outliers
order by group, replicate, parameter
;
/* Generate a alphabetic ordered list of parameters for use
* as a variable (aka column) list in the filter application step */
select distinct parameter
into :parameters separated by ' '
from outliers
order by parameter
;
quit;
%put NOTE: &=parameters;
/* transpose the filter data
* The ID statement pivots row data into column names.
* The prefix=_filter_ ensures the new column names
* will not collide with the original data, and can be
* referenced with the shortcut _filter_: in an array statement.
*/
proc transpose data=outliers_transpose_ready out=outliers_apply_ready prefix=_filter_;
by group replicate notsorted;
id parameter;
var measurement;
run;
/* Robust production code should contain a bin for
* data that does not conform to the filter application conditions
*/
data
want2(label="Outlier filtering applied" drop=_i_ _filter_:)
want2_warnings(label="Outlier filtering: misaligned values")
;
merge have outliers_apply_ready(keep=group replicate _filter_:);
by group replicate;
/* The arrays are for like named columns
* due to the alphabetic ordering enforced in data and codegen preparation
*/
array value_filter_check _filter_:;
array value &parameters;
if group ne .;
do _i_ = 1 to dim(value);
if value(_i_) EQ value_filter_check(_i_) then
value(_i_) = .;
else
if not missing(value_filter_check(_i_)) AND
value(_i_) NE value_filter_check(_i_)
then do;
put 'WARNING: Filtering expected but values do not match. ' group= replicate= value(_i_)= value_filter_check(_i_)=;
output want2_warnings;
end;
end;
output want2;
run;
Confirm your want and automated want2 agree.
proc compare noprint data=want compare=want2 outnoequal out=diffs;
by group replicate;
run;
Enjoy your SAS
You could use a hash table. Load a hash table with the outlier dataset, with parameter-group-replicate defined as the key. Then read in the data, and as you read each record, check each of the variables to see if that combination of parameter-group-replicate can be found in the hash table. I think below works (I'm no hash expert):
data want;
if 0 then set outliers (keep=parameter group replicate);
if _N_ = 1 then
do;
declare hash h(dataset:'outliers') ;
h.defineKey('parameter', 'group', 'replicate') ;
h.defineDone() ;
end;
set have ;
array vars {*} height weight bp cholesterol ;
do i=1 to dim(vars);
parameter=vname(vars{i});
if h.check()=0 then call missing(vars{i});
end;
drop i parameter;
run;
I like #John's suggestion:
You could use an array like array params{*} height -- cholesterol; and
then use the vname function as you loop through the array to compare
to the value in the parameter variable, but this gets complicated in
your case because you have a one to many merge, so you would have to
retain the replacements and only output the last record for each by
group... so it gets complicated.
Generally in a one to many merge I would avoid recoding variables from the dataset that is unique, because variables are retained within BY groups. But in this case, it works out well.
proc sort data=outliers;
by group replicate;
run;
data want (keep=group replicate height weight bp cholesterol);
merge have (in=a)
outliers (keep=group replicate parameter in=b)
;
by group replicate;
array vars {*} height weight bp cholesterol ;
do i=1 to dim(vars);
if vname(vars{i})=parameter then call missing(vars{i});
end;
if last.replicate;
run;
Thank you #John for providing a proof of concept. My implementation is a little different and I think worth making a separate entry for posterity. I went with a macro variable approach because I feel it is the most intuitive, being a simple text replacement. However, since a macro variable can contain only 65534 characters, it is conceivable that there could be sufficient outliers to exceed this limit. In such a case, any of the other solutions would make fine alternatives. Note that it is important that the put statement use something like best32. Too short a width will truncate the value.
If you desire to have a dataset containing the if statements (perhaps for verification), simply remove the into : statement and place a create table statements as line at the beginning of the PROC SQL step.
data have;
input group replicate $ height weight bp cholesterol;
datalines;
1 A 0.4056 0.8870 0.2615 0.6827
1 B 0.6556 0.6995 0.0829 0.8356
1 C 0.6445 0.7110 0.3492 0.3826
1 D 0.1146 0.2655 666 0.0152
2 A 0.6072 0.2474 0.6444 0.9154
2 B 0.1720 333 555 0.9241
2 C 0.6800 0.4166 0.2686 0.4992
2 D 0.7874 0.2595 0.6099 0.1418
3 A 0.4057 0.0988 0.2632 111
3 B 0.9805 444 0.9712 0.8937
3 C 0.4358 0.5023 0.5626 0.5799
3 D 0.8138 0.9588 0.8293 0.2448
4 A 0.4881 0.2731 0.4633 0.7839
4 B 0.1413 0.1166 0.6743 0.1032
4 C 0.1522 0.9351 0.2504 0.8003
4 D 222 0.2465 0.7782 0.9412
;
run;
data outliers;
input parameter $ 11. group replicate $ measurement;
datalines;
cholesterol 3 A 111
height 4 D 222
weight 2 B 333
weight 3 B 444
bp 2 B 555
bp 1 D 666
;
run;
proc sql noprint;
select
cat('if group = '
, strip(put(group, best32.))
, " and replicate = '"
, strip(replicate)
, "' and "
, strip(parameter)
, ' = '
, strip(put(measurement, best32.))
, ' then '
, strip(parameter)
, ' = . ;')
into : listIfs separated by ' '
from outliers
;
quit;
%put %quote(&listIfs);
data want;
set have;
&listIfs;
run;