Dealing with extreme outliers in SAS with If-Then-Else statements

I have some extreme outliers throwing my regression model off, so I removed them using If-Then-Else statements. However, SAS eliminated those data points completely and then found new outliers among the remaining ones. Is there a way to remove the outliers from the analysis without it throwing more into the mix?
I calculated Q3 + 1.5 * IQR and used that value like so:
data lungcancer;
   input trt surv age sex @@;
   /* create a new variable diff */
   diff = surv - 365;
   /* create a new categorical variable resp */
   if diff > 0 then resp = 1;
   if diff <= 0 then resp = 0;
   /* create a new categorical variable sev */
   if 1621 <= surv < 2276 then sev = 0;
   else if 456 <= surv <= 1620 then sev = 1;
   else if 181 <= surv <= 455 then sev = 2;
   else if 1 <= surv <= 180 then sev = 3;
   else if surv > 2276 then delete; /* remove outliers */

So, you removed some data points that were on the edge of your data, then got a new set of data, recalculated the IQR, and ... are surprised that there are new "outliers"?
This isn't SAS doing anything in particular; it's doing what it's asked: flagging points beyond Q3 + 1.5*IQR. Outlier removal is always up to you (when you're doing it this way, anyway, and not using one of the more advanced procs): you decide what counts as an outlier and remove it or not, depending on your data. So, do you think these new data points are outliers? Remove them or not depending on that.
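One way to get that behavior is to compute the cutoff once from the full data and freeze it, flagging rows instead of deleting them. Here is a minimal sketch of that idea, assuming the dataset and variable names from the question (lungcancer, surv); the regression model at the end is made up for illustration:

proc means data=lungcancer noprint;
   var surv;
   output out=cutoffs q3=q3 qrange=iqr;
run;

data flagged;
   if _n_ = 1 then set cutoffs(keep=q3 iqr);  /* load the frozen cutoff once */
   set lungcancer;
   outlier = (surv > q3 + 1.5*iqr);           /* flag, do not delete */
run;

proc reg data=flagged;
   where outlier = 0;   /* exclude flagged rows from the analysis */
   model surv = age;    /* hypothetical model */
run;

Because the cutoff is derived from the original data and never recomputed, rerunning the analysis on the reduced sample cannot manufacture new outliers.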


Perform calculations using n and nmiss values

I have the following SAS PROC MEANS statement that works great as it is.
proc means data=MBA_NODUP_APPLICANT_&TERM. missing nmiss n mean median p10 p90 fw=8;
   where ENR = 1;
   by SRC_TYPE;
   var gmattotal greverb2 grequant2 greanwrt;
run;
However, I am trying to add a new variable calculating nmiss/(nmiss+n). I haven't found any examples of this online, but also nothing that says it cannot be done.
To calculate the percent missing, which is what your formula computes, just use the OUTPUT statement to generate a dataset with the NMISS and N values, then add a step to do the arithmetic yourself.
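A minimal sketch of that route, reusing the PROC MEANS statement from the question; pct_miss_gmattotal is a made-up name, and the variable names in the second step are the suffixes the AUTONAME option generates:

proc means data=MBA_NODUP_APPLICANT_&TERM. noprint;
   where ENR = 1;
   by SRC_TYPE;
   var gmattotal greverb2 grequant2 greanwrt;
   output out=stats nmiss= n= / autoname;
run;

data stats;
   set stats;
   /* percent missing for one variable; repeat for the others */
   pct_miss_gmattotal = gmattotal_NMiss / (gmattotal_NMiss + gmattotal_N);
run;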
Or you could create a new binary variable using the MISSING() function and take the MEAN of that. The mean of a 1/0 variable is the same as the proportion that were 1 (TRUE).
Example:
data test;
   set sashelp.cars;
   missing_cylinders = missing(cylinders);
run;

proc means data=test nmiss n mean;
   var cylinders missing_cylinders;
run;
The MEANS Procedure

Variable                N Miss       N          Mean
------------------------------------------------------
Cylinders                    2     426     5.8075117
missing_cylinders            0     428     0.0046729

So 2/428 is a little less than 0.5%.

SAS code (Change from Baseline Time Point)

In a clinical trial, systolic and diastolic blood pressure are measured pre-dose (0 hr) and at 1, 2, 4, and 8 hours post-dose. Twelve subjects were studied. The SAS dataset has the following structure:

Variable   Length   Label
Vol        8        Subject Number
Ntime      8        Nominal time post-dose (hours)
Sups       8        Supine Systolic BP (mmHg)

What SAS code could I use to calculate the change from baseline (0 hr) at each time point, and then calculate the mean, minimum, and maximum change from baseline for the 12 subjects? Edit: This is what I've tried so far:
data postbase;
   do until (last.vol);
      *** Only keep pre-dose values;
      set save.vitals (where=(not(ntime <= 0)));
      by Vol Ntime;
      if Ntime <= 0 then bl = Sups;
      else do;
         chgbl = Sups - bl;
         output;
      end;
   end;
run;
data postbase;
   set save.vitals;
   by subject time;
   retain baseline;
   if time = 0 then baseline = volume;
   else change = volume - baseline;
run;
I think your code is far too complex, and I couldn't parse your variable names, so I just made them up.
I set the baseline volume whenever time = 0 and then compute the change at every other time.
RETAIN causes the value to persist until it's reset. If some subjects have times that are never 0, or a missing baseline, you may need to modify the step.
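For the second half of the question (mean, minimum, and maximum change from baseline across the 12 subjects), a minimal sketch using the made-up names from the step above:

proc means data=postbase mean min max;
   where time > 0;   /* baseline rows carry no change value */
   class time;       /* one summary line per post-dose time point */
   var change;
run;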

Nearest Neighbor Matching in Stata

I need to program a nearest-neighbor algorithm in Stata from scratch because my dataset does not allow me to use any of the available solutions (as far as I can tell).
To be precise, I have a dataset with a structure similar to the following (the original has around 14k observations):
input id value treatment match
1 0.14 0 .
2 0.32 0 .
3 0.465 1 2
4 0.878 1 2
5 0.912 1 2
6 0.001 1 1
end
I want to generate a variable called match (already included in the example above). For each observation with treatment == 1, the variable match should store the id of the observation with treatment == 0 whose value is closest to the value of the observation under consideration.
I am new to Stata programming, so I am not yet familiar with the syntax. My first attempt is below; however, it does not produce any changes to the match variable. I am sure this is a novice question, but I am hoping for some advice on how to make the code run.
EDIT: I have changed the code slightly and now it seems to work. Do you see any problems that may arise if I run it on a bigger dataset?
set more off
clear all

input id pscore treatment
1 0.14 0
2 0.32 0
3 0.465 1
4 0.878 1
5 0.912 1
6 0.001 1
end

gen match = .
forval i = 1/`= _N' {
    if treatment[`i'] == 1 {
        local dist 1
        forvalues j = 1/`= _N' {
            if (treatment[`j'] == 0) {
                local current_dist (pscore[`i'] - pscore[`j'])^2
                if `dist' > `current_dist' {
                    local dist `current_dist'        // update smallest distance
                    replace match = id[`j'] in `i'   // write match
                }
            }
        }
    }
}
Consider some simulated data: 1,000 observations, 200 of them untreated (treat == 0) and the rest treated (treat == 1). The code below is much more efficient than the one originally posted. (Ties, as in your code, are not explicitly handled.)
clear
set more off

*----- example data -----
set obs 1000
set seed 32956

gen id = _n
gen pscore = runiform()
gen treat = cond(_n <= 200, 0, 1)

*----- new method -----
timer clear
timer on 1

// get id of last non-treated and first treated
// (data is sorted by treat and ids are consecutive)
bysort treat (id): gen firsttreat = id[1]
local firstt = firsttreat[_N]
local lastnt = `firstt' - 1

// start loop
gen match = .
gen dif = .
quietly forvalues i = `firstt'/`=_N' {
    // compute distances
    replace dif = (pscore[`i'] - pscore)^2
    summarize dif in 1/`lastnt', meanonly
    // identify id of minimum-distance observation
    replace match = . in 1/`lastnt'
    replace match = id in 1/`lastnt' if dif == r(min)
    summarize match in 1/`lastnt', meanonly
    // save the minimum-distance id
    replace match = r(max) in `i'
}

// clean variable and drop
replace match = . in 1/`lastnt'
drop dif firsttreat

timer off 1

tempfile first
save `first'
*----- your method -----
drop match
timer on 2
gen match = .
quietly forval i = 1/`= _N' {
    if treat[`i'] == 1 {
        local dist 1
        forvalues j = 1/`= _N' {
            if (treat[`j'] == 0) {
                local current_dist (pscore[`i'] - pscore[`j'])^2
                if `dist' > `current_dist' {
                    local dist `current_dist'        // update smallest distance
                    replace match = id[`j'] in `i'   // write match
                }
            }
        }
    }
}
timer off 2
tempfile second
save `second'
// check for equality of results
cf _all using `first'
// check times
timer list
The results in seconds to finish execution:
. timer list
1: 0.19 / 1 = 0.1930
2: 10.79 / 1 = 10.7900
The difference is huge, especially considering this data set has only 1,000 observations.
An interesting thing to notice is that as the number of non-treated cases increases relative to the number of treated, the original method improves, but it never reaches the efficiency of the new method. As an example, invert the number of cases, so there are now 800 untreated and 200 treated (change the data setup to gen treat = cond(_n <= 800, 0, 1)). The result is
. timer list
1: 0.07 / 1 = 0.0720
2: 4.45 / 1 = 4.4470
You can see that the new method also improves and is still much faster. In fact, the relative difference is still the same.
Another way to do this is using joinby or cross. The problem is that they temporarily expand (a lot) the size of your dataset. In many cases they are not feasible due to the hard limit Stata has on the number of possible observations (see help limits). You can find an example of joinby here: https://stackoverflow.com/a/19784222/2077064.
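For reference, a minimal sketch of the joinby route, under the assumption that the fully paired dataset fits in memory; variable names follow the simulated data above, while controls, key, dif, id0, and pscore0 are made-up names:

preserve
keep if treat == 0
keep id pscore
rename (id pscore) (id0 pscore0)
gen key = 1
tempfile controls
save `controls'
restore

keep if treat == 1
gen key = 1
joinby key using `controls'        // all treated x untreated pairs
gen dif = (pscore - pscore0)^2
bysort id (dif): keep if _n == 1   // keep the nearest untreated neighbor
rename id0 match
drop key dif pscore0

With 200 untreated and 800 treated observations, this creates 160,000 rows before collapsing, which is exactly the temporary expansion mentioned above.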
Edit

If there is a large number of treated relative to untreated, your code suffers because it goes through the whole first loop many more times (due to the first if). Furthermore, going through that whole loop once implies going through another loop, which itself has two if conditions, _N more times. The opposite case, in which there are few treated observations, means that you go through the whole first loop on only a small number of occasions, speeding up your code substantially.

The reason my code can maintain its efficiency is the use of in, which always offers speed gains over if: Stata goes directly to those observations, with no logical checking needed. Your problem provides an opportunity for that replacement, and it's wise to seize it.

If my code used if where in is now in place, the results would be different. Your code would be faster in the case in which there is a large number of untreated relative to treated, and again, that is because your code would not need to go through the complete loop, requiring very little work; the first loop is short-circuited by the first if. For the opposite case, my code would still dominate.

The key is to "separate" treated from untreated and work on each group using in.
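For concreteness, here is the same summarize step written both ways; the in form is the one used above, and the if form is the slower alternative being discussed:

// -in- goes straight to the first `lastnt' observations
summarize dif in 1/`lastnt', meanonly

// -if- must evaluate the condition on every observation in the dataset
summarize dif if treat == 0, meanonly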

Stata: compare coefficients of factor variables using foreach (or forvalues)

I am using an ordinal independent variable in an OLS regression as a categorical variable via Stata's factor-variable notation (i.e., i.ordinal). The variable can take integer values from 0 to 9, with 0 as the base category. I am interested in testing whether the coefficient of each level is greater (or less) than that of the level that succeeds it (i.e., _b[1.ordinal] >= _b[2.ordinal], _b[2.ordinal] >= _b[3.ordinal], etc.). I've started with the following pseudocode based on FAQ: One-sided t-tests for coefficients:
foreach i in 1 2 3 5 6 7 8 {
    test _b[`i'.ordinal] - _b[`i+'.ordinal] = 0
    gen sign_`i'`i+' = sign(_b[`i'.ordinal] - _b[`i+'.ordinal])
    display "Ho: i <= i+ p-value = " ttail(r(df_r), sign_`i'`i+'*sqrt(r(F)))
    display "Ho: i >= i+ p-value = " 1-ttail(r(df_r), sign_`i'`i+'*sqrt(r(F)))
}
where I want `i+' to mean the next value of i in the sequence (so if i is 3, then `i+' is 5). Is this even possible to do? Of course, if you have any cleaner suggestions for testing the coefficients in this manner, please advise.
Note: The model uses only a sub-sample of my dataset, in which there are no observations for 4.ordinal, which is why I use foreach instead of forvalues. If you have suggestions for developing general code that can be used regardless of missing levels, please advise.
There are various ways to do this. Note that there is little point in creating a new variable just to hold one constant. Code not tested.
forval i = 1/8 {
    local j = `i' + 1
    capture test _b[`i'.ordinal] - _b[`j'.ordinal] = 0
    if _rc == 0 {
        local sign = sign(_b[`i'.ordinal] - _b[`j'.ordinal])
        display "Ho: `i' <= `j' p-value = " ttail(r(df_r), `sign' * sqrt(r(F)))
        display "Ho: `i' >= `j' p-value = " 1-ttail(r(df_r), `sign' * sqrt(r(F)))
    }
}
The capture should eat the error for any pair that involves the missing level (here 4.ordinal); such pairs are simply skipped.
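If you would rather have each level compared with the last level actually present (so that 3 is tested against 5 when 4 is absent), here is a minimal sketch along the same lines; what "general code" should do here is my assumption, and this is equally untested:

local prev 1
forval i = 2/9 {
    capture test _b[`prev'.ordinal] - _b[`i'.ordinal] = 0
    if _rc == 0 {
        local sign = sign(_b[`prev'.ordinal] - _b[`i'.ordinal])
        display "Ho: `prev' <= `i' p-value = " ttail(r(df_r), `sign' * sqrt(r(F)))
        display "Ho: `prev' >= `i' p-value = " 1-ttail(r(df_r), `sign' * sqrt(r(F)))
        local prev `i'   // advance only when the test actually ran
    }
}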

SAS creating a dynamic interval

This is somewhat complex (well, to me at least).
Here is what I have to do:
Say that I have the following dataset:
date price volume
02-Sep 40 100
03-Sep 45 200
04-Sep 46 150
05-Sep 43 300
Say that I have a breakpoint at which I wish to cut my dataset into intervals. For instance, let my breakpoint be 200 units of volume.
What I want is to create an ID column that takes the values 1, 2, 3, ..., starting a new ID every 200 units of volume. When you sum the volume within each ID, the total must be the same (200) for every ID.
So using my example above, my final dataset should look like the following:
date price volume id
02-Sep 40 100 1
03-Sep 45 100 1
03-Sep 45 100 2
04-Sep 46 100 2
04-Sep 46 50 3
05-Sep 43 150 3
05-Sep 43 150 4
(The last ID can fall short of the 200 total, but that is fine; I will drop the last ID.)
As you can see, I had to "decompose" some rows (the second row, for instance, where I break its 200 volume into two rows of 100 each) in order to make the volume sum to the same value, 200, for every ID.
Looks like you're doing volume bucketing for a flow toxicity VPIN calculation. I think this works:
%let bucketsize = 200;

data buckets(drop=bucket volume rename=(vol=volume));
   set tmp;
   retain bucket &bucketsize id 1;
   do until (volume = 0);
      vol = min(volume, bucket);  /* take what fits in the current bucket */
      output;
      volume = volume - vol;
      bucket = bucket - vol;
      if bucket = 0 then do;      /* bucket full: start the next one */
         bucket = &bucketsize;
         id = id + 1;
      end;
   end;
run;
I tested this with your dataset and it looks right, but I would carefully check several cases to confirm that it works.
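If you want to experiment, the question's data can be recreated like this (dates are kept as plain character values for simplicity; tmp is the input dataset name the step above assumes):

data tmp;
   input date $ price volume;
   datalines;
02-Sep 40 100
03-Sep 45 200
04-Sep 46 150
05-Sep 43 300
;
run;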
If you have a variable which indicates 'Buy' or 'Sell', then you can try this. Let's say this variable is called type and takes the values 'B' or 'S'. One advantage of this method is that it makes it easier to process BY-groups, if any.
%let bucketsize = 200;

data tmp2;
   set tmp;
   retain volsumb idb volsums ids;
   /* Initialize on the first observation only. */
   if _n_ = 1 then do;
      volsumb = 0; idb = 1; volsums = 0; ids = 1;
   end;
   /* Add the current volume to the running total for its type. */
   if type = 'B' then volsumb = volsumb + volume;
   else if type = 'S' then volsums = volsums + volume;
   /* If a total has reached the bucket size, reset it and increment the id. */
   /* You have not given the algorithm for when the volume overshoots the
      bucket, for example if the first two values are 150 and 75. */
   if volsumb = &bucketsize then do; idb = idb + 1; volsumb = 0; end;
   if volsums = &bucketsize then do; ids = ids + 1; volsums = 0; end;
   drop volsumb volsums;
run;