Function for conditioned mean in SAS

I have a dataset in which some cells are filled with the placeholder values 888888888 and 999999999. I would like to compute the mean without considering these values. That is, given:
x=5, y=10, z=888888888
the mean should be 5.
How can I do this?

As you're calculating across variables, just store them in an array, loop through them and sum any that are less than the required threshold (I've used 100,000,000), then divide by the total number of variables to get the mean.
data have;
  input x y z;
  datalines;
5 10 888888888
4 20 999999999
;
run;

data want;
  set have;
  array vars{*} x y z;
  _sum = 0;
  do _i = 1 to dim(vars);
    /* accumulate only values below the threshold */
    if vars{_i} < 1e8 then _sum + vars{_i};
  end;
  /* divide by the total number of variables, per the question's example */
  mean_vars = _sum / dim(vars);
  drop _: ;
run;
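If instead you want to average only the valid values ((5+10)/2 = 7.5 for the first row rather than 5), a minimal variant of the same loop (my sketch, not part of the original answer) keeps a count of the values below the threshold and divides by that:

data want2;
  set have;
  array vars{*} x y z;
  _sum = 0;
  _n = 0;
  do _i = 1 to dim(vars);
    if vars{_i} < 1e8 then do;
      _sum + vars{_i};   /* accumulate valid values */
      _n + 1;            /* count them */
    end;
  end;
  if _n > 0 then mean_valid = _sum / _n;
  drop _: ;
run;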

Related

mean of 10 variables with different starting point (SAS)

I have 19 numerical variables, pm25_total2000 to pm25_total2018.
Each person has a starting year between 2013 and 2018; we can call that variable "reqyear".
Now I want to calculate, for each person, the mean over the 10 years up to and including the starting year.
For example, if a person has starting year 2015 I want mean(of pm25_total2006-pm25_total2015),
or if a person has starting year 2013 I want mean(of pm25_total2004-pm25_total2013).
How do I do this?
data _null_;
  set scapkon;
  reqyear = substr(iCDate,1,4)*1;
  call symput('reqy',reqyear);
run;

data scatm;
  set scapkon;
  /* Mean of the 10 years before the recruitment year */
  pm25means=mean(of pm25_total%eval(&reqy.-9)-pm25_total%eval(&reqy.));
run;
%eval(&reqy.-9) resolves to a constant value (the same for every person; in my case 2007).
That doesn't work.
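To see why: CALL SYMPUT assigns the macro variable once per observation and only the final value survives, and the macro reference then resolves a single time, when the next step is compiled. Assuming &reqy ends up as 2016 (an illustration consistent with the 2007 above), the compiled statement is effectively

pm25means=mean(of pm25_total2007-pm25_total2016);

for every observation, rather than a range based on each row's own reqyear.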
You can compute the mean with a traditional loop.
data want;
  set have;
  array x x2000-x2018;   /* element 1 is year 2000, so index = year - 1999 */
  call missing(sum, mean, n);
  do _n_ = 0 to 9;       /* the 10 years start, start-1, ..., start-9 */
    v = x(start - 1999 - _n_);
    if not missing(v) then do;
      sum + v;
      n + 1;
    end;
  end;
  if n then mean = sum / n;
  drop v;
run;
If you want to flex your SAS skills, you can use POKE and PEEK concepts to copy a fixed-length slice (i.e. a fixed number of array elements) of an array into another array and compute the mean of the slice.
Example:
You will need to add sentinel elements and range checks on start to prevent errors when start-9 < 2000.
data have;
  length id start x2000-x2018 8;
  do id = 1 to 15;
    start = 2013 + mod(id,6);
    array x x2000-x2018;
    do over x;
      x = _n_;
      _n_ + 1;
    end;
    output;
  end;
  format x: 5.;
run;
data want;
  length id start mean10yrPriorStart 8;
  set have;
  array x x2000-x2018;
  array slice(10) _temporary_;
  /* copy the ten 8-byte values for years start-9 .. start into slice */
  call pokelong (
    peekclong ( addrlong ( x(start-1999-9) ) , 10*8 ) ,
    addrlong ( slice(1) )
  );
  mean10yrPriorStart = mean(of slice(*));
run;
Use an array and loop:
- index the array with years
- accumulate the sum of the values
- accumulate the count, to account for any missing values
- divide to obtain the mean value
data want;
  set have;
  array _pm(2000:2018) pm25_total2000 - pm25_total2018;
  do year = reqyear to (reqyear-9) by -1;
    *add totals;
    total = sum(total, _pm(year));
    *add counts;
    nyears = sum(nyears, not missing(_pm(year)));
  end;
  *accounts for possible missing years;
  mean = total / nyears;
run;
Note this loop runs in reverse (from the start year back to 9 years previous) because it's slightly easier to understand that way, IMO.
If you have no missing values you can remove the nyears step, but it's not a bad thing to include anyway.
NOTE: My first answer did not address the OP's question, so this is a redux.
For this solution, I used Richard's code for generating test data. However, I added a line to randomly add missing values.
  x = _n_;
  if ranuni(1) < .1 then x = .;   /* set roughly 10% of values to missing */
  _n_ + 1;
This alternative does not perform any checks for missing values; the sum() and n() functions inherently handle missing values appropriately. The loop over the dynamic slice of the data array only transfers the values to a temporary array; the final sum and count are computed from the temp array outside the loop.
data want;
  set have;
  array x(2000:2018) x:;
  array t(10) _temporary_;
  j = 1;
  do i = start-9 to start;
    t(j) = x(i);
    j + 1;
  end;
  sum = sum(of t(*));
  cnt = n(of t(*));
  mean = sum / cnt;
  drop x: i j;
run;
Result:
id  start  sum  cnt  mean
 1   2014   72    7  10.285714286
 2   2015  305   10  30.5
 3   2016  458    9  50.888888889
 4   2017  631    9  70.111111111

Calculate the top 5 and summarize them by store

Let's say I have stores all around the world and I want to know my top 5 loss sales across the world, per store. What is the code for that?!
Here is my try:
proc sort data=store out=sorted_store;
  by store descending amount;
run;
and
data calc1;
  do _n_=1 by 1 until(last.store);
    set sorted_store;
    by store;
    if _n_ <= 5 then "Sum_5Largest_Losses"n = sum(amount);
  end;
run;
but this just prints out the 5th amount and not the sum of 1 TO 5! And I really don't know how to select the top 5 of EACH store; I think a kind of group-by would be a perfect fit. But first things first: how do I select i = 1...5, and not just i = 5?
There is also a way of doing it with PROC SQL:
data have;
  input store $ amount;
  datalines;
A 100
A 200
A 300
A 400
A 500
A 600
A 700
B 1000
B 1100
C 1200
C 1300
C 1400
D 600
D 700
E 1000
E 1100
F 1200
;
run;
proc sql outobs=4; /* limit output to the first 4 rows */
  select store, sum(amount) as TOTAL_AMT
  from have
  group by 1
  order by 2 desc; /* order stores by total amount, largest first */
quit;
The data step sum(,) function adds up its arguments. If you only give it one argument then there is nothing to actually sum, so it just returns the input value; you need to pass the running total in as well. Because sum() ignores missing arguments, this also works on the first iteration, when Sum_5Largest_Losses is still missing.
data calc1;
  do _n_=1 by 1 until(last.store);
    set sorted_store;
    by store;
    if _n_ <= 5 then Sum_5Largest_Losses = sum(Sum_5Largest_Losses, amount);
  end;
run;
I would highly recommend learning the basic methods before getting into DOW loops:
- add a counter so you can find the first 5 records of each store
- as the data step loops, the sum accumulates
- output the sum when counter=5
proc sort data=store out=sorted_store;
  by store descending amount;
run;

data calc1;
  set sorted_store;
  by store;
  *if first record of a store, set counter to 1 and total sum to 0;
  if first.store then do;
    counter = 1;
    total_sum = 0;
  end;
  *otherwise increment the counter;
  else counter + 1;
  *accumulate the sum while counter <= 5;
  if counter <= 5 then total_sum = sum(total_sum, amount);
  *output only on the 5th record for each store;
  if counter = 5 then output;
run;
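One caveat (my note, not part of the original answer): as written, a store with fewer than five records never reaches counter=5 and so is never output. If that can happen in your data, a small tweak is to also output on the store's last record:

if counter = 5 or (last.store and counter < 5) then output;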

Accumulator variables and their use

I am trying to test how accumulator variables work and I created the following program.
data numbers;
  input n;
  cards;
10
20
40
50
;

data newnums;
  infile numbers;
  input tens;
  count+tens;
run;

proc print data=newnums;
run;
I purposely put in blank rows. But besides that, I thought the program would execute.
I want to figure out the last value of the variable count, but I cannot... may I have some help please?
You have multiple things in your code which you need to change:
- a missing numeric value is represented by a . character
- an existing data set is read with the SET statement, not INFILE (INFILE reads raw text files)
- the accumulator variable you are talking about is the sum statement; it retains its value across iterations even when the incoming value is missing. More on the sum statement in the link below:
Difference between SUM statement and sum variable in SAS?
data numbers;
  input n;
  cards;
10
20
40
.
50
;

data newnums;
  set numbers;
  count + n;
run;
proc print data=newnums;
run;
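For the data above, the sum statement treats the missing value of n as zero rather than propagating it, so count takes the values 10, 30, 70, 70, 120 across the five observations; the last value of count, which is what the question asks about, is 120.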
Edit 1: if you have a blank line in your data, you can read it as a missing value by using TRUNCOVER on the INFILE statement:
data numbers;
  infile datalines truncover;
  input n;
  cards;
10
20
40

50
;

multiples of 8 - optimal length for SAS character variables?

I heard that SAS stores character variables in chunks of 8 bytes.
Therefore, the thinking goes, we should always assign character variable lengths that are a multiple of 8.
I have searched and could not find any support for the initial assertion.
Is it true? Is this covered somewhere in the documentation?
The following is true for datasets that contain no 8-byte numeric variables; datasets that do contain them are covered separately further down.
No, there is nothing special about 8 byte character variable lengths.
See the below:
data length8;
  length char0001-char9999 $8;
  call missing(of _all_);
  do _i = 1 to 100;
    output;
  end;
  drop _i;
run;

data length7;
  length char0001-char9999 $7;
  call missing(of _all_);
  do _i = 1 to 100;
    output;
  end;
  drop _i;
run;

data length4;
  length char0001-char9999 $4;
  call missing(of _all_);
  do _i = 1 to 100;
    output;
  end;
  drop _i;
run;

data length12;
  length char0001-char9999 $12;
  call missing(of _all_);
  do _i = 1 to 100;
    output;
  end;
  drop _i;
run;

data length16;
  length char0001-char9999 $16;
  call missing(of _all_);
  do _i = 1 to 100;
    output;
  end;
  drop _i;
run;

data length17;
  length char0001-char9999 $17;
  call missing(of _all_);
  do _i = 1 to 100;
    output;
  end;
  drop _i;
run;
Each of these datasets is a different size, roughly proportional to the length of the character variables. Note that the length-4 dataset is a bit bigger proportionally (on my machine, anyway): in fact, lengths 4, 5, and 6 all produce the same size. This is because of the page size: the minimum page size on my installation is 64kb (65,536 bytes), and at lengths 4, 5, and 6 only one row of data fits in a page (rows of roughly 40, 50, and 60kb). It's not because any particular size is reserved for a character variable, but because of the total length of the data record.
That's where you could potentially save space by altering a small amount: if your data happen to be arranged such that the page size is just under double the row size, then making the row slightly smaller will save you half of the space. That's unlikely to occur except in a very small number of cases, though; it requires a very wide row (many variables, or very long character variables). You can also alter the page size with options, which may be the better way to deal with edge cases like this.
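For example, a minimal sketch of the dataset option involved (BUFSIZE= sets the page size for the output dataset; the 128k figure here is just an illustration, not a recommendation):

data bigpage(bufsize=128k);
  set length4;
run;

With a larger page, more of the wide rows fit per page and less of each page is wasted as an unusable remainder.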
For datasets that contain a numeric variable, as @jaamor's example does, there is a difference that has some impact on storage related to the 8-byte size. It will not usually have a significant impact on dataset size, but for datasets that are very tall and narrow, it may be a consideration.
When a numeric variable is 8 bytes (the default) in length, SAS places those numeric variables at the end of the data vector, starting at a multiple of 8 bytes, presumably to make access to those predictably placed numeric variables more efficient. Every variable other than an 8-byte numeric is placed at the start of the data vector; then any padding needed to bring that up to a multiple of 8 bytes is added, and then the 8-byte numeric variables are placed after that.
This can be seen by looking at the PROC CONTENTS output for some example datasets.
data fourteen_eight;
  length x y $7;  *14 total;
  length i 8;
run;

data twelve_eight;
  length x y $6;  *12 total;
  length i 8;
run;

data twelve_six;
  length x y $6;  *12 total;
  length i 6;
run;

data twelve_six_eight;
  length x y $6;
  length z 6;
  length i 8;
run;
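The physical observation length discussed below is the "Observation Length" field that PROC CONTENTS reports, e.g.:

proc contents data=fourteen_eight;
run;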
fourteen_eight has a conceptual observation length of 22, but a physical observation length of 24 (looking at PROC CONTENTS). twelve_eight has a conceptual length of 20, but a physical observation length of 24 as well. twelve_six has a conceptual length of 18 and a physical observation length of 18, meaning no buffer if the numeric variable isn't 8 long. twelve_six_eight has a conceptual length of 26 and a physical size of 32: 18 rounded up to 24, and then the 8 at the end. (You can verify that it's not allocating 8 for each numeric variable by simply adding several more 6-byte numerics; they never increase the total padding, and fit neatly in a smaller space.)
Here's how it ends up looking:
x $6
y $6
z 6
i 8
would fit like so:
[00000000011111111112222222222333]
[12345678901234567890123456789012]
[xxxxxxyyyyyyzzzzzz      iiiiiiii]
One side note: I'm not 100% sure that it's not [iiiiiiiixxxxxxyyyyyyzzzzzz      ], with the numerics first; that would work just as well for predicting the location of numeric variables. It doesn't really affect the conclusion, though: either way, yes, there will be a small buffer if your total non-8-byte-numeric storage is not a multiple of 8 bytes and you have one or more 8-byte numeric variables.
As Joe suggested, I tested empirically, using the script below:
libname testlen "<directory>";

%macro create_ds(length=, dsName=);
  data &dsName;
    length x $&length.;
    do i=1 to 1000000;
      x="";
      output;
    end;
  run;
%mend;

%macro create_all_ds;
  %do i=1 %to 20;
    %create_ds(length=&i, dsName=testlen.len&i)
  %end;
%mend;

%create_all_ds
Each dataset has one character variable, x, plus the 8-byte loop index i. The length of x varies across datasets, from 1 to 20.
Datasets 1-8 take up ~15.8 MB
Datasets 9-16 take up ~23.7 MB
Datasets 17-20 take up ~31.5 MB
This probably means that it is not space efficient to declare SAS character variable lengths that are not multiples of 8 for datasets like these.
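That is consistent with Joe's analysis above (the arithmetic here is mine, assuming the padding scheme he describes): because each dataset also keeps the 8-byte loop index i, a row occupies ceil(L/8)*8 bytes for x plus 8 bytes for i. Lengths 1-8 give 16-byte rows (~16 MB per million rows), lengths 9-16 give 24-byte rows (~24 MB), and lengths 17-20 give 32-byte rows (~32 MB), matching the observed jumps.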
I tried a similar test with two character variables:
%macro create_ds(length=, dsName=);
  data &dsName;
    length x y $&length.;
    do i=1 to 1000000;
      x="";
      y="";
      output;
    end;
  run;
%mend;

%macro create_all_ds;
  %do i=1 %to 20;
    %create_ds(length=&i, dsName=testlen.len&i)
  %end;
%mend;

%create_all_ds
The results are as follows:
Datasets 1-4 take up ~15.8 MB
Datasets 5-8 take up ~23.7 MB
This could mean that, for space-efficient length declarations, the sum of the lengths of the character variables should be a multiple of eight.
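Again, this matches the padding scheme (my arithmetic): the two character variables occupy 2L bytes rounded up to a multiple of 8, plus 8 bytes for i. For lengths 1-4 that is 8 + 8 = 16 bytes per row; for lengths 5-8 it is 16 + 8 = 24 bytes per row.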

Determine rates of change for different groups

I have a SAS issue that I know is probably fairly straightforward for SAS users who are familiar with array programming, but I am new to this aspect.
My dataset looks like this:
data have;
  input group $ size price;
  datalines;
A 24 5
A 28 10
A 30 14
A 32 16
B 26 10
B 28 12
B 32 13
C 10 100
C 11 130
C 12 140
;
run;
What I want to do is determine the rate at which price changes with size for the first two items in the family, and apply that rate to every other member of the family.
So I'll end up with something that looks like this (shown for A only...):
data want;
  input group $ size price newprice;
  datalines;
A 24 5 5
A 28 10 10
A 30 14 12.5
A 32 16 15
;
run;
The technique you'll need to learn is either retain or diff/lag; both methods would work here.
The following illustrates one way to solve this, but it would need additional work by you to deal with things like size not changing (meaning a zero denominator) and other potential exceptions.
Basically, we use retain to make a value persist across records, and use that value in the calculations.
data want;
  set have;
  by group;
  retain lastprice rateprice lastsize;
  if first.group then do;
    counter=0;
    call missing(of lastprice rateprice lastsize); *clear these out;
  end;
  counter+1; *increment the counter;
  if counter=2 then do;
    rateprice=(price-lastprice)/(size-lastsize); *calculate the rate over the first two records;
  end;
  if counter le 2 then newprice=price; *for the first two just move price into newprice;
  else newprice=lastprice+(size-lastsize)*rateprice; *else extrapolate from the previous newprice;
  output;
  lastprice=newprice; *save the price and size in the retained vars;
  lastsize=size;
run;
Here's a different approach that is obviously longer than Joe's, but that could be generalized to other, similar situations where the calculation is different or depends on more values.
First, add a sequence number to your data set:
data have2;
  set have;
  by group;
  if first.group then seq = 0;
  seq + 1;
run;
Use proc reg to calculate the intercept and slope for the first two rows of each group, outputting the estimates with outest:
proc reg data=have2 outest=est;
  by group;
  model price = size;
  where seq le 2;
run;
Join the original table to the parameter estimates and calculate the predicted values. (In the OUTEST= dataset, the slope estimate is stored in a variable named after the regressor, size, alongside intercept.)
proc sql;
  create table want as
    select
      h.*,
      e.intercept + h.size * e.size as newprice
    from have h
      left join est e
        on h.group = e.group
    order by
      group,
      size
  ;
quit;