What I have
A dataset with 8 rows and 3 columns:
"Condition" "A_1" "B_1"
A 1 .
A 3 .
A 2 .
A 4 .
B . 4
B . 3
B . 5
B . 6
What I want is either:
(1)
"Condition" "A_1" "B_1"
A 1 .
A 3 .
A 2 .
A 4 .
B 4 .
B 3 .
B 5 .
B 6 .
OR, (2):
"Condition" "A_1" "B_1" "AB_1"
A 1 . 1
A 3 . 3
A 2 . 2
A 4 . 4
B . 4 4
B . 3 3
B . 5 5
B . 6 6
It was easy with Stata, R, and Excel (of course), but for the life of me I can't figure out this simple thing in SAS.
I tried:
data want;
if condition = "B" then A_1 = B_1;
set have;
run;
I also tried
data want;
if condition = "A" then AB_1 = A_1;
else AB_1 = B_1;
set have;
run;
The second code almost does the job except that the resulting AB_1 lags by 1 row.
What the heck...
Use coalesce. You also need your set statement before doing any of your logic. SAS reads a row when it encounters the set statement.
data want;
set have;
AB_1 = coalesce(A_1, B_1);
run;
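For (1), where the combined values end up back in A_1 itself, a similar one-step sketch should work (same COALESCE idea, just overwriting A_1):
data want;
set have;
A_1 = coalesce(A_1, B_1);
run;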
For (2) you can try the CATS() function:
data want;
set have;
AB_1 = cats(A_1,B_1);
run;
But to concatenate columns of different types, you should also use an explicit PUT() function to convert the numeric value to character.
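For example, a sketch that concatenates the character Condition with the numeric A_1 (cond_a is just an illustrative new variable); note that CATS() always returns a character result:
data want;
set have;
* PUT() converts the numeric A_1 to character before concatenation;
cond_a = cats(Condition, '_', put(A_1, best.));
run;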
Related
I want to copy an observation to the lines above and below in variables with a specific characteristic (in this case the ID). If the variable is missing, no action is required. I really have no clue how to do this. Any help would be greatly appreciated. Thanks in advance!
Have:
ID var
1 .
1 1
1 1
1 1
1 .
1 .
1 .
2 .
2 .
2 .
2 .
2 .
3 .
3 .
3 1
3 .
4 .
4 .
4 .
4 .
Want:
ID var
1 1
1 1
1 1
1 1
1 1
1 1
1 1
2 .
2 .
2 .
2 .
2 .
3 1
3 1
3 1
3 1
4 .
4 .
4 .
4 .
Just merge the data with itself.
data want;
merge have(drop=var) have(keep=id var where=(var=1));
by id;
run;
Another method: proc timeseries if you have SAS/ETS.
proc timeseries data=have out=want(drop=time);
by id;
var var / setmissing=maximum;
run;
But if it absolutely has to be the next value, you can run through proc timeseries twice: once to get the next value in-line and another to get the rest.
proc timeseries data=have out=have2;
by id;
var var / setmissing=next;
run;
proc timeseries data=have2 out=want(drop=time);
by id;
var var / setmissing=previous;
run;
Simplest way is SQL:
Take the aggregate statistic of the column and assign it to a new variable.
You must select all the variables so that a summary table is not generated.
proc sql;
create table want as
select ID, var, max(var) as filled_var
from have
group by ID;
quit;
Or in a data step.
Sort by ID and descending VAR, so any missing values end up last within each ID.
Use RETAIN to hold the value across rows.
Drop the old variable.
proc sort data=have;
by id descending var;
run;
data want;
set have;
by id;
retain filled_var;
if first.id then filled_var = var;
keep id filled_var;
run;
data have;
input ID var;
datalines;
1 .
1 1
1 1
1 1
1 .
1 .
1 .
2 .
2 .
2 .
2 .
2 .
3 .
3 .
3 1
3 .
4 .
4 .
4 .
4 .
;
data want(drop = v);
/* first pass: read one whole ID group and accumulate the group maximum in V */
do _N_ = 1 by 1 until (last.ID);
set have;
by ID;
v = max(v, var);
end;
/* second pass: re-read the same group, assign the maximum to VAR, and output */
do _N_ = 1 to _N_;
set have;
var = v;
output;
end;
run;
Hello, this is a sample of my data (there is an additional column LBCAT = URINALYSIS for that panel of tests).
I've been asked to include only the panels of tests where LBNRIND is populated for any of the tests in the panel, and to remove the rest. Some subjects have multiple test results at different visit timepoints and others only have one. I can't use a simple where LBNRIND ne '' in the data step because I need the entire panel of Urinalysis tests, not just that particular test result. What would be the best approach here? I think transposing the data would be too messy, but maybe putting the variables in an array/macro and using a do loop over that panel of tests?
Update: I've tried this code but it doesn't keep the corresponding tests where lb_nrind > ''. The result is the same whether I apply sum(lb_nrind > '') or lb_nrind > '' in the HAVING clause.
proc sql;
create table want as
select * from labUA
group by ptno and day and lb_cat
having sum(lb_nrind > '') > 0;

data want2;
do _n_ = 1 by 1 until (last.ptno);
set labUA;
by ptno period day hour;
if not flag_group then flag_group = (lb_nrind > '');
end;
do _n_ = 1 to _n_;
set want;
if flag_group then output;
end;
drop flag_group;
run;
You can use a SQL HAVING clause to retain the rows of a group that meets some aggregate condition. In your case the group might be a patient ID and panel ID, and the condition would be at least one LBNRIND that is not null.
Example:
Consider this example, where a group of rows is to be kept only if at least one of the rows in the group meets the criterion result7=77.
Both code blocks use the SAS feature that a logical evaluation is 1 for true and 0 for false.
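For instance, a minimal sketch of that behavior:
data _null_;
x = (5 = 5); * true, so x is 1;
y = (5 = 4); * false, so y is 0;
put x= y=;
run;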
SQL
data have;
infile datalines missover;
input id test $ parm $ result1-result10;
datalines;
1 A P 1 2 . 9 8 7 . . . .
1 B Q 1 2 3
1 C R 4 5 6
1 D S 8 9 . . . 6 77
1 E T 1 1 1
1 F U 1 1 1
1 G V 2
2 A Z 3
2 B K 1 2 3 4 5 6 78
2 C L 4
2 D M 9
3 G N 8
4 B Q 7
4 D S 6
4 C 1 1 1 . . 5 0 77
;
proc sql;
create table want as
select * from have
group by id
having sum(result7=77) > 0
;
quit;
DOW Loop
data want;
do _n_ = 1 by 1 until (last.id);
set have;
by id;
if not flag_group then flag_group = (result7=77);
end;
do _n_ = 1 to _n_;
set have;
if flag_group then output;
end;
drop flag_group;
run;
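Mapped onto your lab data, the SQL version might look something like this (a sketch only, assuming PTNO, DAY, and LB_CAT together identify a panel and LB_NRIND is character):
proc sql;
create table want as
select *
from labUA
group by ptno, day, lb_cat
having sum(lb_nrind ne '') > 0;
quit;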
I have observations with column ID, a, b, c, and d. I want to count the number of unique values in columns a, b, c, and d. So:
I want:
I can't figure out how to count distinct values within each row; I can do it across multiple rows, but not across the columns within a row.
Any help would be appreciated. Thank you
UPDATE:
Thank you to everyone that has replied!!
I used a different method (that is less efficient) that I felt I understood more. I am still going to look into the ways listed below however to learn the correct method. Here is what I did in case anyone was wondering:
I created four tables, and in each table I created a variable named, for example, ‘abcd’ and placed one of the original variables under that name.
So it was something like this:
PROC SQL;
CREATE TABLE table1_a AS
SELECT
*,
a as abcd
FROM table_I_have_with_all_columns
;
QUIT;
PROC SQL;
CREATE TABLE table2_b AS
SELECT
*,
b as abcd
FROM table_I_have_with_all_columns
;
QUIT;
PROC SQL;
CREATE TABLE table3_c AS
SELECT
*,
c as abcd
FROM table_I_have_with_all_columns
;
QUIT;
PROC SQL;
CREATE TABLE table4_d AS
SELECT
*,
d as abcd
FROM table_I_have_with_all_columns
;
QUIT;
Then I stacked them (this means I have duplicate rows, but that's OK because I just want all of the values in one column so I can do a distinct count).
data ALL_STACK;
set
table1_a
table2_b
table3_c
table4_d
;
run;
Then I counted all unique values in ‘abcd’ grouped by ID
PROC SQL ;
CREATE TABLE count_unique AS
SELECT
My_id,
COUNT(DISTINCT abcd) as Count_customers
FROM ALL_STACK
GROUP BY my_id
;
QUIT;
Obviously, it's not efficient to replicate a table 4 times just to put variables under the same name and then stack them. But my tables were small enough that I could do it and then immediately delete them after the stack. If you have a very large dataset this method would most certainly be troublesome. I used this method over the others because I was trying to use procs more than loops, etc.
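For reference, the same stack-and-count can be done in a single PROC SQL step with UNION ALL (a sketch using the same table and column names as above):
proc sql;
create table count_unique as
select my_id, count(distinct abcd) as Count_customers
from (select my_id, a as abcd from table_I_have_with_all_columns
      union all
      select my_id, b as abcd from table_I_have_with_all_columns
      union all
      select my_id, c as abcd from table_I_have_with_all_columns
      union all
      select my_id, d as abcd from table_I_have_with_all_columns)
group by my_id;
quit;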
A linear search for duplicates in an array is O(n²) and perfectly fine for small n. The n for a b c d is four.
The search evaluates every pair in the array and has a flow very similar to a bubble sort.
data have;
input id a b c d; datalines;
11 2 3 4 4
22 1 8 1 1
33 6 . 1 2
44 . 1 1 .
55 . . . .
66 1 2 3 4
;
The linear search for duplicates occurs on every row, and count_distinct is automatically initialized to missing (.) on each row. The SUM function is used to increment the count when a non-missing value is not found at any prior array index.
* linear search O(N**2);
data want;
set have;
array x a b c d;
do i = 1 to dim(x) while (missing(x(i)));
end;
if i <= dim(x) then count_distinct = 1;
do j = i+1 to dim(x);
if missing(x(j)) then continue;
do k = i to j-1 ;
if x(k) = x(j) then leave;
end;
if k = j then count_distinct = sum(count_distinct,1);
end;
drop i j k;
run;
Try transposing the dataset so each ID becomes one column, then run PROC FREQ on each ID column with the NLEVELS option, which counts the number of distinct values, and then merge the counts back with the original dataset.
Proc transpose data=have prefix=ID out=temp;
id ID;
run;
Proc freq data=temp nlevels;
table ID:;
ods output nlevels=count(keep=TableVar NNonMisslevels);
run;
data count;
set count;
* convert ID back to numeric so the merge BY variable types match;
ID=input(compress(TableVar,,'kd'),best12.);
drop TableVar;
run;
data want;
merge have count;
by id;
run;
One more way, using CALL SORTN and conditions.
data have;
input id a b c d; datalines;
11 2 3 4 4
22 1 8 1 1
33 6 . 1 2
44 . 1 1 .
55 . . . .
66 1 2 3 4
77 . 3 . 4
88 . 9 5 .
99 . . 2 2
76 . . . 2
58 1 1 . .
50 2 . 2 .
66 2 . 7 .
89 1 1 1 .
75 1 2 3 .
76 . 5 6 7
88 . 1 1 1
43 1 . . 1
31 1 . . 2
;
data want;
set have;
_a=a; _b=b; _c=c; _d=d;
array hello(*) _a _b _c _d;
call sortn(of hello(*));
if a=. and b = . and c= . and d =. then count=0;
else count=1;
do i = 1 to dim(hello)-1;
if hello(i) = . then count+ 0;
else if hello(i)-hello(i+1) = . then count+0;
else if hello(i)-hello(i+1) = 0 then count+ 0;
else if hello(i)-hello(i+1) ne 0 then count+ 1;
end;
drop i _:;
run;
You could just put the unique values into a temporary array. Let's convert your photograph into data.
data have;
input id a b c d;
datalines;
11 2 3 4 4
22 1 8 1 1
33 6 . 1 2
44 . 1 1 .
;
So make an array of the input variables and another temporary array to hold the unique values. Then loop over the input variables and save the unique values. Finally count how many unique values there are.
data want ;
set have ;
array unique (4) _temporary_;
array values a b c d ;
call missing(of unique(*));
do _n_=1 to dim(values);
if not missing(values(_n_)) then
if not whichn(values(_n_),of unique(*)) then
unique(_n_)=values(_n_)
;
end;
count=n(of unique(*));
run;
Output:
Obs id a b c d count
1 11 2 3 4 4 3
2 22 1 8 1 1 2
3 33 6 . 1 2 3
4 44 . 1 1 . 1
My dataset looks like this:
variable
1
.
3
.
5
.
7
.
9
How do you replace the missing (even) values with the correct ones, so that the resulting data appears as:
1
2
3
4
5
6
7
8
9
Do you mean that the data looks like this?
var
---
1
.
3
.
etc
and you want the ones in between to be one more than the one before? If so:
data one;
input var;
datalines;
1
.
3
.
5
;
run;
data two (drop=prev_var);
set one;
retain prev_var;
if missing(var) then do;
var = prev_var + 1;
end;
prev_var=var;
run;
proc print data = two noobs; run;
I have a dataset that consists of a series of readings made by different people/instruments, of a bunch of different dimensions. It looks like this:
SUBJECT DIM1_1 DIM1_2 DIM1_3 DIM1_4 DIM1_5 DIM2_1 DIM2_2 DIM2_3 DIM3_1 DIM3_2
1 1 . 1 1 2 3 3 3 2 .
2 1 1 . 1 1 2 2 3 1 1
3 2 2 2 . . 1 . . 5 5
... ... ... ... ... ... ... ... ... ... ...
My real dataset contains around 190 dimensions, with up to 5 measures in each one
I have to obey a set of rules to create a new variable for each dimension:
If there are 2 or more different values in the same dimension (missings excluded), the new variable is missing.
If all values are the same (missings excluded), the new variable assumes the same value.
My new variables should look like this:
SUBJECT ... DIM1_X DIM2_X DIM3_X
1 ... . 3 2
2 ... 1 . 1
3 ... 2 1 5
The problem here is that I don't have the same number of measures for each dimension. Also, I could only come up with a lot of IFs (and I mean a LOT, as more measures in a given dimension increases the number of comparisons), so I wonder if there is an easier way to handle this particular problem.
Any help would be appreciated.
Thanks in advance.
The easiest way is to transpose it to vertical (one row per DIMx_y), summarize, set the ones you want to be missing to missing, then re-transpose (and, if needed, merge back on).
data have;
input SUBJECT DIM1_1 DIM1_2 DIM1_3 DIM1_4 DIM1_5 DIM2_1 DIM2_2 DIM2_3 DIM3_1 DIM3_2;
datalines;
1 1 . 1 1 2 3 3 3 2 .
2 1 1 . 1 1 2 2 3 1 1
3 2 2 2 . . 1 . . 5 5
;;;;
run;
data have_pret;
set have;
array dim_data DIM:;
do _t = 1 to dim(dim_Data); *dim function is not related to the name - it gives # of vars in array;
dim_Group = scan(vname(dim_data[_t]),1,'_');
dim_num = input(scan(vname(dim_data[_t]),2,'_'),BEST12.);
dim_val=dim_data[_t];
output;
end;
keep dim_group dim_num subject dim_val;
run;
proc freq data=have_pret noprint;
by subject dim_group;
tables dim_val/out=want_pret(where=(not missing(dim_val)));
run;
data want_pret2;
set want_pret;
by subject dim_Group;
if percent ne 100 then dim_val=.;
idval = cats(dim_Group,'_X');
if last.dim_Group;
run;
proc transpose data=want_pret2 out=want;
by subject;
id idval;
var dim_val;
run;