I have data in this format (this is just an example). n=2:
X Y info
2 1 good
2 4 bad
3 2 good
4 1 bad
4 4 good
6 2 good
6 3 good
Now, the above data is sorted (7 rows total). I need to form groups of 2, 3, or 4 rows and generate a graph. In the above data, I made groups of 2 rows. The third row is left alone because there is no other row with X=3 to pair it with. A group can be formed only among rows that share the same X value, NOT across different X values.
Now, I check whether both rows in a group have "good" in the info column. If both rows have "good", the group is also good; otherwise it is bad. In the above example, the 3rd/last group is a "good" group; the rest are all bad groups. Once I'm done with all the rows, I calculate the total no. of good groups formed / total no. of groups.
In the above example, the output will be: total no. of good groups / total no. of groups => 1/3.
This is the case of n=2 (the group size).
Now, for n=3 we make groups of 3 rows, and for n=4 groups of 4 rows, and find the good/bad groups in a similar way. If all the rows in a group have "good" in the info column, the group is good; otherwise it is bad.
Example: n= 3
2 1 good
2 4 bad
2 6 good
3 2 good
4 1 good
4 4 good
4 6 good
6 2 good
6 3 good
In the above case, I left out the 4th row and the last 2 rows, as I can't make a group of 3 rows with them. The first group's result is "bad" and the last group's result is "good".
Output: 1/2
For n= 4:
2 1 good
2 4 good
2 6 good
2 7 good
3 2 good
4 1 good
4 4 good
4 6 good
6 2 good
6 3 good
6 4 good
6 5 good
In this case, I make groups of 4 and find the result. The 5th, 6th, 7th, and 8th rows are left out (ignored). I made 2 groups of 4 rows and both are "good".
Output: 2/2
So, after getting the 3 output values for n=2, n=3, and n=4, I will plot a graph of these values.
Below is code that I think does what you are looking for. It assumes that the data you described is stored separately in three datasets named data_2, data_3, and data_4. Each of these datasets is processed by the %FIND_GOOD_GROUPS macro, which determines which groups of X have all "GOOD" values in INFO; this summary information is then appended as a new row to the BASE dataset. I didn't add the code, but you could calculate the ratio of GOOD_COUNT to _FREQ_ in a separate data step, then use a procedure to plot the N value against the ratio. Hope this gets close to what you're trying to accomplish.
%******************************************************************************;
%macro main;
    %find_good_groups(dsn=data_2, n=2);
    %find_good_groups(dsn=data_3, n=3);
    %find_good_groups(dsn=data_4, n=4);
    proc print data=base uniform noobs;
    run;
%mend main;
%******************************************************************************;
%******************************************************************************;
%macro find_good_groups(dsn=,n=);
    %***************************************************************************;
    %* Sort data by X and Y so that you can use the FIRST.X variable in the    *;
    %* data step.                                                              *;
    %***************************************************************************;
    proc sort data=&dsn;
        by x y;
    run;
    %***************************************************************************;
    %* TEMP dataset uses the FIRST.X variable to reset COUNT and GOOD_COUNT to *;
    %* initial values for each row where X changes. Each row in the X groups   *;
    %* adds 1 to COUNT and sets GOOD_COUNT to 0 (zero) if INFO is ever "BAD".  *;
    %* A record is output if COUNT is equal to the macro parameter &N.         *;
    %***************************************************************************;
    data temp;
        keep good_count n;
        retain count 0 good_count 1 n &n;
        set &dsn;
        by x y;
        if first.x then do;
            count = 0;
            good_count = 1;
        end;
        count = count + 1;
        if good_count eq 1 then do;
            if trim(left(upcase(info))) eq "BAD" then good_count = 0;
        end;
        if count eq &n then output;
    run;
    %***************************************************************************;
    %* Summarize the TEMP data to find the number of times that all of the     *;
    %* rows had "GOOD" in the INFO column for each value of X.                 *;
    %***************************************************************************;
    proc summary data=temp;
        id n;
        var good_count;
        output out=n_&n (drop=_type_) sum=;
    run;
    %***************************************************************************;
    %* Append to the BASE dataset to retain the sums and frequencies from all  *;
    %* of the datasets. BASE can be used to plot N against the number of good  *;
    %* groups.                                                                 *;
    %***************************************************************************;
    proc append data=n_&n base=base force;
    run;
%mend find_good_groups;
%******************************************************************************;
%main
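As a minimal sketch of the plotting step the answer leaves out: this assumes BASE ends up holding N, GOOD_COUNT, and the automatic _FREQ_ variable from PROC SUMMARY; the RATIOS dataset name and the choice of PROC SGPLOT are mine, not from the original answer.

```sas
/* Sketch only: compute the good-group ratio and plot it against N.     */
/* Assumes BASE holds N, GOOD_COUNT, and _FREQ_ from the macro above.   */
data ratios;
    set base;
    ratio = good_count / _freq_;  /* good groups / total groups */
run;

proc sgplot data=ratios;
    series x=n y=ratio / markers;  /* one point each for n=2, 3, 4 */
run;
```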
Let's say I have four samples: id=1, 2, 3, and 4, with one or more measurements on each of those samples:
Table
ID Value
1 1
1 2
2 3
2 -4
3 -5
4 6
I want to remove duplicates, keeping only one entry per ID - the one having the largest absolute value of the "value" column. I.e., this is what I want:
Result
ID Value
1 2
2 -4
3 -5
4 6
How might I do this in SAS?
I didn't find a solution to do this with SAS, so I tried to export it to Excel and use a pivot table with the "Value Field max" setting, but that only gave me the highest value, and I need the highest difference from zero.
SQL with a having clause will solve this quickly.
proc sql;
    create table want as
    select id, value
    from have
    group by id
    having abs(value) = max(abs(value));
quit;
Output:
ID Value
1 2
2 -4
3 -5
4 6
If I understand your question correctly, there are a few ways to do this.
I would create a new variable holding the absolute value, sort the dataset by id and descending absolute value, and then keep the top record per id.
The code would look like:
data tmp1;
    set data1;
    absvalue = abs(value);
run;

proc sort data=tmp1;
    by id descending absvalue;
run;

data tmp2;
    set tmp1;
    by id;
    if first.id then output tmp2;
run;
You could also use Proc Sql.
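A sketch of that PROC SQL route, along the same lines as the first answer (data1 is the input name used above; want2 is an arbitrary output name):

```sas
/* Sketch: group by id and keep rows whose absolute value is the group maximum. */
proc sql;
    create table want2 as
    select id, value
    from data1
    group by id
    having abs(value) = max(abs(value));
quit;
```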
I have data tracking a certain eye phenomenon. Some patients have it in both eyes, and some patients have it in a single eye. This is what some of the data looks like:
EyeID PatientID STATUS Gender
1 1 1 M
2 1 0 M
3 2 1 M
4 3 0 M
5 3 1 M
6 4 1 M
7 4 0 M
8 5 1 F
9 6 1 F
10 6 0 F
11 7 1 F
12 8 1 F
13 8 0 F
14 9 1 F
As you can see from the data above, there are 9 patients total and all of them have the phenomenon in at least one eye.
I need to count the number of patients with this eye phenomenon.
To get the number of total patients in the dataset, I used:
PROC FREQ data=new nlevels;
    tables PatientID;
run;
To count the number of patients with this eye phenomenon, I used:
PROC SORT data=new out=new1 nodupkey;
    by Patientid Status;
run;

proc freq data=new1 nlevels;
    tables Status;
run;
However, it gave the correct number of patients with the phenomenon (9), but not the correct number without (0).
I now need to calculate the gender distribution of this phenomena. I used:
proc freq data=new1;
tables gender*Status/chisq;
run;
However, in the cross table it has the correct number of patients who have the phenomenon (9), but not the correct number without (0). Does anyone have any thoughts on how to do this chi-square, where if a patient has this phenomenon in at least 1 eye, then they are positive for it?
Thanks!
PROC FREQ is doing what you told it to: counting the status=0 cases.
In general here you are using sort of blunt tools to accomplish what you're trying to accomplish, when you probably should use a more precise tool. PROC SORT NODUPKEY is sort of overkill for example, and it doesn't really do what you want anyway.
To set up a dataset of has/doesn't have, for example, let's do a few things. First I add one more row - someone who actually doesn't have - so we see that working.
data have;
    input eyeID patientID status gender $;
    datalines;
1 1 1 M
2 1 0 M
3 2 1 M
4 3 0 M
5 3 1 M
6 4 1 M
7 4 0 M
8 5 1 F
9 6 1 F
10 6 0 F
11 7 1 F
12 8 1 F
13 8 0 F
14 9 1 F
15 10 0 M
;
run;
Now we use the data step. We want a patient-level dataset at the end, where we have eye-level now. So we create a new patient-level status.
data patient_level;
    set have;
    by patientID;
    retain patient_status;
    if first.patientID then patient_status = 0;
    patient_status = (patient_status or status);
    if last.patientID then output;
    keep patientID patient_status gender;
run;
Now, we can run your second proc freq. Also note you have a nice dataset of patients.
title "Patients with/without condition in any eye";
proc freq data=patient_level;
    tables patient_status;
run;
title;
You also may be able to do your chi-square analysis, though I'm not a statistician and won't dip my toe into whether this is an appropriate analysis. It's likely better than your first, anyway - as it correctly identifies has/doesn't have status in at least one eye. You may need a different indicator, if you need to know number of eyes.
title "Crosstab of gender by patient having/not having condition";
proc freq data=patient_level;
    tables gender*patient_status / chisq;
run;
title;
If your actual data has every single patient having the condition, of course, it's unlikely a chi-square analysis is appropriate.
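For what it's worth, the same patient-level collapse can be sketched in PROC SQL, using MAX(status) as the any-eye indicator (this assumes gender is constant within a patient; the output name is arbitrary):

```sas
/* Sketch: one row per patient; patient_status = 1 if any eye has status = 1. */
proc sql;
    create table patient_level_sql as
    select patientID, gender, max(status) as patient_status
    from have
    group by patientID, gender;
quit;
```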
I want to count the number of times a certain value appears in a particular column in SAS. For example, in the following dataset the value 1 appears 3 times, value 2 appears twice, value 3 appears once, value 4 appears 4 times, and value 5 appears 4 times.
Game_ball
1
1
1
2
2
3
4
4
4
4
5
5
5
5
I want the dataset to represented like the following:
Game_ball Count
1 3
2 2
3 1
4 4
5 4
. .
. .
. .
Thanks in advance
As per #Dwal, proc freq is the easiest solution.
Using your sample data,
proc freq data=sample;
    table game_ball / out=output;
run;
Or do it in a one-pass data step (after sorting):
proc sort data=sample;
    by game_ball;
run;

data output;
    set sample;
    by game_ball;
    retain count;
    if first.game_ball then count = 0;
    count + 1;
    if last.game_ball then output;
run;
Or in SQL
proc sql;
    create table output as
    select game_ball, count(*) as count
    from sample
    group by game_ball;
quit;
data jul11.merge11;
    input month sales;
    datalines;
1 3123
1 1234
2 7482
2 8912
3 1284
;
run;

data jul11.merge22;
    input month goal;
    datalines;
1 4444
1 5555
1 8989
2 9099
2 8888
3 8989
;
run;

data jul11.merge1;
    merge jul11.merge11 jul11.merge22;
    by month;
    difference = goal - sales;
run;

proc print data=jul11.merge1 noobs;
run;
output:
month sales goal difference
1 3123 4444 1321
1 1234 5555 4321
1 1234 8989 7755
2 7482 9099 1617
2 8912 8888 -24
3 1284 8989 7705
Why didn't it match every observation in table 1 with every observation in table 2 for the common months?
The PDV retains the data of an observation while it checks whether any more observations are left in that particular BY group before it reinitializes, so in that case it should have produced a Cartesian product.
It gives a perfect Cartesian product for one-to-many merging, but not for many-to-many.
This is because of how SAS processes the data step. A merge is never a true Cartesian product (i.e., all records searched and matched up against all other records, like a SQL comma join might do). What SAS does (in the case of two datasets) is follow down one dataset (the one on the left) and advance to the next particular by-group value; then it looks at the right dataset and advances until it gets to that by-group value. If there are other records in between, it processes those singly. If there are not, but there is a match, then it matches up those records.
Then it looks on the left to see if there are any more in that by group, and if so, advances to the next. It does the same on the right. If only one of these has a match then it will only bring in those values; hence if it has 1 element on the left and 5 on the right, it will do 1x5 or 5 rows. However, if there are 2 on the left and 3 on the right, it won't do 2x3=6; it does 1:1, 2:2, and 2:3, because it's advancing record pointers sequentially.
The following example is a good way to see how this works. If you really want to see it in action, throw in the data step debugger and play around with it interactively.
data test1;
    input x row1;
    datalines;
1 1
1 2
1 3
1 4
2 1
2 2
2 3
3 1
;
run;

data test2;
    input x row2;
    datalines;
1 1
1 2
1 3
2 1
3 1
3 2
3 3
;
run;

data test_merge;
    merge test1 test2;
    by x;
    put x= row1= row2=;
run;
If you do want to do a cartesian join in SAS datastep, you have to do nested SET statements.
data want;
    set test1;
    do _n_ = 1 to nobs_2;
        set test2 point=_n_ nobs=nobs_2;
        output;
    end;
run;
That's the true cartesian, you can then test for by group equality; but that's messy, really. You could also use a hash table lookup, which works better with BY groups. There are a few different options discussed here.
SAS doesn't handle many-to-many merges very well within the datastep. You need to use a PROC SQL if you want to do a many-to-many merge.
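That PROC SQL many-to-many match could be sketched like this, reusing TEST1 and TEST2 from above (WANT_SQL is an arbitrary output name):

```sas
/* Sketch: every TEST1 row is paired with every TEST2 row sharing the same X, */
/* a true many-to-many match (2 rows on the left x 3 on the right gives 6).   */
proc sql;
    create table want_sql as
    select a.x, a.row1, b.row2
    from test1 as a inner join test2 as b
        on a.x = b.x;
quit;
```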
I have a data set where each observation is a combination of binary indicator variables, but not necessarily all possible combinations. I'd like to eliminate observations that are subsets of other observations. As an example, suppose I had these three observations:
var1 var2 var3 var4
0 0 1 1
1 0 0 1
0 1 1 1
In this case, I would want to eliminate observation 1, because it's a subset of observation 3. Observation 2 isn't a subset of anything else, so my output data set should contain observations 2 and 3.
Is there an elegant and preferably fast way to do this in SAS? My best solution thus far is a brute force loop through the data set using a second set statement with the point option to see if the current observation is a subset of any others, but these data sets could become huge once I start working with a lot of variables, so I'm hoping to find a better way.
First off, one consideration: is it possible for one row to have 1 for all indicators? You should check for that first - if one row does have all 1s, then it will always be the unique solution.
POINT= is inefficient, but loading into a hash table isn't a terribly bad way to do it. Just load up a hash table with a string of the binary indicators CATted together, and then search that table.
First, use PROC SORT NODUPKEY to eliminate the exact matches. Unless you have a very large number of indicator variables, this will eliminate many rows.
Then, sort it in an order where the more "complicated" rows are at the top, and the less complicated at the bottom. This might be as simple as making a variable which is the sum of binary indicators and sort by that descending; or if your data suggests, might be sorting by a particular order of indicators (if some are more likely to be present). The purpose of this is to reduce the number of times we search; if the likely matches are on top, we will leave the loop faster.
Finally, use a hash iterator to search the list, in descending order by the indicators variable, for any matches.
See below for a partially-tested example. I didn't verify that it eliminated every valid elimination, but it eliminates around half of the rows, which sounds reasonable.
data have;
    array vars var1-var20;
    do _u = 1 to 1e4;
        do _t = 1 to dim(vars);
            vars[_t] = round(ranuni(7),1);
        end;
        complexity = sum(of vars[*]);
        indicators = cats(of vars[*]);
        output;
    end;
    drop _:;
run;

proc sort nodupkey data=have;
    by indicators;
run;

proc sort data=have;
    by descending complexity;
run;

data want;
    if _n_ = 1 then do;
        format indicators $20.;
        call missing(indicators, complexity);
        declare hash indic(dataset:'have', ordered:'d');
        indic.defineKey('indicators');
        indic.defineData('complexity','indicators');
        indic.defineDone();
        declare hiter inditer('indic');
    end;
    set have(drop=indicators rename=complexity=thisrow_complex); * assumes HAVE has a variable INDICATORS like "0011001";
    array vars var1-var20;
    thisrow_pattern = cats(of vars[*]); * this row's own pattern, so it is not compared against itself;
    rc = inditer.first();
    rowcounter = 1;
    do while (rc = 0 and complexity ge thisrow_complex);
        if indicators ne thisrow_pattern then do; * skip the row's own hash entry;
            do _t = 1 to dim(vars);
                if vars[_t] = 1 and char(indicators,_t) ne '1' then leave;
            end;
            if _t gt dim(vars) then delete; * every 1 in this row is covered, so it is a subset;
        end;
        rc = inditer.next();
        rowcounter = rowcounter + 1;
    end;
run;
I'm pretty sure there is a more math-oriented way of doing this, but for now this is what I can think of. Proceed with caution, as I only checked it on a small number of test cases.
My pseudo-algorithm (bit pattern = concatenation of all the binary variables into a string):
1. Get a unique list of bit patterns, sorted in ascending order (this is what the PROC SQL step is doing).
2. Set up two arrays: one to track the variables (var1 to var4) on the current row, and another to track the lags of var1 to var4.
3. If a (lagged) bit pattern is a subset of the current bit pattern, then element-wise ("dot product") multiplication of the two should give back the lagged bit pattern.
4. If the product pattern equals the lagged bit pattern, flag that lagged pattern to be excluded.
5. In the last data step you will get the list of bit patterns you need to exclude.
NOTE: this approach does not take care of duplicate patterns such as the following:
record1: 1 0 0 1
record2: 1 0 0 1
which can be easily excluded beforehand via PROC SORT with NODUPKEY.
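That preliminary de-duplication step might look like this (a sketch; sample_data and var1-var4 are the names used in the code below):

```sas
/* Sketch: drop exact duplicate bit patterns before the lag comparison. */
proc sort data=sample_data nodupkey;
    by var1-var4;
run;
```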
/* sample data */
data sample_data;
    input id var1-var4;
    bit_pattern = compress(catx('',var1,var2,var3,var4));
    datalines;
1 0 0 1 1
2 1 0 0 1
3 0 1 1 1
4 0 0 0 1
5 1 1 1 0
;
run;
/* In the above example, 0001 and 0011 need to be eliminated. These will be flagged in the last data step. */
/* get the unique combinations of patterns in the dataset */
proc sql;
    create table all_poss_patterns as
    select var1, var2, var3, var4,
           count(*) as freq
    from sample_data
    group by var1, var2, var3, var4
    order by var1, var2, var3, var4;
quit;
data patterns_to_exclude;
    set all_poss_patterns;
    by var1-var4;
    length lagged1-lagged4 8;
    array first_array{*} var1-var4;
    array lagged{*} lagged1-lagged4;
    length bit_pattern $32;
    length lagged_bit_pattern $32;
    bit_pattern = '';
    lagged_bit_pattern = '';
    do i = 1 to dim(first_array);
        lagged{i} = lag(first_array{i});
    end;
    do i = 1 to dim(first_array);
        bit_pattern = cats("", bit_pattern, lagged{i}*first_array{i});
        lagged_bit_pattern = cats("", lagged_bit_pattern, lagged{i});
    end;
    if bit_pattern = lagged_bit_pattern then exclude_pattern = 1;
    else exclude_pattern = 0;
    /* Uncomment the following two lines to keep just the patterns that need to be excluded.     */
    /* Note: in bit_pattern ne '....' the number of dots should equal the number of binary vars. */
    /* if bit_pattern ne '....' and exclude_pattern=1; */
    /* keep bit_pattern; */
run;