I have the following dataset:
dataseta:
No.  Name1  Name2                        Sales  Inv  Comp
1    TC     Tribal Council Inc           100    100  0
2    TC     Tribal Council Limited INC   20     25   65
desired output:
datasetb:
No.  Name1  Name2                        Sales  Inv  Comp
1    TC     Tribal Council Limited Inc   120    125  0
Basically, I need to choose the row with the maximum length of characters for the column name2.
I tried the following, but it didn't work
proc sql;
create table datasetb as
select no, name1, name2, sum(sales), sum(inv), min(comp)
from dataseta
group by 1, 2, 3
having length(name2) = max(length(name2));
quit;
If I do the following code, it only partially resolves it, and I get duplicate rows
proc sql;
create table datasetb as
select no, name1, max(length(name2)), sum(sales), sum(inv), min(comp)
from dataseta
group by 1, 2
having length(name2) = max(length(name2));
quit;
You appear to be joining the results of two separate aggregate computations.
Presuming:
no is unique, so it can serve as a tie-breaker criterion, and the first (by no) longest name2 is to be joined with the sales, inv, and comp totals over name1.
The query will have a lot going on...
First longest name2 within name1: nested subqueries are needed to
determine the longest name2, then
select the first one, according to no, if there is more than one.
Totals over name1:
the totals come from a subquery that is joined to the first result, delivering the desired result set.
Example (SQL)
data have;
length no 8 name1 $6 name2 $35 sales inv comp 8;
* the & modifier lets name1 and name2 contain embedded blanks, so each of those values below is followed by two or more blanks;
input no name1& name2& sales inv comp;
datalines;
1 TC  Tribal Council Inc  100 100 0
2 TC  Tribal Council Limited INC  20 25 65
3 TC  Tribal council co  0 0 0
4 TC  The Tribal council Assoctn  10 10 10
7 LS  Longshore association  10 10 0
8 LS  The Longshore Group, LLC  2 4 8
9 LS  The Longshore Group, llc  15 15 6
;
run;
proc sql;
create table want as
select
first_longest_name2.no,
first_longest_name2.name1,
first_longest_name2.name2,
name1_totals.sales,
name1_totals.inv,
name1_totals.comp
FROM
(
select
no, name1, name2
from
( select
no, name1, name2
from have
group by name1
having length(name2) = max(length(name2))
) longest_name2s
group by name1
having no = min(no)
) as
first_longest_name2
LEFT JOIN
(
select
name1,
sum(sales) as sales,
sum(inv) as inv,
sum(comp) as comp
from
have
group by name1
) as
name1_totals
ON
first_longest_name2.name1 = name1_totals.name1
;
quit;
Example (DATA Step)
Processing the data in a serial manner, when name1 groups are contiguous rows, can be accomplished using a DOW-loop technique -- that is, a loop with a SET statement inside it. Because the loop reads an entire name1 group before the step's implicit OUTPUT, the result has one row per group.
data want2;
do until (last.name1);
set have;
by name1 notsorted;
if length(name2) > longest then do;
longest = length(name2);
no_at_longest = no;
name2_at_longest = name2;
end;
sales_sum = sum(sales_sum,sales);
inv_sum = sum(inv_sum,inv);
comp_sum = sum(comp_sum,comp);
end;
drop name2 no sales inv comp longest;
rename
no_at_longest = no
name2_at_longest = name2
sales_sum = sales
inv_sum = inv
comp_sum = comp
;
run;
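With the have data above, both approaches should return one row per name1 group: TC keeps name2 'Tribal Council Limited INC' (no=2 wins the length tie with no=4 because it has the lower no) along with sales=130, inv=135, comp=75, and LS keeps 'The Longshore Group, LLC' (no=8) along with sales=27, inv=29, comp=14.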
Related
Observations from the "other_claims" data set are to be summed with the observations in the "event_claims" data set under the following conditions:
"Other_claims" that occur within a 90-day window after the event claim's discharge date ("stay_discharge_dt") are to be summed with the event cost ("cost_event").
If the "other_claim" only partially overlaps the 90-day period, only the overlapping days are to be included.
The included fraction: (# of overlapping days) / (total # of days of the other_claim)
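For example, with the sample data below: patient 1 is discharged 06/15/2019, so the 90-day window runs through 09/13/2019. The home_health claim of 08/26/2019-09/26/2019 overlaps that window for 18 of its 31 days (counting day differences the way INTCK does), so only 18/31, roughly 58%, of its $12,000 -- about $6,968 -- would be included.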
Here's the sql solution I am considering. I'm curious if this could be more efficient?
data event_claims;
input patient_id stay_admission_dt :mmddyy10. stay_discharge_dt :mmddyy10. doctor cost_event;
format stay_admission_dt stay_discharge_dt mmddyy10.;
datalines;
1 06/10/2019 06/15/2019 45 20000
2 10/18/2018 10/22/2018 78 30000
;
data other_claims;
length patient_id 3 type $19;
input patient_id Type $ service_start_date :mmddyy10. service_end_date :mmddyy10. service_cost :dollar7.;
format service_start_date service_end_date mmddyy10.;
datalines;
1 skilled_nursing 06/15/2019 06/25/2019 $7,000
1 home-health 06/25/2019 08/25/2019 $24,000
1 office_visit 07/1/2019 07/1/2019 $200
1 home_health 08/26/2019 09/26/2019 $12,000
2 er_visit 10/15/2018 10/16/2018 $1,500
2 home_health 10/23/2018 11/23/2018 $8,000
2 outpatient_services 01/18/2019 1/22/2019 $5,000
;
proc sql;
create table events_others as
select a.patient_id
,a.stay_admission_dt
,a.stay_discharge_dt
,a.stay_discharge_dt+90 as service_deadline format=mmddyy10.
,b.service_start_date
,b.service_end_date
,case when b.service_start_date > calculated service_deadline
or b.service_start_date < a.stay_admission_dt
then "service not payable"
else "payable" end as payable
,case when calculated payable = "payable"
and b.service_end_date > calculated service_deadline
then intck("day",b.service_end_date, calculated service_deadline)
else 0 end as overlap /* negative day count when the other claim runs past the 90-day window */
,a.cost_event
,b.service_cost as service_cost_other
,case when calculated payable ne "payable" then 0
when calculated overlap ne 0
then (intck("day",b.service_start_date,b.service_end_date) + calculated overlap)/intck("day",b.service_start_date,b.service_end_date)
else 1 end as partial_factor /* 1 when the other claim falls entirely inside the window */
,calculated partial_factor * b.service_cost as final_other_cost format=dollar9.2
from event_claims a
left join other_claims b
on a.patient_id=b.patient_id
order by a.patient_id
,a.stay_admission_dt
;quit;
proc sql;
create table total_cost_of_care as
select a.*
,b.total_other_cost format=dollar12.2
,a.cost_event + b.total_other_cost as total_episode_cost format=dollar12.2
from events_others a
inner join
(select patient_id
,stay_admission_dt
,sum(final_other_cost) as total_other_cost
from events_others
group by patient_id
,stay_admission_dt
) b
on (a.patient_id=b.patient_id
and a.stay_admission_dt=b.stay_admission_dt)
;quit;
I'm working in SAS as a novice. I have two datasets:
Dataset1
Unique ID  ColumnA
1          15
1          39
2          20
3          10
Dataset2
Unique ID  ColumnB
1          40
2          55
2          10
For each UniqueID, I want to subtract each value of ColumnA from every value of ColumnB with the same UniqueID. And I would like to create a NewColumn that is 1 any time 1 < ColumnB - ColumnA < 30. For the first row of Dataset 1, where UniqueID = 1, I would want SAS to go through all the rows in Dataset 2 that also have UniqueID = 1 and determine whether there are any rows in Dataset 2 where the difference between ColumnB and ColumnA is greater than 1 and less than 30. For the first row of Dataset 1 the NewColumn should be assigned a value of 1 because 40 - 15 = 25. For the second row of Dataset 1 the NewColumn should be assigned a value of 0 because 40 - 39 = 1 (which is not greater than 1). For the third row of Dataset 1, I again want SAS to go through every row of ColumnB in Dataset 2 that has the same UniqueID as in Dataset 1, so 55 - 20 = 35 (which is greater than 30), but NewColumn would still be assigned a value of 1 because (moving to row 3 of Dataset 2, which has UniqueID = 2) 20 - 10 = 10, which satisfies the condition.
So I want my output to be:
Unique ID  ColumnA  NewColumn
1          15       1
1          39       0
2          20       1
I have tried concatenating Dataset1 and Dataset2 into a FullDataset. Then I tried using a DO loop, but I can't figure out how to make the loop run for each value of UniqueID. I tried using BY, but that of course produces an error because in a DO statement BY is only used for increments.
DATA FullDataset;
set Dataset1 Dataset2; /*Concatenate datasets*/
do i=ColumnB-ColumnA by UniqueID;
if 1<ColumnB-ColumnA<30 then NewColumn=1;
output;
end;
RUN;
I know I'm probably way off but any help would be appreciated. Thank you!
So, the way that answers your question most directly is the keyed SET. This isn't necessarily how I'd do this, but it is fairly simple to understand (as opposed to a hash table, which is what I'd use, or an SQL join, probably what most people would use; a sketch of the SQL-join version appears after the keyed-set example below). This does exactly what you say: grab a row of A, and for each matching row of B check a condition. It requires having an index on the datasets (well, at least on the B dataset).
data colA(index=(id));
input ID ColumnA;
datalines;
1 15
1 39
2 20
3 10
;;;;
data colB(index=(id));
input ID ColumnB;
datalines;
1 40
2 55
2 30
;;;;
run;
data want;
*base: the colA dataset - you want to iterate through that once per row;
set colA;
*now, loop while the check variable shows 0 (match found);
do while (_iorc_ = 0);
*bring in other dataset using ID as key;
set colB key=ID ;
* check to see if it matches your requirement, and also only check when _IORC_ is 0;
if _IORC_ eq 0 and 1 lt ColumnB-ColumnA lt 30 then result=1;
* This is just to show you what is going on, can remove;
put _all_;
end;
*reset things for next pass;
_ERROR_=0;
_IORC_=0;
run;
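For comparison, here is a minimal sketch of the SQL-join alternative mentioned above, using the same colA and colB datasets (the table name want_sql is just illustrative). Note that duplicate ID/ColumnA pairs would collapse into a single output row, and IDs with no match in colB get NewColumn=0 rather than missing.
proc sql;
   create table want_sql as
   select a.ID,
          a.ColumnA,
          /* flag the row if any matching colB row satisfies the condition */
          max(case when b.ColumnB - a.ColumnA > 1
                    and b.ColumnB - a.ColumnA < 30
                   then 1 else 0 end) as NewColumn
   from colA as a
        left join colB as b
        on a.ID = b.ID
   group by a.ID, a.ColumnA;
quit;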
I've got the below code that works beautifully for comparing rows in a group when the first row doesn't matter.
data want_Find_Change;
set WORK.IA;
by ID;
array var[*] $ RATING;
array lagvar[*] $ zRATING;
array changeflag[*] RATING_UPDATE;
do i = 1 to dim(var);
lagvar[i] = lag(var[i]);
end;
do i = 1 to dim(var) ;
changeflag[i] = (var[i] NE lagvar[i] AND NOT first.ID);
end;
drop i;
run;
Unfortunately, when I use a dataset that has two rows per group I get incorrect returns, I'm assuming because the first row has to be used in the comparison. How can I compare the only two rows and return a flag only on the second row? This did not work:
data Change;
set WORK.Two;
by ID;
changeflag = last.RATING NE first.RATING;
run;
Example of the data I have and want
Group Name Sport DogName Eligibility
1 Tom BBALL Toto Yes
1 Tom golf spot Yes
2 Nancy vllyball Jimmy yes
2 Nancy vllyball rover no
want
Group Name Sport DogName Eligibility N_change S_change D_Change E_change
1 Tom BBall Toto Yes 0 0 0 0
1 Tom golf spot Yes 0 1 1 0
2 Nancy vllyball Jimmy yes 0 0 0 0
2 Nancy vllyball rover no 0 0 1 1
If you want only the first row to not be flagged, you first need to create a variable enumerating the rows within each group. You can do so with:
data temp;
set have;
count + 1;
by Group;
if first.Group then count = 1;
run;
In a second step, you can run a proc sql with a subquery, count distinct by groups, and case when:
proc sql;
create table want as
select
Group, Name, Sport, DogName, Eligibility,
case when count_name > 1 and count > 1 then 1 else 0 end as N_change,
case when count_sport > 1 and count > 1 then 1 else 0 end as S_change,
case when count_dog > 1 and count > 1 then 1 else 0 end as D_change,
case when count_E > 1 and count > 1 then 1 else 0 end as E_change
from (select *,
count(distinct(Name)) as count_name,
count(distinct(Sport)) as count_sport,
count(distinct(DogName)) as count_dog,
count(distinct(Eligibility)) as count_E
from temp
group by Group);
quit;
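With the sample data above, this should reproduce the want table shown in the question.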
Best,
I have a sample that includes two variables: ID and ym. ID is the specific ID for each trader and ym is the year-month variable. I want to create a variable that shows the number of records over the 10-year period prior to month t, as shown in the following table.
ID ym Want
1 200101 0
1 200301 1
1 200401 2
1 200501 3
1 200601 4
1 200801 5
1 201201 5
1 201501 4
2 200001 0
2 200203 1
2 200401 2
2 200506 3
I attempted to use BY-group processing and first.id to count the number.
data want;
set have;
want+1;
by id;
if first.id then want=1;
run;
However, the years in ym are not continuous. When the time gap is more than 10 years, this method does not work. I assume I need to count within a rolling 10-year window, but I am not sure how to achieve it. Please give me some suggestions. Thanks.
Just do a self join in SQL. With your coding of YM, an interval that is a whole multiple of a year is easy (subtracting 1000 from YM moves it back exactly ten years, e.g. 201501 - 1000 = 200501), but other intervals are harder.
proc sql;
create table want as
select a.id,a.ym,count(b.ym) as want
from have a
left join have b
on a.id = b.id
and (a.ym - 1000) <= b.ym < a.ym
group by a.id,a.ym
order by a.id,a.ym
;
quit;
This method retains the previous values for each ID and directly checks how many of them are within 120 months of the current value. It is not optimized, but it works. You can set the dimension of the array m() to the maximum number of records you have per ID if you care about efficiency.
The variable d is a quick shorthand I often use which converts years/months into an integer value - so
200012 -> (2000*12) + 12 = 24012
200101 -> (2001*12) + 1 = 24013
time from 200012 to 200101 = 24013 - 24012 = 1 month
data have;
input id ym;
datalines;
1 200101
1 200301
1 200401
1 200501
1 200601
1 200801
1 201201
1 201501
2 200001
2 200203
2 200401
2 200506
;
proc sort data=have;
by id ym;
data want (keep=id ym want);
set have;
by id;
retain seq m1-m100;
array m(100) m1-m100;
** Convert date to comparable value **;
d = 12 * floor(ym/100) + mod(ym,100);
** Initialize number of previous records **;
want = 0;
** If first record, set retained values to missing and leave want=0 **;
if first.id then call missing(seq,of m1-m100);
** Otherwise loop through previous months and count how many were within 120 months **;
else do;
do i = 1 to seq;
if d <= (m(i) + 120) then want = want + 1;
end;
end;
** Increment variables for next iteration **;
seq + 1;
m(seq) = d;
run;
proc print data=want noobs;
run;
I want to find the mean of the following datalines.
The way I am trying, I am getting the mean based on the number of observations, which in this case is 6. But I want it based on Day, so it comes out to something like Mean = Timeread / (no. of days), where the number of days here is 3.
name Day Timeread
X 1 12
X 1 23
X 1 12
X 2 8
X 2 5
X 3 3
This is the code I used
proc summary data = xyz nway missing;
class Name;
var timeread;
output out = Average mean=;
run;
proc print data = Average;
run;
I'm not sure how to do this with PROC MEANS, but you can do it in SQL like so:
proc sql noprint;
create table want as
select name,
sum(timeread) / count(distinct day) as daily_mean
from have
group by name
;
quit;
This uses the HAVE dataset from @CarolinaJay65's answer.
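With that data, the query returns sum(timeread) = 63 over 3 distinct days, i.e. a daily_mean of 21 for name X.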
If you just want the total timeread divided by the total number of distinct days:
Data HAVE;
Input name $ Day Timeread ;
Datalines;
X 1 12
X 1 23
X 1 12
X 2 8
X 2 5
X 3 3
;
Run;
Proc Sql;
Create table WANT as
Select Name, (select count(distinct(Day)) from HAVE) as DAYS
, sum(timeread) as TIMEREAD_TOTAL
, calculated timeread_total/calculated days as MEAN
From HAVE
Group by Name;
Quit;