I have a data set that looks like this:
company Assets Liabilities strategy1 strategy2 strategy3.....strategy22
1 500 500 0 50 50 50
2 200 300 33 30 33 0
My goal is to find the maximum value across the row for all strategies (strategy1 - strategy22), and basically bucket the company by the strategy they use. The problem comes when some companies have the same maximum value under multiple strategies. In this case I would want to place the firm into multiple buckets. The final dataset would be something like this:
company Assets Liab. strategy1 strategy2 strategy3.....strategy22 Strategy
1 500 500 0 50 50 50 Strategy2
1 500 500 0 50 50 50 Strategy3
1 500 500 0 50 50 50 Strategy22
Etc.
The end goal is to be able to run a PROC MEANS on the companies' assets, liabilities, etc. by strategy. So far I have been able to achieve a dataset close to what I would like, but in the "Strategy" column I can't stop SAS from always outputting only the first strategy with the maximum value.
Data want;
set have;
MAX = max(of strategy1-strategy22);
array nums {22} strategy1-strategy22;
do _n_=1 to 21;
count=1;
do _i_ = _n_+1 to 22;
if nums{_n_} = nums{_i_} AND nums{_i_} ne 0 then count + 1;
end;
if count > maxcount then do;
mode = nums{_n_};
maxcount = count;
end;
end;
Run;
Data want2;
set want (where=( maxcount > 1 AND Mode = Max));
by company;
strat=1;
do until (strat gt maxcount);
output;
strat = strat +1;
end;
Run;
Basically, I computed the mode and the count of identical maximum values and if maxcount > 1 and mode = max then I output identical observations. However, I am stuck regarding how to get SAS to output different strategies if there are multiple maximum values that are the same.
That seems more complicated than you need.
data want;
set have;
array strategies[22] strategy1-strategy22;
do strategy = 1 to dim(strategies);
if strategies[strategy] = max(of strategies[*]) then output;
end;
run;
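Outside SAS, the tie-aware bucketing in this step is just "emit one row per strategy tied for the row maximum"; here is a minimal Python sketch of that logic (the function name is invented for illustration):

```python
def bucket_by_max(strategies):
    """Return one (values, strategy_number) pair per strategy tied for the row max."""
    m = max(strategies)
    return [(strategies, i + 1) for i, v in enumerate(strategies) if v == m]

# company 1 from the question: strategies 2, 3, and 4 (standing in for 22) tie at 50
ties = bucket_by_max([0, 50, 50, 50])
```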
Why not just output the row if the strategy column matches the MAX?
My array language is off, but here is some pseudo code to do what I'm thinking...
If the column you're on has the value EQ MAX, then output that row with the strategy column set to the strategy that you're looking at:
Data want;
set have;
MAX = max(of strategy1-strategy22);
array nums {22} strategy1-strategy22;
do i = 1 to 22;
if nums{i} eq MAX then do;
strategy = cats("strategy", i);
output;
end;
end;
Run;
Related
I have a column for dollar-amount that I need to break apart into $1000 segments - so $0-$999, $1,000-$1,999, etc.
I could use Case/When, but there are an awful lot of groups I would have to make.
Is there a more efficient way to do this?
Thanks!
You could just use arithmetic. For example, you could convert each value to the upper limit of its $1,000 range.
up_to = 1000*ceil(dollar/1000);
Let's make up some example data:
data test;
do dollar=0 to 5000 by 500 ;
up_to = 1000*ceil(dollar/1000);
output;
end;
run;
Results:
Obs dollar up_to
1 0 0
2 500 1000
3 1000 1000
4 1500 2000
5 2000 2000
6 2500 3000
7 3000 3000
8 3500 4000
9 4000 4000
10 4500 5000
11 5000 5000
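The arithmetic is language-independent; a minimal Python check of the same ceiling-based upper limit (function name invented here):

```python
import math

def up_to(dollar, width=1000):
    """Upper limit of the width-dollar range, i.e. 1000*ceil(dollar/1000)."""
    return width * math.ceil(dollar / width)
```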
Absolutely. This is a great use case for user-defined formats.
proc format;
value segment
0-<1000 = '0-1000'
1000-<2000 = '1000s'
2000-<3000 = '2000s'
;
quit;
If there are too many ranges to write out by hand, generate the format with code!
data segments;
retain
fmtname 'segment'
type 'n' /* numeric format */
eexcl 'Y' /* exclude the "end" match, so 0-1000 excluding 1000 itself */
;
do start = 0 to 1e6 by 1000;
end = start + 1000;
label = catx('- <',start,end); * what you want this to show up as;
output;
end;
run;
proc format cntlin=segments;
quit;
Then you can use segment = put(dollaramt,segment.); to assign the value of segment, or just apply the format format dollaramt segment.; if you're just using it in PROC SUMMARY or somesuch.
And you can combine the two approaches above to generate a User Defined Format that will bin the amounts for you.
1. Create bins to set up a user-defined format. One drawback of this method is that it requires you to know the range of the data ahead of time.
2. Use a user-defined function via PROC FCMP.
3. Use a manual calculation.
I illustrate versions of solutions 1 and 3 below. Option 2 requires PROC FCMP, but I think a plain data step is simpler than using it.
data thousands_format;
fmtname = 'thousands_fmt';
type = 'N';
do Start = 0 to 10000 by 1000;
END = Start + 1000 - 1;
label = catx(" - ", put(start, dollar12.0), put(end, dollar12.0));
output;
end;
run;
proc format cntlin=thousands_format;
run;
data demo;
do i=100 to 10000 by 50;
custom_format = put(i, thousands_fmt.);
manual_format = catx(" - ", put(floor(i/1000)*1000, dollar12.0), put((ceil(i/1000))*1000-1, dollar12.0));
output;
end;
run;
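For comparison, the same bucket label can be sketched in Python; note that deriving both ends from the lower bound (rather than mixing floor and ceil) sidesteps an edge case at exact multiples of 1,000:

```python
def thousands_label(amount, width=1000):
    """Label like '$1,000 - $1,999' for the bucket containing amount."""
    lower = (amount // width) * width
    return f"${lower:,} - ${lower + width - 1:,}"
```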
I have a dataset of money earned as a % every week in 2017 to 2018. Some don't have data at the start of 2017 as they didn't start earning until later on. The weeks are numbered as 201701, 201702 - 201752 and 201801 - 201852.
What I'd like to do is have 104 new variables called WEEK0 - WEEK103, where WEEK0 holds the first non-empty value of the money-earned columns. Here is an example of the data:
MON_EARN_201701 MON_EARN_201702 MON_EARN_201703 MON_EARN_201704
30 21 50 65
. . 30 100
. 102 95 85
Then I want my data to have the following columns (example)
WEEK0 WEEK1 WEEK2 WEEK3
30 21 50 65
30 100 . .
102 95 85 .
These are just small examples of a very large dataset.
I was thinking I'd need to try and do some sort of do loops so what I've tried so far is:
DATA want;
SET have;
ARRAY mon_earn{104} mon_earn_201701 - mon_earn_201752 mon_earn_201801 - mon_earn_201852;
ARRAY WEEK {104} WEEK0 - WEEK103;
DO i = 1 to 104;
IF mon_earn{i} NE . THEN;
WEEK{i} = mon_earn{i};
END;
END;
RUN;
This doesn't work as it doesn't fill the WEEK0 when the first value is empty.
If anymore information is needed please comment and I will add it in.
Sounds like you just need to find the starting point for copying.
First look through the list of earnings by calendar month until you find the first non-missing value. Then copy the values starting from there into your new array of earnings by relative month.
data want;
set have;
array mon_earn mon_earn_201701 -- mon_earn_201852;
array week (104);
do i = 1 to dim(mon_earn) until(found);
if mon_earn{i} ne . then found=1;
end;
do j=1 to dim(week) while (i+j-1 <= dim(mon_earn));
week(j) = mon_earn(i+j-1);
end;
run;
NOTE: I simplified the ARRAY definitions. For the input array I assumed that the variables are defined in order, so that you could use a positional array list. For the WEEK array, SAS and I both like to start counting from one, not zero.
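The copy logic, independent of SAS, is just "find the first non-missing index and shift everything left"; a small Python sketch with None standing in for missing:

```python
def realign(values):
    """Shift values left so the first non-missing entry lands in week 0,
    padding the tail with None to keep the length unchanged."""
    first = next((i for i, v in enumerate(values) if v is not None), len(values))
    shifted = values[first:]
    return shifted + [None] * (len(values) - len(shifted))
```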
You could do this if the data were in long format; there's a chance you don't need the wide format at all.
proc sort data=have;
by ID week;
run;
data want;
set have;
by id; *for each group/id counter;
retain counter;
if first.id then counter=0;
if counter=0 and not missing(value) then do;
counter=1; new_week=0; end;
if counter = 1 then new_week+1;
run;
If you really need it wide:
Find first not missing value and store index in i
Loop from i to end of week dimension
Assign week to mon_earned from i to end of week.
data want;
set have;
array mon_earned(*) .... ;
array week(*) ... ;
/* SAS array subscripts start at 1, so begin the search at index 1 */
found=0; i=1;
do while(found=0 and i <= dim(mon_earned));
if not missing(mon_earned(i)) then found=1;
else i+1;
end;
z=1;
do j=i to dim(week);
week(z) = mon_earned(j);
z+1;
end;
run;
You need a second index variable, call it j, to target the proper week assignment. j is only incremented when a month's earnings are not missing.
This example code will 'squeeze out' all missing earnings, even those missing earnings that occur after some earning has occurred. For example
earning: . . . 10 . 120 . 25 … will squeeze to
week: 10 120 25 …
data have;
array earn earn_201701-earn_201752 earn_201801-earn_201852;
do _n_ = 1 to 1000;
call missing (of earn(*));
do _i_ = 1 + 25 * ranuni(123) to dim(earn);
if ranuni(123) < 0.95 then
earn(_i_) = round(10 + 125 * ranuni(123));
end;
output;
end;
run;
data want;
set have;
array earn earn_201701-earn_201752 earn_201801-earn_201852;
array week(0:103);
j = -1;
do i = 1 to dim(earn);
if not missing(earn(i)) then do;
j+1;
week(j) = earn(i);
end;
end;
drop i j;
run;
If you want to maintain interior missing earnings the logic would be
if not missing(earn(i)) or j >=0 then do;
j+1;
week(j) = earn(i);
end;
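Both variants (squeeze out all missing values, or keep interior ones once earnings have started) can be sketched in Python for clarity:

```python
def squeeze(earnings, keep_interior=False):
    """Drop leading missing (None) values; unless keep_interior is set,
    also drop missing values that occur after earnings have started."""
    start = next((i for i, v in enumerate(earnings) if v is not None),
                 len(earnings))
    tail = earnings[start:]
    return tail if keep_interior else [v for v in tail if v is not None]
```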
I have one problem and I think there is not much to correct to work right.
I have table (with desired output column 'sum_usage'):
id opt t_purchase t_spent bonus usage sum_usage
a 1 10NOV2017:12:02:00 10NOV2017:14:05:00 100 9 15
a 1 10NOV2017:12:02:00 10NOV2017:15:07:33 100 0 15
a 1 10NOV2017:12:02:00 10NOV2017:13:24:50 100 6 6
b 1 10NOV2017:13:54:00 10NOV2017:14:02:58 100 3 10
a 1 10NOV2017:12:02:00 10NOV2017:20:22:07 100 12 27
b 1 10NOV2017:13:54:00 10NOV2017:13:57:12 100 7 7
So, I need to sum all usage values from t_purchase until t_spent (for each id, opt combination (group by id, opt) there is just one unique t_purchase).
Also, I have about a million rows, so a hash table would be the best solution. I've tried:
data want;
if _n_=1 then do;
if 0 then set have(rename=(usage=_usage));
declare hash h(dataset:'have(rename=(usage=_usage))',hashexp:20);
h.definekey('id','opt', 't_purchase', 't_spent');
h.definedata('_usage');
h.definedone();
end;
set have;
sum_usage=0;
do i=intck('second', t_purchase, t_spent) to t_spent ;
if h.find(key:user,key:id_option,key:i)=0 then sum_usage+_usage;
end;
drop _usage i;
run;
The fifth line from the bottom is certainly not correct (do i=intck('second', t_purchase, t_spent)), but I have no idea how to approach this. So, the main problem is how to set up the time interval for this calculation. I already have one function in this hash table with the same keys but without a time interval, so it would be good to write this one too, but it's not necessary.
Personally, I would ditch the hash and use SQL.
Example Data:
data have;
input id $ opt
t_purchase datetime20.
t_spent datetime20.
bonus usage sum_usage;
format
t_purchase datetime20.
t_spent datetime20.;
datalines;
a 1 10NOV2017:12:02:00 10NOV2017:14:05:00 100 9 15
a 1 10NOV2017:12:02:00 10NOV2017:15:07:33 100 0 15
a 1 10NOV2017:12:02:00 10NOV2017:13:24:50 100 6 6
b 1 10NOV2017:13:54:00 10NOV2017:14:02:58 100 3 10
a 1 10NOV2017:12:02:00 10NOV2017:20:22:07 100 12 27
b 1 10NOV2017:13:54:00 10NOV2017:13:57:12 100 7 7
;
I'm leaving your sum_usage column here for comparison.
Now, create a table of sums. New value is sum_usage2.
proc sql noprint;
create table sums as
select a.id,
a.opt,
a.t_purchase,
a.t_spent,
sum(b.usage) as sum_usage2
from have as a,
have as b
where a.id = b.id
and a.opt = b.opt
and b.t_spent <= a.t_spent
and b.t_spent >= a.t_purchase
group by a.id,
a.opt,
a.t_purchase,
a.t_spent;
quit;
Now that you have the sums, join them back to the original table:
proc sql noprint;
create table want as
select a.*,
b.sum_usage2
from have as a
left join
sums as b
on a.id = b.id
and a.opt = b.opt
and a.t_spent = b.t_spent
and a.t_purchase = b.t_purchase;
quit;
This produces the table you want. Alternatively, you can use a hash to look up the values and add the sum in a Data Step (which can be faster given the size).
data want2;
set have;
format sum_usage2 best.;
if _n_=1 then do;
%create_hash(lk,id opt t_purchase t_spent, sum_usage2,"sums");
end;
rc = lk.find();
drop rc;
run;
%create_hash() macro available here https://github.com/FinancialRiskGroup/SASPerformanceAnalytics
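For intuition, the windowed sum the self-join computes ("for each row, add up usage over rows of the same id/opt group whose t_spent falls between this row's t_purchase and t_spent") can be sketched as brute-force Python; plain numbers stand in for the datetimes here:

```python
def window_sums(rows):
    """O(n^2) equivalent of the SQL self-join: one sum per input row.
    Each row is a dict with id, opt, t_purchase, t_spent, usage."""
    out = []
    for a in rows:
        out.append(sum(b["usage"] for b in rows
                       if b["id"] == a["id"] and b["opt"] == a["opt"]
                       and a["t_purchase"] <= b["t_spent"] <= a["t_spent"]))
    return out
```

With the question's six rows (times reduced to seconds after purchase) this reproduces the sum_usage column 15, 15, 6, 10, 27, 7.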
I believe this question is a morph of one of your earlier ones, where you compute a rolling sum by doing a hash lookup for every second over a 3-hour period for each record in your data set. Hopefully you realized the simplicity of that approach has a large cost: 3*3600 hash lookups per record, as well as having to load the entire data vector into a hash.
The time log data presented has new records inserted at the top of the data, and I presume the data to be monotonically descending in time.
A DATA step can, in a single pass of monotonic data, compute the rolling sum over a time window. The technique uses 'ring' arrays, wherein index advancement is adjusted by modulus. One array is for the time and the other is for the metric (usage). The required array size is the maximum number of items that could occur within the time window.
Consider some generated sample data with time steps of 1, 2, and one jump of 200 seconds:
%let have_count = 150;
data have;
time = '12oct2017:11:22:32'dt;
usage = 0;
do _n_ = 1 to &have_count;
time + 2; *ceil(25*ranuni(123));
if _n_ > 30 then time + -1;
if _n_ = 145 then time + 200;
usage = floor(180*ranuni(123));
delta = time-lag(time);
output;
end;
run;
Start with the case of computing a rolling sum from prior items when sorted time ascending. (The descending case will follow):
The example parameters are RING_SIZE 16 and TIME_WINDOW of 12 seconds.
%let RING_SIZE = 16;
%let TIME_WINDOW = '00:00:12't;
data want;
array ring_usage [0:%eval(&RING_SIZE-1)] _temporary_ (&RING_SIZE*0);
array ring_time [0:%eval(&RING_SIZE-1)] _temporary_ (&RING_SIZE*0);
retain ring_tail 0 ring_head -1 span 0 span_usage 0;
set have;
by time ; * cause error if data not sorted per algorithm requirement;
* unload from accumulated usage the tail items that fell out the window;
do while (span and time - ring_time(ring_tail) > &TIME_WINDOW);
span + -1;
span_usage + -ring_usage(ring_tail);
ring_tail = mod ( ring_tail + 1, &RING_SIZE ) ;
end;
ring_head = mod ( ring_head + 1, &RING_SIZE );
span + 1;
if span > 1 and (ring_head = ring_tail) then do;
_n_ = dim(ring_time);
put 'ERROR: Ring array too small, size=' _n_;
abort cancel;
end;
* update the ring array;
ring_time(ring_head) = time;
ring_usage(ring_head) = usage;
span_usage + usage;
drop ring_tail ring_head span;
run;
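The ascending-case bookkeeping amounts to a time-windowed FIFO queue; here is a Python sketch using collections.deque (which grows as needed, so no ring-size guard is required):

```python
from collections import deque

def rolling_sums(times, usages, window):
    """Rolling sum of usage over the trailing `window` seconds,
    assuming times are sorted ascending."""
    ring = deque()   # (time, usage) pairs currently inside the window
    total = 0
    out = []
    for t, u in zip(times, usages):
        ring.append((t, u))
        total += u
        # unload tail items that fell out of the window
        while t - ring[0][0] > window:
            total -= ring.popleft()[1]
        out.append(total)
    return out
```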
For the case of data sorted descending, you could jiggle things; sort ascending, compute rolling and resort descending.
What to do if such a jiggle can't be done, or you just want a single pass?
The items that are part of the rolling calculation have to come from 'lead' rows, that is, rows not yet read via SET. How is this possible? A second SET statement can be used to open a separate channel to the data set, and thus obtain lead values.
There is a little more bookkeeping for processing lead data -- premature overwrite and diminished window at the end of data need to be handled.
data want2;
array ring_usage [-1:%eval(&RING_SIZE-1)] _temporary_;
array ring_time [-1:%eval(&RING_SIZE-1)] _temporary_;
retain lead_index 0 ring_tail -1 ring_head -1 span 1 span_usage . guard_index .;
set have;
&debug put / _N_ ':' time= ring_head=;
* unload ring_head slotted item from sum;
span + -1;
span_usage + -ring_usage(ring_head);
* advance ring_head slot by 1, the vacated slot will be overwritten by lead;
ring_head = mod ( ring_head + 1, &RING_SIZE );
&debug put +2 ring_time(ring_head)= span= 'head';
* load ring with lead values via a second SET of the same data;
if not end2 then do;
do until (_n_ > 1 or lead_index = 0 or end2);
set have(keep=time usage rename=(time=t usage=u)) end=end2; * <--- the second SET ;
if end2 then guard_index = lead_index;
&debug if end2 then put guard_index=;
ring_time(lead_index) = t;
ring_usage(lead_index) = u;
&debug put +2 ring_time(lead_index)= 'lead';
lead_index = mod ( lead_index + 1, &RING_SIZE);
end;
end;
* advance ring_tail to cover the time window;
if ring_tail ne guard_index then do;
ring_tail_was = ring_tail;
ring_tail = mod ( ring_tail + 1, &RING_SIZE ) ;
do while (time - ring_time(ring_tail) <= &TIME_WINDOW);
span + 1;
span_usage + ring_usage(ring_tail);
&debug put +2 ring_time(ring_tail)= span= 'seek';
ring_tail_was = ring_tail;
ring_tail = mod ( ring_tail + 1, &RING_SIZE ) ;
if ring_tail_was = guard_index then leave;
if span > 1 and (ring_head = ring_tail) then do;
_n_ = dim(ring_time);
put 'ERROR: Ring array too small, size=' _n_;
abort cancel;
end;
end;
* seek went beyond window, back tail off to prior index;
ring_tail = ring_tail_was;
end;
&debug put +2 ring_time(ring_tail)= span= 'mark';
drop lead_index t u ring_: guard_index span;
format ring: span: usage 6.;
run;
options source;
Confirm both methods have the same computation:
proc sort data=want2; by time;
run;
proc compare noprint data=want compare=want2 out=diff outnoequal;
id time;
var span_usage;
run;
---------- LOG ----------
NOTE: There were 150 observations read from the data set WORK.WANT.
NOTE: There were 150 observations read from the data set WORK.WANT2.
NOTE: The data set WORK.DIFF has 0 observations and 4 variables.
I have not benchmarked ring-array versus SQL versus Proc EXPAND versus Hash.
Caution: Dead reckoning rolling values using +in and -out operations can experience round-off errors when dealing with non-integer values.
Consider following exemplary SAS dataset with following layout.
Price Num_items
100 10
120 15
130 20
140 25
150 30
I want to group them into 4 categories by defining a new variable called cat such that the new dataset looks as follows:
Price Num_items Cat
100 10 1
120 15 1
130 20 2
140 25 3
150 30 4
Also I want to group them so that they have about equal numbers of items (for example, in the above grouping Group 1 has 25 items, Group 2 has 20, Group 3 has 25, and Group 4 has 30). Note that the price column is sorted in ascending order (that is required).
I am struggling to get started with SAS on this, so any help would be appreciated. I am not looking for a complete solution, just pointers towards preparing one.
Cool problem, subtly complex. I agree with @J_Lard that a data step with some retainment would likely be the quickest way to accomplish this. If I understand your problem correctly, the code below should give you some ideas as to how to solve it. Note that depending on num_items and group_target, your mileage may vary.
Generate similar, but larger data set.
data have;
do price=50 to 250 by 10;
/*Seed is `_N_` so we'll see the same random item count.*/
num_items = ceil(ranuni(_N_)*10)*5;
output;
end;
run;
Categorize.
/*Desired group size specification.*/
%let group_target = 50;
data want;
set have;
/*The first record, initialize `cat` and `cat_num_items` to 1 with implicit retainment*/
if _N_=1 then do;
cat + 1;
cat_num_items + num_items;
end;
else do;
/*If the item count for a new price puts the category count above the target, apply logic.*/
if cat_num_items + num_items > &group_target. then do;
/*If placing the item into a new category puts the current cat count closer to the `group_target` than would keeping it, then put into new category.*/
if abs(&group_target. - cat_num_items) < abs(&group_target. - (cat_num_items+num_items)) then do;
cat+1;
cat_num_items = num_items;
end;
/*Otherwise keep it in the current category and increment category count.*/
else cat_num_items + num_items;
end;
/*Otherwise keep the item count in the current category and increment category count.*/
else cat_num_items + num_items;
end;
drop cat_num_items;
run;
Check.
proc sql;
create table check_want as
select cat,
sum(num_items) as cat_count
from want
group by cat;
quit;
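The retained-counter logic translates directly; a Python sketch of the greedy pass (names invented), using the same "closer to the target with or without the new count" test:

```python
def categorize(item_counts, target):
    """Assign a 1-based category to each count, starting a new category when
    adding the next count would land farther from `target` than stopping."""
    cats = []
    cat, running = 1, 0
    for n in item_counts:
        if running and running + n > target and \
                abs(target - running) < abs(target - (running + n)):
            cat += 1
            running = n
        else:
            running += n
        cats.append(cat)
    return cats
```

With the question's item counts and a target of 25, this reproduces the example grouping 1, 1, 2, 3, 4.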
I have a dataset that has weekly values stored by location. I want to determine how many times the value has changed. Initially I thought I could just count distinct values, but the issue is that sometimes the values are repeated. Consider the example below:
data have;
input location $ week value;
cards;
NC 1 100
NC 2 200
NC 3 200
NC 4 200
NC 5 100
NC 6 200
SC 1 500
SC 2 500
SC 3 500
SC 4 500
SC 5 500
SC 6 500
;
run;
Notice that the value at location NC changes three times, at weeks 2,5,6. The value at location SC changes 0 times.
I would like an output of the change frequency...something like:
NC 3
SC 0
Any help would be greatly appreciated. Thank you.
Use the NOTSORTED keyword on a BY statement and you can then count the number of FIRST. occurrences.
proc sort data=have;
by location week;
run;
data want;
set have;
by location value notsorted ;
if first.location then nchange=0;
else nchange + first.value;
if last.location;
keep location nchange ;
run;
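Counting FIRST. occurrences this way is equivalent to counting runs of equal values; a Python sketch via itertools.groupby:

```python
from itertools import groupby

def count_changes(values):
    """Number of times consecutive values differ: runs of equal values, minus one."""
    runs = sum(1 for _ in groupby(values))
    return max(0, runs - 1)
```

For the NC series this gives 3 and for SC it gives 0, matching the desired output.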
Make sure the data is sorted. Your example is, but if not then
proc sort data=have;
by location week;
run;
After that, use the BY statement inside the data step. This will create indicators that tell you when you are at the start and end of the BY group.
RETAIN will keep values between lines.
data want;
set have;
by location;
retain last count;
if first.location then do;
count = 0;
last = value;
end;
if last ^= value then
count = count + 1;
last = value;
if last.location then
output;
run;