Computing running sum with moving time-window - stata

My data
I am working on a spell dataset in the following format:
cls
clear all
set more off
input id spellnr str7 bdate_str str7 edate_str employed
1 1 2008m1 2008m9 1
1 2 2008m12 2009m8 0
1 3 2009m11 2010m9 1
1 4 2010m10 2011m9 0
///
2 1 2007m4 2009m12 1
2 2 2010m4 2011m4 1
2 3 2011m6 2011m8 0
end
* translate to Stata monthly dates
gen bdate = monthly(bdate_str,"YM")
gen edate = monthly(edate_str,"YM")
drop *_str
format %tm bdate edate
list, sepby(id)
Corresponding to:
+---------------------------------------------+
| id spellnr employed bdate edate |
|---------------------------------------------|
1. | 1 1 1 2008m1 2008m9 |
2. | 1 2 0 2008m12 2009m8 |
3. | 1 3 1 2009m11 2010m9 |
4. | 1 4 0 2010m10 2011m9 |
|---------------------------------------------|
5. | 2 1 1 2007m4 2009m12 |
6. | 2 2 1 2010m4 2011m4 |
7. | 2 3 0 2011m6 2011m8 |
+---------------------------------------------+
Here a given person (id) can have multiple spells (spellnr) of two types (employed: 1 for an employment spell; 0 for an unemployment spell). The start and end dates of each spell are defined by bdate and edate, respectively.
Imagine the data was already cleaned, and is such that no spells overlap with each other.
There might be "missing" periods in between any two spells though.
This is captured by the dummy dataset above.
My question:
For each unemployment spell, I need to compute the number of months spent in employment in the last 6 months, 24 months, and 48 months.
Note that, importantly, each id can go in and out from employment, and all past employment spells should be taken into account (not just the last one).
In my example, this would lead to the following desired output:
+--------------------------------------------------------------+
| id spellnr employed bdate edate m6 m24 m48 |
|--------------------------------------------------------------|
1. | 1 1 1 2008m1 2008m9 . . . |
2. | 1 2 0 2008m12 2009m8 4 9 9 |
3. | 1 3 1 2009m11 2010m9 . . . |
4. | 1 4 0 2010m10 2011m9 6 11 20 |
|--------------------------------------------------------------|
5. | 2 1 1 2007m4 2009m12 . . . |
6. | 2 2 1 2010m4 2011m4 . . . |
7. | 2 3 0 2011m6 2011m8 5 20 44 |
+--------------------------------------------------------------+
My (working) attempt:
The following code returns the desired result.
* expand each spell to one observation per time unit (here "months"; works also for days)
expand edate-bdate+1
bysort id spellnr: gen spell_date = bdate + _n - 1
format %tm spell_date
list, sepby(id spellnr)
* fill-in empty months (not covered by spells)
xtset id spell_date, monthly
tsfill
* compute cumulative time spent in employment and lagged values
bysort id (spell_date): gen cum_empl = sum(employed) if employed==1
bysort id (spell_date): replace cum_empl = cum_empl[_n-1] if cum_empl==.
* note: lags of 7/25/49 (window + 1) are needed because cum_empl - L(k+1).cum_empl
* spans months t-k..t, and employed==0 in the current month t of an unemployment spell
bysort id (spell_date): gen lag_7 = L7.cum_empl if employed==0
bysort id (spell_date): gen lag_24 = L25.cum_empl if employed==0
bysort id (spell_date): gen lag_48 = L49.cum_empl if employed==0
qui replace lag_7=0 if lag_7==. & employed==0 // fix computation for first spell of each "id" (if not enough time to go back with "L.")
qui replace lag_24=0 if lag_24==. & employed==0
qui replace lag_48=0 if lag_48==. & employed==0
* compute time spent in employment in the last 6, 24, 48 months, at the beginning of each unemployment spell
bysort id (spell_date): gen m6 = cum_empl - lag_7 if employed==0
bysort id (spell_date): gen m24 = cum_empl - lag_24 if employed==0
bysort id (spell_date): gen m48 = cum_empl - lag_48 if employed==0
qui drop if (spellnr==.)
qui bysort id spellnr (spell_date): keep if _n == 1
drop spell_date cum_empl lag_*
list
This works fine, but becomes quite inefficient with daily data on several million observations. Can you suggest any alternative approach that does not involve expanding the dataset?
In words, what I do above is:
I expand the data to have one row per month;
I fill in the "gaps" between the spells with -tsfill-;
I compute the running time spent in employment, and use lag operators to get the three quantities of interest (a rangestat alternative for this step is sketched below).
This is in the vein of what was done here, in a past question that I posted. However, the working example there was unnecessarily complicated and contained some mistakes.
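For reference, the cumulative-sum and lag bookkeeping in the code above could presumably be replaced with rangestat (SSC). This is an untested sketch, added for illustration only; it still operates on the expanded monthly panel, so it does not remove the expansion step:
* untested sketch; assumes rangestat is installed (ssc install rangestat)
rangestat (sum) m6 = employed, interval(spell_date -6 -1) by(id)
rangestat (sum) m24 = employed, interval(spell_date -24 -1) by(id)
rangestat (sum) m48 = employed, interval(spell_date -48 -1) by(id)
* the final drop/keep steps of the code above then follow, with m6/m24/m48
* set to missing for employment spells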
SOLUTIONS PERFORMANCE
I tried different approaches suggested in the accepted answer below (including using joinby as suggested in an earlier version of the answer). In order to create a larger dataset I used:
expand 500000
bysort id spellnr: gen new_id = _n
drop id
rename new_id id
which creates a dataset with 500,000 ids (for a total of 3,500,000 spells).
The first solution largely dominates the ones that use joinby or rangejoin (see also the comments to the accepted answer below).

The code below might save some running time.
bys id (employed): gen tag = _n if !employed   // index the unemployment spells within each id
sum tag, meanonly
local maxtag = `r(max)'                        // largest number of unemployment spells for any id
foreach i in 6 24 48 {
    gen m`i' = .
    forval d = 1/`maxtag' {
        * months of overlap between each employment spell and the `i'-month
        * window preceding the start of the d-th unemployment spell of the id
        by id: gen x = 1 + min(bdate[`d'],edate) - max(bdate[`d']-`i',bdate) if employed
        egen y = total(x*(x>0)), by(id)        // add up the positive overlaps
        replace m`i' = y if tag == `d'
        drop x y
    }
}
sort id bdate
The same logic, implemented with -rangejoin- (SSC), may also be worth a try. Please provide some feedback after testing with your (large) actual data.
preserve
keep if employed
replace employed = 0   // recode so the by(id employed) key matches the unemployment spells
tempfile em
save `em'
restore
foreach i in 6 24 48 {
    gen _bd = bdate - `i'
    rangejoin edate _bd bdate using `em', by(id employed) p(_)
    egen m`i' = total(_edate - max(_bd,_bdate)+1) if !employed, by(id bdate)
    bys id bdate: keep if _n==1
    drop _*
}

Related

Stata - how to create T variables that have values for each t in panel data

Sorry if the title of my question is unclear, but it's hard to summarize it on one line. I have a panel data set (codes to generate it are at the bottom):
. xtset id year
panel variable: id (strongly balanced)
time variable: year, 1 to 3
delta: 1 unit
. l, sep(3)
+-----------------+
| id year x |
|-----------------|
1. | 1 1 1.1 |
2. | 1 2 1.2 |
3. | 1 3 1.3 |
|-----------------|
4. | 2 1 2.1 |
5. | 2 2 2.2 |
6. | 2 3 2.3 |
+-----------------+
I want to create variables x_1, x_2 and x_3, where x_j has the value of x in year j for each id. I can achieve it as follows (with no elegance pursued):
. forv k=1/3 {
2. capture drop tmp
3. gen tmp = x if year==`k'
4. by id: egen x_`k' = mean(tmp)
5. }
(4 missing values generated)
(4 missing values generated)
(4 missing values generated)
. drop tmp
. l, sep(3)
+-----------------------------------+
| id year x x_1 x_2 x_3 |
|-----------------------------------|
1. | 1 1 1.1 1.1 1.2 1.3 |
2. | 1 2 1.2 1.1 1.2 1.3 |
3. | 1 3 1.3 1.1 1.2 1.3 |
|-----------------------------------|
4. | 2 1 2.1 2.1 2.2 2.3 |
5. | 2 2 2.2 2.1 2.2 2.3 |
6. | 2 3 2.3 2.1 2.2 2.3 |
+-----------------------------------+
Is there a way without using a loop? I know I can write a program or an ado file (determining the variable names automatically), but I wonder if there are some builtin commands for my purpose.
The full commands are here.
clear all
set obs 6
gen id = floor((_n-1)/3)+1
by id, sort: gen year = _n
xtset id year
gen x = id+year/10
l, sep(3)
forv k=1/3 {
capture drop tmp
gen tmp = x if year==`k'
by id: egen x_`k' = mean(tmp)
}
drop tmp
l, sep(3)
Loops are good. What I can do for you is shorten your loop:
clear all
set obs 6
gen id = floor((_n-1)/3)+1
by id, sort: gen year = _n
xtset id year
gen x = id+year/10
l, sep(3)
forv k=1/3 {
by id: gen x_`k' = x[`k']
}
l, sep(3)
There is a decency assumption in there of a balanced panel. This loop makes no such assumption, but you need to loop over the observed years:
forv year = 1/3 {
by id: egen X_`year' = total(x / (year == `year'))
}
See also this discussion, especially Sections 9 and 10.
You may also be interested in separate, which avoids an explicit loop, but only gets you part of the way to where you want to be.
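For concreteness, a small illustration of what separate does on the example data generated above (not part of the original answer); the values still have to be spread within id afterwards, which is the remaining part of the way:
* illustration only: separate creates one new variable per value of year,
* filled only in the rows belonging to that year and missing elsewhere
separate x, by(year)
list, sep(3)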
All that said, it's hard to believe that you need these variables at all. The mechanism of time series operators solves many problems, while tools such as rangestat (SSC) fill in many gaps.
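As a minimal illustration of the time-series operator mechanism (again not part of the original answer; it reuses the example panel above, which is already xtset):
* illustration only: with the panel xtset, lags and leads of x are available directly
gen x_prev = L.x   // x for the same id in the previous year (missing in year 1)
gen x_next = F.x   // x for the same id in the following year (missing in year 3)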
A late entry, but you could avoid loops if you wanted by using reshape and merge:
clear *
input float(id year x)
1 1 1.1
1 2 1.2
1 3 1.3
2 1 2.1
2 2 2.2
2 3 2.3
end
tempfile master
save `master'
reshape wide x, i(id) j(year)
tempfile using
save `using'
use `master', clear
merge m:1 id using `using', nogen
This "answer", which I post because it is too long as a comment, contains results from practices following Nick Cox's answer. All credits go to him.
Method 1: Use egen and total, missing.
levelsof year, local(yearlevels)
foreach v of varlist x {
    foreach year of local yearlevels {
        by id: egen `v'_`year' = total(`v' / (year==`year')), missing
    }
}
The missing option handles unbalanced panels.
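A minimal, self-contained illustration of that option on toy data (not from the question):
clear
input id x
1 .
1 .
2 3
2 .
end
bysort id: egen t_plain = total(x)             // 0 for id 1, where all values are missing
bysort id: egen t_missing = total(x), missing  // . for id 1
list, sepby(id)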
Method 2: Use separate and then copy the values.
foreach v of varlist x {
    separate `v', by(year) gen(`v'_)
    local newvars = r(varlist)
    foreach w of local newvars {
        by id: egen f_`w' = total(`w'), missing
    }
    drop `newvars'
}
This also handles unbalanced panels, but the new variable names are f_x_1, etc. The first method needs the levels of year, while the second requires creating a set of intermediate variables. I personally slightly prefer the first. It would be wonderful if Method 2 could be shortened.

Count concurrent subscriptions

I have a database with a number of people who (may) have multiple subscriptions to a service running at once and transactional data for each event during the life of their subscription. I am trying to create a variable that counts the number of current active subscriptions the user has at a given transaction time.
To illustrate an example, my data lives in the form:
person | subscription | obs_date | sub_start_date | sub_end_date | num_concurrent_subs
--------------------------------------------------------------------------------------
1 | 1 | 09/01/10 | 09/01/10 | 09/01/11 | 1
1 | 1 | 10/01/10 | 09/01/10 | 09/01/11 | 2
1 | 1 | 11/01/10 | 09/01/10 | 09/01/11 | 2
1 | 2 | 10/01/10 | 10/01/10 | 09/01/11 | 2
1 | 2 | 11/01/10 | 10/01/10 | 09/01/11 | 2
1 | 3 | 11/01/14 | 09/01/14 | . | 1
1 | 3 | 11/01/16 | 09/01/14 | . | 1
1 | 4 | 11/01/15 | 10/01/15 | 11/01/15 | 3
1 | 5 | 11/01/15 | 10/01/15 | 11/01/15 | 3
And so on and so forth for each person. I want to generate the num_concurrent_subs as above.
That is, for each person, look at each observation date and count how many subscriptions' ranges (sub_start_date to sub_end_date) it falls into.
I've read a bit on Stata's count function and believe I'm close to a solution, but I'm not sure how to check it across different subscriptions.
You can do this by separating the subscription information from the transaction data and converting the subscription data to long form, with one observation for the start date and another for the end date. You then recombine it with the transaction data and order by a single date variable, using an onoff variable to track the start and end of each subscription. Something like:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(person subscription) str8(obs_date sub_start_date sub_end_date) byte num_concurrent_subs
1 1 "09/01/10" "09/01/10" "09/01/11" 1
1 1 "10/01/10" "09/01/10" "09/01/11" 2
1 1 "11/01/10" "09/01/10" "09/01/11" 2
1 2 "10/01/10" "10/01/10" "09/01/11" 2
1 2 "11/01/10" "10/01/10" "09/01/11" 2
1 3 "11/01/14" "09/01/14" "." 1
1 3 "11/01/16" "09/01/14" "." 1
1 4 "11/01/15" "10/01/15" "11/01/15" 3
1 5 "11/01/15" "10/01/15" "11/01/15" 3
end
* should always have an observation identifier
gen obsid = _n
* convert string to Stata numeric dates
gen odate = daily(obs_date,"MD20Y")
gen substart = daily(sub_start_date,"MD20Y")
gen subend = daily(sub_end_date,"MD20Y")
format %td odate substart subend
save "main_data.dta", replace
* reduce to subscription info with one obs for the start and one obs
* for the end of each subscription. Use an onoff variable to track
* start and end events
keep person subscription substart subend
bysort person subscription substart subend: keep if _n == 1
expand 2
bysort person subscription: gen adate = cond(_n == 1, substart, subend)
by person subscription: gen onoff = cond(_n == 1, 1, -1)
replace onoff = 0 if mi(adate)
format %td adate
append using "main_data.dta"
* include obs date in adate and nothing happens on the observation date
replace adate = odate if !mi(obsid)
replace onoff = 0 if !mi(obsid)
* order by person adate, put on event first, then obs events, then off events
gsort person adate -onoff
by person: gen concur = sum(onoff)
* return to original obs
keep if !mi(obsid)
sort obsid
Here's another way to do this using rangejoin (from SSC). To install it, type in Stata's Command window:
ssc install rangejoin
With rangejoin, you can pair each subscription with all transactional data that falls within the subscription's start and end dates. Then it's just a matter of counting, per transaction observation, how many subscriptions it is paired with.
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte(person subscription) str8(obs_date sub_start_date sub_end_date) byte num_concurrent_subs
1 1 "09/01/10" "09/01/10" "09/01/11" 1
1 1 "10/01/10" "09/01/10" "09/01/11" 2
1 1 "11/01/10" "09/01/10" "09/01/11" 2
1 2 "10/01/10" "10/01/10" "09/01/11" 2
1 2 "11/01/10" "10/01/10" "09/01/11" 2
1 3 "11/01/14" "09/01/14" "." 1
1 3 "11/01/16" "09/01/14" "." 1
1 4 "11/01/15" "10/01/15" "11/01/15" 3
1 5 "11/01/15" "10/01/15" "11/01/15" 3
end
* should always have an observation identifier
gen obsid = _n
* convert string to Stata numeric dates
gen odate = daily(obs_date,"MD20Y")
gen substart = daily(sub_start_date,"MD20Y")
gen subend = daily(sub_end_date,"MD20Y")
format %td odate substart subend
save "main_data.dta", replace
* reduce to subscription start and end date per person
bysort person subscription substart subend: keep if _n == 1
keep person substart subend
* missing values will exclude obs so use a date in the future
replace subend = mdy(1,1,2099) if mi(subend)
* pair each subscription with an obs date
rangejoin odate substart subend using "main_data.dta", by(person)
* the number of current subscriptions is the number of pairings
bysort obsid: gen current = _N
* return to original obs
by obsid: keep if _n == 1
sort obsid
drop substart subend
rename (substart_U subend_U) (substart subend)

Stata: identify consecutive rows with numbers that can cancel out

I have a dataset in long form that lists observations by month. I want to identify if consecutive rows for a variable can cancel out (in other words, have the same absolute value). And if so, I want to change both observations to zero. In addition, I want to have an additional dummy variable that tells me if I've changed anything for that row. How can I structure the code?
For example,
Date Var1 Var 2
Jan2010 5 6
Feb2010 6 0
Mar2010 -6 1
In the above example, I want to make the dataset into below
Date Var1 Var 2 Dummy
Jan2010 5 6 0
Feb2010 0 0 1
Mar2010 0 0 1
This (seemingly) meets the criteria described, but other considerations may come into play if there are other factors not explicitly mentioned (e.g., do you need to consider whether Var2 "cancels out"? What if Apr2010 is 6? etc.).
clear
input str7 Date Var1 Var2
"Jan2010" 5 6
"Feb2010" 6 0
"Mar2010" -6 1
end
gen Dummy = Var1 == Var1[_n+1] * -1 | Var1 == Var1[_n-1] * -1
replace Var1 = 0 if Dummy
replace Var2 = 0 if Dummy
li , noobs
yielding
+-------------------------------+
| Date Var1 Var2 Dummy |
|-------------------------------|
| Jan2010 5 6 0 |
| Feb2010 0 0 1 |
| Mar2010 0 0 1 |
+-------------------------------+
Or perhaps more correctly, Dummy should be generated with respect to actual months and not observations:
gen Month = monthly(Date, "MY")
format Month %tm
tsset Month , monthly
gen Dummy = Var1 == Var1[_n+1] * -1 | Var1 == Var1[_n-1] * -1
Edit: As Roberto rightly points out, the previous code (using abs()) was written based on the example posted, but multiplying by -1 is more robust and yields the same result (for the sample data posted). And the suggestion to preserve the original variables is of course a generally good idea.

Stata: How to count the number of 'active' cases in a group when new case is opened?

I'm relatively new to Stata and am trying to count the number of active cases an employee has open over time in my dataset (see link below for example). I tried writing a loop using forvalues based on an example I found online, but keep getting
invalid syntax
For each EmpID I want to count the number of cases that employee had open when a new case was added to the queue. So if a case is added with an OpenDate of 03/15/2015 and the EmpID has two other cases open at the time, the code would assign a value of 2 to the NumActiveWhenOpened field. A case is considered active if (1) its OpenDate is less than the new case's OpenDate & (2) its CloseDate is greater than the new case's OpenDate.
The link below provides an example. I'm trying to write a loop that creates the NumActiveWhenOpened column. Any help would be greatly appreciated. Thanks!
http://i.stack.imgur.com/z4iyR.jpg
EDIT
Here is the code that is not working. I'm sure there are several things wrong with it and I'm not sure how to store the count in the [NumActiveWhenOpen] field.
by EmpID: generate CaseNum = _n
egen group = group(EmpID)
su group, meanonly
gen NumActiveWhenOpen = 0
forvalues i = 1/ 'r(max)' {
forvalues x = 1/CaseNum if group == `i'{
count if OpenDate[_n] > OpenDate[_n-x] & CloseDate[_n-x] > OpenDate[_n]
}
}
This sounds like a problem discussed in http://www.stata-journal.com/article.html?article=dm0068 but let's try to be self-contained. I am not sure that I understand the definitions, but this may help.
I'll steal part of Roberto Ferrer's sandbox.
clear
set more off
input ///
caseid str15(open close) empid
1 "1/1/2010" "3/1/2010" 1
2 "2/5/2010" "" 1
3 "2/15/2010" "4/7/2010" 1
4 "3/5/2010" "" 1
5 "3/15/2010" "6/15/2010" 1
6 "3/24/2010" "3/24/2010" 1
1 "1/1/2010" "3/1/2010" 2
2 "2/5/2010" "" 2
3 "2/15/2010" "4/7/2010" 2
4 "3/5/2010" "" 2
5 "3/15/2010" "6/15/2010" 2
end
gen d1 = date(open, "MDY")
gen d2 = date(close, "MDY")
format %td d1 d2
drop open close
reshape long d, i(empid caseid) j(status)
replace status = -1 if status == 2
replace status = . if missing(d)
bysort empid (d) : gen nopen = sum(status)
bysort empid d : replace nopen = nopen[_N]
l
The idea is to reshape so that each pair of dates becomes two observations. Then if we code each opening by 1 and each closing by -1 the total number of active cases is their cumulative sum. That's all. Here are the results:
. l, sepby(empid)
+---------------------------------------------+
| empid caseid status d nopen |
|---------------------------------------------|
1. | 1 1 1 01jan2010 1 |
2. | 1 2 1 05feb2010 2 |
3. | 1 3 1 15feb2010 3 |
4. | 1 1 -1 01mar2010 2 |
5. | 1 4 1 05mar2010 3 |
6. | 1 5 1 15mar2010 4 |
7. | 1 6 1 24mar2010 4 |
8. | 1 6 -1 24mar2010 4 |
9. | 1 3 -1 07apr2010 3 |
10. | 1 5 -1 15jun2010 2 |
11. | 1 2 . . 2 |
12. | 1 4 . . 2 |
|---------------------------------------------|
13. | 2 1 1 01jan2010 1 |
14. | 2 2 1 05feb2010 2 |
15. | 2 3 1 15feb2010 3 |
16. | 2 1 -1 01mar2010 2 |
17. | 2 4 1 05mar2010 3 |
18. | 2 5 1 15mar2010 4 |
19. | 2 3 -1 07apr2010 3 |
20. | 2 5 -1 15jun2010 2 |
21. | 2 4 . . 2 |
22. | 2 2 . . 2 |
+---------------------------------------------+
The bottom line is no loops needed, but by: helps mightily. A detail useful here is that the cumulative sum function sum() ignores missings.
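A tiny illustration of that detail on made-up data (not part of the original answer):
clear
input x
1
.
2
end
gen runsum = sum(x)
list   // runsum is 1, 1, 3: the missing value contributes nothing to the running sum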
Try something along the lines of
clear
set more off
*----- example data -----
input ///
caseid str15(open close) empid numact
1 "1/1/2010" "3/1/2010" 1 0
2 "2/5/2010" "" 1 1
3 "2/15/2010" "4/7/2010" 1 2
4 "3/5/2010" "" 1 2
5 "3/15/2010" "6/15/2010" 1 3
6 "3/24/2010" "3/24/2010" 1 .
1 "1/1/2010" "3/1/2010" 2 0
2 "2/5/2010" "" 2 1
3 "2/15/2010" "4/7/2010" 2 2
4 "3/5/2010" "" 2 2
5 "3/15/2010" "6/15/2010" 2 3
end
gen opend = date(open, "MDY")
gen closed = date(close, "MDY")
format %td opend closed
drop open close
order empid
list, sepby(empid)
*----- what you want -----
gen numact2 = .
sort empid caseid
forvalues i = 1/`=_N' {
    count if empid[`i'] == empid & ///  a different count for each employee
        opend[`i'] <= closed ///        the date condition
        in 1/`i' //                     no need to look at cases that have not yet occurred
    replace numact2 = r(N) - 1 in `i'
}
list, sepby(empid)
This is resource intensive so if you have a large data set, it will take some time. The reason is it loops over observations checking conditions. See help stored results and help return for an explanation of r(N).
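For example, using the auto dataset shipped with Stata (a short illustration, not part of the original answer):
sysuse auto, clear
count if foreign
display r(N)   // the count just computed, now available for use in later commands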
A good read is
Stata tip 51: Events in intervals, The Stata Journal, by Nicholas J. Cox.
Note how I provided an example data set within the code (see help input). That is how I recommend you do it for future questions. This will save other people's time and increase the probabilities of you getting an answer.

Count observations within dynamic range

Consider the following example:
input group day month year number treatment NUM
1 1 2 2000 1 1 2
1 1 6 2000 2 0 .
1 1 9 2000 3 0 .
1 1 5 2001 4 0 .
1 1 1 2010 5 1 1
1 1 5 2010 6 0 .
2 1 1 2001 1 1 0
2 1 3 2002 2 1 0
end
gen date = mdy(month,day,year)
format date %td
drop day month year
For each group, I have a varying number of observations. Each observations refers to an event that is specified with a date. Variable number is the numbering within each group.
Now, I want to count the number of observations that occur within one year starting from the date of each treatment observation (excluding the treatment observation itself) within this group. This means I want to create the variable NUM that I have already put into my example above. I do not care about the number of observations with treatment = 0.
EDIT Begin: The following information was found to be missing but necessary to tackle this problem: An observation has treatment = 1 only if there is no other observation within the same group in the preceding year. Thus it is also not possible that the variable NUM will have to count observations with treatment = 1. In principle, it is possible that there are two observations within a group that have identical dates. EDIT End
I have looked into Stata tip 51: Events in intervals. It seems to work; however, my dataset is huge (> 1 million observations), so it is really inefficient - especially because I do not care about any of the treatment = 0 observations.
I was wondering if there is any alternative. My approach was to look for the observation with the latest date within each group that is still in the range of 1 year (and maybe store it in variable latestDate). Then I would simply subtract the value in variable number of the observation found from the value in count of the treatment = 0 variable.
Note: My "inefficient" code looks as follows
gsort -treatment
gen treatment_id = _n
replace treatment_id = . if treatment==0
gen count=.
sum treatment_id, meanonly
qui forval i = 1/`r(max)' {
    count if inrange(date-date[`i'],1,365) & group == group[`i']
    replace count = r(N) in `i'
}
sort group date
I am assuming that treatment can't occur within 1 year of the previous treatment (in the group). This is true in your example data, but may not be true in general. But, assuming that it is the case, then this should work. I'm using carryforward, which is on SSC (ssc install carryforward). In line with your latestDate idea, I determine the date one year after the most recent treatment and count the number of observations in that window.
sort group date
gen yrafter = (date + 365) if treatment == 1
by group: carryforward yrafter, replace
format yrafter %td
gen in_window = date <= yrafter & treatment == 0
egen answer = sum(in_window), by(group yrafter)
replace answer = . if treatment == 0
I can't promise this will be faster than a loop but I suspect that it will be.
The question is not completely clear.
Consider the following data with two different results, num2 and num3:
+-----------------------------------------+
| date2 group treat num2 num3 |
|-----------------------------------------|
| 01feb2000 1 1 3 2 |
| 01jun2000 1 0 . . |
| 01sep2000 1 0 . . |
| 01nov2000 1 1 0 0 |
| 01may2002 1 0 . . |
| 01jan2010 1 1 1 1 |
| 01may2010 1 0 . . |
|-----------------------------------------|
| 01jan2001 2 1 0 0 |
| 01mar2002 2 1 0 0 |
+-----------------------------------------+
The variable num2 is computed assuming you are interested in counting all observations that are within a one-year period after a treated observation (treat == 1), be those observations equal to 0 or 1 for treat. For example, after 01feb2000, there are three observations that comply with the time span condition; two have treat==0 and one has treat == 1, and they are all counted.
The variable num3 is also counting observations that are within a one-year period after a treated observation, but only the cases for which treat == 0.
num2 is computed with code in the spirit of the article you have cited. The use of in makes the run more efficient and there is no gsort (as in your code), which is quite slow. I have assumed that in each group there are no repeated dates:
clear
set more off
input ///
group str15 date count treat num
1 01.02.2000 1 1 2
1 01.06.2000 2 0 .
1 01.09.2000 3 0 .
1 01.11.2000 3 1 .
1 01.05.2002 4 0 .
1 01.01.2010 5 1 1
1 01.05.2010 6 0 .
2 01.01.2001 1 1 0
2 01.03.2002 2 1 0
end
list
gen date2 = date(date,"DMY")
format date2 %td
drop date count num
order date
list, sepby(group)
*----- what you want -----
gen num2 = .
isid group date, sort
forvalues j = 1/`=_N' {
    count in `j'/L if inrange(date2 - date2[`j'], 1, 365) & group == group[`j']
    replace num2 = r(N) in `j'
}
replace num2 = . if !treat
list, sepby(group)
num3 is computed with code similar in spirit (and results) to that posted by @jfeigenbaum:
<snip>
*----- what you want -----
isid group date, sort
by group: gen indicat = sum(treat)
sort group indicat, stable
by group indicat: egen num3 = total(inrange(date2 - date2[1], 1, 365))
replace num3 = . if !treat
list, sepby(group)
Even more than two interpretations are possible for your problem, but I'll leave it at that.
(Note that I have changed your example data to include cases that probably make the problem more realistic.)