I'm using Stata 13 and have to clean a panel dataset in which different ids are observed over the period 2000 to 2003. My data look like:
id year ln_wage
1 2000 2.30
1 2001 2.31
1 2002 2.31
2 2001 1.89
2 2002 1.89
2 2003 2.10
3 2002 1.60
4 2002 2.46
4 2003 2.47
5 2000 2.10
5 2001 2.10
5 2003 2.12
For each year, I would like to keep only the individuals who also appear in year t-1. This way, the first year of my sample (2000) will be dropped. I'm looking for output like:
2001:
id year ln_wage
1 2001 2.31
5 2001 2.10
2002:
id year ln_wage
1 2002 2.31
2 2002 1.89
2003:
id year ln_wage
2 2003 2.10
4 2003 2.47
Regards,
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte id int year float ln_wage
1 2000 2.3
1 2001 2.31
1 2002 2.31
2 2001 1.89
2 2002 1.89
2 2003 2.1
3 2002 1.6
4 2002 2.46
4 2003 2.47
5 2000 2.1
5 2001 2.1
5 2003 2.12
end
xtset id year
drop if missing(L.ln_wage)
sort year id
list, noobs sepby(year)
+---------------------+
| id year ln_wage |
|---------------------|
| 1 2001 2.31 |
| 5 2001 2.1 |
|---------------------|
| 1 2002 2.31 |
| 2 2002 1.89 |
|---------------------|
| 2 2003 2.1 |
| 4 2003 2.47 |
+---------------------+
// Alternatively, assuming no duplicate years within id exist
bysort id (year): gen todrop = year[_n-1] != year - 1
drop if todrop
I have a dataset of patients and their alcohol-related data over time (in years), like below:
clear
input long patid float(year cohort)
1051 1994 1
2051 1972 1
2051 1989 2
2051 1990 2
2051 2000 2
2051 2001 3
2051 2002 1
2051 2003 2
8051 1995 1
8051 1996 1
8051 2003 1
end
label values cohort cohortlab
label define cohortlab 0 "general population" 1 "no alcohol data" 2 "indeterminate" 3 "non-drinker" 4 "low_risk" 5 "hazardous" 6 "AUD" , replace
I would like to create a variable that shows the highest alcohol code recorded so far at any point (year) in a patient's record, such that the dataset would look like below:
clear
input long patid float(year cohort highestsofar)
1051 1994 1 1
2051 1972 1 1
2051 1989 2 2
2051 1990 2 2
2051 2000 2 2
2051 2001 3 3
2051 2002 1 3
2051 2003 2 3
8051 1995 1 1
8051 1996 1 1
8051 2003 1 1
end
label values cohort cohortlab
label values highestsofar cohortlab
label define cohortlab 0 "general population" 1 "no alcohol data" 2 "indeterminate" 3 "lifetime_abstainer" 4 "low_risk" 5 "hazardous" 6 "AUD" , replace
Thanks for the clear example and question.
The problem is already covered by an FAQ on the StataCorp website. Here's a one-line solution using rangestat from SSC (ssc install rangestat).
clear
input long patid float(year cohort)
1051 1994 1
2051 1972 1
2051 1989 2
2051 1990 2
2051 2000 2
2051 2001 3
2051 2002 1
2051 2003 2
8051 1995 1
8051 1996 1
8051 2003 1
end
label values cohort cohortlab
label define cohortlab 0 "general population" 1 "no alcohol data" 2 "indeterminate" 3 "non-drinker" 4 "low_risk" 5 "hazardous" 6 "AUD" , replace
rangestat (max) highestsofar = cohort, interval(year . 0) by(patid)
list, sepby(patid)
+-------------------------------------------+
| patid year cohort highes~r |
|-------------------------------------------|
1. | 1051 1994 no alcohol data 1 |
|-------------------------------------------|
2. | 2051 1972 no alcohol data 1 |
3. | 2051 1989 indeterminate 2 |
4. | 2051 1990 indeterminate 2 |
5. | 2051 2000 indeterminate 2 |
6. | 2051 2001 non-drinker 3 |
7. | 2051 2002 no alcohol data 3 |
8. | 2051 2003 indeterminate 3 |
|-------------------------------------------|
9. | 8051 1995 no alcohol data 1 |
10. | 8051 1996 no alcohol data 1 |
11. | 8051 2003 no alcohol data 1 |
+-------------------------------------------+
I would like to offer an answer:
bysort patid (year): gen highestsofar = cohort if cohort > cohort[_n-1] | _n == 1
by patid: replace highestsofar = highestsofar[_n-1] if cohort <= cohort[_n-1] & _n > 1
by patid: replace highestsofar = highestsofar[_n-1] if highestsofar < highestsofar[_n-1] & cohort > cohort[_n-1] & _n > 1
label values highestsofar cohortlab
I would be happy if a more compact syntax could be discussed.
Thanks
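One more compact possibility, offered here only as a sketch: a running maximum can be built with max() and a sequential replace, assuming the data are sorted by patid and year (highestsofar2 is just an illustrative name):
bysort patid (year): gen highestsofar2 = cohort
by patid: replace highestsofar2 = max(highestsofar2[_n-1], cohort) if _n > 1
label values highestsofar2 cohortlab
This works because replace processes observations in order within each patid group, so highestsofar2[_n-1] already holds the running maximum when each observation is reached.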
I am attempting to make the data balanced for my sample. My data currently looks like:
id year y
1 2000 2
1 2002 4
1 2003 5
2 2001 2
2 2002 3
....
And I would like it to look like:
id year y
1 2000 2
1 2001 .
1 2002 4
1 2003 5
2 2000 .
2 2001 2
2 2002 3
....
I have tried creating a .dta file containing just the years and merging it with the data, but I can't get it to work. Essentially, I would like to add rows of missing data to the panel. I realize I could just drop ids with unbalanced data, but this is not an option for my methodology.
You need to skim the Data-Management Reference Manual [D] when looking for basic data management functionality. In this case fillin does what you seem to be asking for.
clear
input id year y
1 2000 2
1 2002 4
1 2003 5
2 2001 2
2 2002 3
end
fillin id year
list, sepby(id)
+-------------------------+
| id year y _fillin |
|-------------------------|
1. | 1 2000 2 0 |
2. | 1 2001 . 1 |
3. | 1 2002 4 0 |
4. | 1 2003 5 0 |
|-------------------------|
5. | 2 2000 . 1 |
6. | 2 2001 2 0 |
7. | 2 2002 3 0 |
8. | 2 2003 . 1 |
+-------------------------+
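A small optional follow-up, assuming you then want to declare the panel and no longer need the indicator that fillin creates:
xtset id year
drop _fillin
Note that fillin adds the variable _fillin, equal to 1 for the newly created observations, so you can inspect which rows were added (as in the listing above) before dropping it.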
I am trying to reshape some data. Usually data are either long or wide, but this dataset seems to be set up in a way that I cannot figure out how to reshape. The data look as follows:
year australia canada denmark ...
1999 10 15 20
2000 12 16 25
2001 14 18 40
And I would like to get it into a panel format like the following:
year country gdppc
1999 australia 10
2000 australia 12
2001 australia 14
1999 canada 15
2000 canada 16
The problem is just in the variable names. See e.g. this FAQ for the advice that you may need to rename first before you can reshape.
For more complicated variants of this problem with similar data, see e.g. this paper.
clear
input year australia canada denmark
1999 10 15 20
2000 12 16 25
2001 14 18 40
end
rename (australia-denmark) gdppc=
reshape long gdppc , i(year) string j(country)
sort country year
list, sepby(country)
+--------------------------+
| year country gdppc |
|--------------------------|
1. | 1999 australia 10 |
2. | 2000 australia 12 |
3. | 2001 australia 14 |
|--------------------------|
4. | 1999 canada 15 |
5. | 2000 canada 16 |
6. | 2001 canada 18 |
|--------------------------|
7. | 1999 denmark 20 |
8. | 2000 denmark 25 |
9. | 2001 denmark 40 |
+--------------------------+
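If you then want to declare the result as a panel, you need a numeric identifier rather than the string country; a minimal sketch, where cid is just an illustrative name for the new variable:
encode country, gen(cid)
xtset cid year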
I have a repeated cross section every year. I have a variable, var1, which is the same across all observations in a given year (for instance, the mean of a variable in a given year). I'd like to create a variable, var1_l, that would be the lagged version of var1.
As an example, from the dataset
id1 year var1
3 1990 3.5
4 1990 3.5
5 1991 4
6 1991 4
7 1991 4
I would like to obtain
id1 year var1 var1_l
3 1990 3.5 .
4 1990 3.5 .
5 1991 4 3.5
6 1991 4 3.5
7 1991 4 3.5
A solution would be to use a merge but saving/restoring the dataset takes a lot of time when the dataset is big. For reference, below is my current merge solution:
preserve
keep year var1
replace year = year - 1
bys year: keep if _n == 1
rename var1 var1_l
sort year
tempfile temp
save `temp'
restore
merge m:1 year using `temp', nogen sorted
Another option would be to use the matrix returned by tabstat. I'm wondering if there is a more elegant solution (that returns . when there is no observation in year - 1).
This seems a little unusual, but could be just a twist on a standard problem as explained here.
. input id1 year var1
id1 year var1
1. 3 1990 3.5
2. 4 1990 3.5
3. 5 1991 4
4. 6 1991 4
5. 7 1991 4
6. end
. sort year id1
. generate var1_l = var1[_n-1] if year == year[_n-1] + 1
(4 missing values generated)
. replace var1_l = var1_l[_n-1] if year == year[_n-1] & missing(var1_l)
(2 real changes made)
. list
+----------------------------+
| id1 year var1 var1_l |
|----------------------------|
1. | 3 1990 3.5 . |
2. | 4 1990 3.5 . |
3. | 5 1991 4 3.5 |
4. | 6 1991 4 3.5 |
5. | 7 1991 4 3.5 |
+----------------------------+
This answer crossed with #Nick's, but there's a slight difference in the results: I check only that the years are different, while his code checks that the years are consecutive.
clear
set more off
input ///
id year var1
1 1990 3.5
3 1990 3.5
2 1990 3.5
1 1991 2
2 1991 2
3 1991 2
3 1992 6
2 1992 6
1 1992 6
3 1993 6
2 1993 6
1 1993 6
4 1993 6
1 1994 4.3
2 1994 4.3
3 1994 4.3
end
list, sepby(year)
*----- what you want -----
sort year
generate var2 = var1[_n-1] if year != year[_n-1]
by year : replace var2 = var2[1]
list, sepby(year)
First, have a look at some variables of my dataset:
firm_id year dyrstr Lack total_workers
2432 2002 1980 29
2432 2003 1980 23
2432 2005 1980 1 283
2432 2006 1980 56
2432 2007 1980 21
2433 2004 2001 42
2433 2006 2001 1 29
2433 2008 2001 1 100
2434 2002 2002 21
2434 2003 2002 55
2434 2004 2002 22
2434 2005 2002 24
2434 2006 2002 17
2434 2007 2002 40
2434 2008 2002 110
2434 2009 2002 158
2434 2010 2002 38
2435 2002 2002 80
2435 2003 2002 86
2435 2004 2002 877
2435 2005 2002 254
2435 2006 2002 71
2435 2007 2002 116
2435 2008 2002 118
2435 2009 2002 1165
2435 2010 2002 67
2436 2002 1992 24
2436 2003 1992 25
2436 2004 1992 22
2436 2005 1992 23
2436 2006 1992 21
2436 2007 1992 100
2436 2008 1992 73
2436 2009 1992 23
2436 2010 1992 40
2437 2002 2002 30
2437 2003 2002 31
2437 2004 2002 21
2437 2006 2002 1 56
2437 2007 2002 20
The variables:
firm_id is an identifier for firms
year is the year of the observation
dyrstr is the founding year of a firm
Lack equals 1 if there is a missing observation in the year before (e.g. in line three of the dataset, Lack equals 1 because for the firm with ID 2432, there is no observation in the year 2004)
total_workers is the number of workers
I'd like to fill in the gaps, namely I'd like to create new observations as I show you in the following (only considering the firm with ID 2432):
firm_id year dyrstr Lack total_workers
2432 2002 1980 29
2432 2003 1980 23
*2432* *2004* *1980* *153*
2432 2005 1980 1 283
2432 2006 1980 56
2432 2007 1980 21
The line where I've put the values in asterisks is the newly created observation. It should be a copy of the previous observation, but with some modifications:
firm_id should stay the same as in the line before
year should be the year from the previous line plus one
dyrstr should stay the same as in the line before
Lack: here it doesn't matter which value this variable has
total_workers equals 0.5*(value of the previous observation + value of the following observation)
all other variables of my dataset (which I didn't list here) should stay the same as in the line before
I read something about the command expand, but help expand doesn't get me very far. Hopefully one of you can help me!
My suggestions hinge on using expand, which in turn just requires information on the number of observations to be added. I ignore your variable Lack, as Stata itself can work out where the gaps are. My procedure for imputing total_workers is based on using the inbuilt command ipolate and thus would work over gaps longer than 1 year, which don't appear in your example. The number of workers so estimated is not necessarily an integer.
For other interpolation procedures, check out cipolate, csipolate, pchipolate, all accessible via ssc desc cipolate (or equivalent).
This kind of operation depends on getting sort order exactly right, which I don't think is trivial, even with experience, so in getting the code right for similar problems, be prepared for false starts; pepper your trial code with list statements; and work on a good toy example dataset (as you kindly provided here).
. clear
. input firm_id year dyrstr total_workers
firm_id year dyrstr total_w~s
1. 2432 2002 1980 29
2. 2432 2003 1980 23
3. 2432 2005 1980 283
4. 2432 2006 1980 56
5. 2432 2007 1980 21
6. 2433 2004 2001 42
7. 2433 2006 2001 29
8. 2433 2008 2001 100
9. 2434 2002 2002 21
10. 2434 2003 2002 55
11. 2434 2004 2002 22
12. 2434 2005 2002 24
13. 2434 2006 2002 17
14. 2434 2007 2002 40
15. 2434 2008 2002 110
16. 2434 2009 2002 158
17. 2434 2010 2002 38
18. 2435 2002 2002 80
19. 2435 2003 2002 86
20. 2435 2004 2002 877
21. 2435 2005 2002 254
22. 2435 2006 2002 71
23. 2435 2007 2002 116
24. 2435 2008 2002 118
25. 2435 2009 2002 1165
26. 2435 2010 2002 67
27. 2436 2002 1992 24
28. 2436 2003 1992 25
29. 2436 2004 1992 22
30. 2436 2005 1992 23
31. 2436 2006 1992 21
32. 2436 2007 1992 100
33. 2436 2008 1992 73
34. 2436 2009 1992 23
35. 2436 2010 1992 40
36. 2437 2002 2002 30
37. 2437 2003 2002 31
38. 2437 2004 2002 21
39. 2437 2006 2002 56
40. 2437 2007 2002 20
41. end
. scalar N = _N
. bysort firm_id (year) : gen gap = year - year[_n-1]
(6 missing values generated)
. expand gap
(6 missing counts ignored; observations not deleted)
(4 observations created)
. gen orig = _n <= scalar(N)
. bysort firm_id (year) : replace total_workers = . if !orig
(4 real changes made, 4 to missing)
. bysort firm_id (year orig) : replace year = year[_n-1] + 1 if _n > 1 & year != year[_n-1] + 1
(4 real changes made)
. bysort firm_id (year): ipolate total_workers year , gen(total_workers2)
. list, sepby(firm_id)
+------------------------------------------------------------+
| firm_id year dyrstr total_~s gap orig total_~2 |
|------------------------------------------------------------|
1. | 2432 2002 1980 29 . 1 29 |
2. | 2432 2003 1980 23 1 1 23 |
3. | 2432 2004 1980 . 2 0 153 |
4. | 2432 2005 1980 283 2 1 283 |
5. | 2432 2006 1980 56 1 1 56 |
6. | 2432 2007 1980 21 1 1 21 |
|------------------------------------------------------------|
7. | 2433 2004 2001 42 . 1 42 |
8. | 2433 2005 2001 . 2 0 35.5 |
9. | 2433 2006 2001 29 2 1 29 |
10. | 2433 2007 2001 . 2 0 64.5 |
11. | 2433 2008 2001 100 2 1 100 |
|------------------------------------------------------------|
12. | 2434 2002 2002 21 . 1 21 |
13. | 2434 2003 2002 55 1 1 55 |
14. | 2434 2004 2002 22 1 1 22 |
15. | 2434 2005 2002 24 1 1 24 |
16. | 2434 2006 2002 17 1 1 17 |
17. | 2434 2007 2002 40 1 1 40 |
18. | 2434 2008 2002 110 1 1 110 |
19. | 2434 2009 2002 158 1 1 158 |
20. | 2434 2010 2002 38 1 1 38 |
|------------------------------------------------------------|
21. | 2435 2002 2002 80 . 1 80 |
22. | 2435 2003 2002 86 1 1 86 |
23. | 2435 2004 2002 877 1 1 877 |
24. | 2435 2005 2002 254 1 1 254 |
25. | 2435 2006 2002 71 1 1 71 |
26. | 2435 2007 2002 116 1 1 116 |
27. | 2435 2008 2002 118 1 1 118 |
28. | 2435 2009 2002 1165 1 1 1165 |
29. | 2435 2010 2002 67 1 1 67 |
|------------------------------------------------------------|
30. | 2436 2002 1992 24 . 1 24 |
31. | 2436 2003 1992 25 1 1 25 |
32. | 2436 2004 1992 22 1 1 22 |
33. | 2436 2005 1992 23 1 1 23 |
34. | 2436 2006 1992 21 1 1 21 |
35. | 2436 2007 1992 100 1 1 100 |
36. | 2436 2008 1992 73 1 1 73 |
37. | 2436 2009 1992 23 1 1 23 |
38. | 2436 2010 1992 40 1 1 40 |
|------------------------------------------------------------|
39. | 2437 2002 2002 30 . 1 30 |
40. | 2437 2003 2002 31 1 1 31 |
41. | 2437 2004 2002 21 1 1 21 |
42. | 2437 2005 2002 . 2 0 38.5 |
43. | 2437 2006 2002 56 2 1 56 |
44. | 2437 2007 2002 20 1 1 20 |
+------------------------------------------------------------+
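If the interpolated series is meant to take the place of the original variable, one possible cleanup, assuming the helper variables gap and orig are no longer needed, is:
drop total_workers gap orig
rename total_workers2 total_workers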
The following works if, as in your example dataset, you don't have consecutive missing years for any given firm. I also assume the variable Lack is numeric and that the final result is meant to remain an unbalanced panel (you were not specific about this point in your question).
* Expand database
expand 2 if Lack == 1, gen(x)
gsort firm_id year -x
* Substitution rules
replace year = year - 1 if x == 1
replace total_workers = (total_workers[_n-1] + total_workers[_n+1])/2 if x == 1
list, sepby(firm_id)
The expand line could be rewritten as expand Lack + 1, gen(x), but perhaps the if qualifier is clearer.
For the more general case in which you do have consecutive years missing, the following should get you started under the assumption that Lack specifies the number of consecutive years missing. For example, if there is a jump from 2006 to 2009 for a given firm, then Lack = 2 for the 2009 observation.
* Expand database
expand Lack + 1, gen(x)
gsort firm_id year -x
* Substitution rules
replace year = year[_n-1] + 1 if x == 1
Now you just need to come up with an imputation rule for your total_workers:
replace total_workers = ...
If Lack is a string, convert it to numeric using the real() function.
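A minimal sketch of that conversion (Lack_num is just an illustrative name):
generate Lack_num = real(Lack)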
You've already accepted an answer, but I have had to do something similar before and always use the cross command, as follows. Say your dataset is already in memory; then continue with the following code:
tempfile master year
save `master'
preserve
keep year
duplicates drop
save `year'
restore
//next two lines set me up to correct for different year ranges by firm; if year ranges were standard, this would be omitted
bys firm_id: egen minyear=min(year)
bys firm_id: egen maxyear=max(year)
keep firm_id minyear maxyear
duplicates drop
cross using `year'
merge m:1 firm_id year using `master', assert(1 3) nogen
drop if year<minyear | year>maxyear //this adjusts for years outside the earliest and latest years observed by firm; if year ranges standard, again omitted
Then from here, use the ipolate command in the spirit of #NickCox.
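For concreteness, that interpolation step might look like the following sketch (total_workers2 is just an illustrative name):
bysort firm_id (year): ipolate total_workers year, gen(total_workers2)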
I'm particularly interested in any pros and cons of using expand versus cross. (Beyond the fact that my approach here hinges on at least one record being observed for each year in order to construct the crossed dataset, a limitation that could be removed by creating the `year' tempfile differently.)