In pandas 0.18.1, python 2.7.6:
Imagine we have the following table:
ID,FROM_YEAR,FROM_MONTH,AREA
1,2015,1,200
1,2015,2,200
1,2015,3,200
1,2015,4,200
1,2015,5,200
1,2015,6,200
1,2015,7,200
1,2015,8,200
1,2015,9,200
1,2015,10,200
1,2015,11,200
1,2015,12,200
1,2016,1,100
1,2016,2,100
1,2016,3,100
1,2016,4,100
1,2016,5,100
1,2016,6,100
1,2016,7,100
1,2016,8,100
1,2016,9,100
1,2016,10,100
1,2016,11,100
1,2016,12,100
We are trying to get a calendar-year average in the following format:
ID,FROM_YEAR,TYPE,AREA
1,2015,A,200
1,2016,A,100
1,2015,B,200
1,2016,B,100
Note: TYPE is a string column carrying other information. Here we only have two values of TYPE: 'A' and 'B'.
If we try the following, the 'AREA' column name is missing from the result, and ID=1 only shows on the first row.
AREA_CY=df.groupby(['ID','FROM_YEAR'])['AREA'].mean()
it returns:
ID,FROM_YEAR,
1,2015,200
,2016,100
,2015,200
,2016,100
If we try the following:
AREA_CY=df.groupby(['ID','FROM_YEAR'])['AREA'].mean(axis=1)
it returns:
TypeError: mean() got an unexpected keyword argument 'axis'
Could any guru enlighten me? Thanks!
Try this:
In [102]: x = df.groupby(['ID','FROM_YEAR'])['AREA'].mean().reset_index(name='AREA')
In [103]: y = pd.DataFrame({'TYPE':['A','B']})
In [104]: x
Out[104]:
ID FROM_YEAR AREA
0 1 2015 200
1 1 2016 100
In [105]: y
Out[105]:
TYPE
0 A
1 B
In [106]: x.assign(key=0).merge(y.assign(key=0), on='key').drop('key', axis=1)
Out[106]:
ID FROM_YEAR AREA TYPE
0 1 2015 200 A
1 1 2015 200 B
2 1 2016 100 A
3 1 2016 100 B
Explanation:
Let's make a Cartesian product (AKA cross join) of the x and y DFs:
In [126]: x.assign(key=0)
Out[126]:
ID FROM_YEAR AREA key
0 1 2015 200 0
1 1 2016 100 0
In [127]: y.assign(key=0)
Out[127]:
TYPE key
0 A 0
1 B 0
In [128]: x.assign(key=0).merge(y.assign(key=0), on='key')
Out[128]:
ID FROM_YEAR AREA key TYPE
0 1 2015 200 0 A
1 1 2015 200 0 B
2 1 2016 100 0 A
3 1 2016 100 0 B
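As a side note, on newer pandas releases (1.2 and later) the helper key column is no longer needed, because merge supports cross joins directly. A minimal sketch reusing x and y from above:

x.merge(y, how='cross')  # same Cartesian product without the dummy key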
I have panel data on individuals, their marital status (0 = not married, 1 = married) and one random shock (0 = no shock, 1 = shock). Now, for the people who experience the shock (everyone except id1), I would like to know who was already married when they experienced the shock (n = 2: id3, id5), who was not married when they experienced the shock but subsequently got married (n = 1: id2), and who was not married when they experienced the shock and did not get married subsequently (n = 1: id4).
* Example generated by -dataex-. For more info, type help dataex
clear
input int year str3 id float(shock maritalstatus)
2010 "id1" 0 1
2011 "id1" 0 1
2012 "id1" 0 1
2013 "id1" 0 0
2014 "id1" 0 0
2015 "id1" 0 0
2010 "id2" 1 0
2011 "id2" 0 1
2012 "id2" 0 1
2013 "id2" 0 1
2014 "id2" 0 1
2015 "id2" 0 1
2010 "id3" 0 1
2011 "id3" 0 1
2012 "id3" 0 1
2013 "id3" 1 1
2014 "id3" 0 1
2015 "id3" 0 1
2010 "id4" 1 0
2011 "id4" 0 0
2012 "id4" 0 0
2013 "id4" 0 0
2014 "id4" 0 0
2015 "id4" 0 0
2010 "id5" 0 1
2011 "id5" 0 1
2012 "id5" 1 1
2013 "id5" 0 1
2014 "id5" 0 1
2015 "id5" 0 1
end
Thanks for the data example.
Being married when the shock arrived is identifiable by looking at each observation, but the trick lies in spreading that information to all observations for the same identifier.
egen married_at_shock = total(marital == 1 & shock == 1), by(id)
The next variable is a variation on the same theme.
egen not_married_at_shock = total(marital == 0 & shock == 1), by(id)
The last variable seems harder to me. I think you have to work out explicitly when the shock occurred:
egen when_shock = mean(cond(shock == 1, year, .)), by(id)
and then check what happened afterwards:
egen never_married_after_shock = total(marital & year > when_shock), by(id)
replace never_married_after_shock = never_married_after_shock == 0 if when_shock < .
tabdisp id, c(*married*)
----------------------------------------------------------------------------
id | married_at_shock not_married_at_shock never_married_afte~k
----------+-----------------------------------------------------------------
id1 | 0 0 0
id2 | 0 1 0
id3 | 1 0 0
id4 | 0 1 1
id5 | 1 0 0
----------------------------------------------------------------------------
There are no doubt other ways to approach this.
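For instance, the same "flag each observation, then spread over the identifier" logic translates to a groupby-transform in pandas. A minimal sketch, assuming a hypothetical DataFrame df with columns id, shock and marital mirroring the data example (illustrative only, not part of the Stata answer):

import pandas as pd

# hypothetical data for two of the identifiers above
df = pd.DataFrame({'id':      ['id2']*3 + ['id4']*3,
                   'year':    [2010, 2011, 2012]*2,
                   'shock':   [1, 0, 0, 1, 0, 0],
                   'marital': [0, 1, 1, 0, 0, 0]})

# flag each observation, then spread the group maximum to every row of the id
at_shock = df['marital'].eq(1) & df['shock'].eq(1)
df['married_at_shock'] = at_shock.groupby(df['id']).transform('max').astype(int)

not_at_shock = df['marital'].eq(0) & df['shock'].eq(1)
df['not_married_at_shock'] = not_at_shock.groupby(df['id']).transform('max').astype(int)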
Any reading list starts with underlining that true and false conditions yield 1 and 0 respectively, as discussed in this FAQ, which has many applications, such as to "any" and "all" questions (which include "ever" and "never").
The use of egen as a workhorse here is natural given your need to work both on observations for each identifier and over each history. Some tricks are covered in this paper.
I have the table below. I need to group rows by product and increment the group number after each row where Set = 1, resetting back to 1 when a new product starts on the next line. I have already created an index.
Index  Product  Set
1      Table    0
2      Table    0
3      Table    1
4      Table    0
5      Table    0
6      Table    1
7      Table    0
8      Table    1
9      Chair    0
10     Chair    0
11     Chair    0
12     Chair    1
13     Chair    0
14     Chair    0
15     Chair    1
Here's the result I'm after:
Index  Product  Set  Group
1      Table    0    1
2      Table    0    1
3      Table    1    1
4      Table    0    2
5      Table    0    2
6      Table    1    2
7      Table    0    3
8      Table    1    3
9      Chair    0    1
10     Chair    0    1
11     Chair    0    1
12     Chair    1    1
13     Chair    0    2
14     Chair    0    2
15     Chair    1    2
This calculated column gives that grouping:
Grouping =
RANKX (
    FILTER (
        'fact',
        'fact'[Set] <> 0
            && EARLIER ( 'fact'[Product] ) = 'fact'[Product]
    ),
    'fact'[Index],
    ,
    ASC
)
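The rank works because, within a product, a row's group number equals one plus the count of Set = 1 rows strictly above it. If it helps to sanity-check that logic outside DAX, here is a small pandas sketch of the same idea (illustrative only; column names follow the tables above):

import pandas as pd

df = pd.DataFrame({'Index':   range(1, 16),
                   'Product': ['Table']*8 + ['Chair']*7,
                   'Set':     [0,0,1,0,0,1,0,1, 0,0,0,1,0,0,1]})

# within each product, Group = 1 + count of Set = 1 rows strictly above,
# i.e. a cumulative sum of Set shifted down by one row
df['Group'] = (df.groupby('Product')['Set']
                 .transform(lambda s: s.shift(fill_value=0).cumsum() + 1))
print(df)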
I am performing an event study; see the reproducible example below. I only include one unit, but this is enough for the question I'm asking.
input unit year treatment
1 2000 0
1 2001 0
1 2002 1
1 2003 0
1 2004 0
1 2005 1
1 2006 0
1 2007 0
end
I generate dif_year, which should give the difference in years to the treatment:
sort unit year
bysort unit: gen year_nb = _n
bysort unit: gen year_target = year_nb if treatment == 1
by unit: egen target_distance = min(year_target)
drop year_target
gen dif_year = year_nb - target_distance
drop year_nb target_distance
It works well with one treatment per unit, but here I have two. Using the code snippet from above, I get the following result:
unit  year  treatment  dif_year
1     2000  0          -2
1     2001  0          -1
1     2002  1          0
1     2003  0          1
1     2004  0          2
1     2005  1          3
1     2006  0          4
1     2007  0          5
You can see that it is anchored to the first treatment (2002) but ignores the second one (2005). How can I adapt dif_year to make it work with multiple treatments (here, in 2005)? The values for 2003 and before are correct, but I would expect -1 for 2004, 0 for 2005, 1 for 2006 and 2 for 2007 (distance to the nearest treatment, negative before it and positive after it).
This solution uses no loops. Evidently the problem hinges on looking backwards as well as forwards; hence reversing time temporarily is a device that can be used.
clear
input unit year treatment
1 2000 0
1 2001 0
1 2002 1
1 2003 0
1 2004 0
1 2005 1
1 2006 0
1 2007 0
end
bysort unit (year) : gen wanted1 = 0 if treatment
by unit: replace wanted1 = wanted1[_n-1] + 1 if missing(wanted1)
gen negyear = -year
bysort unit (negyear) : gen wanted2 = 0 if treatment
by unit: replace wanted2 = wanted2[_n-1] + 1 if missing(wanted2)
gen wanted = cond(abs(wanted2) < abs(wanted1), - wanted2, wanted1)
sort unit year
list , sep(0)
+---------------------------------------------------------------+
| unit year treatm~t wanted1 negyear wanted2 wanted |
|---------------------------------------------------------------|
1. | 1 2000 0 . -2000 2 -2 |
2. | 1 2001 0 . -2001 1 -1 |
3. | 1 2002 1 0 -2002 0 0 |
4. | 1 2003 0 1 -2003 2 1 |
5. | 1 2004 0 2 -2004 1 -1 |
6. | 1 2005 1 0 -2005 0 0 |
7. | 1 2006 0 1 -2006 . 1 |
8. | 1 2007 0 2 -2007 . 2 |
+---------------------------------------------------------------+
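As an aside, the same "signed distance to the nearest event" idea is available in other tools too; for instance, a rough pandas sketch using merge_asof with direction='nearest' (illustrative only, not part of the Stata solution):

import pandas as pd

df = pd.DataFrame({'unit': [1]*8,
                   'year': list(range(2000, 2008)),
                   'treatment': [0, 0, 1, 0, 0, 1, 0, 0]})

# one row per treatment event
events = (df.loc[df['treatment'] == 1, ['unit', 'year']]
            .rename(columns={'year': 'event_year'}))

# nearest event per observation; both inputs must be sorted on the join keys
out = pd.merge_asof(df.sort_values('year'),
                    events.sort_values('event_year'),
                    left_on='year', right_on='event_year',
                    by='unit', direction='nearest')
out['wanted'] = out['year'] - out['event_year']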
Here is a solution where the largest number of years does not need to be hardcoded.
clear
input unit year treatment
1 2000 0
1 2001 0
1 2002 1
1 2003 0
1 2004 0
1 2005 1
1 2006 0
1 2007 0
1 2008 0
1 2009 0
1 2010 1
end
sort unit year
*Set all treatment years to 0
gen diff_year = 0 if treatment == 1
*Initialize locals used in the loop
local stop "false"
local diff_distance = 0
while "`stop'" == "false" {
**Replace diff with one more than the diff on the row above if the unit is
* the same, there is no diff for this row, and the diff on the row above
* equals the diff distance for this iteration of the loop.
replace diff_year = diff_year[_n-1] + 1 if unit == unit[_n-1] & missing(diff_year) & diff_year[_n-1] == `diff_distance'
**Replace diff with one less than the diff on the row below if the unit is
* the same, there is no diff for this row, and the diff on the row below
* equals the negative of the diff distance for this iteration of the loop.
replace diff_year = diff_year[_n+1] - 1 if unit == unit[_n+1] & missing(diff_year) & diff_year[_n+1] == `diff_distance' * -1
*Test if any missing values remain; if none do, set the stop local to true
count if missing(diff_year)
if `r(N)' == 0 local stop "true"
*Increment the diff distance by one for next loop
local diff_distance = `diff_distance' + 1
}
I found a quick fix to my own question.
I generate a variable that takes missing values if there is no treatment. I then loop over rows, replacing the rows below and above each treatment year with its value, until no missing values remain.
Here, three iterations are enough, but I let the loop run to i = 10 just to show that adding more iterations doesn't change the outcome.
sort unit year
bysort unit: gen year_nb = _n
bysort unit: gen year_target = year_nb if treatment == 1
gen closest_treatment = year_target
forvalues i = 1(1)10 {
bysort unit: replace closest_treatment = closest_treatment[_n-`i'] if(year_target[_n-`i'] != . & closest_treatment[_n] == .)
bysort unit: replace closest_treatment = closest_treatment[_n+`i'] if(year_target[_n+`i'] != . & closest_treatment[_n] == .)
}
replace year_target = closest_treatment if year_target == .
drop closest_treatment
gen dif_year = year_nb - year_target
drop year_nb year_target
Edit: in my example, the number of rows between the two treatments is even. But this solution also works for an odd number, in which case the last row to be iterated over lies exactly in between two treatments. It doesn't matter whether we assign the distance to the previous or the next treatment, unless you are interested in the sign of the number, which I assume you want to take into consideration when doing event studies (e.g. if the distance to the previous treatment is +3 years, the distance to the next treatment is -3). This code snippet assigns the value to the previous treatment (positive sign). If you want the opposite, just swap the two lines inside the loop.
I have a large Stata dataset that contains the following variables: year, state, household_id, individual_id, partner_id, and race. Here is an example of my data:
year state household_id individual_id partner_id race
1980 CA 23 2 1 3
1980 CA 23 1 2 1
1990 NY 43 4 2 1
1990 NY 43 2 4 1
Note that, in the above table, the individuals in rows 1 and 2 are married to each other, as are those in rows 3 and 4.
I want to create a variable that is one if the person is in an interracial marriage.
As a first step, I used the following code
by household_id year: gen inter=0 if race==race[partner_id]
replace inter=1 if inter==.
This code worked well but gave the wrong result in a few cases. As an alternative, I created a string variable identifying each user and its partner, using
gen id_user=string(household_id)+"."+string(individual_id)+string(year)
gen id_partner=string(household_id)+"."+string(partner_id)+string(year)
What I want to do now is to create something like what VLOOKUP does in Excel: for each observation, take its id_partner, find it among the id_user values, look up that person's race, and compare it with the race of the original user.
I guess it should be something like this?
gen inter2==1 if (find race[idpartner]) == (race[iduser])
The expected output should be like this
year state household_id individual_id partner_id race inter2
1980 CA 23 2 1 3 1
1980 CA 23 1 2 1 1
1990 NY 43 4 2 1 0
1990 NY 43 2 4 1 0
I don't think you need anything so general. As you realise, the information on identifiers suffices to find couples, and that in turn allows comparison of race for the people in each couple.
In the code below _N == 2 is meant to catch data errors, such as one partner but not the other being an observation in the dataset or repetitions of one partner or both.
clear
input year str2 state household_id individual_id partner_id race
1980 CA 23 2 1 3
1980 CA 23 1 2 1
1990 NY 43 4 2 1
1990 NY 43 2 4 1
end
generate couple_id = cond(individual_id < partner_id, string(individual_id) + ///
" " + string(partner_id), string(partner_id) + ///
" " + string(individual_id))
bysort state year household_id couple_id : generate mixed = race[1] != race[2] if _N == 2
list, sepby(household_id) abbreviate(15)
+-------------------------------------------------------------------------------------+
| year state household_id individual_id partner_id race couple_id mixed |
|-------------------------------------------------------------------------------------|
1. | 1980 CA 23 2 1 3 1 2 1 |
2. | 1980 CA 23 1 2 1 1 2 1 |
|-------------------------------------------------------------------------------------|
3. | 1990 NY 43 4 2 1 2 4 0 |
4. | 1990 NY 43 2 4 1 2 4 0 |
+-------------------------------------------------------------------------------------+
This idea is documented in this article. The link gives free access to a pdf file.
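If you face the same task in pandas rather than Stata, the couple-key idea carries over directly; a minimal sketch (illustrative only):

import pandas as pd

df = pd.DataFrame({'year': [1980, 1980, 1990, 1990],
                   'state': ['CA', 'CA', 'NY', 'NY'],
                   'household_id': [23, 23, 43, 43],
                   'individual_id': [2, 1, 4, 2],
                   'partner_id': [1, 2, 2, 4],
                   'race': [3, 1, 1, 1]})

# order-independent couple key: smaller id first
lo = df[['individual_id', 'partner_id']].min(axis=1).astype(str)
hi = df[['individual_id', 'partner_id']].max(axis=1).astype(str)
df['couple_id'] = lo + ' ' + hi

# mixed if the two partners' races differ; require exactly two rows per
# couple to catch data errors, as the _N == 2 check does above
g = df.groupby(['state', 'year', 'household_id', 'couple_id'])['race']
df['mixed'] = (g.transform('nunique') > 1).where(g.transform('size') == 2)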
I have an Input file:
ID,ROLL_NO,ADM_DATE,FEES
1,12345,01/12/2016,500
2,12345,02/12/2016,200
3,987654,01/12/2016,1000
4,12345,03/12/2016,0
5,12345,04/12/2016,0
6,12345,05/12/2016,100
7,12345,06/12/2016,0
8,12345,07/12/2016,0
9,12345,08/12/2016,0
10,987654,02/12/2016,150
11,987654,03/12/2016,300
I'm trying to find the maximum count of consecutive days where FEES is 0 for a particular ROLL_NO. If FEES is never zero for consecutive days, the max count will be zero for that particular ROLL_NO.
Expected Output:
ID,ROLL_NO,MAX_CNT -- First occurrence of ID for a particular ROLL_NO should come as ID in output
1,12345,3
3,987654,0
This is what I've come up with so far,
import pandas as pd
df = pd.read_csv('I5.txt')
df['COUNT'] = df.groupby(['ROLL_NO','ADM_DATE'])['ROLL_NO'].transform(pd.Series.value_counts)
print(df)
But I don't believe this is the right way to approach this.
Could someone help a python newbie out here?
You can use:
#consecutive groups
r = df['ROLL_NO'] * df['FEES'].eq(0)
a = r.ne(r.shift()).cumsum()
print (a)
ID
1 1
2 1
3 1
4 2
5 2
6 3
7 4
8 4
9 4
10 5
11 5
dtype: int32
#filter 0-FEES rows, count each run, get the max per ROLL_NO, and add back missing ROLL_NOs by reindex
mask = df['FEES'].eq(0)
df = (df[mask].groupby(['ROLL_NO',a[mask]])
.size()
.max(level=0)
.reindex(df['ROLL_NO'].unique(), fill_value=0)
.reset_index(name='MAX_CNT'))
print (df)
ROLL_NO MAX_CNT
0 12345 3
1 987654 0
Explanation:
First compare the FEES column with 0 (eq is the same as ==) and multiply the mask by the ROLL_NO column:
mask = df['FEES'].eq(0)
r = df['ROLL_NO'] * mask
print (r)
0 0
1 0
2 0
3 12345
4 12345
5 0
6 12345
7 12345
8 12345
9 0
10 0
dtype: int64
Get consecutive groups by comparing the shifted Series r and taking the cumulative sum:
a = r.ne(r.shift()).cumsum()
print (a)
0 1
1 1
2 1
3 2
4 2
5 3
6 4
7 4
8 4
9 5
10 5
dtype: int32
Filter only the rows where FEES is 0 and group by ROLL_NO and a (filtered to the same indexes), then take the size of each group:
print (df[mask].groupby(['ROLL_NO',a[mask]]).size())
ROLL_NO
12345 2 2
4 3
dtype: int64
Get the max value per the first level of the MultiIndex:
print (df[mask].groupby(['ROLL_NO',a[mask]]).size().max(level=0))
ROLL_NO
12345 3
dtype: int64
Last, add back the missing ROLL_NO values without any zeros by reindex:
print (df[mask].groupby(['ROLL_NO',a[mask]])
.size()
.max(level=0)
.reindex(df['ROLL_NO'].unique(), fill_value=0))
ROLL_NO
12345 3
987654 0
dtype: int64
and to get columns from the index, use reset_index.
EDIT:
To get the first ID per ROLL_NO, use drop_duplicates with insert and map:
r = df['ROLL_NO'] * df['FEES'].eq(0)
a = r.ne(r.shift()).cumsum()
s = df.drop_duplicates('ROLL_NO').set_index('ROLL_NO')['ID']
mask = df['FEES'].eq(0)
df1 = (df[mask].groupby(['ROLL_NO',a[mask]])
.size()
.max(level=0)
.reindex(df['ROLL_NO'].unique(), fill_value=0)
.reset_index(name='MAX_CNT'))
df1.insert(0, 'ID', df1['ROLL_NO'].map(s))
print (df1)
ID ROLL_NO MAX_CNT
0 1 12345 3
1 3 987654 0
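One caveat if you run this on a newer pandas release: Series.max(level=0) has since been deprecated in favor of an explicit groupby on the index level, so the aggregation step would become something like:

df1 = (df[mask].groupby(['ROLL_NO', a[mask]])
               .size()
               .groupby(level=0).max()
               .reindex(df['ROLL_NO'].unique(), fill_value=0)
               .reset_index(name='MAX_CNT'))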