If I have a dataframe such as:
import pandas as pd
import numpy as np

index = pd.date_range(start='2014 01 01 00:00', end='2014 01 05 00:00', freq='12H')
df = pd.DataFrame(np.random.randn(9), index=index, columns=['A'])
df
Out[5]:
A
2014-01-01 00:00:00 2.120577
2014-01-01 12:00:00 0.968724
2014-01-02 00:00:00 1.232688
2014-01-02 12:00:00 0.328104
2014-01-03 00:00:00 -0.836761
2014-01-03 12:00:00 -0.061087
2014-01-04 00:00:00 -1.239613
2014-01-04 12:00:00 0.513896
2014-01-05 00:00:00 0.089544
And I want to resample to daily frequency, which is quite easy:
df.resample(rule='1D',how='mean')
Out[6]:
A
2014-01-01 1.544650
2014-01-02 0.780396
2014-01-03 -0.448924
2014-01-04 -0.362858
2014-01-05 0.089544
However, I need to track how many instances go into each day. Is there a good pythonic way of using resample to both perform the specified "how" operation AND track the number of data points going into each mean value, e.g. yielding
Out[6]:
A Instances
2014-01-01 1.544650 2
2014-01-02 0.780396 2
2014-01-03 -0.448924 2
2014-01-04 -0.362858 2
2014-01-05 0.089544 2
Conveniently, how accepts a list:
df1 = df.resample(rule='1D', how=['mean', 'count'])
This will return a DataFrame with a MultiIndex on the columns: one level for 'A' and another for 'mean' and 'count'. To get a simple DataFrame like the desired output in your question, you can drop the extra level with df1.columns = df1.columns.droplevel(0) or, better, do the resampling on df['A'] instead of df.
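On newer pandas versions the how argument has been removed from resample; a minimal sketch of the equivalent (assuming pandas 0.18+), chaining agg instead:
daily = df['A'].resample('1D').agg(['mean', 'count'])
daily.columns = ['A', 'Instances']  # rename to match the desired output
daily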
Example
Record Table
id value created_datetime
1 10 2022-01-18 10:00:00
2 11 2022-01-18 10:15:00
3 8 2022-01-18 15:15:00
4 25 2022-01-19 09:00:00
5 16 2022-01-19 12:00:00
6 9 2022-01-20 11:00:00
I want to filter this 'Record Table' to get the latest value for each date. For example, there are three dates, 2022-01-18, 2022-01-19 and 2022-01-20, and the latest values for these dates are as follows.
Latest value for each date (the result I am looking to get):
id value created_datetime
3 8 2022-01-18 15:15:00
5 16 2022-01-19 12:00:00
6 9 2022-01-20 11:00:00
So how do I filter to receive results like the table above?
It can be done in two steps:
First get the latest datetime for each day, then filter the records by that.
from django.db.models import Max

max_daily_date_times = (Record.objects
    .extra(select={'day': 'date(created_datetime)'})
    .values('day')
    .annotate(latest_datetime=Max('created_datetime')))
records = (Record.objects
    .filter(created_datetime__in=[entry['latest_datetime'] for entry in max_daily_date_times])
    .values('id', 'value', 'created_datetime'))
I am reading a csv in python with multiple columns.
The first column is the date and I have to delete the rows that correspond to years prior to 2018.
time high low Volume Plot Rango
0 2017-12-22 25.17984 24.280560 970 0.329943 0.899280
1 2017-12-26 25.17984 23.381280 2579 1.057921 1.798560
2 2017-12-27 25.17984 23.381280 2499 0.998083 1.798560
3 2017-12-28 25.17984 24.280560 1991 0.919885 0.899280
4 2017-12-29 25.17984 24.100704 2703 1.237694 1.079136
.. ... ... ... ... ... ...
580 2020-04-16 5.45000 4.450000 117884 3.168380 1.000000
581 2020-04-17 5.35000 4.255200 58531 1.370538 1.094800
582 2020-04-20 4.66500 4.100100 25770 0.582999 0.564900
583 2020-04-21 4.42000 3.800000 20914 0.476605 0.620000
584 2020-04-22 4.22000 3.710100 23212 0.519275 0.509900
I want to delete the rows corresponding to years prior to 2018, so 2017, 2016, 2015, ... should be deleted.
I am trying this, but it does not work:
if 2017 in datos['time']: datos['time'].remove() # check whether 2017 appears in each item of the 'time' column
The dates are recognized as numbers, not as datetime, but I think I do not need to declare the column as datetime.
In pandas, given your data, use Boolean indexing.
The time column must be in datetime64[ns] format; df.info() will show the dtypes.
df['time'] = pd.to_datetime(df['time'])
df[df['time'].dt.year >= 2018]
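Putting it together, a minimal sketch (the filename is hypothetical, and the result is assigned back so the filtered frame is kept):
import pandas as pd

datos = pd.read_csv('datos.csv')              # hypothetical filename
datos['time'] = pd.to_datetime(datos['time']) # make the column datetime64[ns]
datos = datos[datos['time'].dt.year >= 2018]  # keep 2018 and later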
I have a csv file like below.
Beat,Hour,Month,Primary Type,COUNTER
111,10AM,Apr,ASSAULT,12
111,10AM,Apr,BATTERY,5
111,10AM,Apr,BURGLARY,1
111,10AM,Apr,CRIMINAL DAMAGE,4
111,10AM,Aug,MOTOR VEHICLE THEFT,2
111,10AM,Aug,NARCOTICS,1
111,10AM,Aug,OTHER OFFENSE,18
111,10AM,Aug,THEFT,38
Now I want to find the % of each Primary Type grouped by the first three columns. For example, for Beat=111, Hour=10AM, Month=Apr, %ASSAULT = 12/(12+5+1+4) * 100. Can anyone give a clue on how to do this using pandas?
You can use transform with 'sum':
df['New'] = df.COUNTER / df.groupby(['Beat','Hour','Month']).COUNTER.transform('sum') * 100
df
Out[575]:
Beat Hour Month Primary Type COUNTER New
0 111 10AM Apr ASSAULT 12 54.545455
1 111 10AM Apr BATTERY 5 22.727273
2 111 10AM Apr BURGLARY 1 4.545455
3 111 10AM Apr CRIMINAL DAMAGE 4 18.181818
4 111 10AM Aug MOTOR VEHICLE THEFT 2 3.389831
5 111 10AM Aug NARCOTICS 1 1.694915
6 111 10AM Aug OTHER OFFENSE 18 30.508475
7 111 10AM Aug THEFT 38 64.406780
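transform('sum') returns each group's total broadcast back to the shape of the original frame, so the division lines up row by row. A minimal end-to-end sketch (the filename crimes.csv is an assumption):
import pandas as pd

df = pd.read_csv('crimes.csv')  # hypothetical filename
group_totals = df.groupby(['Beat', 'Hour', 'Month'])['COUNTER'].transform('sum')
df['New'] = df['COUNTER'] / group_totals * 100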
My pandas dataframe is structured like this (with 'date' as index):
starttime duration_seconds
date
2012-12-24 11:52:00 31800
2012-12-23 0:28:00 35940
2012-12-22 2:00:00 26820
2012-12-21 1:57:00 23520
2012-12-20 1:32:00 23100
2012-12-19 0:50:00 25080
2012-12-18 1:17:00 24780
2012-12-17 0:38:00 25440
2012-12-15 10:38:00 32760
2012-12-14 0:35:00 23160
2012-12-12 22:54:00 3960
2012-12-12 0:21:00 24060
2012-12-10 23:45:00 900
2012-12-11 11:00:00 24840
2012-12-10 0:27:00 25980
2012-12-09 19:29:00 4320
2012-12-09 3:00:00 29880
2012-12-08 2:07:00 34380
I use the following to groupby date and sum the total seconds each day:
df_sum = df.groupby(df.index.date).sum()
What I'd like to do is sum duration_seconds from noon on one day to noon on the following day. Is there an elegant (pandas) way of doing this? Thanks in advance!
pd.TimeGrouper is a custom groupby class for time-interval grouping of NDFrames with a DatetimeIndex, TimedeltaIndex or PeriodIndex. (If your dataframe index is using date-strings, you'll need to convert it to a DatetimeIndex first by using df.index = pd.DatetimeIndex(df.index).)
df.groupby(pd.TimeGrouper('24H')).sum() groups df using 24-hour intervals starting at time 00:00:00.
df.groupby(pd.TimeGrouper('24H', base=12)).sum() groups df using 24-hour intervals starting at time 12:00:00:
In [90]: df.groupby(pd.TimeGrouper('24H', base=12)).sum()
Out[90]:
duration_seconds
2012-12-07 12:00:00 34380.0
2012-12-08 12:00:00 34200.0
2012-12-09 12:00:00 26880.0
2012-12-10 12:00:00 24840.0
2012-12-11 12:00:00 28020.0
2012-12-12 12:00:00 NaN
2012-12-13 12:00:00 23160.0
2012-12-14 12:00:00 32760.0
2012-12-15 12:00:00 NaN
2012-12-16 12:00:00 25440.0
2012-12-17 12:00:00 24780.0
2012-12-18 12:00:00 25080.0
2012-12-19 12:00:00 23100.0
2012-12-20 12:00:00 23520.0
2012-12-21 12:00:00 26820.0
2012-12-22 12:00:00 35940.0
2012-12-23 12:00:00 31800.0
Documentation on pd.TimeGrouper is a little sparse. It is a subclass of pd.Grouper, and thus many of its parameters have the same meaning as those documented for pd.Grouper. You can find more examples of pd.TimeGrouper usage in the Cookbook. I found the base parameter by inspecting the source code. The base parameter in pd.TimeGrouper has the same meaning as the base parameter of resample, which is not surprising since resample is implemented using pd.TimeGrouper.
In fact, come to think of it, another way to compute the desired result is
df.resample('24H', base=12).sum()
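Note that pd.TimeGrouper has since been removed and the base argument deprecated in newer pandas; on pandas 1.1+ the equivalent (a sketch, assuming the same DatetimeIndex) uses pd.Grouper or resample with offset:
df.groupby(pd.Grouper(freq='24H', offset='12H')).sum()
df.resample('24H', offset='12H').sum()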
I have a DataFrame as follows, where Id is a string and Date is a datetime:
Id Date
1 3-1-2012
1 4-8-2013
2 1-17-2013
2 5-4-2013
2 10-30-2012
3 1-3-2013
I'd like to consolidate the table to just show one row for each Id which has the most recent date.
Any thoughts on how to do this?
You can groupby the Id field:
In [11]: df
Out[11]:
Id Date
0 1 2012-03-01 00:00:00
1 1 2013-04-08 00:00:00
2 2 2013-01-17 00:00:00
3 2 2013-05-04 00:00:00
4 2 2012-10-30 00:00:00
5 3 2013-01-03 00:00:00
In [12]: g = df.groupby('Id')
If you are not certain about the ordering, you could do something along the lines of:
In [13]: g.agg(lambda x: x.iloc[x.Date.argmax()])
Out[13]:
Date
Id
1 2013-04-08 00:00:00
2 2013-05-04 00:00:00
3 2013-01-03 00:00:00
which for each group grabs the row with largest (latest) date (the argmax part).
If you knew they were in order you could take the last (or first) entry:
In [14]: g.last()
Out[14]:
Date
Id
1 2013-04-08 00:00:00
2 2012-10-30 00:00:00
3 2013-01-03 00:00:00
(Note: they're not in order, so this doesn't work in this case!)
In Hayden's response, I think that using x.loc in place of x.iloc is better, as the index of the df dataframe could be sparse (in which case iloc will not work).
(I don't have enough reputation on Stack Overflow to post this as a comment on that answer.)
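On current pandas, two equivalent one-liners are a common way to keep the latest row per Id (a sketch, assuming the Date column is already datetime64):
latest = df.sort_values('Date').drop_duplicates('Id', keep='last')  # last row per Id after sorting by date
latest = df.loc[df.groupby('Id')['Date'].idxmax()]                  # row label of the max date per Id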