How do I remove the DATE and TIME rows that have NaN values in the 'oo' column?
This is my CSV:
DATE,TIME,OPEN,HIGH,LOW,CLOSE,VOLUME
02/03/1997,09:04:00,3046.00,3048.50,3046.00,3047.50,505
02/03/1997,09:05:00,3047.00,3048.00,3046.00,3047.00,162
02/03/1997,09:06:00,3047.50,3048.00,3047.00,3047.50,98
02/03/1997,09:07:00,3047.50,3047.50,3047.00,3047.50,228
02/03/1997,09:08:00,3048.00,3048.00,3047.50,3048.00,136
02/03/1997,09:09:00,3048.00,3048.00,3046.50,3046.50,174
02/03/1997,09:10:00,3046.50,3046.50,3045.00,3045.00,134
02/03/1997,09:11:00,3045.50,3046.00,3044.00,3045.00,43
02/03/1997,09:12:00,3045.00,3045.50,3045.00,3045.00,214
02/03/1997,09:13:00,3045.50,3045.50,3045.50,3045.50,8
02/03/1997,09:14:00,3045.50,3046.00,3044.50,3044.50,152
02/03/1997,09:15:00,3044.00,3044.00,3042.50,3042.50,126
02/03/1997,09:16:00,3043.50,3043.50,3043.00,3043.00,128
02/03/1997,09:17:00,3042.50,3043.50,3042.50,3043.50,23
02/03/1997,09:18:00,3043.50,3044.50,3043.00,3044.00,51
02/03/1997,09:19:00,3044.50,3044.50,3043.00,3043.00,18
02/03/1997,09:20:00,3043.00,3045.00,3043.00,3045.00,23
02/03/1997,09:21:00,3045.00,3045.00,3044.50,3045.00,51
02/03/1997,09:22:00,3045.00,3045.00,3045.00,3045.00,47
02/03/1997,09:23:00,3045.50,3046.00,3045.00,3045.00,77
02/03/1997,09:24:00,3045.00,3045.00,3045.00,3045.00,131
02/03/1997,09:25:00,3044.50,3044.50,3043.50,3043.50,138
02/03/1997,09:26:00,3043.50,3043.50,3043.50,3043.50,6
02/03/1997,09:27:00,3043.50,3043.50,3043.00,3043.00,56
02/03/1997,09:28:00,3043.00,3044.00,3043.00,3044.00,32
02/03/1997,09:29:00,3044.50,3044.50,3044.50,3044.50,63
02/03/1997,09:30:00,3045.00,3045.00,3045.00,3045.00,28
Here's my code:
exp = pd.read_csv('example.txt', parse_dates=[["DATE", "TIME"]], index_col=0)
exp['oo'] = exp.OPEN.resample("5Min").first()
print(exp['oo'])
and I get this:
DATE_TIME
1997-02-03 09:04:00 NaN
1997-02-03 09:05:00 3047.0
1997-02-03 09:06:00 NaN
1997-02-03 09:07:00 NaN
1997-02-03 09:08:00 NaN
1997-02-03 09:09:00 NaN
1997-02-03 09:10:00 3046.5
I want to get rid of all the DATE_TIME rows with NaN values in the 'oo' column.
I've tried:
exp['oo'] = exp['oo'].dropna()
But I get the same thing.
I've looked all through http://pandas.pydata.org/pandas-docs/stable/missing_data.html
and all over this website.
I would like to keep my CSV reader the same, but I'm not sure how.
If anybody could help, it would be greatly appreciated. Thanks so much for your time.
I think you want this:
>>> exp.OPEN.resample("5Min", how='first')
DATE_TIME
1997-02-03 09:00:00 3046.0
1997-02-03 09:05:00 3047.0
1997-02-03 09:10:00 3046.5
1997-02-03 09:15:00 3044.0
1997-02-03 09:20:00 3043.0
1997-02-03 09:25:00 3044.5
1997-02-03 09:30:00 3045.0
Freq: 5T, Name: OPEN, dtype: float64
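A note on why your dropna() attempt seemed to do nothing: exp['oo'] = exp['oo'].dropna() assigns the shorter series back into exp, and pandas realigns it on exp's original one-minute index, which reintroduces the NaN rows. Keep the resampled series as its own object instead. On newer pandas, where the how= keyword has been removed, a rough equivalent looks like this (a sketch, assuming the same example.txt):
import pandas as pd
exp = pd.read_csv('example.txt', parse_dates=[["DATE", "TIME"]], index_col=0)
oo = exp.OPEN.resample("5Min").first()  # one row per 5-minute bucket
oo = oo.dropna()  # only needed if some buckets are empty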
I'm trying to convert a time series index to seconds of the day, i.e. so that the seconds increase from 0-86399 as the day progresses. I can currently recover the time of day, but am having trouble converting it to seconds in a vectorized way:
df['timeofday'] = df.index.time
Any ideas? Thanks.
As @Jeff points out, my original answer misunderstood what you were doing. But the following should work, and it is vectorized. My answer relies on numpy datetime64 operations (subtract the beginning of the day from the current datetime64, then divide by a timedelta64 to get seconds):
>>> df
A
2011-01-01 00:00:00 -0.112448
2011-01-01 01:00:00 1.006958
2011-01-01 02:00:00 -0.056194
2011-01-01 03:00:00 0.777821
2011-01-01 04:00:00 -0.552584
2011-01-01 05:00:00 0.156198
2011-01-01 06:00:00 0.848857
2011-01-01 07:00:00 0.248990
2011-01-01 08:00:00 0.524785
2011-01-01 09:00:00 1.510011
2011-01-01 10:00:00 -0.332266
2011-01-01 11:00:00 -0.909849
2011-01-01 12:00:00 -1.275335
2011-01-01 13:00:00 1.361837
2011-01-01 14:00:00 1.924534
2011-01-01 15:00:00 0.618478
df['sec'] = (df.index.values - df.index.values.astype('datetime64[D]')) / np.timedelta64(1, 's')
A sec
2011-01-01 00:00:00 -0.112448 0
2011-01-01 01:00:00 1.006958 3600
2011-01-01 02:00:00 -0.056194 7200
2011-01-01 03:00:00 0.777821 10800
2011-01-01 04:00:00 -0.552584 14400
2011-01-01 05:00:00 0.156198 18000
2011-01-01 06:00:00 0.848857 21600
2011-01-01 07:00:00 0.248990 25200
2011-01-01 08:00:00 0.524785 28800
2011-01-01 09:00:00 1.510011 32400
2011-01-01 10:00:00 -0.332266 36000
2011-01-01 11:00:00 -0.909849 39600
2011-01-01 12:00:00 -1.275335 43200
2011-01-01 13:00:00 1.361837 46800
2011-01-01 14:00:00 1.924534 50400
2011-01-01 15:00:00 0.618478 54000
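On newer pandas you can express the same computation without dropping to raw numpy: normalize() floors each timestamp to midnight, so the difference is a TimedeltaIndex whose total_seconds() is vectorized. A minimal sketch, assuming the same DatetimeIndex:
df['sec'] = (df.index - df.index.normalize()).total_seconds()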
Maybe a bit overdone, but this would be my answer:
from numpy.random import randn
from pandas import date_range, Series, to_datetime
# Some test data
rng = date_range('1/1/2011 01:01:01', periods=3, freq='s')
df = Series(randn(len(rng)), index=rng).to_frame()
def sec_in_day(timestamp):
    date = timestamp.date()  # the date without the time
    elapsed_time = timestamp - to_datetime(date)  # time elapsed since midnight
    return elapsed_time.total_seconds()
Series(df.index).apply(sec_in_day)
I modified KarlD's answer for datetime with time zone:
d = pd.DataFrame({"t_naive":pd.date_range("20160101","20160102", freq = "2H")})
d['t_utc'] = d['t_naive'].dt.tz_localize("UTC")
d['t_ct'] = d['t_utc'].dt.tz_convert("America/Chicago")
print(d.head())
# t_naive t_utc t_ct
# 0 2016-01-01 00:00:00 2016-01-01 00:00:00+00:00 2015-12-31 18:00:00-06:00
# 1 2016-01-01 02:00:00 2016-01-01 02:00:00+00:00 2015-12-31 20:00:00-06:00
# 2 2016-01-01 04:00:00 2016-01-01 04:00:00+00:00 2015-12-31 22:00:00-06:00
# 3 2016-01-01 06:00:00 2016-01-01 06:00:00+00:00 2016-01-01 00:00:00-06:00
# 4 2016-01-01 08:00:00 2016-01-01 08:00:00+00:00 2016-01-01 02:00:00-06:00
The answer by KarlD gives the seconds of day in UTC:
s0 = (d["t_naive"].values - d["t_naive"].values.astype('datetime64[D]'))/np.timedelta64(1,'s')
s0
# array([ 0., 7200., 14400., 21600., 28800., 36000., 43200.,
# 50400., 57600., 64800., 72000., 79200., 0.])
s1 = (d["t_ct"].values - d["t_ct"].values.astype('datetime64[D]'))/np.timedelta64(1,'s')
s1
# array([ 0., 7200., 14400., 21600., 28800., 36000., 43200.,
# 50400., 57600., 64800., 72000., 79200., 0.])
For seconds of day in local time, use:
s2 = (d["t_ct"].view("int64") - d["t_ct"].dt.normalize().view("int64"))//pd.Timedelta(1, unit='s')
#use d.index.normalize() for index
s2.values
# array([64800, 72000, 79200, 0, 7200, 14400, 21600, 28800, 36000,
# 43200, 50400, 57600, 64800], dtype=int64)
or:
s3 = d["t_ct"].dt.hour*60*60 + d["t_ct"].dt.minute*60+ d["t_ct"].dt.second
s3.values
# array([64800, 72000, 79200, 0, 7200, 14400, 21600, 28800, 36000,
# 43200, 50400, 57600, 64800], dtype=int64)
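Note that Series.view is deprecated on recent pandas; the same local seconds of day can be computed (as float64 rather than int64) by subtracting the normalized series directly. A sketch under that assumption:
s4 = (d["t_ct"] - d["t_ct"].dt.normalize()).dt.total_seconds()  # seconds since local midnight
s4.values  # same values as s2 and s3, but as floats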
If I have a dataframe such as:
index = pd.date_range(start='2014-01-01 00:00', end='2014-01-05 00:00', freq='12H')
df = pd.DataFrame(np.random.randn(9), index=index, columns=['A'])
df
Out[5]:
A
2014-01-01 00:00:00 2.120577
2014-01-01 12:00:00 0.968724
2014-01-02 00:00:00 1.232688
2014-01-02 12:00:00 0.328104
2014-01-03 00:00:00 -0.836761
2014-01-03 12:00:00 -0.061087
2014-01-04 00:00:00 -1.239613
2014-01-04 12:00:00 0.513896
2014-01-05 00:00:00 0.089544
And I want to resample to daily frequency, it is quite easy:
df.resample(rule='1D',how='mean')
Out[6]:
A
2014-01-01 1.544650
2014-01-02 0.780396
2014-01-03 -0.448924
2014-01-04 -0.362858
2014-01-05 0.089544
However, I need to track how many instances go into each day. Is there a good pythonic way of using resample to both perform the specified "how" operation AND track the number of data points going into each mean value, e.g. yielding:
Out[6]:
A Instances
2014-01-01 1.544650 2
2014-01-02 0.780396 2
2014-01-03 -0.448924 2
2014-01-04 -0.362858 2
2014-01-05 0.089544 2
Conveniently, how accepts a list:
df1 = df.resample(rule='1D', how=['mean', 'count'])
This will return a DataFrame with a MultiIndex column: one level for 'A' and another level for 'mean' and 'count'. To get a simple DataFrame like the desired output in your question, you can drop the extra level with df1.columns = df1.columns.droplevel(0) or, better, you can do your resampling on df['A'] instead of df.
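On newer pandas, where the how= keyword has been removed, the same idea is spelled with .agg. A minimal sketch, assuming the df above (the column renames are just to match the desired output):
df1 = df['A'].resample('1D').agg(['mean', 'count'])  # one 'mean' and one 'count' column per day
df1.columns = ['A', 'Instances']  # rename to match the output shown in the question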