Convert Pandas DF column from UTC to US-Eastern time without date - python-2.7

I have this Pandas dataframe column:
time_UTC
0 2015-01-05 16:44:34+00:00
1 2015-08-11 16:44:38+00:00
2 2015-08-02 16:53:25+00:00
3 2015-08-17 16:53:25+00:00
4 2015-09-28 16:53:26+00:00
Name: time_UTC, dtype: datetime64[ns, UTC]
and I converted it from UTC to US-Eastern timezone using:
list_temp = []
for row in df['time_UTC']:
    list_temp.append(Timestamp(row, tz='UTC').tz_convert('US/Eastern'))
df['time_EST'] = list_temp
to get this:
0 2015-01-05 11:44:34-05:00
1 2015-08-11 11:44:38-05:00
2 2015-08-02 11:53:25-05:00
3 2015-08-17 11:53:25-05:00
4 2015-09-28 11:53:26-05:00
Name: time_EST, dtype: datetime64[ns, US/Eastern]
Now, I need to drop the date part of the entries so that I only get the time. Here is what I need:
0 11:44:34-05:00
1 11:44:38-05:00
2 11:53:25-05:00
3 11:53:25-05:00
4 11:53:26-05:00
Name: time_EST, dtype: datetime64[ns, US/Eastern]
Attempt:
I tried this:
print df['time_EST'].apply(lambda x: dt.time(x.hour,x.minute,x.second))
The date is dropped and I only get the time, but the values revert to the UTC timezone. Here is the output of the above command:
0 16:44:34
1 16:44:38
2 16:53:25
3 16:53:25
4 16:53:26
Name: time_EST, dtype: object
Question:
Is there a way to drop the date and keep time as US-Eastern, without automatically reverting back to UTC?
EDIT:
To recreate the problem, just copy the first DataFrame above and use this code:
import pandas as pd
from pandas.lib import Timestamp
import datetime as dt
df = pd.read_clipboard()
Then copy the remaining lines of code from the question. Any assistance with this problem would be greatly appreciated.

You want to use strftime to format the time as a string; also note the vectorized datetime manipulations:
df = pd.read_clipboard()
df.time_UTC = pd.to_datetime(df.time_UTC)
df['EST'] = (df.time_UTC.dt.tz_localize('UTC')
                        .dt.tz_convert('US/Eastern')
                        .dt.strftime("%H:%M:%S"))
In [41]: df
Out[41]:
                                time_UTC       EST
time_UTC
2016-02-15 16:44:34  2016-02-15 16:44:34  11:44:34
2016-02-15 16:44:38  2016-02-15 16:44:38  11:44:38
2016-02-15 16:53:25  2016-02-15 16:53:25  11:53:25
2016-02-15 16:53:25  2016-02-15 16:53:25  11:53:25
2016-02-15 16:53:26  2016-02-15 16:53:26  11:53:26

pandas.Series.dt.tz_convert:
Convert tz-aware Datetime Array/Index from one time zone to another.
df['est_time'] = df['time_UTC'].dt.tz_convert('US/Eastern')
df['est_time'] = df['est_time'].dt.strftime("%H:%M:%S")
print(df['est_time'])
which produces output like:
0 12:44:34
1 12:44:38
2 12:53:25
3 12:53:25
4 12:53:26
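If you want actual time objects rather than strings, a minimal sketch (assuming df['time_UTC'] is tz-aware with dtype datetime64[ns, UTC], as in the question; the column name time_EST_only is made up here) is to convert first and strip the date afterwards with the .dt.time accessor:
import pandas as pd
# Convert to Eastern first, then keep only the wall-clock time.
eastern = df['time_UTC'].dt.tz_convert('US/Eastern')
df['time_EST_only'] = eastern.dt.time  # datetime.time objects, dtype object
Note the result is dtype object, not datetime64: pandas has no timezone-aware time-only dtype, so the "-05:00" suffix shown in the desired output cannot be kept in the dtype itself.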

Related

why does pd.to_datetime fail to convert?

I have an object column with values which are dates. I manually placed 2016-08-31 instead of NaN after reading from csv.
close_date
0 1948-06-01 00:00:00
1 2016-08-31 00:00:00
2 2016-08-31 00:00:00
3 1947-07-01 00:00:00
4 1967-05-31 00:00:00
Running df['close_date'] = pd.to_datetime(df['close_date']) results in
TypeError: invalid string coercion to datetime
Adding the coerce=True argument results in:
TypeError: to_datetime() got an unexpected keyword argument 'coerce'
Furthermore, even though I call the column 'close_date', all the columns in the dataframe, some int64, float64, and datetime64[ns], change to dtype object.
What am I doing wrong?
You need the errors='coerce' parameter, which converts unparseable values to NaT:
df['close_date'] = pd.to_datetime(df['close_date'], errors='coerce')
print (df)
close_date
0 1948-06-01
1 2016-08-31
2 2016-08-31
3 1947-07-01
4 1967-05-31
print (df['close_date'].dtypes)
datetime64[ns]
But if there are mixed values (numbers mixed with datetimes), convert to str first:
df['close_date'] = pd.to_datetime(df['close_date'].astype(str), errors='coerce')
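A minimal sketch of both cases (the sample values here are invented for illustration):
import pandas as pd
s = pd.Series(['1948-06-01', 'not a date', 20160831])
# astype(str) handles the mixed int; errors='coerce' turns the bad string into NaT
print(pd.to_datetime(s.astype(str), errors='coerce'))
# 0   1948-06-01
# 1          NaT
# 2   2016-08-31
# dtype: datetime64[ns]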

Difference between two dates in Pandas DataFrame

I have many columns in a DataFrame and I need to find the time difference between two columns named in_time and out_time, and put it in a new column in the same DataFrame.
The format of time is like this 2015-09-25T01:45:34.372Z.
I am using Pandas DataFrame.
I want to do like this:
df.days = df.out_time - df.in_time
I have many columns and need to add one more column named days to hold the differences.
You need to convert the strings to datetime dtype; you can then subtract whatever date you want and call dt.days on the resulting series:
In [15]:
df = pd.DataFrame({'date':['2015-09-25T01:45:34.372Z']})
df
Out[15]:
date
0 2015-09-25T01:45:34.372Z
In [19]:
import datetime as dt
df['date'] = pd.to_datetime(df['date'])
df['day'] = (df['date'] - dt.datetime.now()).dt.days
df
Out[19]:
date day
0 2015-09-25 01:45:34.372 -252
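Applied to the question's own columns, a minimal sketch of the same approach (assuming df holds in_time and out_time as strings, as described):
import pandas as pd
df['in_time'] = pd.to_datetime(df['in_time'])
df['out_time'] = pd.to_datetime(df['out_time'])
df['days'] = (df['out_time'] - df['in_time']).dt.days  # whole days as integers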
Well, it all kinda depends on the time format you use. I'd recommend using datetime.
If in_time and out_time are currently strings, convert them with datetime.strptime():
from datetime import datetime
f = lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ')
df.in_time = df.in_time.apply(f)
df.out_time = df.out_time.apply(f)
and then you can simply subtract them, and assign the result to a new column named 'days':
df['days'] = df.out_time - df.in_time
Example: (3 seconds and 1 day differences)
In[5]: df = pd.DataFrame({'in_time':['2015-09-25T01:45:34.372Z','2015-09-25T01:45:34.372Z'],
                          'out_time':['2015-09-25T01:45:37.372Z','2015-09-26T01:45:34.372Z']})
In[6]: df
Out[6]:
in_time out_time
0 2015-09-25T01:45:34.372Z 2015-09-25T01:45:37.372Z
1 2015-09-25T01:45:34.372Z 2015-09-26T01:45:34.372Z
In[7]: type(df.loc[0,'in_time'])
Out[7]: str
In[8]: df.in_time = df.in_time.apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ'))
In[9]: df.out_time = df.out_time.apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ'))
In[10]: df # notice that it looks exactly the same, but the type is different
Out[10]:
in_time out_time
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372
In[11]: type(df.loc[0,'in_time'])
Out[11]: pandas.tslib.Timestamp
And the creation of the new column:
In[12]: df['days'] = df.out_time - df.in_time
In[13]: df
Out[13]:
in_time out_time days
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372 0 days 00:00:03
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372 1 days 00:00:00
Now you can play with the output format. For example, expressing the difference in minutes:
In[14]: df.days = df.days.apply(lambda x: x.total_seconds()/60)
In[15]: df
Out[15]:
in_time out_time days
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372 0.05
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372 1440.00
Note: Regarding the in_time and out_time format, I made some assumptions (for example, that you're using a 24-hour clock, hence %H and not %I). To play with the format, have a look at the strptime() documentation.
Note2: It would obviously be better if you can design your program to use datetime from the beginning (instead of using strings and converting them).
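For reference, the same conversion can also be done vectorized instead of with apply; a sketch using pd.to_datetime with an explicit format string:
import pandas as pd
fmt = '%Y-%m-%dT%H:%M:%S.%fZ'
df['in_time'] = pd.to_datetime(df['in_time'], format=fmt)
df['out_time'] = pd.to_datetime(df['out_time'], format=fmt)
df['days'] = df['out_time'] - df['in_time']  # timedelta64 column, as above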
First of all, you need to convert the in_time and out_time columns to datetime type:
for col in ('in_time', 'out_time'):
    df[col] = pd.to_datetime(df[col])
You can check the type using dtypes:
df['in_time'].dtypes
Should give: datetime64[ns, UTC]
Now you can subtract them and get the time difference using dt.days, or with numpy using np.timedelta64.
Example:
import numpy as np
df['days'] = (df['out_time'] - df['in_time']).dt.days
# Or
df['days'] = (df['out_time'] - df['in_time']) / np.timedelta64(1, 'D')
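Putting the pieces together, a minimal end-to-end sketch (column names from the question, sample values invented):
import numpy as np
import pandas as pd
df = pd.DataFrame({
    'in_time':  ['2015-09-25T01:45:34.372Z', '2015-09-25T01:45:34.372Z'],
    'out_time': ['2015-09-25T01:45:37.372Z', '2015-09-26T01:45:34.372Z'],
})
for col in ('in_time', 'out_time'):
    df[col] = pd.to_datetime(df[col])
df['days'] = (df['out_time'] - df['in_time']) / np.timedelta64(1, 'D')
# days: [3.47e-05, 1.0] -- fractional days; use .dt.days for whole days instead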

Building a numpy array (matrix) from several dataframes

I have several dataframes which have the same look but different data.
DataFrame 1
bid
close
time
2016-05-24 00:00:00 NaN
2016-05-24 00:05:00 0.000611
2016-05-24 00:10:00 -0.000244
2016-05-24 00:15:00 -0.000122
DataFrame 2
bid
close
time
2016-05-24 00:00:00 NaN
2016-05-24 00:05:00 0.000811
2016-05-24 00:10:00 -0.000744
2016-05-24 00:15:00 -0.000322
I need to build a list of the dataframes, then pass that list to a function that converts it to a numpy array, where each row of the matrix is the ('bid', 'close') column of one dataframe. Notice I don't need the 'time' index.
data = np.array([dataFrames])
should return something like this (example values, not the actual data above):
[[-0.00114415  0.02502565  0.00507831 ...,  0.00653057  0.02183072
  -0.00194293]    <- DataFrame 1
 [-0.01527224  0.02899528 -0.00327654 ...,  0.0322364   0.01821731
  -0.00766773]    <- DataFrame 2
 ...]]
Try
master_matrix = pd.concat(list_of_dfs, axis=1)
master_matrix = master_matrix.values.reshape(master_matrix.shape, order='F')
if each row in the final matrix corresponds to the same date, or
master_matrix = pd.concat(list_of_dfs, axis=1).values
otherwise.
Edit to address the newly added example.
In this case, you can use np.vstack on columns returned from each dataframe.
import pandas as pd
import numpy as np
from io import StringIO
df1 = pd.read_csv(StringIO(
'''
time bid_close
2016-05-24 00:00:00 NaN
2016-05-24 00:05:00 0.000611
2016-05-24 00:10:00 -0.000244
2016-05-24 00:15:00 -0.000122
'''), sep=r' +')
df2 = pd.read_csv(StringIO(
'''
time bid_close
2016-05-24 00:00:00 NaN
2016-05-24 00:05:00 0.000811
2016-05-24 00:10:00 -0.000744
2016-05-24 00:15:00 -0.000322
'''), sep=r' +')
dfs = [df1, df2]
out = np.vstack([df.iloc[:, -1].values for df in dfs])  # pass a list, not a generator
Result:
In [10]: out
Out[10]:
array([[ nan, 0.000611, -0.000244, -0.000122],
[ nan, 0.000811, -0.000744, -0.000322]])
Setup
import pandas as pd
import numpy as np
df1 = pd.DataFrame([1, 2, 3, 4],
                   index=pd.date_range('2016-04-01', periods=4),
                   columns=pd.MultiIndex.from_tuples([('bid', 'close')]))
df2 = pd.DataFrame([5, 6, 7, 8],
                   index=pd.date_range('2016-03-01', periods=4),
                   columns=pd.MultiIndex.from_tuples([('bid', 'close')]))
print df1
bid
close
2016-04-01 1
2016-04-02 2
2016-04-03 3
2016-04-04 4
print df2
bid
close
2016-03-01 5
2016-03-02 6
2016-03-03 7
2016-03-04 8
Solution
df = np.concatenate([d.T.values for d in [df1, df2]])
print df
[[1 2 3 4]
[5 6 7 8]]
Note
The indices were not required to line up. This just takes the raw np.array from each dataframe and uses np.concatenate to do the rest.
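Wrapped up as the helper the question asks for, a minimal sketch (the name frames_to_matrix and the fixed ('bid', 'close') column are assumptions for illustration):
import numpy as np
def frames_to_matrix(frames):
    # One matrix row per dataframe; the 'time' index is dropped because
    # only the column values are stacked.
    return np.vstack([f[('bid', 'close')].values for f in frames])
data = frames_to_matrix([df1, df2])  # shape: (n_frames, n_rows)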

Pandas - Create two columns - Simple, no?

Well hello everyone!
I want to create a (pandas) DataFrame called df. It must contain "Id" and "Feature" columns. Any idea how to do it?
I have the following code, but dictionaries are unordered, so the two columns come out in arbitrary order. I want "Id" as the first column and "Feature" as the second.
Thank you in advance! Have a loooong weekend!
df = DataFrame({'Feature': X["Feature"],'Id': X["Id"] })
From the pandas docs: "If no columns are passed, the columns will be the sorted list of dict keys." I use this simple trick to arrange the columns: just add "1", "2", etc. to the beginning of your column names. For example:
>>> df1 = pd.DataFrame({"Id":[1,2,3],"Feature":[5,6,7]})
>>> df1
   Feature  Id
0        5   1
1        6   2
2        7   3
>>> df2 = pd.DataFrame({"1Id":[1,2,3],"2Feature":[5,6,7]})
>>> df2
   1Id  2Feature
0    1         5
1    2         6
2    3         7
>>> df2.columns = ["Id","Feature"]
>>> df2
   Id  Feature
0   1        5
1   2        6
2   3        7
Now you have the order you wanted for printing or saving the DataFrame.
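A simpler alternative is to fix the order with the columns parameter; a minimal sketch, assuming X is whatever mapping holds your data, as in the question:
import pandas as pd
df = pd.DataFrame({'Feature': X['Feature'], 'Id': X['Id']},
                  columns=['Id', 'Feature'])  # explicit column order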
Is this what you wanted?
import pandas as pd
import numpy as np
index = [1, 2]
df = pd.DataFrame(np.random.randn(2, 2), index=index, columns=('id', 'features'))
The resulting data frame:
>>> df['id']
1 0.254105
2 -0.132025
Name: id, dtype: float64
>>> df['features']
1 0.189972
2 2.262103
Name: features, dtype: float64

Pandas Aggregate/Group by based on most recent date

I have a DataFrame as follows, where Id is a string and Date is a datetime:
Id Date
1 3-1-2012
1 4-8-2013
2 1-17-2013
2 5-4-2013
2 10-30-2012
3 1-3-2013
I'd like to consolidate the table to just show one row for each Id which has the most recent date.
Any thoughts on how to do this?
You can groupby the Id field:
In [11]: df
Out[11]:
Id Date
0 1 2012-03-01 00:00:00
1 1 2013-04-08 00:00:00
2 2 2013-01-17 00:00:00
3 2 2013-05-04 00:00:00
4 2 2012-10-30 00:00:00
5 3 2013-01-03 00:00:00
In [12]: g = df.groupby('Id')
If you are not certain about the ordering, you could do something along the lines of:
In [13]: g.agg(lambda x: x.iloc[x.Date.argmax()])
Out[13]:
Date
Id
1 2013-04-08 00:00:00
2 2013-05-04 00:00:00
3 2013-01-03 00:00:00
which for each group grabs the row with largest (latest) date (the argmax part).
If you knew they were in order you could take the last (or first) entry:
In [14]: g.last()
Out[14]:
Date
Id
1 2013-04-08 00:00:00
2 2012-10-30 00:00:00
3 2013-01-03 00:00:00
(Note: they're not in order, so this doesn't work in this case!)
In Hayden's answer, I think using x.loc in place of x.iloc is better, as the index of the dataframe may not be a plain 0..n range (in which case iloc will not work).
(I don't have enough reputation on Stack Overflow to post this as a comment on that answer.)
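For completeness, a minimal sketch of two other common idioms for "latest row per group" (assuming the Date column is already datetime64):
import pandas as pd
# Sort by date, then keep the last (most recent) row of each Id
latest = df.sort_values('Date').groupby('Id', as_index=False).last()
# Or select whole rows via the index label of each group's maximum Date
latest = df.loc[df.groupby('Id')['Date'].idxmax()]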