Building a numpy array (matrix) from several dataframes - python-2.7

I have several dataframes which have the same look but different data.
DataFrame 1
                          bid
                        close
time
2016-05-24 00:00:00       NaN
2016-05-24 00:05:00  0.000611
2016-05-24 00:10:00 -0.000244
2016-05-24 00:15:00 -0.000122
DataFrame 2
                          bid
                        close
time
2016-05-24 00:00:00       NaN
2016-05-24 00:05:00  0.000811
2016-05-24 00:10:00 -0.000744
2016-05-24 00:15:00 -0.000322
I need to build a list of the dataframes, then pass that list to a function that converts it to a numpy array, so that each row of the matrix holds the elements of one dataframe's ('bid', 'close') column. Notice I don't need the 'time' index.
data = np.array([dataFrames])
returns this (example, not actual data):
[[-0.00114415  0.02502565  0.00507831 ...,  0.00653057  0.02183072
  -0.00194293]   <- DataFrame 1 is here (ignore that the data doesn't match above)
 [-0.01527224  0.02899528 -0.00327654 ...,  0.0322364   0.01821731
  -0.00766773]   <- DataFrame 2 is here (ignore that the data doesn't match above)
 ....]]

Try
master_matrix = pd.concat(list_of_dfs, axis=1)
master_matrix = master_matrix.values.reshape(master_matrix.shape, order='F')
if each row in the final matrix corresponds to the same date, or
master_matrix = pd.concat(list_of_dfs, axis=1).values
otherwise.
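As a minimal sketch with two made-up single-column frames (hypothetical data, not the question's), the second variant stacks the frames side by side, one column per frame:
import numpy as np
import pandas as pd
list_of_dfs = [pd.DataFrame({'a': [1., 2.]}),
               pd.DataFrame({'a': [3., 4.]})]
master_matrix = pd.concat(list_of_dfs, axis=1).values
print(master_matrix)  # [[ 1.  3.]
                      #  [ 2.  4.]]
This gives one row per date and one column per frame; transpose (.T) if you want one row per frame instead.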
Edit to address the newly added example.
In this case, you can use np.vstack on columns returned from each dataframe.
import pandas as pd
import numpy as np
from io import StringIO
df1 = pd.read_csv(StringIO(u'''
time bid_close
2016-05-24 00:00:00 NaN
2016-05-24 00:05:00 0.000611
2016-05-24 00:10:00 -0.000244
2016-05-24 00:15:00 -0.000122
'''), sep=r' +')  # u'' literal so io.StringIO also works on Python 2
df2 = pd.read_csv(StringIO(u'''
time bid_close
2016-05-24 00:00:00 NaN
2016-05-24 00:05:00 0.000811
2016-05-24 00:10:00 -0.000744
2016-05-24 00:15:00 -0.000322
'''), sep=r' +')
dfs = [df1, df2]
out = np.vstack([df.iloc[:, -1].values for df in dfs])  # pass a list, not a generator
Result:
In [10]: out
Out[10]:
array([[      nan,  0.000611, -0.000244, -0.000122],
       [      nan,  0.000811, -0.000744, -0.000322]])

Setup
import pandas as pd
import numpy as np
df1 = pd.DataFrame([1, 2, 3, 4],
                   index=pd.date_range('2016-04-01', periods=4),
                   columns=pd.MultiIndex.from_tuples([('bid', 'close')]))
df2 = pd.DataFrame([5, 6, 7, 8],
                   index=pd.date_range('2016-03-01', periods=4),
                   columns=pd.MultiIndex.from_tuples([('bid', 'close')]))
print df1
             bid
           close
2016-04-01     1
2016-04-02     2
2016-04-03     3
2016-04-04     4
print df2
             bid
           close
2016-03-01     5
2016-03-02     6
2016-03-03     7
2016-03-04     8
Solution
df = np.concatenate([d.T.values for d in [df1, df2]])
print df
[[1 2 3 4]
 [5 6 7 8]]
Note
The indices were not required to line up; this just takes the raw array from each dataframe and lets np.concatenate do the rest. (The frames do need the same number of rows, since every stacked row must have the same length.)

Related

python2.7 pandas: how to get the previous 2 years of data in a dataframe indexed by every Friday of the week

I have a dataframe as follows; the index is a datetime (every Friday of the week).
            begin  close
date
2014-1-10     1.0    2.5
2014-1-17     2.6    2.6
...
2016-12-30    3.5    3.8
2017-6-16     4.5    4.7
I want to extract the previous 2 years of data, counting back from 2017-6-16. My code is the following:
import datetime
from dateutil.relativedelta import relativedelta
df_index = df.index
df_index_test = df_index[-1] - relativedelta(years=2)
df_test = df[df_index_test:-1]
But it seems to be wrong, since the day df_index_test lands on may not be in the dataframe.
Thanks!
You need boolean indexing; instead of relativedelta it is possible to use DateOffset:
df_test = df[df.index >= df_index_test]
Sample:
rng = pd.date_range('2001-04-03', periods=10, freq='15M')
df = pd.DataFrame({'a': range(10)}, index=rng)
print (df)
            a
2001-04-30  0
2002-07-31  1
2003-10-31  2
2005-01-31  3
2006-04-30  4
2007-07-31  5
2008-10-31  6
2010-01-31  7
2011-04-30  8
2012-07-31  9
df_test = df[df.index >= df.index[-1] - pd.offsets.DateOffset(years=2)]
print (df_test)
            a
2011-04-30  8
2012-07-31  9
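Worth noting: label slicing with .loc works on a sorted DatetimeIndex even when the exact cutoff date is absent from the index, so an equivalent sketch (assuming the index is sorted ascending) is:
df_test = df.loc[df.index[-1] - pd.offsets.DateOffset(years=2):]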

How to apply operations on rows of a dataframe, but with variable columns affected?

I have a dataframe that gets read in from csv and has extraneous data.
Judgment on what is extraneous is made by evaluating one column, SystemStart.
Any data in a row that sits under a column heading whose date is earlier than that row's SystemStart is set to NaN. For example, index 'one' has a SystemStart date of '2016-1-5', so once the pd.date_range is set up it has no NaN values to fill in; index 'three' is '2016-1-7' and hence has two NaN values replacing the original data.
I can go row-by-row and throw np.nan values at all columns, but that is slow. Is there a faster way?
I've created a representative dataframe below, and am looking to get the same result without iterative operations, or a way to speed up those operations. Any help would be greatly appreciated.
import pandas as pd
import numpy as np
start_date = '2016-1-05'
end_date = '2016-1-7'
dates = pd.date_range(start_date, end_date, freq='D')
dt_dates = pd.to_datetime(dates, unit='D')
ind = ['one', 'two', 'three']
df = pd.DataFrame(np.random.randint(0,100,size=(3, 3)), columns = dt_dates, index = ind)
df['SystemStart'] = pd.to_datetime(['2016-1-5', '2016-1-6', '2016-1-7'])
print 'Initial Dataframe: \n', df
for msn in df.index:
    zero_date_range = pd.date_range(start_date,
                                    df.loc[msn, 'SystemStart'] - pd.Timedelta(days=1),
                                    freq='D')
    # we set NaN for all date columns before SystemStart - this is a horribly slow way to do it
    df.loc[msn, zero_date_range] = np.NaN
print '\nAltered Dataframe: \n', df
Below are the df outputs, Initial and Altered:
Initial Dataframe:
       2016-01-05 00:00:00  2016-01-06 00:00:00  2016-01-07 00:00:00 \
one                     24                   23                   65
two                     21                   91                   59
three                   62                   77                    2

      SystemStart
one    2016-01-05
two    2016-01-06
three  2016-01-07

Altered Dataframe:
       2016-01-05 00:00:00  2016-01-06 00:00:00  2016-01-07 00:00:00 \
one                   24.0                 23.0                   65
two                    NaN                 91.0                   59
three                  NaN                  NaN                    2

      SystemStart
one    2016-01-05
two    2016-01-06
three  2016-01-07
First thing I do is make sure SystemStart is datetime
df.SystemStart = pd.to_datetime(df.SystemStart)
Then I strip out SystemStart to a separate series
st = df.SystemStart
Then I drop SystemStart from my df
d1 = df.drop('SystemStart', 1)
Then I convert the columns I have left to datetime
d1.columns = pd.to_datetime(d1.columns)
Finally I use numpy broadcasting to mask the appropriate cells and join SystemStart back in.
d1.where(d1.columns.values >= st.values[:, None]).join(st)
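Putting those steps together, a minimal end-to-end sketch (reusing df from the question's setup above):
st = pd.to_datetime(df.SystemStart)      # per-row cutoff dates
d1 = df.drop('SystemStart', 1)           # keep only the date columns
d1.columns = pd.to_datetime(d1.columns)
# broadcast: compare every column label against every row's cutoff at once
result = d1.where(d1.columns.values >= st.values[:, None]).join(st)
print result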

Difference between two dates in Pandas DataFrame

I have many columns in a data frame, and I have to find the time difference between two columns, named in_time and out_time, and put it in a new column in the same data frame.
The format of time is like this 2015-09-25T01:45:34.372Z.
I am using Pandas DataFrame.
I want to do like this:
df.days = df.out_time - df.in_time
I have many columns, and I have to add one more column named days and put the differences there.
You need to convert the strings to datetime dtype; you can then subtract whatever arbitrary date you want and call dt.days on the resulting series:
In [15]:
import datetime as dt  # needed below for dt.datetime.now()
df = pd.DataFrame({'date': ['2015-09-25T01:45:34.372Z']})
df
Out[15]:
                       date
0  2015-09-25T01:45:34.372Z
In [19]:
df['date'] = pd.to_datetime(df['date'])
df['day'] = (df['date'] - dt.datetime.now()).dt.days
df
Out[19]:
                     date  day
0 2015-09-25 01:45:34.372 -252
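Applied to the question's in_time and out_time columns, the same pattern would look like this (a sketch, assuming both hold strings in the format above):
df['in_time'] = pd.to_datetime(df['in_time'])
df['out_time'] = pd.to_datetime(df['out_time'])
df['days'] = (df['out_time'] - df['in_time']).dt.days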
Well, it all kinda depends on the time format you use. I'd recommend using datetime.
If in_time and out_time are currently strings, convert them with datetime.strptime():
from datetime import datetime
f = lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ')
df.in_time = df.in_time.apply(f)
df.out_time = df.out_time.apply(f)
and then you can simply subtract them, and assign the result to a new column named 'days':
df['days'] = df.out_time - df.in_time
Example (a 3-second and a 1-day difference):
In[5]: df = pd.DataFrame({'in_time': ['2015-09-25T01:45:34.372Z', '2015-09-25T01:45:34.372Z'],
                          'out_time': ['2015-09-25T01:45:37.372Z', '2015-09-26T01:45:34.372Z']})
In[6]: df
Out[6]:
                    in_time                  out_time
0  2015-09-25T01:45:34.372Z  2015-09-25T01:45:37.372Z
1  2015-09-25T01:45:34.372Z  2015-09-26T01:45:34.372Z
In[7]: type(df.loc[0,'in_time'])
Out[7]: str
In[8]: df.in_time = df.in_time.apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ'))
In[9]: df.out_time = df.out_time.apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ'))
In[10]: df # the values are the same, but the display and the type have changed
Out[10]:
                  in_time                out_time
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372
In[11]: type(df.loc[0,'in_time'])
Out[11]: pandas.tslib.Timestamp
And the creation of the new column:
In[12]: df['days'] = df.out_time - df.in_time
In[13]: df
Out[13]:
                  in_time                out_time            days
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372 0 days 00:00:03
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372 1 days 00:00:00
Now you can play with the output format. For example, express the difference in minutes:
In[14]: df.days = df.days.apply(lambda x: x.total_seconds()/60)
In[15]: df
Out[15]:
                  in_time                out_time     days
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372     0.05
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372  1440.00
Note: Regarding the in_time and out_time format, notice that I made some assumptions (for example, that you're using a 24H clock (thus using %H and not %I)). To play with the format have a look at: strptime() documentation.
Note2: It would obviously be better if you can design your program to use datetime from the beginning (instead of using strings and converting them).
First of all, you need to convert in_time and out_time columns to datetime type.
for col in ('in_time', 'out_time'): # convert both columns
    df[col] = pd.to_datetime(df[col])
You can check the type using dtypes:
df['in_time'].dtypes
Should give: datetime64[ns, UTC]
Now you can subtract them and get the time difference using dt.days, or with numpy using np.timedelta64.
Example:
import numpy as np
df['days'] = (df['out_time'] - df['in_time']).dt.days
# Or
df['days'] = (df['out_time'] - df['in_time']) / np.timedelta64(1, 'D')
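One difference between the two variants worth noting: .dt.days truncates to whole days, while dividing by np.timedelta64(1, 'D') keeps the fractional part. A quick check with a hypothetical 36-hour gap:
gap = pd.Series([pd.Timedelta(hours=36)])
print(gap.dt.days.iloc[0])                     # 1
print((gap / np.timedelta64(1, 'D')).iloc[0])  # 1.5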

Python 2.7 CSV graph time format

I have a CSV file where one of the columns holds timestamps, but when I use numpy.genfromtxt it reads them as strings. My goal is to create a graph with a normal time format; I'd prefer seconds only.
This is the array I get from the code below:
array([('0:00:00',), ('0:00:00.001000',), ('0:00:00.002000',),
       ('0:00:00.081000',), ('0:00:00.095000',), ('0:00:00.195000',),
       ('0:00:00.294000',), ...
This is my code:
col1 = numpy.genfromtxt("mycsv.csv",usecols=(1),delimiter=',',dtype=None, names=True)
The problem I am having is that the values come back as strings, but I need them in seconds (the microseconds can be ignored or not). How can I achieve that?
If you can, the best way to work with csv files in Python is to use pandas; it takes care of this for you. I will assume the name of the time column is time, change it to whatever you use:
>>> import numpy as np
>>> import pandas as pd
>>>
>>> df = pd.read_csv('test.csv', parse_dates=[1]) # read time as date
>>> print(df)
   test1                    time  test2  test3
0      5 2015-08-20 00:00:00.000     10   11.7
1      5 2015-08-20 00:00:00.001     11   11.6
2      5 2015-08-20 00:00:00.002     12   11.5
3      5 2015-08-20 00:00:00.081     13   11.4
4      5 2015-08-20 00:00:00.095     14   11.3
5      5 2015-08-20 00:00:00.195     15   11.2
6      5 2015-08-20 00:00:00.294     16   11.1
>>> df['time'] -= pd.datetime.now().date() # convert to timedelta
>>> print(df)
   test1            time  test2  test3
0      5        00:00:00     10   11.7
1      5 00:00:00.001000     11   11.6
2      5 00:00:00.002000     12   11.5
3      5 00:00:00.081000     13   11.4
4      5 00:00:00.095000     14   11.3
5      5 00:00:00.195000     15   11.2
6      5 00:00:00.294000     16   11.1
>>> df['time'] /= np.timedelta64(1,'s') # convert to seconds
>>> print(df)
   test1   time  test2  test3
0      5  0.000     10   11.7
1      5  0.001     11   11.6
2      5  0.002     12   11.5
3      5  0.081     13   11.4
4      5  0.095     14   11.3
5      5  0.195     15   11.2
6      5  0.294     16   11.1
You can work with pandas dataframes (what you have here) and series (what you would get from a single column, such as df['time']) in most of the same ways as numpy arrays, including plotting. However, if you really, really need to convert it to a numpy array, it is as easy as arr = df['time'].values.
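Since the end goal is a graph, the converted column can be plotted directly; a sketch, assuming matplotlib is available and using the hypothetical test2 column from the example above:
import matplotlib.pyplot as plt
df.plot(x='time', y='test2')
plt.xlabel('seconds')
plt.show()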
use the datetime library
from datetime import datetime
for (s,) in array: # each record of the structured array holds a single string field
    timestamp = datetime.strptime(s, '%H:%M:%S.%f')
    # note: entries with no fractional part, such as '0:00:00', need '%H:%M:%S' instead
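To turn a parsed value into seconds, subtract the default date strptime fills in (1900-01-01) and call total_seconds(); a small sketch:
from datetime import datetime
t = datetime.strptime('0:00:00.294000', '%H:%M:%S.%f')
seconds = (t - datetime(1900, 1, 1)).total_seconds()  # 0.294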
You can use a converter for the timestamp field.
For example, suppose times.dat contains:
time
0:00:00
0:00:00.001000
0:00:00.002000
0:00:00.081000
0:00:00.095000
0:00:00.195000
0:00:00.294000
Define a converter that converts a timestamp string into the number of seconds as a floating point value:
In [5]: def convert_timestamp(s):
   ...:     h, m, s = [float(w) for w in s.split(':')]
   ...:     return h*3600 + m*60 + s
   ...:
Then use the converter in genfromtxt:
In [21]: np.genfromtxt('times.dat', skip_header=1, converters={0: convert_timestamp})
Out[21]: array([ 0.   ,  0.001,  0.002,  0.081,  0.095,  0.195,  0.294])
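As an alternative worth mentioning: pandas can parse such strings directly as timedeltas, which avoids the manual converter entirely (a sketch, assuming pandas is acceptable for this task):
import pandas as pd
seconds = pd.to_timedelta(pd.read_csv('times.dat')['time']).dt.total_seconds()
# 0    0.000
# 1    0.001
# ... and so on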

Pandas Aggregate/Group by based on most recent date

I have a DataFrame as follows, where Id is a string and Date is a datetime:
Id  Date
1   3-1-2012
1   4-8-2013
2   1-17-2013
2   5-4-2013
2   10-30-2012
3   1-3-2013
I'd like to consolidate the table to just show one row for each Id which has the most recent date.
Any thoughts on how to do this?
You can groupby the Id field:
In [11]: df
Out[11]:
  Id                Date
0  1 2012-03-01 00:00:00
1  1 2013-04-08 00:00:00
2  2 2013-01-17 00:00:00
3  2 2013-05-04 00:00:00
4  2 2012-10-30 00:00:00
5  3 2013-01-03 00:00:00
In [12]: g = df.groupby('Id')
If you are not certain about the ordering, you could do something along the lines of:
In [13]: g.agg(lambda x: x.iloc[x.Date.argmax()])
Out[13]:
                   Date
Id
1   2013-04-08 00:00:00
2   2013-05-04 00:00:00
3   2013-01-03 00:00:00
which for each group grabs the row with largest (latest) date (the argmax part).
If you knew they were in order you could take the last (or first) entry:
In [14]: g.last()
Out[14]:
                   Date
Id
1   2013-04-08 00:00:00
2   2012-10-30 00:00:00
3   2013-01-03 00:00:00
(Note: they're not in order, so this doesn't work in this case!)
In Hayden's response, I think that using x.loc in place of x.iloc is better, as the index of the df dataframe could be sparse (in which case iloc will not work).
(I don't have enough points on Stack Overflow to post this as a comment on that answer.)
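For completeness, a common idiom that sidesteps the positional-vs-label issue entirely is to collect the label of each group's maximum Date with idxmax and select those rows with loc; a sketch, assuming Date is already a datetime column and the index labels are unique:
df_latest = df.loc[df.groupby('Id')['Date'].idxmax()]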