I have an object column whose values are dates. I manually placed 2016-08-31 in place of NaN after reading from CSV.
close_date
0 1948-06-01 00:00:00
1 2016-08-31 00:00:00
2 2016-08-31 00:00:00
3 1947-07-01 00:00:00
4 1967-05-31 00:00:00
Running df['close_date'] = pd.to_datetime(df['close_date']) results in
TypeError: invalid string coercion to datetime
Adding a coerce=True argument results in:
TypeError: to_datetime() got an unexpected keyword argument 'coerce'
Furthermore, even though I only target the 'close_date' column, all the columns in the dataframe (some int64, float64, and datetime64[ns]) change to dtype object.
What am I doing wrong?
You need the errors='coerce' parameter, which converts unparseable values to NaT:
df['close_date'] = pd.to_datetime(df['close_date'], errors='coerce')
print (df)
close_date
0 1948-06-01
1 2016-08-31
2 2016-08-31
3 1947-07-01
4 1967-05-31
print (df['close_date'].dtypes)
datetime64[ns]
But if there are mixed values (e.g., numeric values mixed with datetimes), convert to str first:
df['close_date'] = pd.to_datetime(df['close_date'].astype(str), errors='coerce')
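For illustration, here is a minimal sketch with hypothetical mixed data (a real Timestamp, a plain string, and junk text) showing the astype(str) step together with errors='coerce':
import pandas as pd
# hypothetical object column mixing a real Timestamp, a string date, and junk text
df = pd.DataFrame({'close_date': [pd.Timestamp('1948-06-01'),
                                  '2016-08-31 00:00:00',
                                  'not a date']})
# cast everything to str first, then parse; unparseable values become NaT
df['close_date'] = pd.to_datetime(df['close_date'].astype(str), errors='coerce')
print(df)
print(df['close_date'].dtype)   # datetime64[ns]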
I am reading a csv in python with multiple columns.
The first column is the date, and I have to delete the rows that correspond to years prior to 2018.
time high low Volume Plot Rango
0 2017-12-22 25.17984 24.280560 970 0.329943 0.899280
1 2017-12-26 25.17984 23.381280 2579 1.057921 1.798560
2 2017-12-27 25.17984 23.381280 2499 0.998083 1.798560
3 2017-12-28 25.17984 24.280560 1991 0.919885 0.899280
4 2017-12-29 25.17984 24.100704 2703 1.237694 1.079136
.. ... ... ... ... ... ...
580 2020-04-16 5.45000 4.450000 117884 3.168380 1.000000
581 2020-04-17 5.35000 4.255200 58531 1.370538 1.094800
582 2020-04-20 4.66500 4.100100 25770 0.582999 0.564900
583 2020-04-21 4.42000 3.800000 20914 0.476605 0.620000
584 2020-04-22 4.22000 3.710100 23212 0.519275 0.509900
I want to delete the rows corresponding to years prior to 2018, so 2017, 2016, 2015, ... should be deleted.
I am trying this, but it does not work:
if 2017 in datos['time']: datos['time'].remove() #check if number 2017 is in each of the items of the column 'time'
The dates are recognized as numbers, not as datetime, but I think I do not need to declare the column as datetime.
In pandas, given your data, use boolean indexing. The time column must be datetime64[ns] format; df.info() will give the dtypes.
df['time'] = pd.to_datetime(df['time'])
df[df['time'].dt.year >= 2018]
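Putting it together for the question's datos frame (a sketch; the CSV path is hypothetical):
import pandas as pd
datos = pd.read_csv('datos.csv')   # hypothetical path
# parse the column, then keep only rows from 2018 onwards
datos['time'] = pd.to_datetime(datos['time'])
datos = datos[datos['time'].dt.year >= 2018]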
I have a lot of data, and the data is pretty dirty. Example:
The ORM model for table A:
class A(models.Model):
    id = models.CharField(default='', max_length=50)
    time = models.DateTimeField(default=timezone.now)
    number = models.CharField(default='', max_length=20)
    value = models.CharField(default='', max_length=20)

    class Meta:
        unique_together = ['id', 'time', 'number']
Data already in table A:
id time number value
1 2018-07-16 00:00:00 1 64
1 2018-07-16 00:00:00 2 -99
1 2018-07-16 00:00:00 3 655
1 2018-07-16 00:00:00 4 3
2 2018-07-16 00:00:00 0 12
Import data (sample):
id time number value
1 2018-07-16 00:00:00 1 64
3 2018-07-16 00:00:00 0 -99
3 2018-07-16 00:00:00 0 11
4 2018-07-16 00:00:00 0 -99
4 2018-07-16 00:00:00 1 -99
So, when I do:
objs = []
for ...:   # loop over the import rows
    objs.append(A(**kwargs))
A.objects.bulk_create(objs, batch_size=50000)
it raises two kinds of duplicates:
1. The row "1 2018-07-16 00:00:00 1" already exists in table A.
2. The import data contains "3 2018-07-16 00:00:00 0" twice, so it appears twice in objs; when I bulk create, it raises a duplicate error and then rolls back the whole commit!
For case 1, I can use get_or_create to solve it,
but for case 2, I can't check whether the data I'm about to append already exists in objs.
I tried using this to check for existence, but when there are over 1,000,000 rows of data the complexity is terrible:
def search(id, time, number, objs):
    for obj in objs:
        if obj['id'] == id and obj['time'] == time and obj['number'] == number:
            return True
    return False
Is there a better way? Thanks.
You can add a tuple with id, time and number to a set:
objs = []
duplicate_check = set()
for ...:   # same loop over the import rows
    data = kwargs['id'], kwargs['time'], kwargs['number']
    if data not in duplicate_check:
        objs.append(A(**kwargs))
        duplicate_check.add(data)
A.objects.bulk_create(objs, batch_size=50000)
Set membership tests and insertions have average-case O(1) complexity, so the whole loop stays linear in the number of rows.
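Here is a standalone sketch of the same idea, using the sample import rows from the question (plain dicts stand in for the kwargs built inside the real loop):
rows = [
    {'id': '1', 'time': '2018-07-16 00:00:00', 'number': '1', 'value': '64'},
    {'id': '3', 'time': '2018-07-16 00:00:00', 'number': '0', 'value': '-99'},
    {'id': '3', 'time': '2018-07-16 00:00:00', 'number': '0', 'value': '11'},
    {'id': '4', 'time': '2018-07-16 00:00:00', 'number': '0', 'value': '-99'},
    {'id': '4', 'time': '2018-07-16 00:00:00', 'number': '1', 'value': '-99'},
]
seen = set()
unique_rows = []
for kwargs in rows:
    key = (kwargs['id'], kwargs['time'], kwargs['number'])
    if key not in seen:            # O(1) average-case membership test
        seen.add(key)
        unique_rows.append(kwargs)
print(len(unique_rows))   # 4 -- the second (3, ..., 0) row was dropped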
I have a dataframe that gets read in from csv and has extraneous data.
Judgment on what is extraneous is made by evaluating one column, SystemStart.
Any value in a row that sits in a column whose heading date is earlier than that row's SystemStart is set to NaN. For example, index 'one' has a SystemStart date of '2016-1-5', and when the pd.date_range is set up, it has no NaN values to populate. Index 'three' is '2016-1-7' and hence has two NaN values replacing the original data.
I can go row-by-row and throw np.nan values at all columns, but that is slow. Is there a faster way?
I've created a representative dataframe below, and am looking to get the same result without iterative operations, or a way to speed up those operations. Any help would be greatly appreciated.
import pandas as pd
import numpy as np

start_date = '2016-1-05'
end_date = '2016-1-7'
dates = pd.date_range(start_date, end_date, freq='D')
dt_dates = pd.to_datetime(dates, unit='D')
ind = ['one', 'two', 'three']
df = pd.DataFrame(np.random.randint(0, 100, size=(3, 3)), columns=dt_dates, index=ind)
df['SystemStart'] = pd.to_datetime(['2016-1-5', '2016-1-6', '2016-1-7'])

print('Initial Dataframe: \n', df)

for msn in df.index:
    zero_date_range = pd.date_range(start_date, df.loc[msn, 'SystemStart'] - pd.Timedelta(days=1), freq='D')
    # we set NaN for all date columns before this row's SystemStart - this is a horribly slow way to do it
    df.loc[msn, zero_date_range] = np.NaN

print('\nAltered Dataframe: \n', df)
Below are the df outputs, Initial and Altered:
Initial Dataframe:
2016-01-05 00:00:00 2016-01-06 00:00:00 2016-01-07 00:00:00 \
one 24 23 65
two 21 91 59
three 62 77 2
SystemStart
one 2016-01-05
two 2016-01-06
three 2016-01-07
Altered Dataframe:
2016-01-05 00:00:00 2016-01-06 00:00:00 2016-01-07 00:00:00 \
one 24.0 23.0 65
two NaN 91.0 59
three NaN NaN 2
SystemStart
one 2016-01-05
two 2016-01-06
three 2016-01-07
The first thing I do is make sure SystemStart is a datetime:
df.SystemStart = pd.to_datetime(df.SystemStart)
Then I strip out SystemStart to a separate series
st = df.SystemStart
Then I drop SystemStart from my df:
d1 = df.drop('SystemStart', axis=1)
Then I convert the columns I have left to datetime
d1.columns = pd.to_datetime(d1.columns)
Finally I use numpy broadcasting to mask the appropriate cells and join SystemStart back in.
d1.where(d1.columns.values >= st.values[:, None]).join(st)
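Putting the steps together (a sketch, assuming df has the same layout as the question's example):
df['SystemStart'] = pd.to_datetime(df['SystemStart'])
st = df['SystemStart']
d1 = df.drop('SystemStart', axis=1)
d1.columns = pd.to_datetime(d1.columns)
# broadcast: compare every column label against each row's SystemStart;
# cells in columns earlier than that row's SystemStart become NaN
result = d1.where(d1.columns.values >= st.values[:, None]).join(st)
print(result)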
I have many columns in a data frame, and I have to find the time difference between two columns named in_time and out_time and put it in a new column in the same data frame.
The time format looks like this: 2015-09-25T01:45:34.372Z.
I am using Pandas DataFrame.
I want to do like this:
df.days = df.out_time - df.in_time
I have many columns, and I need to add one more column named days and put the differences there.
You need to convert the strings to datetime dtype; you can then subtract whatever arbitrary date you want and call dt.days on the resulting series:
In [15]:
df = pd.DataFrame({'date':['2015-09-25T01:45:34.372Z']})
df
Out[15]:
date
0 2015-09-25T01:45:34.372Z
In [19]:
import datetime as dt
df['date'] = pd.to_datetime(df['date'])
df['day'] = (df['date'] - dt.datetime.now()).dt.days
df
Out[19]:
date day
0 2015-09-25 01:45:34.372 -252
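Applied to the question's two columns instead of the current time (a sketch with made-up timestamps in the same format):
df = pd.DataFrame({'in_time':  ['2015-09-25T01:45:34.372Z'],
                   'out_time': ['2015-09-27T03:45:34.372Z']})
df['in_time'] = pd.to_datetime(df['in_time'])
df['out_time'] = pd.to_datetime(df['out_time'])
df['days'] = (df['out_time'] - df['in_time']).dt.days   # 2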
Well, it all kinda depends on the time format you use. I'd recommend using datetime.
If in_time and out_time are currently strings, convert them with datetime.strptime():
from datetime import datetime
f = lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ')
df.in_time = df.in_time.apply(f)
df.out_time = df.out_time.apply(f)
and then you can simply subtract them, and assign the result to a new column named 'days':
df['days'] = df.out_time - df.in_time
Example (a 3-second difference and a 1-day difference):
In[5]: df = pd.DataFrame({'in_time':['2015-09-25T01:45:34.372Z','2015-09-25T01:45:34.372Z'],
'out_time':['2015-09-25T01:45:37.372Z','2015-09-26T01:45:34.372Z']})
In[6]: df
Out[6]:
in_time out_time
0 2015-09-25T01:45:34.372Z 2015-09-25T01:45:37.372Z
1 2015-09-25T01:45:34.372Z 2015-09-26T01:45:34.372Z
In[7]: type(df.loc[0,'in_time'])
Out[7]: str
In[8]: df.in_time = df.in_time.apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ'))
In[9]: df.out_time = df.out_time.apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S.%fZ'))
In[10]: df # notice that it looks almost the same, but the type is different
Out[10]:
in_time out_time
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372
In[11]: type(df.loc[0,'in_time'])
Out[11]: pandas.tslib.Timestamp
And the creation of the new column:
In[12]: df['days'] = df.out_time - df.in_time
In[13]: df
Out[13]:
in_time out_time days
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372 0 days 00:00:03
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372 1 days 00:00:00
Now you can play with the output format. For example, to get the difference in minutes:
In[14]: df.days = df.days.apply(lambda x: x.total_seconds()/60)
In[15]: df
Out[15]:
in_time out_time days
0 2015-09-25 01:45:34.372 2015-09-25 01:45:37.372 0.05
1 2015-09-25 01:45:34.372 2015-09-26 01:45:34.372 1440.00
Note: Regarding the in_time and out_time format, notice that I made some assumptions (for example, that you're using a 24-hour clock, hence %H and not %I). To play with the format, have a look at the strptime() documentation.
Note 2: It would obviously be better if you could design your program to use datetime from the beginning (instead of using strings and converting them).
First of all, you need to convert the in_time and out_time columns to datetime type:
for col in ('in_time', 'out_time'):
    df[col] = pd.to_datetime(df[col])
You can check the type using dtypes:
df['in_time'].dtypes
Should give: datetime64[ns, UTC]
Now you can subtract them and get the time difference using dt.days, or with numpy using np.timedelta64.
Example:
import numpy as np
df['days'] = (df['out_time'] - df['in_time']).dt.days
# Or
df['days'] = (df['out_time'] - df['in_time']) / np.timedelta64(1, 'D')
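A quick sketch with made-up timestamps showing the difference between the two options: dt.days truncates to whole days, while dividing by np.timedelta64(1, 'D') keeps the fractional part:
import pandas as pd
import numpy as np
df = pd.DataFrame({'in_time':  ['2015-09-25T00:00:00.000Z'],
                   'out_time': ['2015-09-26T12:00:00.000Z']})
for col in ('in_time', 'out_time'):
    df[col] = pd.to_datetime(df[col])
print((df['out_time'] - df['in_time']).dt.days)                    # 1
print((df['out_time'] - df['in_time']) / np.timedelta64(1, 'D'))   # 1.5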
I have a DataFrame as follows, where Id is a string and Date is a datetime:
Id Date
1 3-1-2012
1 4-8-2013
2 1-17-2013
2 5-4-2013
2 10-30-2012
3 1-3-2013
I'd like to consolidate the table to just show one row for each Id which has the most recent date.
Any thoughts on how to do this?
You can groupby the Id field:
In [11]: df
Out[11]:
Id Date
0 1 2012-03-01 00:00:00
1 1 2013-04-08 00:00:00
2 2 2013-01-17 00:00:00
3 2 2013-05-04 00:00:00
4 2 2012-10-30 00:00:00
5 3 2013-01-03 00:00:00
In [12]: g = df.groupby('Id')
If you are not certain about the ordering, you could do something along these lines:
In [13]: g.agg(lambda x: x.iloc[x.Date.argmax()])
Out[13]:
Date
Id
1 2013-04-08 00:00:00
2 2013-05-04 00:00:00
3 2013-01-03 00:00:00
which for each group grabs the row with largest (latest) date (the argmax part).
If you knew they were in order you could take the last (or first) entry:
In [14]: g.last()
Out[14]:
Date
Id
1 2013-04-08 00:00:00
2 2012-10-30 00:00:00
3 2013-01-03 00:00:00
(Note: they're not in order, so this doesn't work in this case!)
In Hayden's response, I think using x.loc in place of x.iloc is better, as the index of the df dataframe could be sparse (and in that case iloc will not work).
(I don't have enough reputation on Stack Overflow to post this as a comment on that response.)
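For completeness, a compact alternative that avoids the lambda entirely (a sketch, assuming Date is already datetime64[ns]): take each group's index label of the latest Date with idxmax and select those rows with loc:
latest = df.loc[df.groupby('Id')['Date'].idxmax()]
print(latest)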