python/pandas:need help adding double quotes to columns - python-2.7
I need to add double quotes to specific columns in a csv file that my script generates.
Below is the goofy way I thought of doing this. For these two fixed-width fields, it works:
df['DATE'] = df['DATE'].str.ljust(9,'"')
df['DATE'] = df['DATE'].str.rjust(10,'"')
df['DEPT CODE'] = df['DEPT CODE'].str.ljust(15,'"')
df['DEPT CODE'] = df['DEPT CODE'].str.rjust(16,'"')
For the following field, it doesn't work: the field has a variable length, so if the value is shorter than the standard six digits, I get extra double quotes: "5673"""
df['ID'] = df['ID'].str.ljust(7,'"')
df['ID'] = df['ID'].str.rjust(8,'"')
I have tried zfill, but the data in the column is a Series; I get "pandas.core.series.Series" when I run
print type(df['ID'])
and I have not been able to convert it to a string using astype. I'm not sure why. I have not imported numpy.
I tried using len() to get the length of the ID number and pass it to str.ljust and str.rjust as their first argument, but I think it got hung up on the data not being a string.
Is there a simpler way to apply double-quotes as I need, or is the zfill going to be the way to go?
You can concatenate a quote character before and after:
In [11]: df = pd.DataFrame([["a"]], columns=["A"])
In [12]: df
Out[12]:
A
0 a
In [13]: '"' + df['A'] + '"'
Out[13]:
0 "a"
Name: A, dtype: object
Assigning this back:
In [14]: df['A'] = '"' + df.A + '"'
In [15]: df
Out[15]:
A
0 "a"
If it's for exporting to csv you can use the quoting kwarg:
In [21]: df = pd.DataFrame([["a"]], columns=["A"])
In [22]: df.to_csv()
Out[22]: ',A\n0,a\n'
In [23]: df.to_csv(quoting=1)
Out[23]: '"","A"\n"0","a"\n'
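The numeric value 1 passed to quoting corresponds to csv.QUOTE_ALL; using the named constants from the stdlib csv module is clearer. A small sketch (assuming only pandas and the standard csv module):

```python
import csv
import pandas as pd

df = pd.DataFrame([["a", 1]], columns=["A", "B"])

# QUOTE_ALL (== 1): quote every field, including the index and header
all_quoted = df.to_csv(quoting=csv.QUOTE_ALL)

# QUOTE_NONNUMERIC (== 2): quote only the non-numeric fields
nonnum_quoted = df.to_csv(quoting=csv.QUOTE_NONNUMERIC, index=False)

print(all_quoted)
print(nonnum_quoted)
```

QUOTE_ALL quotes everything, matching the quoting=1 output above; QUOTE_NONNUMERIC leaves numeric cells bare, which may be closer to what the question asks for ("specific columns").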
With numpy, rather than pandas, you can specify the formatting method when saving to a csv file. As a very simple example:
In [209]: np.savetxt('test.txt',['string'],fmt='%r')
In [210]: cat test.txt
'string'
In [211]: np.savetxt('test.txt',['string'],fmt='"%s"')
In [212]: cat test.txt
"string"
I would expect the pandas csv writer to have a similar degree of control, if not more.
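For the record, pandas' csv writer does offer a similar knob for numeric formatting; a small sketch (assuming only pandas) showing float_format, the rough analogue of numpy's fmt:

```python
import pandas as pd

df = pd.DataFrame({"x": [3.14159, 2.0]})

# float_format controls how floats are rendered, much like numpy's fmt
out = df.to_csv(float_format='%.2f', index=False)
print(out)
```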
Related
Convert blank cells to NaN in Excel File
When parsing an excel file in Pandas, xls = pd.ExcelFile('file.xlsx') df = xls.parse(0, parse_dates=[0, 1]) Is there a way to convert all of the blank cells to NaN rather than to 0?
You can try with: df = df.replace('', np.nan, regex=True)
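A runnable sketch of that replace call; the `^\s*$` pattern here is an extension beyond the original answer, catching whitespace-only cells as well as truly empty ones:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": ["x", "", "y"], "B": [" ", "z", ""]})

# Replace empty and whitespace-only cells with NaN
df = df.replace(r'^\s*$', np.nan, regex=True)
print(df)
```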
deleting colons in a column of csv in python
I have a column of different times and I want to find the values between two different times but can't figure out how. For example: 09:04:00 through 09:25:00, and just use the values between those times. I was going to delete the colons separating hours:minutes:seconds and do it that way, but I don't really know how to do that either. I do know how to find a value in a column, so I figured that way would be easier. Here is the csv I'm working with:

DATE,TIME,OPEN,HIGH,LOW,CLOSE,VOLUME
02/03/1997,09:04:00,3046.00,3048.50,3046.00,3047.50,505
02/03/1997,09:05:00,3047.00,3048.00,3046.00,3047.00,162
02/03/1997,09:06:00,3047.50,3048.00,3047.00,3047.50,98
02/03/1997,09:07:00,3047.50,3047.50,3047.00,3047.50,228
02/03/1997,09:08:00,3048.00,3048.00,3047.50,3048.00,136
02/03/1997,09:09:00,3048.00,3048.00,3046.50,3046.50,174
02/03/1997,09:10:00,3046.50,3046.50,3045.00,3045.00,134
02/03/1997,09:11:00,3045.50,3046.00,3044.00,3045.00,43
02/03/1997,09:12:00,3045.00,3045.50,3045.00,3045.00,214
02/03/1997,09:13:00,3045.50,3045.50,3045.50,3045.50,8
02/03/1997,09:14:00,3045.50,3046.00,3044.50,3044.50,152
02/03/1997,09:15:00,3044.00,3044.00,3042.50,3042.50,126
02/03/1997,09:16:00,3043.50,3043.50,3043.00,3043.00,128
02/03/1997,09:17:00,3042.50,3043.50,3042.50,3043.50,23
02/03/1997,09:18:00,3043.50,3044.50,3043.00,3044.00,51
02/03/1997,09:19:00,3044.50,3044.50,3043.00,3043.00,18
02/03/1997,09:20:00,3043.00,3045.00,3043.00,3045.00,23
02/03/1997,09:21:00,3045.00,3045.00,3044.50,3045.00,51
02/03/1997,09:22:00,3045.00,3045.00,3045.00,3045.00,47
02/03/1997,09:23:00,3045.50,3046.00,3045.00,3045.00,77
02/03/1997,09:24:00,3045.00,3045.00,3045.00,3045.00,131
02/03/1997,09:25:00,3044.50,3044.50,3043.50,3043.50,138
02/03/1997,09:26:00,3043.50,3043.50,3043.50,3043.50,6
02/03/1997,09:27:00,3043.50,3043.50,3043.00,3043.00,56
02/03/1997,09:28:00,3043.00,3044.00,3043.00,3044.00,32
02/03/1997,09:29:00,3044.50,3044.50,3044.50,3044.50,63
02/03/1997,09:30:00,3045.00,3045.00,3045.00,3045.00,28
02/03/1997,09:31:00,3045.00,3045.50,3045.00,3045.50,75
02/03/1997,09:32:00,3045.50,3045.50,3044.00,3044.00,54
02/03/1997,09:33:00,3043.50,3044.50,3043.50,3044.00,96
02/03/1997,09:34:00,3044.00,3044.50,3044.00,3044.50,27
02/03/1997,09:35:00,3044.50,3044.50,3043.50,3044.50,44
02/03/1997,09:36:00,3044.00,3044.00,3043.00,3043.00,61
02/03/1997,09:37:00,3043.50,3043.50,3043.50,3043.50,18

Thanks for the time
If you just want to replace colons with commas you can use the built-in string replace function:

line = '02/03/1997,09:04:00,3046.00,3048.50,3046.00,3047.50,505'
line = line.replace(':', ',')
print(line)

Output:

02/03/1997,09,04,00,3046.00,3048.50,3046.00,3047.50,505

Then split on commas to separate the data:

line.split(',')

If you only want the numerical values you could also do the following (using a regular expression):

import re
line = '02/03/1997,09:04:00,3046.00,3048.50,3046.00,3047.50,505'
values = [float(x) for x in re.sub(r'[^\w.]+', ',', line).split(',')]
print(values)

Which gives you a list of numerical values that you can process:

[2.0, 3.0, 1997.0, 9.0, 4.0, 0.0, 3046.0, 3048.5, 3046.0, 3047.5, 505.0]
Use the csv module! :)

>>> import csv
>>> with open('myFile.csv', newline='') as csvfile:
...     myCsvreader = csv.reader(csvfile, delimiter=',', quotechar='|')
...     for row in myCsvreader:
...         for item in row:
...             item.split(':')  # returns the time parts without colons

Once you have extracted the different time stamps, you can use the datetime module, such as:

from datetime import datetime, date, time

x = time(hour=9, minute=30, second=30)
y = time(hour=9, minute=30, second=42)
diff = datetime.combine(date.today(), y) - datetime.combine(date.today(), x)
print(diff.total_seconds())
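Since the data is already comma-separated with a TIME column, a pandas approach can skip the colon surgery entirely and select the window directly. A sketch assuming the file layout shown in the question (only the first few rows reproduced here, read from an in-memory string for self-containment):

```python
import io
import pandas as pd

csv_text = """DATE,TIME,OPEN,HIGH,LOW,CLOSE,VOLUME
02/03/1997,09:04:00,3046.00,3048.50,3046.00,3047.50,505
02/03/1997,09:05:00,3047.00,3048.00,3046.00,3047.00,162
02/03/1997,09:26:00,3043.50,3043.50,3043.50,3043.50,6
"""

df = pd.read_csv(io.StringIO(csv_text))

# Plain string comparison works here because HH:MM:SS is zero-padded,
# so lexicographic order matches chronological order
window = df[(df['TIME'] >= '09:04:00') & (df['TIME'] <= '09:25:00')]
print(window)
```

For files spanning multiple days you would parse DATE and TIME into a real datetime instead, but for a single session the string comparison is enough.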
python replace string function throws asterisk wildcard error
When I use * I receive the error:

raise error, v # invalid expression
error: nothing to repeat

Other wildcard characters such as ^ work fine. The line of code:

df.columns = df.columns.str.replace('*agriculture', 'agri')

I am using pandas and python.

edit: when I try using / to escape, the wildcard does not work as I intend:

In [44]: df = pd.DataFrame(columns=['agriculture', 'dfad agriculture df'])
In [45]: df
Out[45]:
Empty DataFrame
Columns: [agriculture, dfad agriculture df]
Index: []
In [46]: df.columns.str.replace('/*agriculture*', 'agri')
Out[46]: Index([u'agri', u'dfad agri df'], dtype='object')

I thought the wildcard should output Index([u'agri', u'agri'], dtype='object').

edit: I am currently using hierarchical columns and would like to only replace agri for that specific level (level = 2).

original:
df.columns[0] = ('grand total', '2005', 'agriculture')
df.columns[1] = ('grand total', '2005', 'other')

desired:
df.columns[0] = ('grand total', '2005', 'agri')
df.columns[1] = ('grand total', '2005', 'other')

I'm looking at this link right now: Changing columns names in Pandas with hierarchical columns and that author says it will get easier at 0.15.0, so I am hoping there are more recent updated solutions.
You need to put the asterisk * at the end: in a regex, * repeats the preceding character 0 or more times, so it cannot come first (see the docs):

In [287]: df = pd.DataFrame(columns=['agriculture'])
df
Out[287]:
Empty DataFrame
Columns: [agriculture]
Index: []
In [289]: df.columns.str.replace('agriculture*', 'agri')
Out[289]: Index(['agri'], dtype='object')

EDIT

Based on your new and actual requirements, you can use str.contains to find matches, use those to build a dict mapping the old names to the new, and then call rename:

In [307]: matching_cols = df.columns[df.columns.str.contains('agriculture')]
df.rename(columns=dict(zip(matching_cols, ['agri'] * len(matching_cols))))
Out[307]:
Empty DataFrame
Columns: [agri, agri]
Index: []
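For the hierarchical-columns part of the question, DataFrame.rename also accepts a level argument on modern pandas, so only the innermost label needs touching. A sketch, with column tuples mirroring the question's example:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([
    ('grand total', '2005', 'agriculture'),
    ('grand total', '2005', 'other'),
])
df = pd.DataFrame(columns=cols)

# Rename only within level 2 (the innermost level); labels in the
# outer levels are left untouched
df = df.rename(columns={'agriculture': 'agri'}, level=2)
print(df.columns.tolist())
```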
AttributeError: 'DataFrame' object has no attribute 'Height'
I am able to convert a csv file to a pandas DataFrame and to print out the table, as seen below. However, when I try to print out the Height column I get an error. How can I fix this?

import pandas as pd
df = pd.read_csv('/path../NavieBayes.csv')
print df  # this prints out as seen below
print df.Height  # this gives me "AttributeError: 'DataFrame' object has no attribute 'Height'"

   Height  Weight Classifer
0    70.0     180     Adult
1    58.0     109     Adult
2    59.0     111     Adult
3    60.0     113     Adult
4    61.0     115     Adult
I have run into a similar issue before when reading from csv. Assuming it is the same:

col_name = df.columns[0]
df = df.rename(columns={col_name: 'new_name'})

The error in my case was caused (I think) by a byte order mark in the csv, or some other non-printing character being added to the first column label. df.columns returns an array of the column names, and df.columns[0] gets the first one. Try printing it and see if something is odd in the result.
PS On the above answer by JAB: if there are clearly spaces in your column names, use skipinitialspace=True in read_csv, e.g.

df = pd.read_csv('/path../NavieBayes.csv', skipinitialspace=True)
df = pd.read_csv(r'path_of_file\csv_file_name.csv')

OR

df = pd.read_csv('path_of_file/csv_file_name.csv')

Example:

data = pd.read_csv(r'F:\Desktop\datasets\hackathon+data+set.csv')

Try it, it will work.
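If the stray character really is a byte order mark, asking pandas to strip it at read time avoids the rename entirely. A sketch, assuming the file was saved as UTF-8 with a BOM (simulated here with an in-memory buffer):

```python
import io
import pandas as pd

# A CSV whose first bytes are a UTF-8 BOM (\xef\xbb\xbf), as written
# by e.g. Excel's "CSV UTF-8" export
raw = b'\xef\xbb\xbfHeight,Weight\n70.0,180\n'

# encoding='utf-8-sig' consumes the BOM, so the first header comes
# through as 'Height' rather than '\ufeffHeight'
df = pd.read_csv(io.BytesIO(raw), encoding='utf-8-sig')
print(df.columns.tolist())
```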
Print columns of Pandas dataframe to separate files + dataframe with datetime (min/sec)
I am trying to print a Pandas dataframe's columns to separate *.csv files in Python 2.7. Using this code, I get a dataframe with 4 columns and an index of dates:

import datetime as dt
import pandas as pd
import numpy as np

rows = 10
col_headers = list('ABCD')
dates = pd.date_range(dt.datetime.today().strftime("%m/%d/%Y"), periods=rows)
df2 = pd.DataFrame(np.random.randn(10, 4), index=dates, columns=col_headers)
df = df2.tz_localize('UTC')  # this does not seem to be giving me hours/minutes/seconds

I then remove the index and set it to a separate column:

df['Date'] = df.index
col_headers.append('Date')  # update the column keys

At this point, I just need to print all 5 columns of the dataframe to separate files. Here is what I have tried:

for ijk in range(0, len(col_headers)):
    df.to_csv('output' + str(ijk) + '.csv', columns=col_headers[ijk])

I get the following error message:

KeyError: "[['D', 'a', 't', 'e']] are not in ALL in the [columns]"

If I say:

for ijk in range(0, len(col_headers) - 1):

then it works, but it does not print the 'Date' column. That is not what I want. I need to also print the date column.

Questions:

How do I get it to print the 'Date' column to a *.csv file?
How do I get the time with hours, minutes and seconds?
If the number of rows is changed from 10 to 5000, will the seconds change from one row of the dataframe to the next?

EDIT: Answer for Q2 (see here) ==> in the case of my particular code, see this:

dates = pd.date_range(dt.datetime.today().strftime("%m/%d/%Y %H:%M"), periods=rows)
I don't quite understand your logic but the following is a simpler method to do it:

for col in df:
    df[col].to_csv('output' + col + '.csv')

example:

In [41]: for col in df2:
    print('output' + col + '.csv')

outputA.csv
outputB.csv
outputC.csv
outputD.csv
outputDate.csv
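As for the KeyError itself: it comes from passing a bare string to the columns= keyword, which pandas then iterates character by character ('Date' becomes ['D', 'a', 't', 'e']). Wrapping the name in a list fixes the original loop. A sketch, writing to in-memory buffers instead of files for self-containment:

```python
import io
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'Date': ['2015-01-01', '2015-01-02']})

outputs = {}
for col in df.columns:
    buf = io.StringIO()
    # columns= expects a list-like; a bare 'Date' string would be
    # iterated character by character and raise the KeyError above
    df.to_csv(buf, columns=[col], index=False)
    outputs[col] = buf.getvalue()

print(outputs['Date'])
```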