I am reading a file in Python and splitting its contents on '\n'. When I print the resulting list, it shows 'Magni\xef\xac\x81cent Mary' instead of 'Magnificent Mary'.
Here is my code...
with open('/home/naveen/Desktop/answer.txt') as ans:
    content = ans.read()
    content = content.split('\n')
    print content
Note: answer.txt contains the following lines:
Magnificent Mary
Flying Sikh
Payyoli Express
Here is the output of the program.
The problem is in your text file. There are some Unicode characters in "Magnificent Mary". If you fix that, your program should work. If you want to read the file with its Unicode characters intact, you have to decode the text properly as UTF-8.
Have a look at this one (assuming you want to use Python 2): Backporting Python 3 open(encoding="utf-8") to Python 2
Python 2:
import codecs

with codecs.open(filename='/Users/emily/Desktop/answers.txt', mode='rb', encoding='UTF-8') as ans:
    content = ans.read().splitlines()
for i in content: print i
If you can use Python 3, you can actually do this:
with open('/home/naveen/Desktop/answer.txt', encoding='UTF-8') as ans:
    content = ans.read().splitlines()
print(content)
There is a problem with the 'f' in Magnificent Mary. It is not a normal f; it is the LATIN SMALL LIGATURE FI (U+FB01). You can simply delete the 'f' and retype it in gedit.
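If you would rather fix it in code than in the editor, here is a minimal sketch (Python 2, path taken from the question, assuming the file is UTF-8 encoded) that replaces the ligature with the two plain letters:
# Sketch: decode the file and swap the ligature for plain 'fi'.
with open('/home/naveen/Desktop/answer.txt') as ans:
    content = ans.read().decode('utf-8')

# u'\ufb01' is LATIN SMALL LIGATURE FI
content = content.replace(u'\ufb01', u'fi')
print content.split('\n')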
To verify the difference, simply include
print [(ord(a), a) for a in (file.split("\n"))[0]]
at the end of your code, for both versions of the f.
If there is no way to edit the file, you could first decode the string to Unicode and then use Python's unicodedata module.
import unicodedata

file = open("answer.txt")
file = file.read().decode('utf-8')
# NFKD normalization decomposes the ligature into a plain 'f' followed by 'i'
print unicodedata.normalize('NFKD', file).encode('ascii', 'ignore').split("\n")
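On Python 3 the same idea is a bit shorter; a sketch assuming the same answer.txt:
import unicodedata

# Python 3 sketch: open() decodes for us, and NFKC normalization
# turns the 'fi' ligature into the two plain letters.
with open('answer.txt', encoding='utf-8') as f:
    text = f.read()

print(unicodedata.normalize('NFKC', text).split('\n'))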
I have a script that processes an Excel file. The department that sends it has a system that generated it, and my script stopped working.
I suddenly got the error "Can only use .str accessor with string values, which use np.object_ dtype in pandas" for the following line of code:
df['DATE'] = df['Date'].str.replace(r'[^a-zA-Z0-9\._/-]', '')
I checked the type of the date columns in the file from the old system (dtype: object) vs the file from the new system (dtype: datetime64[ns]).
How do I change the date format to something my script will understand?
I saw this answer but my knowledge about date formats isn't this granular.
You can use the apply function on the DataFrame column to convert it to strings. For example:
df['DATE'] = df['Date'].apply(lambda x: x.strftime('%Y-%m-%d'))
No separate import is needed for this line, since strftime is called on the Timestamp values themselves.
apply() evaluates each cell one at a time and applies the formatting specified in the lambda function.
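If the column can contain missing dates (NaT), a guarded variant of the same idea might look like this (df and the 'Date' column are the asker's; the empty-string fallback is an assumption):
import pandas as pd

# Sketch: format only real timestamps; leave missing values (NaT) as empty strings.
df['DATE'] = df['Date'].apply(
    lambda x: x.strftime('%Y-%m-%d') if pd.notnull(x) else ''
)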
pd.to_datetime returns a Series of datetime64 dtype, as described here:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html
df['DATE'] = df['Date'].dt.date
or this:
df['Date'].map(datetime.datetime.date)
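For example, a small self-contained sketch (with a made-up two-row frame, not the asker's data) showing that both forms give the same result:
import datetime
import pandas as pd

# Hypothetical two-row frame just for illustration.
df = pd.DataFrame({'Date': pd.to_datetime(['2019-01-05', '2019-02-10'])})
df['DATE'] = df['Date'].dt.date                                     # datetime.date objects
print(df['Date'].map(datetime.datetime.date).equals(df['DATE']))    # True: same result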
You can use pd.to_datetime
df['DATE'] = pd.to_datetime(df['DATE'])
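Since the rest of the script expects strings, one option (a sketch; column names taken from the question, assuming you don't need the time-of-day part) is to format the column back to text so the original .str.replace line works unchanged:
import pandas as pd

# Sketch: make the column text again, regardless of whether it arrived
# as strings (old system) or datetime64 (new system).
df['Date'] = pd.to_datetime(df['Date'])
df['Date'] = df['Date'].dt.strftime('%Y-%m-%d')
df['DATE'] = df['Date'].str.replace(r'[^a-zA-Z0-9\._/-]', '')  # original line now works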
I'm reading in an RDD/DF like so:
testRDD = sc.textFile('s3n://sample.txt') \
    .map(lambda x: x.split('|')) \
    .map(lambda x: Row(event_date=x[0])) \
    .cache()
testDF = sqlContext.createDataFrame(testRDD)
testDF.registerTempTable("testDF")
The RDD returns data that looks fine:
for i in testRDD.take(1):
    print(i)
Row(event_date=u'2016-04-01 00:00:17')
But the DF comes up with some encoding issues where the first several characters are missing and the string ends in a bunch of encoded bytes:
for i in testDF.take(1):
    print(i)
Row(event_date=u'01 00:00:17\x00\x00\x00\x00\x00\x05\x00\x00')
Any ideas where I'm going wrong? I've tried using decode('utf-8') on the incoming string with no luck.
I have hundreds of CSV files and I'm trying to write a Python script that will parse through all of them and print out the rows that contain a matching string (or strings). I'll be happy if we can get this to work using one string (and not a list of strings). I'm using Python 2.7.5. Here is what I've figured out so far:
The csv module in Python will print the row with the matching string in a particular column (index 8, i.e. the ninth column from the left):
import csv
reader = csv.reader(open('2015-08-25.csv'))
for row in reader:
    col8 = str(row[8])
    if col8 == '36862210':
        print row
So the above works for one .csv file. Now I need to parse hundreds of .csv files with glob. The glob module will print out all the file names with this code:
import glob
for name in glob.glob('20??-??-??.csv'):
    print name
I tried putting the two together into one script but the error message reads:
File "test7.py", line 6, in
reader = csv.reader(open(csvfiles))
TypeError: coercing to Unicode: need string or buffer, list found
import csv
import glob

csvfiles = glob.glob('20??-??-??.csv')
for filename in csvfiles:
    reader = csv.reader(open(csvfiles))
    for row in reader:
        col8 = str(row[8])
        if col8 == '36862210':
            print row
You are trying to open a list: csvfiles is the list you are iterating over.
Use this instead, because open() expects a single filename:
reader = csv.reader(open(filename))
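Putting it together, a sketch of the corrected loop (Python 2; pattern, column index and value taken from the question, the with/'rb' style is my choice):
import csv
import glob

for filename in glob.glob('20??-??-??.csv'):
    # Open each file individually; csvfiles as a whole is a list, not a path.
    with open(filename, 'rb') as f:
        reader = csv.reader(f)
        for row in reader:
            if str(row[8]) == '36862210':
                print row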
I have a list of file names, and I want to sort it based on the timestamp that is embedded in each file name.
Note: in a file name such as Hello_Hi_2015-02-20T084521_1424543480.tar.gz, the part 2015-02-20T084521 stands for "year-month-dayTHHMMSS"; this is what I want to sort on.
Input list below:
file_list = ['Hello_Hi_2015-02-20T084521_1424543480.tar.gz',
'Hello_Hi_2015-02-20T095845_1424543481.tar.gz',
'Hello_Hi_2015-02-20T095926_1424543481.tar.gz',
'Hello_Hi_2015-02-20T100025_1424543482.tar.gz',
'Hello_Hi_2015-02-20T111631_1424543483.tar.gz',
'Hello_Hi_2015-02-20T111718_1424543483.tar.gz',
'Hello_Hi_2015-02-20T112502_1424543483.tar.gz',
'Hello_Hi_2015-02-20T112633_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113427_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113456_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113608_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113659_1424543485.tar.gz',
'Hello_Hi_2015-02-20T113809_1424543485.tar.gz',
'Hello_Hi_2015-02-20T113901_1424543485.tar.gz',
'Hello_Hi_2015-02-20T113955_1424543485.tar.gz',
'Hello_Hi_2015-03-20T114122_1424543485.tar.gz',
'Hello_Hi_2015-02-20T114532_1424543486.tar.gz',
'Hello_Hi_2015-02-20T120045_1424543487.tar.gz',
'Hello_Hi_2015-02-20T120146_1424543487.tar.gz',
'Hello_WR_2015-02-20T084709_1424543480.tar.gz',
'Hello_WR_2015-02-20T113016_1424543486.tar.gz']
Output should be:
file_list = ['Hello_Hi_2015-02-20T084521_1424543480.tar.gz',
'Hello_WR_2015-02-20T084709_1424543480.tar.gz',
'Hello_Hi_2015-02-20T095845_1424543481.tar.gz',
'Hello_Hi_2015-02-20T095926_1424543481.tar.gz',
'Hello_Hi_2015-02-20T100025_1424543482.tar.gz',
'Hello_Hi_2015-02-20T111631_1424543483.tar.gz',
'Hello_Hi_2015-02-20T111718_1424543483.tar.gz',
'Hello_Hi_2015-02-20T112502_1424543483.tar.gz',
'Hello_Hi_2015-02-20T112633_1424543484.tar.gz',
'Hello_WR_2015-02-20T113016_1424543486.tar.gz',
'Hello_Hi_2015-02-20T113427_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113456_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113608_1424543484.tar.gz',
'Hello_Hi_2015-02-20T113659_1424543485.tar.gz',
'Hello_Hi_2015-02-20T113809_1424543485.tar.gz',
'Hello_Hi_2015-02-20T113901_1424543485.tar.gz',
'Hello_Hi_2015-02-20T113955_1424543485.tar.gz',
'Hello_Hi_2015-02-20T114532_1424543486.tar.gz',
'Hello_Hi_2015-02-20T120045_1424543487.tar.gz',
'Hello_Hi_2015-02-20T120146_1424543487.tar.gz',
'Hello_Hi_2015-03-20T114122_1424543485.tar.gz']
Below is the code which I have tried.
def sort(dir):
    os.chdir(dir)
    file_list = glob.glob('Hello_*')
    file_list.sort(key=os.path.getmtime)
    print("\n".join(file_list))
    return 0
Thanks in advance!!
This worked for me; it sorts files by modification time (os.path.getmtime), so it also works for files that do not have the timestamp in the name:
import os

# Collect the .gz files in the current directory and sort them by modification time.
files = [f for f in os.listdir(".") if f.lower().endswith('.gz')]
for f in sorted(files, key=os.path.getmtime):
    print(f)
Would this work?
You could write the list contents to a file line by line and then read the file back:
lines = sorted(open(open_file).readlines(), key=lambda line: line.split("_")[2])
Then you can print out lines.
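The same key also works on the in-memory list directly, without going through a file; a minimal sketch using the file_list from the question:
# Sort by the third '_'-separated part (e.g. '2015-02-20T084521'),
# which compares correctly as a string because it is zero-padded ISO-style.
sorted_list = sorted(file_list, key=lambda name: name.split("_")[2])
for name in sorted_list:
    print(name)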
Your code is trying to sort based on the filesystem-stored modified time, not the filename time.
Since your filename encoding is sane :-), if you want to sort based on the filename alone, you can use:
sorted(os.listdir(dir), key=lambda s: s[9:])
That will do, but only because the timestamp encoding in the filename is sane: a fixed-length prefix and zero-padded, constant-width numbers, running from the biggest time unit (year) down to the smallest (second).
If your prefix is not fixed, you can try something with a regexp like this (which will sort by everything after the second underscore):
import re

# Skip everything up to and including the second underscore, then compare the rest.
pat = re.compile('_.*?(_)')
sorted(os.listdir(dir), key=lambda s: s[pat.search(s).end():])
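Applied to the file_list from the question instead of os.listdir(), a quick usage sketch:
import re

pat = re.compile('_.*?(_)')
# Same idea as above: compare everything after the second underscore (the timestamp).
for name in sorted(file_list, key=lambda s: s[pat.search(s).end():]):
    print(name)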