Encoding error when writing in gml file - python-2.7

In one of my previous posts I had a problem with reading and writing strings in a language other than English; the problem was the encoding of my system. ton1c mentioned that writing the strings to a txt file works, and indeed it does! Now I am trying to write these strings into a GML file and I am running into an encoding problem again. Here is the code and the result.
import urllib2
import BeautifulSoup
import networkx as nx
url = 'http://www.bbc.co.uk/zhongwen/simp/'
page = urllib2.urlopen(url).read().decode("utf-8")
dom = BeautifulSoup.BeautifulSoup(page)
data = dom.findAll('meta', {'name' : 'keywords'})
data = data.encode("utf-8")
datalist = data.split(',')
G = nx.Graph()
G.add_node("name", Strings=datalist)
nx.write_gml(G, 'Gname')
It returns
File "C:\...\name.py", line 23, in <module>
  nx.write_gml(G, 'Gname')
File "<string>", line 2, in write_gml
File "C:\Python27\lib\site-packages\networkx\utils\decorators.py", line 263, in _open_file
  result = func(*new_args, **kwargs)
File "C:\Python27\lib\site-packages\networkx\readwrite\gml.py", line 392, in write_gml
  path.write(line.encode('latin-1'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 13: ordinal not in range(128)
Any suggestions? I would also like to mention that the networkx documentation for write_gml notes: "GML specifications indicate that the file should only use 7bit ASCII text encoding.iso8859-1 (latin-1)." (http://networkx.lanl.gov/reference/generated/networkx.readwrite.gml.write_gml.html)
PS: Please keep any suggestions compatible with Python 2.7.

You just do the following:
import urllib2
import BeautifulSoup
import networkx as nx
url = 'http://www.bbc.co.uk/zhongwen/simp/'
page = urllib2.urlopen(url).read().decode("latin-1")
dom = BeautifulSoup.BeautifulSoup(page)
data = dom.findAll('meta', {'name' : 'keywords'})
data = data[0]['content'].encode("latin-1")
#datalist = data.split(',')
with open("tags.txt", "w") as text_file:
    text_file.write("%s" % data)
G = nx.Graph()
G.add_node("name", Strings=data.decode("latin-1"))
nx.write_gml(G, "test.gml")
graph [
  node [
    id 0
    label "name"
    Strings "BBC中文网,主页,国际新闻,中国新闻,台湾新闻,香港新闻,英国新闻,信息,财经,科技,卫生 互动,多媒体,视频,音频,图辑,bbcchinese.com, homepage, world news, China news, uk news, hong kong, taiwan, sci-tech, business, interactive, forum"
  ]
]
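A note on why the latin-1 trick above works (my explanation, not part of the original answer): latin-1 maps every byte 0-255 to the code point with the same number, so decoding UTF-8 bytes as latin-1 never fails, and re-encoding with latin-1 (which is exactly what write_gml does internally) restores the original bytes. A minimal sketch:

```python
# latin-1 decodes any byte sequence: each byte becomes the code point
# with the same number, so a decode/encode pair round-trips raw bytes.
utf8_bytes = u'BBC\u4e2d\u6587\u7f51'.encode('utf-8')  # real UTF-8 data ("BBC中文网")
fake_text = utf8_bytes.decode('latin-1')   # one code point per byte, never fails
written = fake_text.encode('latin-1')      # what write_gml emits to the file
assert written == utf8_bytes               # original bytes preserved on disk
print(written.decode('utf-8'))             # recovers the original string
```

So the file ends up containing valid UTF-8 bytes even though every intermediate step only ever speaks latin-1.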

Related

Google Vision API 'TypeError: invalid file'

The following piece of code comes from Google's Vision API documentation; the only modification I've made is adding the argument parser and the call at the bottom.
import argparse
import os
from google.cloud import vision
import io

def detect_text(path):
    """Detects text in the file."""
    client = vision.ImageAnnotatorClient()
    with io.open(path, 'rb') as image_file:
        content = image_file.read()
    image = vision.types.Image(content=content)
    response = client.text_detection(image=image)
    texts = response.text_annotations
    print('Texts:')
    for text in texts:
        print('\n"{}"'.format(text.description))
        vertices = (['({},{})'.format(vertex.x, vertex.y)
                     for vertex in text.bounding_poly.vertices])
        print('bounds: {}'.format(','.join(vertices)))

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str,
                help="path to input image")
args = vars(ap.parse_args())
detect_text(args)
If I run it from a terminal like below, I get this invalid file error:
PS C:\VisionTest> python visionTest.py --image C:\VisionTest\test.png
Traceback (most recent call last):
  File "visionTest.py", line 31, in <module>
    detect_text(args)
  File "visionTest.py", line 10, in detect_text
    with io.open(path, 'rb') as image_file:
TypeError: invalid file: {'image': 'C:\\VisionTest\\test.png'}
I've tried with various images and image types as well as running the code from different locations with no success.
Seems like either the file doesn't exist or is corrupt since it isn't even read. Can you try another image and validate it is in the location you expect?
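It may also be worth noting (my reading of the traceback, not part of the answer above) that io.open() is receiving the whole argument dict rather than a path: vars(ap.parse_args()) returns a dict, so the path has to be pulled out of it before calling detect_text. A sketch:

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str, help="path to input image")

# Simulate the command line from the question.
args = vars(ap.parse_args(["--image", "C:/VisionTest/test.png"]))

print(type(args))      # a dict, which io.open() rejects as an invalid file
# detect_text(args)    # -> TypeError: invalid file: {'image': ...}
print(args["image"])   # the path string that detect_text actually needs
```

That is, the last line of the question's script would become detect_text(args["image"]).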

open pdf without text with python

I want to open a PDF from a Django view, but my PDF contains no text and Python returns me a blank PDF.
Each page is a scan of a page: link
from django.http import HttpResponse

def views_pdf(request, path):
    with open(path) as pdf:
        response = HttpResponse(pdf.read(), content_type='application/pdf')
        response['Content-Disposition'] = 'inline;elec'
        return response
Exception Type: UnicodeDecodeError
Exception Value: 'charmap' codec can't decode byte 0x9d in position 373: character maps to < undefined >
Unicode error hint
The string that could not be encoded/decoded was: � ��`����
How do I tell Python that this is not text but a picture?
By default, Python 3 opens files in text mode, that is, it tries to interpret the contents of a file as text. This is what causes the exception that you see.
Since a PDF file is (generally) a binary file, try opening the file in binary mode. In that case, read() will return a bytes object.
Here's an example (in IPython). First, opening as text:
In [1]: with open('2377_001.pdf') as pdf:
...: data = pdf.read()
...:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-1-d807b6ccea6e> in <module>()
1 with open('2377_001.pdf') as pdf:
----> 2 data = pdf.read()
3
/usr/local/lib/python3.6/codecs.py in decode(self, input, final)
319 # decode input (taking the buffer into account)
320 data = self.buffer + input
--> 321 (result, consumed) = self._buffer_decode(data, self.errors, final)
322 # keep undecoded input until the next call
323 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe2 in position 10: invalid continuation byte
Next, reading the same file in binary mode:
In [2]: with open('2377_001.pdf', 'rb') as pdf:
...: data = pdf.read()
...:
In [3]: type(data)
Out[3]: bytes
In [4]: len(data)
Out[4]: 45659
In [5]: data[:10]
Out[5]: b'%PDF-1.4\n%'
That solves the first part, how to read the data.
The second part is how to pass it to an HttpResponse. According to the Django documentation:
"Typical usage is to pass the contents of the page, as a string, to the HttpResponse constructor"
So passing bytes might or might not work (I don't have Django installed to test). The Django book says:
"content should be an iterator or a string."
I found the following gist to write binary data:
from django.http import HttpResponse

def django_file_download_view(request):
    filepath = '/path/to/file.xlsx'
    with open(filepath, 'rb') as fp:  # Small fix to read as binary.
        data = fp.read()
    filename = 'some-filename.xlsx'
    response = HttpResponse(content_type="application/ms-excel")  # content_type replaces the old mimetype argument
    response['Content-Disposition'] = 'attachment; filename=%s' % filename  # force browser to download file
    response.write(data)
    return response
The problem is probably that the file you are trying to use isn't in the encoding Python expects. You can easily find the encoding of your PDF in most PDF viewers, such as Adobe Acrobat (under Properties). Once you've found out what encoding it's using, you can give it to Python like so:
Replace
with open(path) as pdf:
with:
with open(path, encoding="whatever encoding your pdf is in") as pdf:
Try Latin-1 encoding; this often works.

Regex & BeautifulSoup - TypeError: expected string or bytes-like object

My code is running into an unexpected error. I tried tweaking it to have 'u' instead of 'r', but I still get the same error. I tried other solutions from Stack Overflow, but they didn't go anywhere. Any suggestions?
#use urllib and beautifulsoup to scrape table
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
import pandas as pd

url = 'https://www.example.com/profiles'
page = urlopen(url).read()
soup = BeautifulSoup(page, 'lxml')
#print(soup)
reEngName = re.compile(r'\[\*\*.+\*\*\]')
reKorName = re.compile(r'\([^\/h]*\)')
reProfile = re.compile(r'\|.+')

for line in re.findall(reEngName, soup):
    print(line)
Error message:
Traceback (most recent call last):
  File "ckurllib.py", line 18, in <module>
    for line in re.findall(reEngName, soup):
  File "C:\Users\Sammy\Anaconda3\lib\re.py", line 222, in findall
    return _compile(pattern, flags).findall(string)
TypeError: expected string or bytes-like object
Regex works with strings. If you want to search the whole raw text of the page, give that text to the regex. Soup is a parser: it splits the HTML into its syntactic components, organized into a tree, and you can iterate through them. For example, to iterate over all <a> tags:
soup = BeautifulSoup(urlopen(url).read(), 'lxml')
for a in soup('a'):
    out = doThings(a)
and in doThings(a):
if a['href'].startswith("http://www.domain.net"):
    ...
Naturally, at a later stage you can use regexes to check for matches in strings.
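Applied to the question's code, that means handing re.findall a string, e.g. str(soup) for the raw HTML or soup.get_text() for just the text. A self-contained sketch (a plain string stands in for the page here so bs4 isn't required):

```python
import re

# Stand-in for str(soup) / soup.get_text() in the question's code.
html = "[**John Doe**] (Hong Gildong) | some profile text"

reEngName = re.compile(r'\[\*\*.+\*\*\]')

# re.findall(reEngName, soup)   # TypeError: soup is a BeautifulSoup object
for line in re.findall(reEngName, html):  # a string works
    print(line)                 # -> [**John Doe**]
```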

Getting ParseError when parsing using xml.etree.ElementTree

I am trying to extract the <comment> tags (using xml.etree.ElementTree) from the XML, find the comment count numbers, and add all of the numbers together. I am reading the file from a URL using the urllib package.
sample data: http://python-data.dr-chuck.net/comments_42.xml
But currently I am just trying to print the name and count.
import urllib
import xml.etree.ElementTree as ET

serviceurl = 'http://python-data.dr-chuck.net/comments_42.xml'
address = raw_input("Enter location: ")
url = serviceurl + urllib.urlencode({'sensor': 'false', 'address': address})

print ("Retrieving: ", url)
link = urllib.urlopen(url)
data = link.read()
print("Retrieved ", len(data), "characters")

tree = ET.fromstring(data)
tags = tree.findall('.//comment')

for tag in tags:
    Name = ''
    count = ''
    Name = tree.find('commentinfo').find('comments').find('comment').find('name').text
    count = tree.find('comments').find('comments').find('comment').find('count').number
    print Name, count
Unfortunately, I am not able to even parse the XML file into Python, because i am getting this error as follows:
Traceback (most recent call last):
  File "ch13_parseXML_assignment.py", line 14, in <module>
    tree = ET.fromstring(data)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
    parser.feed(text)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
    self._raiseerror(v)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
    raise err
xml.etree.ElementTree.ParseError: syntax error: line 1, column 49
I have read that in a similar situation the parser wasn't accepting the XML file. Anticipating this, I put a try/except around tree = ET.fromstring(data) and was able to get past this line, but later it throws an error saying the tree variable is not defined. This defeats the purpose of the output I am expecting.
Can somebody please point me in a direction that helps me?
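For what it's worth, a minimal sketch of the extracting-and-summing part, assuming the structure of the sample comments_42.xml (commentinfo → comments → comment → name/count) and an inline string standing in for the URL fetch:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for the data read from the URL.
data = """<commentinfo>
  <comments>
    <comment><name>Ann</name><count>97</count></comment>
    <comment><name>Bob</name><count>90</count></comment>
  </comments>
</commentinfo>"""

tree = ET.fromstring(data)                            # root is <commentinfo>
counts = tree.findall('comments/comment/count')       # every <count> element
print(sum(int(c.text) for c in counts))               # -> 187
```

Note that .text is a string, so each count has to go through int() before summing; there is no .number attribute on an Element.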

removing double quotes and brackets from csv in python

I am trying to remove quotes and brackets from a CSV in Python. I tried the following code, but it doesn't produce a proper CSV:
import json
import urllib2
import re
import os
from BeautifulSoup import BeautifulSoup
import csv

u = urllib2.urlopen("http://timesofindia.indiatimes.com/")
content = u.read()
u.close()
soup2 = BeautifulSoup(content)

blog_posts = []
for e in soup2.findAll("a", attrs={'pg': re.compile('^Head')}):
    for b in soup2.findAll("div", attrs={'style': re.compile('^color:#ffffff;font-size:12px;font-family:arial;padding-top:3px;text-align:center;')}):
        blog_posts.append(("The Times Of India", e.text, b.text))
print blog_posts

out_file = os.path.join('resources', 'ch05-webpages', 'newspapers', 'time1.csv')
f = open(out_file, 'wb')
wr = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
#f.write(json.dumps(blog_posts, indent=1))
wr.writerow(blog_posts)
f.close()
print 'Wrote output file to %s' % (f.name, )
the csv looks like:
"('The Times Of India', u'Missing jet: Air search expands to remote south Indian Ocean', u'Fri, Mar 21, 2014 | Updated 11.53AM IST')",
but i want csv like this:
The Times Of India,u'Missing jet: Air search expands to remote south Indian Ocean, u'Fri, Mar 21, 2014 | Updated 11.53AM IST
What can I do to get this kind of CSV?
Writer.writerow() expects a sequence containing strings or numbers. You are passing a sequence of tuples. Use Writer.writerows() instead.
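A minimal sketch of the difference (shown in Python 3 with an in-memory buffer for brevity; in the question's Python 2 code the same one-line change from writerow to writerows applies):

```python
import csv
import io

blog_posts = [("The Times Of India", "headline one", "Fri, Mar 21, 2014"),
              ("The Times Of India", "headline two", "Sat, Mar 22, 2014")]

buf = io.StringIO()
wr = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
wr.writerows(blog_posts)   # one row per tuple -- not writerow(blog_posts)
print(buf.getvalue())
# The Times Of India,headline one,"Fri, Mar 21, 2014"
# The Times Of India,headline two,"Sat, Mar 22, 2014"
```

writerow(blog_posts) treats the whole list as one row, so each tuple is stringified into a single field, which is where the quoted "('The Times Of India', ...)" output comes from.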