I need to use smart_str on the results of a query in my view to take care of Latin characters. How can I convert each item in my queryset?
I have tried:
...
mylist = []
myquery_set = Locality.objects.all()
for item in myquery_set:
    mylist.append(smart_str(item))
...
But I get the error:
coercing to Unicode: need string or buffer, <object> found
What is the best way to do this? Or can I take care of it in the template as I iterate the results?
EDIT: If I output the values to a template, then all is good. However, I want to output the response as an .xls file using the code:
...
filename = "locality.xls"
response['Content-Disposition'] = 'attachment; filename='+filename
response['Content-Type'] = 'application/vnd.ms-excel; charset=utf-8'
return response
The view works fine (it gives me the file, etc.) but the Latin characters are not rendered properly.
In your code you're calling smart_str on a Model object instead of a string (so essentially you're trying to coerce the whole object to a string). The solution is to call smart_str on a field:
mylist.append(smart_str(item.fieldname))
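As a minimal sketch, assuming your model has a field called name (use whichever field actually holds the Latin characters), the loop from the question becomes:
from django.utils.encoding import smart_str

mylist = []
for item in Locality.objects.all():
    # smart_str coerces the field value (a string), not the model instance itself
    mylist.append(smart_str(item.name))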
I'm trying to write a small crawler to crawl multiple wikipedia pages.
I want to make the crawl somewhat dynamic by concatenating the hyperlink for the exact wikipage from a file which contains a list of names.
For example, the first line of "deutsche_Schauspieler.txt" says "Alfred Abel", and the concatenated string would be "https://de.wikipedia.org/wiki/Alfred Abel". Using the txt file results in heading being None, yet when I hard-code the name into the link inside the script, it works.
This is for python 2.x.
I already tried to switch from " to ',
tried + instead of %s
tried to put the whole string into the txt file (so that the first line reads "http://..." instead of "Alfred Abel")
tried to switch from "Alfred Abel" to "Alfred_Abel"
from bs4 import BeautifulSoup
import requests

file = open("test.txt", "w")
f = open("deutsche_Schauspieler.txt", "r")
content = f.readlines()
for line in content:
    link = "https://de.wikipedia.org/wiki/%s" % (str(line))
    response = requests.get(link)
    html = response.content
    soup = BeautifulSoup(html)
    heading = soup.find(id='Vorlage_Personendaten')
    uls = heading.find_all('td')
    for item in uls:
        file.write(item.text.encode('utf-8') + "\n")
f.close()
file.close()
I expect to get the content of the table "Vorlage_Personendaten", which actually works if I change the link assignment to
link = "https://de.wikipedia.org/wiki/Alfred Abel"
# link = "https://de.wikipedia.org/wiki/Alfred_Abel" also works
But I want it to work using the text file.
It looks like the problem is in your text file, where you have written "Alfred Abel" with quotes; that is why you are getting the following exception:
uls = heading.find_all('td')
AttributeError: 'NoneType' object has no attribute 'find_all'
Please remove the string quotes and put plain Alfred Abel inside the text file deutsche_Schauspieler.txt. It will work as expected.
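If you cannot edit the file, here is a minimal sketch of a defensive cleanup when reading each line (assuming quotes and a trailing newline may or may not be present):
for line in content:
    name = line.strip().strip('"')  # drop the newline and any surrounding quotes
    link = "https://de.wikipedia.org/wiki/%s" % name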
I found the solution myself.
Although there are no additional lines in the file, the content array displays as ['Alfred Abel\n'], while printing the first index of the array results in 'Alfred Abel'. The trailing newline is still part of the string, thus forming a broken link.
So you want to remove the last(!) character from the current line.
A solution could look like so:
from bs4 import BeautifulSoup
import requests

file = open("test.txt", "w")
f = open("deutsche_Schauspieler.txt", "r")
content = f.readlines()
print(content)
for line in content:
    line = line.rstrip('\n')  # strip the trailing newline ('\n' is a single character)
    link = "https://de.wikipedia.org/wiki/%s" % str(line)
    response = requests.get(link)
    html = response.content
    soup = BeautifulSoup(html, "html.parser")
    try:
        heading = soup.find(id='Vorlage_Personendaten')
        uls = heading.find_all('td')
        for item in uls:
            file.write(item.text.encode('utf-8') + "\n")
    except AttributeError:  # heading is None when the page has no Personendaten table
        print("That did not work")
f.close()
file.close()
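As a side note, Wikipedia itself uses underscores instead of spaces in page titles. A small sketch of normalizing the name before building the link (assuming Python 2, as in the question; urllib.quote percent-encodes any remaining special characters):
import urllib

name = line.rstrip('\n').replace(' ', '_')
link = "https://de.wikipedia.org/wiki/%s" % urllib.quote(name)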
I am stuck with a silly problem.
I have some JSON data and am trying to save it in my model.
Here is the code.
response = response.json() #this gives json data
response = json.loads(response) #loads string to json
json_string = response #ready to get data from list
modelfielda = json_string.get("abc") # this works fine
modelfieldb = json_string.get('["c"]["d"]["e"]') #this does not give data though data is present.
My json data comes like this:
{
    "abc": "AP003",
    "c": [
        {
            "d": {
                "e": "some data",
                "f": "some data"
            }
        }
    ]
}
So my question is how to get data inside c.
Try this for e, keeping in mind that c is a list, so you need an index before you can reach d:
bnm = json_string.get('c')[0].get('d').get('e')
By using multiple .gets:
bnm = json_string.get('c')[0].get('d').get('e') # bnm = 'some data'
Or perhaps better (since it will error in case the key does not exist):
bnm = json_string['c'][0]['d']['e'] # bnm = 'some data'
Since you converted it to a Python dictionary, you are basically working with a dictionary, and you can obtain the value corresponding to a key with some_dict[some_key]. Since we have a cascade of dictionaries here, we obtain a subdictionary and again look up the corresponding value in it. The value corresponding to c is a list, and we obtain its first element by writing [0].
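As a small sketch of the trade-off between the two styles, using the question's own data:
data = {"abc": "AP003", "c": [{"d": {"e": "some data", "f": "some data"}}]}

# bracket access raises KeyError/IndexError on missing data, surfacing bugs early
e_value = data['c'][0]['d']['e']

# .get() access returns a default instead of raising, useful for optional keys
e_or_none = data.get('c', [{}])[0].get('d', {}).get('e')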
I've written a simple script to extract data from a site. The script works as expected, but I'm not pleased with the output format.
Here is my code
import urlparse

from scrapy import Spider, Request
from scrapy.loader import ItemLoader

class ArticleSpider(Spider):
    name = "article"
    allowed_domains = ["example.com"]
    start_urls = (
        "http://example.com/tag/1/page/1",
    )

    def parse(self, response):
        next_selector = response.xpath('//a[@class="next"]/@href')
        url = next_selector[1].extract()
        # url is like "tag/1/page/2"
        yield Request(urlparse.urljoin("http://example.com", url))

        item_selector = response.xpath('//h3/a/@href')
        for url in item_selector.extract():
            yield Request(urlparse.urljoin("http://example.com", url),
                          callback=self.parse_article)

    def parse_article(self, response):
        # Article is the project's Item subclass, defined elsewhere
        item = ItemLoader(item=Article(), response=response)
        # here I extract the title of every article
        item.add_xpath('title', '//h1[@class="title"]/text()')
        return item.load_item()
I'm not pleased with the output, something like:
[scrapy] DEBUG: Scraped from <200 http://example.com/tag/1/article_name>
{'title': [u'\xa0"\u0412\u041e\u041e\u0411\u0429\u0415-\u0422\u041e \u0421\u0412\u041e\u0411\u041e\u0414\u0410 \u0417\u0410\u041a\u0410\u041d\u0427\u0418\u0412\u0410\u0415\u0422\u0421\u042f"']}
I think I need to use a custom ItemLoader class, but I don't know how. I need your help.
TL;DR I need to convert text scraped by Scrapy from unicode to utf-8.
As you can see below, this isn't so much a Scrapy issue as a Python one, and it could only marginally be called an issue :)
$ scrapy shell http://censor.net.ua/resonance/267150/voobscheto_svoboda_zakanchivaetsya
In [7]: print response.xpath('//h1/text()').extract_first()
"ВООБЩЕ-ТО СВОБОДА ЗАКАНЧИВАЕТСЯ"
In [8]: response.xpath('//h1/text()').extract_first()
Out[8]: u'\xa0"\u0412\u041e\u041e\u0411\u0429\u0415-\u0422\u041e \u0421\u0412\u041e\u0411\u041e\u0414\u0410 \u0417\u0410\u041a\u0410\u041d\u0427\u0418\u0412\u0410\u0415\u0422\u0421\u042f"'
What you see is two different representations of the same thing - a unicode string.
What I would suggest is running crawls with -L INFO or adding LOG_LEVEL='INFO' to your settings.py in order not to show this output in the console.
One annoying thing is that when you save as JSON, you get escaped unicode JSON e.g.
$ scrapy crawl example -L INFO -o a.jl
gives you:
$ cat a.jl
{"title": "\u00a0\"\u0412\u041e\u041e\u0411\u0429\u0415-\u0422\u041e \u0421\u0412\u041e\u0411\u041e\u0414\u0410 \u0417\u0410\u041a\u0410\u041d\u0427\u0418\u0412\u0410\u0415\u0422\u0421\u042f\""}
This is correct, but it takes more space, and most applications handle non-escaped JSON equally well.
Adding a few lines in your settings.py can change this behaviour:
from scrapy.exporters import JsonLinesItemExporter

class MyJsonLinesItemExporter(JsonLinesItemExporter):
    def __init__(self, file, **kwargs):
        super(MyJsonLinesItemExporter, self).__init__(file, ensure_ascii=False, **kwargs)

FEED_EXPORTERS = {
    'jsonlines': 'myproject.settings.MyJsonLinesItemExporter',
    'jl': 'myproject.settings.MyJsonLinesItemExporter',
}
Essentially, all we do is set ensure_ascii=False for the default JSON Item Exporters. This prevents escaping. I wish there was an easier way to pass arguments to exporters, but I can't see one, since they are initialized with their default arguments around here. Anyway, now your JSON file has:
$ cat a.jl
{"title": " \"ВООБЩЕ-ТО СВОБОДА ЗАКАНЧИВАЕТСЯ\""}
which is better-looking, equally valid and more compact.
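To see the ensure_ascii mechanism in isolation, here is a tiny Python 2 sketch, independent of Scrapy:
import json

title = u'\u0412\u041e\u041e\u0411\u0429\u0415'
print json.dumps({'title': title})                      # {"title": "\u0412\u041e\u041e\u0411\u0429\u0415"}
print json.dumps({'title': title}, ensure_ascii=False)  # {"title": "ВООБЩЕ"}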
There are 2 independent issues affecting the display of unicode strings.
If you return a list of strings, the output file will have some issues with them, because it will use the ascii codec by default to serialize the list elements. You can work around this as below, but it's more appropriate to use extract_first() as suggested by @neverlastn:
class Article(Item):
    title = Field(serializer=lambda x: u', '.join(x))
The default implementation of the __repr__() method will serialize unicode strings to their escaped \uxxxx form. You can change this behaviour by overriding the method in your item class:
class Article(Item):
    def __repr__(self):
        data = self.copy()
        for k in data.keys():
            if type(data[k]) is unicode:
                data[k] = data[k].encode('utf-8')
        # repr() of a plain dict of utf-8 encoded strings prints them unescaped
        return repr(dict(data))
I'm writing a Django function that takes some user input, and generates a pdf for the user. However, the process for generating the pdf is quite intensive, and I'll get a lot of repeated requests so I'd like to store the generated pdfs on the server and check if they already exist before generating them.
The problem is that django-wkhtmltopdf (which I'm using for generation) is meant to return the pdf to the user directly, and I'm not sure how to save it to a file on the server.
I have the following, which works for returning a pdf at /pdf:
urls.py
urlpatterns = [
    url(r'^pdf$', views.createPDF.as_view(template_name='site/pdftemplate.html', filename='my_pdf.pdf'))
]
views.py
class createPDF(PDFTemplateView):
    filename = 'my_pdf.pdf'
    template_name = 'site/pdftemplate.html'
So that works fine to create a pdf. What I'd like is to call that view from another view and save the result. Here's what I've got so far:
# Create pdf
pdf = createPDF.as_view(template_name='site/pdftemplate.html', filename='my_pdf.pdf')
pdf = pdf(request).render()

pdfPath = os.path.join(settings.TEMP_DIR, 'temp.pdf')
with open(pdfPath, 'w') as f:
    f.write(pdf.content)
This creates temp.pdf and is about the size I'd expect but the file isn't valid (it renders as a single completely blank page).
Any suggestions?
Elaborating on the previous answer: to generate a pdf file and save it to disk, do this anywhere in your view:
...
context = {...}  # build your context

# generate the response
response = PDFTemplateResponse(
    request=self.request,
    template=self.template_name,
    filename='file.pdf',
    context=context,
    cmd_options={'load-error-handling': 'ignore'})

# write the rendered content to a file
with open("file.pdf", "wb") as f:
    f.write(response.rendered_content)
...
I have used this code in a TemplateView class, so the request and template fields were set like that; you may have to set them to whatever is appropriate in your particular case.
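Tying this back to the question's goal of avoiding repeated generation, here is a minimal sketch of a function-based view that only regenerates the pdf when it is not already on disk (TEMP_DIR and the template name come from the question; the rest is an assumption about your project):
import os

from django.conf import settings
from django.http import HttpResponse
from wkhtmltopdf.views import PDFTemplateResponse

def cached_pdf(request):
    pdfPath = os.path.join(settings.TEMP_DIR, 'my_pdf.pdf')
    if not os.path.exists(pdfPath):
        # render once and cache the result on disk
        response = PDFTemplateResponse(
            request=request,
            template='site/pdftemplate.html',
            filename='my_pdf.pdf',
            context={},
            cmd_options={'load-error-handling': 'ignore'})
        with open(pdfPath, 'wb') as f:
            f.write(response.rendered_content)
    # serve the cached file
    with open(pdfPath, 'rb') as f:
        return HttpResponse(f.read(), content_type='application/pdf')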
Well, you need to take a look at the code of django-wkhtmltopdf. First, use the PDFTemplateResponse class from wkhtmltopdf.views to get access to the rendered_content property, which gives us the pdf file:
response = PDFTemplateResponse(
    request=<your_view_request>,
    template=<your_template_to_render>,
    filename=<pdf_filename.pdf>,
    context=<a_dictionary_to_render>,
    cmd_options={'load-error-handling': 'ignore'})
Now you could use the rendered_content property to get access to the pdf file:
mail.attach('pdf_filename.pdf', response.rendered_content, 'application/pdf')
In my case I'm using this to attach the pdf to an email, but you could store it instead.
I'm trying to export model data to a Microsoft Excel file type (.xls) by using this view:
def generate_spreadsheet(request):
    alumnos = Alumno.objects.all()
    response = render_to_response("spreadsheet.html", {'alumnos': alumnos})
    filename = "alumnoss.xls"
    response['Content-Disposition'] = 'attachment; filename=' + filename
    response['Content-Type'] = 'application/vnd.ms-excel; charset=utf-16'
    return response
As you can see, I define the character set as utf-16, which should include all of the extra characters like áéíóú, etc. But when I open the Excel document, instead of reading
Vélez
you read:
Vélez
Any help would be appreciated :)
You can set which charset is going to be used for rendering by defining DEFAULT_CHARSET in your settings.py file:
http://docs.djangoproject.com/en/1.3/ref/settings/#default-charset
render_to_response() is probably writing in 'utf-8', not in utf-16.
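A minimal sketch of that fix, assuming you keep Django's default utf-8 rendering (the key point is that the declared charset must match the encoding of the bytes actually written):
def generate_spreadsheet(request):
    alumnos = Alumno.objects.all()
    response = render_to_response("spreadsheet.html", {'alumnos': alumnos})
    response['Content-Disposition'] = 'attachment; filename=alumnos.xls'
    # declare the charset Django actually used to encode the body
    response['Content-Type'] = 'application/vnd.ms-excel; charset=utf-8'
    return response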