CRITICAL: Unhandled error in Deferred: - python-2.7

I am developing a spider project and I have moved to a new computer. Now I am installing everything again and I have run into a problem with Twisted. I have read about this bug and I have installed pywin32 and then also WinPython, but it doesn't help. I have tried to update Twisted with this command
pip install Twisted --update
as advised in the forum, but it says that pip install doesn't have an --update option. I have also run
python python27\scripts\pywin32_postinstall.py -install
but with no success. This is my error:
G:\Job_vacancies\Python\vacancies>scrapy crawl jobs
2015-10-06 09:12:53 [scrapy] INFO: Scrapy 1.0.3 started (bot: vacancies)
2015-10-06 09:12:53 [scrapy] INFO: Optional features available: ssl, http11
2015-10-06 09:12:53 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'vacancies.spiders', 'SPIDER_MODULES': ['vacancies.spiders'], 'DEPTH_LIMIT': 3, 'BOT_NAME': 'vacancies'}
2015-10-06 09:12:53 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
Unhandled error in Deferred:
2015-10-06 09:12:53 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "c:\python27\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 153, in crawl
    d = crawler.crawl(*args, **kwargs)
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "c:\python27\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 71, in crawl
    self.engine = self._create_engine()
  File "c:\python27\lib\site-packages\scrapy\crawler.py", line 83, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "c:\python27\lib\site-packages\scrapy\core\engine.py", line 66, in __init__
    self.downloader = downloader_cls(crawler)
  File "c:\python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 65, in __init__
    self.handlers = DownloadHandlers(crawler)
  File "c:\python27\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 23, in __init__
    cls = load_object(clspath)
  File "c:\python27\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
    mod = import_module(module)
  File "c:\python27\lib\importlib\__init__.py", line 37, in import_module
    __import__(name)
  File "c:\python27\lib\site-packages\scrapy\core\downloader\handlers\s3.py", line 6, in <module>
    from .http import HTTPDownloadHandler
  File "c:\python27\lib\site-packages\scrapy\core\downloader\handlers\http.py", line 5, in <module>
    from .http11 import HTTP11DownloadHandler as HTTPDownloadHandler
  File "c:\python27\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 15, in <module>
    from scrapy.xlib.tx import Agent, ProxyAgent, ResponseDone, \
  File "c:\python27\lib\site-packages\scrapy\xlib\tx\__init__.py", line 3, in <module>
    from twisted.web import client
  File "c:\python27\lib\site-packages\twisted\web\client.py", line 42, in <module>
    from twisted.internet.endpoints import TCP4ClientEndpoint, SSL4ClientEndpoint
  File "c:\python27\lib\site-packages\twisted\internet\endpoints.py", line 34, in <module>
    from twisted.internet.stdio import StandardIO, PipeAddress
  File "c:\python27\lib\site-packages\twisted\internet\stdio.py", line 30, in <module>
    from twisted.internet import _win32stdio
  File "c:\python27\lib\site-packages\twisted\internet\_win32stdio.py", line 7, in <module>
    import win32api
exceptions.ImportError: DLL load failed: The specified module could not be found.
2015-10-06 09:12:53 [twisted] CRITICAL:
And this is my code:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# encoding=UTF-8
import scrapy, urlparse
from scrapy.http import Request
from scrapy.utils.response import get_base_url
from urlparse import urlparse, urljoin
from vacancies.items import JobItem
#We need that in order to force Slovenian pages instead of English pages. It happened at "http://www.g-gmi.si/gmiweb/" that only English pages were found and no Slovenian.
#from scrapy.conf import settings
#settings.overrides['DEFAULT_REQUEST_HEADERS'] = {'Accept':'text/html,application/xhtml+xml;q=0.9,*/*;q=0.8','Accept-Language':'sl',}
#settings.overrides['DEFAULT_REQUEST_HEADERS'] = {'Accept':'text/html,application/xhtml+xml;q=0.9,*/*;q=0.8','Accept-Language':'sl','en':q=0.8,}
class JobSpider(scrapy.Spider):
    name = "jobs"

    #Test sample of SLO companies
    start_urls = [
        "http://www.g-gmi.si/gmiweb/",
    ]
    #Result of the programme is this list of job vacancies webpages.
    jobs_urls = []

    def parse(self, response):
        response.selector.remove_namespaces()
        #We take all urls, they are marked by "href". These are either webpages on our website or new websites.
        urls = response.xpath('//@href').extract()
        #Base url.
        base_url = get_base_url(response)

        #Loop through all urls on the webpage.
        for url in urls:
            #If url represents a picture, a document, a compression ... we ignore it. We might have to change that, because some companies provide job vacancies information in PDF.
            if url.endswith((
                #images
                '.jpg', '.jpeg', '.png', '.gif', '.eps', '.ico', '.svg', '.tif', '.tiff',
                '.JPG', '.JPEG', '.PNG', '.GIF', '.EPS', '.ICO', '.SVG', '.TIF', '.TIFF',
                #documents
                '.xls', '.ppt', '.doc', '.xlsx', '.pptx', '.docx', '.txt', '.csv', '.pdf', '.pd',
                '.XLS', '.PPT', '.DOC', '.XLSX', '.PPTX', '.DOCX', '.TXT', '.CSV', '.PDF', '.PD',
                #music and video
                '.mp3', '.mp4', '.mpg', '.ai', '.avi', '.swf',
                '.MP3', '.MP4', '.MPG', '.AI', '.AVI', '.SWF',
                #compressions and other
                '.zip', '.rar', '.css', '.flv', '.php',
                '.ZIP', '.RAR', '.CSS', '.FLV', '.PHP',
            )):
                continue

            #If url includes characters like ?, %, &, # ... it is LIKELY NOT to be the one we are looking for and we ignore it.
            #However in this case we exclude good urls like http://www.mdm.si/company#employment
            if any(x in url for x in ['?', '%', '&', '#']):
                continue

            #Ignore ftp.
            if url.startswith("ftp"):
                continue

            #We need to save the original url for xpath, in case we change it later (join it with base_url).
            url_xpath = url

            #If url doesn't start with "http", it is a relative url, and we add the base url to get an absolute url.
            # -- It is true that we may get some strange urls, but it is fine for now.
            if not url.startswith("http"):
                url = urljoin(base_url, url)

            #We don't want to go to other websites. We want to stay on our website, so we keep only urls with the domain (netloc) of the company we are investigating.
            if urlparse(url).netloc == urlparse(base_url).netloc:

                #The main part. We look for webpages whose urls include one of the employment words as strings.
                # -- Instruction.
                # -- Users in other languages, please insert employment words in your own language, like jobs, vacancies, career, employment ... --
                if any(x in url for x in [
                    'zaposlovanje',
                    'Zaposlovanje',
                    'zaposlitev',
                    'Zaposlitev',
                    'zaposlitve',
                    'Zaposlitve',
                    'zaposlimo',
                    'Zaposlimo',
                    'kariera',
                    'Kariera',
                    'delovna-mesta',
                    'delovna_mesta',
                    'pridruzi-se',
                    'pridruzi_se',
                    'prijava-za-delo',
                    'prijava_za_delo',
                    'oglas',
                    'Oglas',
                    'iscemo',
                    'Iscemo',
                    'careers',
                    'Careers',
                    'jobs',
                    'Jobs',
                    'employment',
                    'Employment',
                ]):
                    #This is an additional filter, suggested by Dan Wu, to improve accuracy. We check the text of the link as well.
                    texts = response.xpath('//a[@href="%s"]/text()' % url_xpath).extract()

                    #1. Texts are empty.
                    if texts == []:
                        print "No text for url: " + str(url)

                        #We found a url that includes one of the magic words.
                        #We check the url, if we have found it before. If it is new, we add it to the list "jobs_urls".
                        if url not in self.jobs_urls:
                            self.jobs_urls.append(url)
                            item = JobItem()
                            #item["text"] = text
                            item["url"] = url
                            #We return the item.
                            yield item

                    # 2. There are texts, one or more.
                    else:
                        #For the same partial url several texts are possible.
                        for text in texts:
                            if any(x in text for x in [
                                'zaposlovanje',
                                'Zaposlovanje',
                                'zaposlitev',
                                'Zaposlitev',
                                'zaposlitve',
                                'Zaposlitve',
                                'zaposlimo',
                                'Zaposlimo',
                                'ZAPOSLIMO',
                                'kariera',
                                'Kariera',
                                'delovna-mesta',
                                'delovna_mesta',
                                'pridruzi-se',
                                'pridruzi_se',
                                'oglas',
                                'Oglas',
                                'iscemo',
                                'Iscemo',
                                'ISCEMO',
                                'careers',
                                'Careers',
                                'jobs',
                                'Jobs',
                                'employment',
                                'Employment',
                            ]):
                                #We found a url that includes one of the magic words and whose text also includes a magic word.
                                #We check the url, if we have found it before. If it is new, we add it to the list "jobs_urls".
                                if url not in self.jobs_urls:
                                    self.jobs_urls.append(url)
                                    item = JobItem()
                                    item["text"] = text
                                    item["url"] = url
                                    #We return the item.
                                    yield item

                #We don't add an "else" branch because we want to further explore the employment webpage to find possible new employment webpages.
                #We keep looking for employment webpages until we reach the DEPTH_LIMIT that we have set in settings.py.
                yield Request(url, callback=self.parse)
# We run the programme in the command line with this command:
# scrapy crawl jobs -o jobs.csv -t csv --logfile log.txt
# We get two output files
# 1) jobs.csv
# 2) log.txt
# Then we manually put one of employment urls from jobs.csv into read.py
I would be glad if you could give some advice on how to run this thing. Thank you, Marko

You should always install stuff into a virtualenv. Once you've got a virtualenv and it's active, do:
pip install --upgrade twisted pypiwin32
and you should get the dependency that makes Twisted support stdio on the Windows platform.
To get all the goodies you might try
pip install --upgrade twisted[windows_platform]
but you may run into problems with gmp.h if you try that, and you don't need most of it to do what you're trying to do.
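As a quick sanity check after the install (a minimal sketch, assuming you run it with the virtualenv's Python), try importing the pywin32 module that was failing in the traceback above:
# If this import succeeds, the DLL that twisted.internet._win32stdio needs is
# on the path and `scrapy crawl jobs` should get past the "DLL load failed" error.
# If it still raises ImportError, the pywin32 install is still broken.
import win32api
print "win32api imported OK"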

Related

sqlalchemy showing an error related to os.getenv()

I am working with SQLAlchemy and my code creates the engine with
create_engine(os.getenv("DATABASE_URL")).
When I run the program it shows an error saying:
"Could not parse rfc1738 URL from string 'C:\Program Files\PostgreSQL\10\data'"
Code
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

#engine = create_engine('sqlite://')
engine = create_engine(os.getenv("DATABASE_URL"))
db = scoped_session(sessionmaker(bind=engine))

def main():
    flights = db.execute("SELECT origin, destination, duration FROM flights").fetchall()
    for flight in flights:
        print(f"{flight.origin} to {flight.destination}, {flight.duration} minutes.")

if __name__ == "__main__":
    main()
Error
Traceback (most recent call last): File "list.py", line 7, in
engine = create_engine(os.getenv("DATABASE_URL")) File "C:\Users\Amber
Bhanarkar\AppData\Local\Programs\Python\Python36\lib\site-packages\sqlalchemy\engine__init__.py",
line 424, in create_engine
return strategy.create(*args, **kwargs) File "C:\Users\Amber Bhanarkar\AppData\Local\Programs\Python\Python36\lib\site-packages\sqlalchemy\engine\strategies.py",
line 50, in create
u = url.make_url(name_or_url) File "C:\Users\Amber Bhanarkar\AppData\Local\Programs\Python\Python36\lib\site-packages\sqlalchemy\engine\url.py",
line 211, in make_url
return _parse_rfc1738_args(name_or_url) File "C:\Users\Amber Bhanarkar\AppData\Local\Programs\Python\Python36\lib\site-packages\sqlalchemy\engine\url.py",
line 270, in _parse_rfc1738_args
"Could not parse rfc1738 URL from string '%s'" % name) sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string
'C:\Program Files\PostgreSQL\10\data'
Please someone help me resolving this issue.
The engine URL shouldn't be a path to your local Postgres installation but a string that tells SQLAlchemy how to connect to the database. It has the following format:
postgresql://username:password@server:port/databasename
I'm also watching the CS50 course videos at the moment. You should try to just use create_engine("your_database_url").
If you still want to use create_engine(os.getenv("DATABASE_URL")), make sure you can first log in from the command line with psql $DATABASE_URL.
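For example, a minimal sketch (the username, password, and database name below are placeholders, not values taken from the question):
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# Normally DATABASE_URL is set in the shell (set DATABASE_URL=... on Windows,
# export DATABASE_URL=... on Linux); it is hard-coded here only to show the
# expected rfc1738 format.
os.environ.setdefault("DATABASE_URL", "postgresql://username:password@localhost:5432/mydb")

engine = create_engine(os.getenv("DATABASE_URL"))
db = scoped_session(sessionmaker(bind=engine))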
Using psql $DATABASE_URL works for me, but I'm still getting the same error on create_engine("your_database_url"):
(project1) C:\Users\Gregory Goufan\Downloads\CS50s Web Programming with Python and JavaScript\project1\project1\project1>psql postgresql://openpg:openpgpwd@localhost:5432/postgres
psql (9.5.8)
WARNING: Console code page (437) differs from Windows code page (1252)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.
postgres=#
But it is true that putting the url directly into the create_engine function does something strange, because it is indeed the string that is returned:
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ' 'postgresql://openpg:openpgpwd@localhost:5432/postgres''

Scrapy download files from FTP

I need to download a group of CSV files from an FTP server using Scrapy. But first I need to scrape a website (https://www.douglas.co.us/assessor/data-downloads/) in order to get the urls of the CSV files on the FTP server. I read about how to download files in the documentation (Downloading and processing files and images).
settings
custom_settings = {
    'ITEM_PIPELINES': {
        'scrapy.pipelines.files.FilesPipeline': 1,
    },
    'FILES_STORE': os.path.dirname(os.path.abspath(__file__))
}
parse
def parse(self, response):
    self.logger.info("In parse method!!!")
    # Property Ownership
    property_ownership = response.xpath("//a[contains(., 'Property Ownership')]/@href").extract_first()
    # Property Location
    property_location = response.xpath("//a[contains(., 'Property Location')]/@href").extract_first()
    # Property Improvements
    property_improvements = response.xpath("//a[contains(., 'Property Improvements')]/@href").extract_first()
    # Property Value
    property_value = response.xpath("//a[contains(., 'Property Value')]/@href").extract_first()

    item = FiledownloadItem()
    self.insert_keyvalue(item, "file_urls",
                         [property_ownership, property_location, property_improvements, property_value])
    yield item
But I got the following error
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/pipelines/media.py", line 79, in process_item
    requests = arg_to_iter(self.get_media_requests(item, info))
  File "/usr/local/lib/python2.7/dist-packages/scrapy/pipelines/files.py", line 382, in get_media_requests
    return [Request(x) for x in item.get(self.files_urls_field, [])]
  File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 25, in __init__
    self._set_url(url)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 58, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: [
The best explanation of my problem is the answer to this question: scrapy error :exceptions.ValueError: Missing scheme in request url:. It explains that the problem is that the urls to download are missing the "http://" scheme.
What should I do in my case? Can I use FilesPipeline, or do I need to do something different?
Thanks in advance.
ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: [
According to the traceback, scrapy thinks your file url is '['.
My best guess is that you have an error in the insert_keyvalue() method.
Also, why have a method for this? Simple assignment should work.
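A minimal sketch of that suggestion follows. The spider name and the items import path are assumptions, and urljoin() is added so every scraped href carries an explicit scheme before FilesPipeline sees it:
from urlparse import urljoin  # Python 2; use urllib.parse.urljoin on Python 3

import scrapy
from filedownload.items import FiledownloadItem  # hypothetical project path

class AssessorSpider(scrapy.Spider):
    name = 'assessor_files'
    start_urls = ['https://www.douglas.co.us/assessor/data-downloads/']

    def parse(self, response):
        hrefs = [
            response.xpath("//a[contains(., 'Property Ownership')]/@href").extract_first(),
            response.xpath("//a[contains(., 'Property Location')]/@href").extract_first(),
            response.xpath("//a[contains(., 'Property Improvements')]/@href").extract_first(),
            response.xpath("//a[contains(., 'Property Value')]/@href").extract_first(),
        ]
        item = FiledownloadItem()
        # Simple assignment instead of insert_keyvalue(); skip missing links and
        # make each url absolute so it has a scheme.
        item['file_urls'] = [urljoin(response.url, h) for h in hrefs if h]
        yield item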

Read XML file while it is being written (in Python)

I have to monitor an XML file being written by a tool that runs all day long. But the XML file is only properly completed and closed at the end of the day.
Same constraints as XML stream processing:
Parse an incomplete XML file on-the-fly and trigger actions
Keep track of the last position within the file to avoid processing it again from the beginning
In an answer to Need to read XML files as a stream using BeautifulSoup in Python, slezica suggests xml.sax, xml.etree.ElementTree and cElementTree. But my attempts to use xml.etree.ElementTree and cElementTree were not successful. There are also xml.dom, xml.parsers.expat and lxml, but I do not see support for "on-the-fly parsing".
I need more obvious examples...
I am currently using Python 2.7 on Linux, but I will migrate to Python 3.x, so please also provide tips on new Python 3.x features. I also use watchdog to detect XML file modifications, so you can optionally reuse the watchdog mechanism. Optionally, also support Windows.
Please provide easy-to-understand/maintain solutions. If it is too complex, I may just use tell()/seek() to move within the file, use a dumb text search in the raw XML, and finally extract the values using basic regexes.
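For reference, a minimal sketch of that tell()/seek() fallback combined with an incremental SAX parser (the file name and polling interval are arbitrary): remember the last offset, read only the newly appended bytes, and feed them to the parser without ever calling close().
import time
import xml.sax

class PrintHandler(xml.sax.handler.ContentHandler):
    def startElement(self, name, attrs):
        print 'start:', name

parser = xml.sax.make_parser()
parser.setContentHandler(PrintHandler())

offset = 0
while True:
    with open('out.xml') as f:
        f.seek(offset)        # resume where the previous pass stopped
        chunk = f.read()
        offset = f.tell()     # remember the position for the next pass
    if chunk:
        parser.feed(chunk)    # incremental parse; no close(), so no "no element found"
    time.sleep(1)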
XML sample:
<dfxml xmloutputversion='1.0'>
<creator version='1.0'>
<program>TCPFLOW</program>
<version>1.4.6</version>
</creator>
<configuration>
<fileobject>
<filename>file1</filename>
<filesize>288</filesize>
<tcpflow packets='12' srcport='1111' dstport='2222' family='2' />
</fileobject>
<fileobject>
<filename>file2</filename>
<filesize>352</filesize>
<tcpflow packets='12' srcport='3333' dstport='4444' family='2' />
</fileobject>
<fileobject>
<filename>file3</filename>
<filesize>456</filesize>
...
...
First test using SAX failed:
import xml.sax

class StreamHandler(xml.sax.handler.ContentHandler):

    def startElement(self, name, attrs):
        print 'start: name=', name

    def endElement(self, name):
        print 'end: name=', name
        if name == 'root':
            raise StopIteration

if __name__ == '__main__':
    parser = xml.sax.make_parser()
    parser.setContentHandler(StreamHandler())
    with open('f.xml') as f:
        parser.parse(f)
Shell:
$ while read line; do echo $line; sleep 1; done <i.xml >f.xml &
...
$ ./test-using-sax.py
start: name= dfxml
start: name= creator
start: name= program
end: name= program
start: name= version
end: name= version
Traceback (most recent call last):
File "./test-using-sax.py", line 17, in <module>
parser.parse(f)
File "/usr/lib64/python2.7/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/lib64/python2.7/xml/sax/xmlreader.py", line 125, in parse
self.close()
File "/usr/lib64/python2.7/xml/sax/expatreader.py", line 220, in close
self.feed("", isFinal = 1)
File "/usr/lib64/python2.7/xml/sax/expatreader.py", line 214, in feed
self._err_handler.fatalError(exc)
File "/usr/lib64/python2.7/xml/sax/handler.py", line 38, in fatalError
raise exception
xml.sax._exceptions.SAXParseException: report.xml:15:0: no element found
Since posting, I have found Peter Gibson's answer about the undocumented xml.etree.ElementTree.XMLTreeBuilder._parser.EndElementHandler.
This example is similar to the other one but uses xml.etree.ElementTree (and watchdog).
It does not work when ElementTree is replaced by cElementTree :-/
import time
import watchdog.events
import watchdog.observers
import xml.etree.ElementTree

class XmlFileEventHandler(watchdog.events.PatternMatchingEventHandler):

    def __init__(self):
        watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['*.xml'])
        self.xml_file = None
        self.parser = xml.etree.ElementTree.XMLTreeBuilder()

        def end_tag_event(tag):
            node = self.parser._end(tag)
            print 'tag=', tag, 'node=', node

        self.parser._parser.EndElementHandler = end_tag_event

    def on_modified(self, event):
        if not self.xml_file:
            self.xml_file = open(event.src_path)
        buffer = self.xml_file.read()
        if buffer:
            self.parser.feed(buffer)

if __name__ == '__main__':
    observer = watchdog.observers.Observer()
    event_handler = XmlFileEventHandler()
    observer.schedule(event_handler, path='.')
    try:
        observer.start()
        while True:
            time.sleep(10)
    finally:
        observer.stop()
        observer.join()
While the script is running, do not forget to touch one XML file, or simulate the on-the-fly writing using this one line script:
while read line; do echo $line; sleep 1; done <in.xml >out.xml &
For information: xml.etree.ElementTree.iterparse does not seem to support a file that is still being written. My test code:
from __future__ import print_function, division
import xml.etree.ElementTree

if __name__ == '__main__':
    context = xml.etree.ElementTree.iterparse('f.xml', events=('end',))
    for action, elem in context:
        print(action, elem.tag)
My output:
end program
end version
end creator
end filename
end filesize
end tcpflow
end fileobject
end filename
end filesize
end tcpflow
end fileobject
end filename
end filesize
Traceback (most recent call last):
File "./iter.py", line 9, in <module>
for action, elem in context:
File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1281, in next
self._root = self._parser.close()
File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1654, in close
self._raiseerror(v)
File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
raise err
xml.etree.ElementTree.ParseError: no element found: line 20, column 0
Three hours after posting my question, no answer had been received, but I finally implemented the simple example I was looking for.
My inspiration comes from saaj's answer, and the solution is based on xml.sax and watchdog.
from __future__ import print_function, division
import time
import watchdog.events
import watchdog.observers
import xml.sax

class XmlStreamHandler(xml.sax.handler.ContentHandler):

    def startElement(self, tag, attributes):
        print(tag, 'attributes=', attributes.items())
        self.tag = tag

    def characters(self, content):
        print(self.tag, 'content=', content)

class XmlFileEventHandler(watchdog.events.PatternMatchingEventHandler):

    def __init__(self):
        watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['*.xml'])
        self.file = None
        self.parser = xml.sax.make_parser()
        self.parser.setContentHandler(XmlStreamHandler())

    def on_modified(self, event):
        if not self.file:
            self.file = open(event.src_path)
        self.parser.feed(self.file.read())

if __name__ == '__main__':
    observer = watchdog.observers.Observer()
    event_handler = XmlFileEventHandler()
    observer.schedule(event_handler, path='.')
    try:
        observer.start()
        while True:
            time.sleep(10)
    finally:
        observer.stop()
        observer.join()
While the script is running, do not forget to touch one XML file, or simulate the on-the-fly writing using the following command:
while read line; do echo $line; sleep 1; done <in.xml >out.xml &
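As a Python 3.x tip (a sketch, not tested against the exact tool output above): since Python 3.4, xml.etree.ElementTree.XMLPullParser accepts data incrementally, so a partially written file can be fed chunk by chunk without triggering the "no element found" error that iterparse raises.
import xml.etree.ElementTree as ET

parser = ET.XMLPullParser(events=('start', 'end'))
with open('out.xml') as f:     # feed whatever has been written so far
    parser.feed(f.read())
for event, elem in parser.read_events():
    print(event, elem.tag)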

Python Scrapy - Run Spider

Running Python 2.7 on a Windows machine ... Attempting to use Scrapy,
following the basic Scrapy tutorial at http://doc.scrapy.org/en/latest/intro/overview.html
I've created the following spider and saved it as Test2 at C:\Python27\Scrapy
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract_first(),
            'votes': response.css('.question .vote-count-post::text').extract_first(),
            'body': response.css('.question .post-text').extract_first(),
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
The next step tells me to run the spider using
scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
But I have no idea where to run that line of code.
I am used to running a print or a store-to-csv command at the end of my Python file in order to retrieve results.
I'm sure this is an easy fix, but I'm not getting it. Thanks in advance.
You will need to execute the runspider command in whatever command line utility you are using, e.g. Cygwin, cmd, etc.
That command will create a file called top-stackoverflow-questions.json in the directory in which you run the command.
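If you would rather start the crawl from the bottom of the Python file itself (closer to the print/store-to-csv workflow you are used to), a sketch using Scrapy's CrawlerProcess API looks like this; append it to the same file that defines StackOverflowSpider (the feed settings shown are just one way to get the JSON output):
from scrapy.crawler import CrawlerProcess

if __name__ == '__main__':
    process = CrawlerProcess({
        'FEED_FORMAT': 'json',
        'FEED_URI': 'top-stackoverflow-questions.json',
    })
    process.crawl(StackOverflowSpider)
    process.start()  # the script blocks here until the crawl is finished
Then running the file with the plain python command starts the crawl and writes the JSON file.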

Django v1.6 debug-toolbar Middleware Error No .rsplit()

I am trying to use django-debug-toolbar with my Django application, and it worked with Django v1.5. However, I am now trying to migrate the system to Django v1.6, and it generates the following error when I try to load any page.
python manage.py runserver
Validating models...
0 errors found
December 21, 2013 - 22:53:18
Django version 1.6.1, using settings 'MySite.settings'
Starting development server at http://XXX.XXX.XXX.XXX:XXXX/
Quit the server with CONTROL-C.
Internal Server Error: /
Traceback (most recent call last):
File "/home/user/django-env/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 90, in get_response
response = middleware_method(request)
File "/home/user/django-env/local/lib/python2.7/site-packages/debug_toolbar/middleware.py", line 45, in process_request
mod_path, func_name = func_path.rsplit('.', 1)
AttributeError: 'function' object has no attribute 'rsplit'
[21/Dec/2013 22:53:21] "GET / HTTP/1.1" 500 58172
The source has this note, but I'm not sure what they mean precisely (import_by_path):
def process_request(self, request):
    # Decide whether the toolbar is active for this request.
    func_path = dt_settings.CONFIG['SHOW_TOOLBAR_CALLBACK']
    # Replace this with import_by_path in Django >= 1.6.
    mod_path, func_name = func_path.rsplit('.', 1)
    show_toolbar = getattr(import_module(mod_path), func_name)
    if not show_toolbar(request):
        return
Also, while we're at it. Any idea what this message means as well?
/home/user/django-env/local/lib/python2.7/site-packages/debug_toolbar/settings.py:68: DeprecationWarning: SHOW_TOOLBAR_CALLBACK is now a dotted path. Update your DEBUG_TOOLBAR_CONFIG setting.
"DEBUG_TOOLBAR_CONFIG setting.", DeprecationWarning)
My settings.py:
def custom_show_toolbar(request):
    return True  # Always show toolbar, for example purposes only.

DEBUG_TOOLBAR_CONFIG = {
    'INTERCEPT_REDIRECTS': False,
    'SHOW_TOOLBAR_CALLBACK': custom_show_toolbar,
    'INSERT_BEFORE': 'div',
    'ENABLE_STACKTRACES': True,
}
The problem is described by the message you get (which arguably should be a TypeError rather than a deprecation warning; that is probably a bug in debug_toolbar):
SHOW_TOOLBAR_CALLBACK is now a dotted path. Update your DEBUG_TOOLBAR_CONFIG setting
The SHOW_TOOLBAR_CALLBACK setting used to be a callable, but now it is a dotted path to a callable: a string. The code you quoted tries to rsplit the value of SHOW_TOOLBAR_CALLBACK, but because you can't rsplit a function object, you get an error.
To solve this, put your callback into another python file, for example your_site/toolbar_stuff.py, and adjust the setting to 'SHOW_TOOLBAR_CALLBACK': 'your_site.toolbar_stuff.custom_show_toolbar'. You can test if this works beforehand by trying in the shell:
from your_site.toolbar_stuff import custom_show_toolbar
If that works, your new setting should work, too.
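Concretely, the adjusted configuration would look something like this (the your_site/toolbar_stuff.py module path is just the example name used above):
# your_site/toolbar_stuff.py
def custom_show_toolbar(request):
    return True  # Always show toolbar, for example purposes only.

# settings.py
DEBUG_TOOLBAR_CONFIG = {
    'INTERCEPT_REDIRECTS': False,
    'SHOW_TOOLBAR_CALLBACK': 'your_site.toolbar_stuff.custom_show_toolbar',  # dotted path string
    'INSERT_BEFORE': 'div',
    'ENABLE_STACKTRACES': True,
}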