I am trying to get ElasticSearch / Haystack set up on my local dev environment (a Vagrant VM running Ubuntu 12.04), and I can't work out the re-indexing process.
ES is running, and I have created a new index (I am using elasticsearch-head to view index status in the browser). I can create an index and query it, so I know that ES is working.
My problem is with the Haystack rebuild_index command:
(.venv)vagrant@precise32:/app$ foreman run ./manage.py rebuild_index
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
DEBUG Making a request equivalent to this: curl -XDELETE 'http://127.0.0.1:9200/test_app' -d '""'
INFO Starting new HTTP connection (1): 127.0.0.1
DEBUG "DELETE /test_app HTTP/1.1" 200 31
DEBUG response status: 200
DEBUG got response {u'acknowledged': True, u'ok': True}
DEBUG Making a request equivalent to this: curl -XPOST 'http://127.0.0.1:9200/test_app/_refresh' -d '""'
DEBUG "POST /test_app/_refresh HTTP/1.1" 404 66
DEBUG response status: 404
Failed to clear Elasticsearch index: (404, u'IndexMissingException[[test_app] missing]')
ERROR Failed to clear Elasticsearch index: (404, u'IndexMissingException[[test_app] missing]')
All documents removed.
Looking at this logging, it seems as if Haystack is attempting to refresh an index that it has just deleted, which would always fail.
What am I doing wrong?
[UPDATE 1]
If I split the POST requests I can get this to run:
(.venv)vagrant@precise32:/app$ curl -XPOST 'http://127.0.0.1:9200/test_app/'
{"ok":true,"acknowledged":true}
(.venv)vagrant@precise32:/app$ curl -XPOST 'http://127.0.0.1:9200/test_app/_refresh' -d '""'
{"ok":true,"_shards":{"total":10,"successful":5,"failed":0}}
[UPDATE 2]
Digging in to the code, the ES backend method that is called when running clear_index is:
def clear(self, models=[], commit=True):
    [...]
    if not models:
        self.conn.delete_index(self.index_name)
    else:
        [...]
    if commit:
        self.conn.refresh(index=self.index_name)
This looks wrong, as it will call conn.refresh on the index that it has just deleted.
[UPDATE 3]
I think the errors above may be a red herring, as the management commands ignore them and continue, producing this error, which I think is more serious:
(.venv)vagrant@precise32:/app$ foreman run ./manage.py update_index --verbosity=3
Skipping '<class 'django.contrib.auth.models.Permission'>' - no index.
Skipping '<class 'django.contrib.auth.models.Group'>' - no index.
Skipping '<class 'django.contrib.auth.models.User'>' - no index.
Skipping '<class 'django.contrib.contenttypes.models.ContentType'>' - no index.
Skipping '<class 'django.contrib.sessions.models.Session'>' - no index.
Skipping '<class 'django.contrib.sites.models.Site'>' - no index.
Skipping '<class 'django.contrib.admin.models.LogEntry'>' - no index.
Skipping '<class 'django.contrib.flatpages.models.FlatPage'>' - no index.
ERROR Error updating test_app using default
Traceback (most recent call last):
File "/home/vagrant/.venv/src/django-haystack/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/home/vagrant/.venv/src/django-haystack/haystack/management/commands/update_index.py", line 239, in update_backend
end_date=self.end_date)
File "/home/vagrant/.venv/src/django-haystack/haystack/indexes.py", line 157, in build_queryset
index_qs = self.index_queryset(using=using)
TypeError: index_queryset() got an unexpected keyword argument 'using'
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/vagrant/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/home/vagrant/.venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/vagrant/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/vagrant/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/home/vagrant/.venv/src/django-haystack/haystack/management/commands/update_index.py", line 184, in handle
return super(Command, self).handle(*items, **options)
File "/home/vagrant/.venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 341, in handle
label_output = self.handle_label(label, **options)
File "/home/vagrant/.venv/src/django-haystack/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/home/vagrant/.venv/src/django-haystack/haystack/management/commands/update_index.py", line 239, in update_backend
end_date=self.end_date)
File "/home/vagrant/.venv/src/django-haystack/haystack/indexes.py", line 157, in build_queryset
index_qs = self.index_queryset(using=using)
TypeError: index_queryset() got an unexpected keyword argument 'using'
[UPDATE 4]
OK, so it's my fault: I was using an old search_indexes.py file, and my index_queryset() method was incorrect. I won't close this, as it may be useful for others.
Answering this one myself, although it's really just an admission of my own mistake.
I carried a search_indexes.py file from the 1.x version of Haystack into a new branch of our project that was using the 2.x version of Haystack, which is configured slightly differently. In the new version, the index_queryset() method now requires a new using parameter (defaults to None). The older version didn't require this.
The new signature should therefore be:
def index_queryset(self, using=None):
pass
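For anyone else making the same migration, here is a minimal sketch of what a Haystack 2.x search_indexes.py can look like with the updated signature. MyModel and the text field are hypothetical placeholders, not something from the original project:

from haystack import indexes
from myapp.models import MyModel

class MyModelIndex(indexes.SearchIndex, indexes.Indexable):
    # 'text' is the conventional document field; use_template points at a
    # template that renders the searchable content for each object.
    text = indexes.CharField(document=True, use_template=True)

    def get_model(self):
        return MyModel

    def index_queryset(self, using=None):
        # Haystack 2.x passes the connection alias as 'using'; accept it
        # even if you only run a single search backend.
        return self.get_model().objects.all()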
Related
I want to install the django-mailgun library. I followed this tutorial: https://learnbatta.com/blog/django-send-email-using-mailgun-api-94/
I tried:
pip install django-mailgun --verbose
Log of command:
Using pip 21.0.1 from /home/alex/.local/lib/python3.6/site-packages/pip (python 3.6)
Defaulting to user installation because normal site-packages is not writeable
Created temporary directory: /tmp/pip-ephem-wheel-cache-r4kdc56j
Created temporary directory: /tmp/pip-req-tracker-s1r35ye8
Initialized build tracking at /tmp/pip-req-tracker-s1r35ye8
Created build tracker: /tmp/pip-req-tracker-s1r35ye8
Entered build tracker: /tmp/pip-req-tracker-s1r35ye8
Created temporary directory: /tmp/pip-install-14dd83we
1 location(s) to search for versions of django-mailgun:
* https://pypi.org/simple/django-mailgun/
Fetching project page and analyzing links: https://pypi.org/simple/django-mailgun/
Getting page https://pypi.org/simple/django-mailgun/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/django-mailgun/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/django-mailgun/ HTTP/1.1" 404 13
Status code 404 not in (200, 203, 300, 301)
Could not fetch URL https://pypi.org/simple/django-mailgun/: 404 Client Error: Not Found for url: https://pypi.org/simple/django-mailgun/ - skipping
Given no hashes to check 0 links for project 'django-mailgun': discarding no candidates
ERROR: Could not find a version that satisfies the requirement django-mailgun
ERROR: No matching distribution found for django-mailgun
Exception information:
Traceback (most recent call last):
File "/home/alex/.local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 171, in _merge_into_criterion
crit = self.state.criteria[name]
KeyError: 'django-mailgun'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alex/.local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 318, in resolve
name, crit = self._merge_into_criterion(r, parent=None)
File "/home/alex/.local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _merge_into_criterion
crit = Criterion.from_requirement(self._p, requirement, parent)
File "/home/alex/.local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 83, in from_requirement
raise RequirementsConflicted(criterion)
pip._vendor.resolvelib.resolvers.RequirementsConflicted: Requirements conflict: SpecifierRequirement('django-mailgun')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alex/.local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 122, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep,
File "/home/alex/.local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 453, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/alex/.local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 320, in resolve
raise ResolutionImpossible(e.criterion.information)
pip._vendor.resolvelib.resolvers.ResolutionImpossible: [RequirementInformation(requirement=SpecifierRequirement('django-mailgun'), parent=None)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/alex/.local/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 189, in _main
status = self.run(options, args)
File "/home/alex/.local/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 178, in wrapper
return func(self, options, args)
File "/home/alex/.local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 317, in run
reqs, check_supported_wheels=not options.target_dir
File "/home/alex/.local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 127, in resolve
six.raise_from(error, e)
File "<string>", line 3, in raise_from
pip._internal.exceptions.DistributionNotFound: No matching distribution found for django-mailgun
Removed build tracker: '/tmp/pip-req-tracker-wgh79mc3'
OS -- Ubuntu 16.04 LTS
That command is wrong; you need to use the official command instead: pip install django-mailgun-mime
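For completeness, here is a rough settings sketch for wiring the backend up once the package is installed. The backend path and setting names below follow the original django-mailgun package and are assumptions here; check the django-mailgun-mime README for the exact values it expects:

# settings.py -- assumed names, taken from the original django-mailgun package;
# see the django-mailgun-mime README for the real ones.
EMAIL_BACKEND = 'django_mailgun.MailgunBackend'
MAILGUN_ACCESS_KEY = 'key-xxxxxxxxxxxxxxxxxxxx'
MAILGUN_SERVER_NAME = 'example.com'

With that in place, sending mail goes through Django's normal API:

from django.core.mail import send_mail
send_mail('Subject', 'Body', 'from@example.com', ['to@example.com'])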
I'm trying to get a Django project started using cookiecutter-django and can't seem to get it to generate anything.
I'm using Python 3.6, Django 2.0.5, and cookiecutter 1.6.0 (I then created a virtualenv and entered a new, blank directory).
So I enter this command:
cookiecutter https://github.com/pydanny/cookiecutter-django
and get this error traceback:
Traceback (most recent call last):
File "c:\python\python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\python\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Python\python36\Scripts\cookiecutter.exe\__main__.py", line 9, in <module>
File "c:\python\python36\lib\site-packages\click\core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "c:\python\python36\lib\site-packages\click\core.py", line 697, in main
rv = self.invoke(ctx)
File "c:\python\python36\lib\site-packages\click\core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "c:\python\python36\lib\site-packages\click\core.py", line 535, in invoke
return callback(*args, **kwargs)
File "c:\python\python36\lib\site-packages\cookiecutter\cli.py", line 120, in main
password=os.environ.get('COOKIECUTTER_REPO_PASSWORD')
File "c:\python\python36\lib\site-packages\cookiecutter\main.py", line 63, in cookiecutter
password=password
File "c:\python\python36\lib\site-packages\cookiecutter\repository.py", line 103, in determine_repo_dir
no_input=no_input,
File "c:\python\python36\lib\site-packages\cookiecutter\vcs.py", line 99, in clone
stderr=subprocess.STDOUT,
File "c:\python\python36\lib\subprocess.py", line 336, in check_output
**kwargs).stdout
File "c:\python\python36\lib\subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'clone', 'https://github.com/pydanny/cookiecutter-django']' returned non-zero exit status 128.
OK, I figured out how to get this to work.
I used GitHub Desktop:
from the cookiecutter-django repository, right-click
and open it in Git Shell;
this opens a PowerShell window.
cd to the directory where the project will be placed, then run
cookiecutter https://github.com/pydanny/cookiecutter-django
and it works.
I'm not sure exactly why this works when regular CMD and elevated CMD do not, but it was the only way I could get it to work.
This is a permission issue with GitHub due to the need to set up SSH keys. By the way, I'm using Ubuntu 12.
https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/ - create a key first on your machine using the instructions in the link. Once you have your SSH key, proceed to step 2. (Step 2 is indicated in the first link as the last step.)
https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account - add the generated SSH key to your GitHub account.
I'm trying to set up a static Ghost blog with GitHub hosting, using the Buster static generator. I've tried various instructions, including:
https://stefanscherer.github.io/setup-ghost-for-github-pages/
http://blog.sunnyg.io/2015/09/24/ghost-with-github/
But when I get to the "buster generate" command, I get the following output in the terminal.
The blog runs fine locally.
Can anyone point me in the right direction?
buster generate
--2016-03-07 23:53:11-- http://localhost:2368/
Resolving localhost... ::1, 127.0.0.1
Connecting to localhost|::1|:2368... failed: Connection refused.
Connecting to localhost|127.0.0.1|:2368... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4508 (4.4K) [text/html]
Saving to: '/Users/philip/Development/Node/ghost-0.7.8/static/index.html'
fixing links in /Users/philip/Development/Node/ghost-0.7.8/static/index.html
Traceback (most recent call last):
File "/usr/local/bin/buster", line 9, in <module>
load_entry_point('buster==0.1.3', 'console_scripts', 'buster')()
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/buster/buster.py", line 90, in main
newtext = fixLinks(filetext, parser)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/buster/buster.py", line 64, in fixLinks
d = PyQuery(bytes(bytearray(text, encoding='utf-8')), parser=parser)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyquery/pyquery.py", line 226, in __init__
elements = fromstring(context, self.parser)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyquery/pyquery.py", line 90, in fromstring
result = custom_parser(context)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/lxml/html/__init__.py", line 867, in fromstring
doc = document_fromstring(html, parser=parser, base_url=base_url, **kw)
File "/usr/local/Cellar/python/2.7.8_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/lxml/html/__init__.py", line 752, in document_fromstring
value = etree.fromstring(html, parser, **kw)
File "src/lxml/lxml.etree.pyx", line 3213, in lxml.etree.fromstring (src/lxml/lxml.etree.c:82934)
File "src/lxml/parser.pxi", line 1819, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:124533)
File "src/lxml/parser.pxi", line 1707, in lxml.etree._parseDoc (src/lxml/lxml.etree.c:123074)
File "src/lxml/parser.pxi", line 1079, in lxml.etree._BaseParser._parseDoc (src/lxml/lxml.etree.c:117114)
File "src/lxml/parser.pxi", line 573, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:110510)
File "src/lxml/parser.pxi", line 683, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:112276)
File "src/lxml/parser.pxi", line 624, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:111367)lxml.etree.XMLSyntaxError: None
As stated in the comment, I did get it working; however, instead of just doing the 'Quick Install' recommended by many, I went the route of the 'Developer Install' guide here https://github.com/TryGhost/Ghost, using the Stable branch.
After you run through that (with your local server running), run the following in another terminal:
$ buster setup
<Enter git repo>
$ buster generate --domain=localhost:2368
$ buster deploy (or as most sane people prefer, just git push)
Full instructions here: http://phil-a.github.io/getting-ghost-running-on-github-with-buster
I'm trying to get into the new N1QL queries for Couchbase in Python.
I have my database set up in Couchbase 4.0.0.
My initial try was to retrieve all documents like this:
from couchbase.bucket import Bucket
bucket = Bucket('couchbase://localhost/dafault')
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
for row in bucket.n1ql_query('SELECT * FROM default'):
    print row
But this produces an OperationNotSupportedError:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 2357, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1777, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/my_user/python_tests/test_n1ql.py", line 9, in <module>
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 215, in execute
for _ in self:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 235, in __iter__
self._start()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 180, in _start
self._mres = self._parent._n1ql_query(self._params.encoded)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule n1ql query, C Source=(src/n1ql.c,82)>
Here are the version numbers of everything I use:
Couchbase Server: 4.0.0
couchbase python library: 2.0.2
cbc: 2.5.1
python: 2.7.8
gcc: 4.2.1
Does anyone have an idea what might have gone wrong here? I could not find any solution to this problem so far.
There was another ticket for Node.js where the same issue happened. There was a proposal to enable N1QL for the specific bucket first. Is this also needed in Python?
It would seem you didn't configure any cluster nodes with the Query or Index services. As such, the error returned is one that indicates no nodes are available.
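Once at least one node is running the query and index services, the code from the question should work as written. A minimal sketch with the couchbase 2.x Python SDK, assuming a bucket named default:

from couchbase.bucket import Bucket
from couchbase.n1ql import N1QLQuery

bucket = Bucket('couchbase://localhost/default')

# Create the primary index once (this errors if it already exists),
# then query the bucket.
bucket.n1ql_query('CREATE PRIMARY INDEX ON `default`').execute()
for row in bucket.n1ql_query(N1QLQuery('SELECT * FROM `default` LIMIT 5')):
    print row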
I also got a similar error while trying to create a primary index.
Create a primary index...
Traceback (most recent call last):
File "post-upgrade-test.py", line 45, in <module>
mgr.n1ql_index_create_primary(ignore_exists=True)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 428, in n1ql_index_create_primary
'', defer=defer, primary=True, ignore_exists=ignore_exists)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 412, in n1ql_index_create
return IxmgmtRequest(self._cb, 'create', info, **options).execute()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 160, in execute
return [x for x in self]
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 144, in __iter__
self._start()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 132, in _start
self._cmd, index_to_rawjson(self._index), **self._options)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule ixmgmt operation, C Source=(src/ixmgmt.c,98)>
Adding query and index nodes to the cluster solved the issue.
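For reference, with the services in place, the call from the traceback goes through; a rough sketch, again assuming a bucket named default:

from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/default')
mgr = bucket.bucket_manager()
# ignore_exists=True makes the call a no-op if the primary index is already there.
mgr.n1ql_index_create_primary(ignore_exists=True)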
I have been using Apache Solr for quite some time and only recently started running into some severe issues with it. I'm using it with Haystack in a Django project. When I run a search from manage.py shell, I'm getting the error below:
>>> from haystack.query import SearchQuerySet
>>> emps = SearchQuerySet().filter(django_ct='web.employer').filter(name__icontains='Mi')[:10]
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/haystack/query.py", line 241, in __getitem__
self._fill_cache(start, bound)
File "/usr/local/lib/python2.7/dist-packages/haystack/query.py", line 140, in _fill_cache
results = self.query.get_results(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/__init__.py", line 469, in get_results
self.run(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/solr_backend.py", line 501, in run
results = self.backend.search(final_query, **search_kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/__init__.py", line 47, in wrapper
return func(obj, query_string, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/solr_backend.py", line 202, in search
raw_results = self.conn.search(query_string, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 578, in search
response = self._select(params)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 308, in _select
return self._send_request('get', path)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 293, in _send_request
error_message = self._extract_error(resp)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 372, in _extract_error
reason, full_html = self._scrape_response(resp.headers, resp.content)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 404, in _scrape_response
p_nodes = body_node.cssselect('p')
AttributeError: 'NoneType' object has no attribute 'cssselect'
I tried reinstalling haystack, lxml, cssselect, and pysolr, and I'm still getting these errors. Is there anything else I can try? Thanks for any help!
I also tried reading a few other SO questions, including this one:
XML error object has no attribute 'cssselect'
It seems like the issue is with pysolr. You might find some help here.
I had the same issue persist even after bringing pysolr and lxml up to the latest versions.
It turned out to be because I was not using the Haystack-generated schema (the one produced by the build_solr_schema management command), which has a few additional fields compared to the default Solr one.
You can confirm whether this is the case by looking at your Solr logs.
It is an issue with pysolr; it still hasn't been fixed as of 3.3.0.
The only alternative would be to override the pysolr code and make adjustments for when Solr returns a response with status != 200.
You can check whether the response has a body attribute and make adjustments accordingly, as in the sketch below.
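A rough sketch of that kind of workaround, until you are on a pysolr release where this is fixed; the method name and signature are taken from the traceback above, so treat it as illustrative only:

import pysolr

_original_scrape = pysolr.Solr._scrape_response

def _safe_scrape_response(self, headers, response):
    # Guard pysolr's error scraping: if the error page has no parseable
    # <body>, fall back to the raw response instead of raising AttributeError.
    try:
        return _original_scrape(self, headers, response)
    except AttributeError:
        return response, response

pysolr.Solr._scrape_response = _safe_scrape_response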