Wagtail sitemap produces VariableDoesNotExist error in console - django

We've noticed the error below cropping up when the Wagtail sitemap.xml is generated. We haven't modified the sitemap, and it does work; the errors only appear in the console/logs locally. Our server person wants to know whether we should worry about them or not.
Exception while resolving variable 'priority' in template 'wagtailsitemaps/sitemap.xml'.
VariableDoesNotExist: Failed lookup for key [priority] in u"{u'lastmod': datetime.datetime(2015, 7, 22, 10, 57, 43, 759421, tzinfo=<UTC>), u'location': u'http://localhost/streamfield-page/news-index/news-1/'}"

Related

Getting BulkWriteError when using MongoDB with djangorestframework-simplejwt?

I am using MongoDB and SimpleJWT in Django REST Framework to authenticate and authorize users. I tried to implement user logout, which in SimpleJWT essentially means blacklisting the user's token. When the first user logs in, everything seems fine and their refresh token is added to the outstanding token table. But when I try to log in a second user, I get the error below:
raise BulkWriteError(full_result)
pymongo.errors.BulkWriteError: batch op errors occurred, full error: {'writeErrors': [{'index': 0, 'code': 11000, 'keyPattern': {'jti_hex': 1}, 'keyValue': {'jti_hex': None}, 'errmsg': 'E11000 duplicate key error collection: fsm_database.token_blacklist_outstandingtoken index: token_blacklist_outstandingtoken_jti_hex_d9bdf6f7_uniq dup key: { jti_hex: null }', 'op': {'id': 19, 'user_id': 7, 'jti': '43bccc686fc648f5b60b22df3676b434', 'token': 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTY1OTY1NDUzNCwiaWF0IjoxNjU5NTY4MTM0LCJqdGkiOiI0M2JjY2M2ODZmYzY0OGY1YjYwYjIyZGYzNjc2YjQzNCIsInVzZXJfaWQiOjd9.aQmt5xAyncfpv_kDD2pF7iS98Hld98LhG6ng-rCW23M', 'created_at': datetime.datetime(2022, 8, 3, 23, 8, 54, 125539), 'expires_at': datetime.datetime(2022, 8, 4, 23, 8, 54), '_id': ObjectId('62eb00064621b38109bbae16')}}], 'writeConcernErrors': [], 'nInserted': 0, 'nUpserted': 0, 'nMatched': 0, 'nModified': 0, 'nRemoved': 0, 'upserted': []}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\MR.Robot\.virtualenvs\fsm-GjGxZg3c\lib\site-packages\djongo\cursor.py", line 51, in execute
self.result = Query(
File "C:\Users\MR.Robot\.virtualenvs\fsm-GjGxZg3c\lib\site-packages\djongo\sql2mongo\query.py", line 784, in __init__
self._query = self.parse()
File "C:\Users\MR.Robot\.virtualenvs\fsm-GjGxZg3c\lib\site-packages\djongo\sql2mongo\query.py", line 869, in parse
raise exe from e
djongo.exceptions.SQLDecodeError:
Keyword: None
Sub SQL: None
FAILED SQL: INSERT INTO "token_blacklist_outstandingtoken" ("user_id", "jti", "token", "created_at", "expires_at") VALUES (%(0)s, %(1)s, %(2)s, %(3)s, %(4)s)
Params: [7, '43bccc686fc648f5b60b22df3676b434', 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTY1OTY1NDUzNCwiaWF0IjoxNjU5NTY4MTM0LCJqdGkiOiI0M2JjY2M2ODZmYzY0OGY1YjYwYjIyZGYzNjc2YjQzNCIsInVzZXJfaWQiOjd9.aQmt5xAyncfpv_kDD2pF7iS98Hld98LhG6ng-rCW23M', datetime.datetime(2022, 8, 3, 23, 8, 54, 125539), datetime.datetime(2022, 8, 4, 23, 8, 54)]
Version: 1.3.6
MongoDB seems to have a problem inserting the token for the second user in the outstanding table.
How can I fix this?
So I asked the library maintainers and they said that they don't support MongoDB. Check out this issue.
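If you have to stay on MongoDB anyway, the duplicate-key error above points at a stale unique index on jti_hex, a column the OutstandingToken inserts no longer populate (so every new row gets jti_hex: null). A minimal, unsupported workaround sketch with pymongo, assuming the database and index names shown in the error message:

from pymongo import MongoClient

client = MongoClient()  # adjust host/credentials for your deployment
db = client['fsm_database']
# Drop the stale unique index named in the error so inserts with
# jti_hex: null stop colliding. This works around the symptom, not the
# cause; djongo + SimpleJWT remains an unsupported combination.
db['token_blacklist_outstandingtoken'].drop_index(
    'token_blacklist_outstandingtoken_jti_hex_d9bdf6f7_uniq'
)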

manage.py error after Wagtail 2.15 upgrade

After upgrading from Wagtail 2.14.2 to 2.15 (or 2.15.1), my production website with Postgres and database search breaks, and commands run with manage.py give an error, despite my adding the required WAGTAILSEARCH_BACKENDS setting.
I have two web apps with separate settings running from the same Wagtail version. One of the apps (putkeep) has a search bar and the other (secretgifter) does not. After upgrading Wagtail from 2.14.2 to 2.15, putkeep gives a 404 error but secretgifter does not. If I use pip to switch back to 2.14.2, then the 404 error goes away and the site loads (although results from a search give a 500 error).
If I run makemigrations (or any other command that uses manage.py) for secretgifter it works fine. For putkeep (with the search) it gives the following error:
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
django.setup()
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/django/apps/registry.py", line 122, in populate
app_config.ready()
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/wagtail/search/apps.py", line 21, in ready
set_weights()
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/wagtail/search/backends/database/postgres/weights.py", line 44, in set_weights
BOOSTS_WEIGHTS.extend(determine_boosts_weights())
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/wagtail/search/backends/database/postgres/weights.py", line 32, in determine_boosts_weights
boosts = get_boosts()
File "/home/th-putkeep.net/putkeep/lib/python3.8/site-packages/wagtail/search/backends/database/postgres/weights.py", line 26, in get_boosts
boosts.add(boost)
TypeError: unhashable type: 'list'
As per the docs I've added this to my settings:
WAGTAILSEARCH_BACKENDS = {
    'default': {
        'BACKEND': 'wagtail.search.backends.database',
    }
}
Any suggestions gratefully received.
I have identified some code in my models.py that caused no errors while the site was running Wagtail 2.14.2 and below. Commenting it out resolves the error raised after upgrading to Wagtail 2.15 and above. I am posting it here as the answer to my problem because everything else (including search) works without further modification, even though I am not currently sure why it causes the error or whether I still need it:
search_fields = Page.search_fields + [  # Inherit search_fields from Page
    index.SearchField('content'),
    index.SearchField('tags', [
        index.SearchField('name', partial_match=True, boost=10),
    ]),
]
I think the issue is the part of your configuration that tries to search tag names:
index.SearchField('tags', [
    index.SearchField('name', partial_match=True, boost=10),
]),
The default database search doesn't appear to offer an option to search related objects this way (See note under https://docs.wagtail.io/en/stable/topics/search/indexing.html#indexing-callables-and-other-attributes). You can either remove those lines or, for the moment, go back to the postgres_search backend in wagtail/contrib.
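For completeness, a minimal sketch of the first option. The list passed as the second positional argument of index.SearchField appears to end up being treated as the field's boost, which would explain the unhashable type: 'list' error, so dropping that nested block restores a valid configuration. The model name here is hypothetical; only the search_fields shape matters:

# models.py -- search_fields with the nested tags lookup removed
from wagtail.core.models import Page
from wagtail.search import index

class MyPage(Page):  # stands in for the asker's page model
    # content/tags field definitions elided
    search_fields = Page.search_fields + [
        index.SearchField('content'),
        # the index.SearchField('tags', [...]) block is removed; its list
        # argument was being read as a (list-valued) boost
    ]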

ImportError: cannot import name url

I'm upgrading a Django app from v1.3 to v1.11.18.
We are running Python v2.7.12, with an nginx server serving the pages.
I've been making code changes to account for all of the methods deprecated by the upgrade. So far, so good. After making another run of updates, I hit this error when starting the server:
File "/home/bat/application.com/wsgi.py", line 12, in <module>
application = get_wsgi_application()
File "./django/core/wsgi.py", line 14, in get_wsgi_application
return WSGIHandler()
File "./django/core/handlers/wsgi.py", line 151, in __init__
self.load_middleware()
File "./django/core/handlers/base.py", line 56, in load_middleware
mw_class = import_string(middleware_path)
File "./django/utils/module_loading.py", line 20, in import_string
module = import_module(module_path)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "./django/middleware/locale.py", line 4, in <module>
from django.conf.urls.i18n import is_language_prefix_patterns_used
File "./django/conf/urls/i18n.py", line 2, in <module>
from django.conf.urls import url
ImportError: cannot import name url
I'm not sure why I would be getting this error, as the code referenced is all core code. It doesn't appear to reference any of the project code at all, except for the opening line.
I've double-checked that we do not have any "left over" code sitting in the core django folder: it's clean. We also rebooted the Linux server just for kicks; that didn't help either. Beyond that I'm not really sure what else to try.
Any ideas where I might look for a solution to this one?
So it turns out that the ./django/conf/urls/__init__.py file was actually MISSING the required url() function. I'm not sure how that went unnoticed, as the core code clearly calls that url function all over the place.
To resolve the issue, I downloaded Django v1.10.x and copied the url(...) function from the v1.10.x code into the django/conf/urls/__init__.py file, and everything worked as expected.
I do realize that I modified a core file, but I wasn't sure how else to get around the issue. This 1.x branch of Django is no longer under active development, so I figure that's probably okay.
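For reference, the helper that gets copied back looks roughly like this in the Django 1.10.x/1.11.x source (paraphrased from that source; verify against the exact version before patching a core file):

# django/conf/urls/__init__.py (excerpt, approximate)
from django.urls import RegexURLPattern, RegexURLResolver

def url(regex, view, kwargs=None, name=None):
    if isinstance(view, (list, tuple)):
        # view is the 3-tuple returned by include()
        urlconf_module, app_namespace, instance_namespace = view
        return RegexURLResolver(regex, urlconf_module, kwargs, app_namespace, instance_namespace)
    elif callable(view):
        return RegexURLPattern(regex, view, kwargs, name)
    else:
        raise TypeError('view must be a callable or a list/tuple in the case of include().')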

import openlayers3 in ember.js

I am starting up an ember.js app aimed at drawing and displaying maps.
I am using ember.js v1.11.0 and ol3 v3.4.0
I managed to install ol3 via bower and import it using Brocfile.js:
app.import('bower_components/ol3/build/ol.js');
app.import('bower_components/ol3/css/ol.css');
I can use it without problems in my views, etc. What I would like, if possible, is to get rid of these server errors:
views/map.js: line 6, col 22, 'ol' is not defined.
views/map.js: line 7, col 19, 'ol' is not defined.
views/map.js: line 10, col 21, 'ol' is not defined.
views/map.js: line 11, col 19, 'ol' is not defined.
views/map.js: line 14, col 19, 'ol' is not defined.
views/map.js: line 17, col 17, 'ol' is not defined.
And, if possible, to get autocompletion in my IntelliJ IDEA (make it recognise the ol library).
If somebody could give me a hand, that would be much appreciated.
Add ol to your .jshintrc file, in the predef array (the rest of the file stays unchanged):
"predef": [
    "document",
    "window",
    "-Promise",
    "ol"
]
And for IntelliJ IDEA, you should be able to get your answer from the docs here:
https://www.jetbrains.com/idea/help/configuring-javascript-libraries.html

Why does my Scrapy spider not use all the URLs in the start_urls list?

I have almost 300 URLs in my start_urls list, but Scrapy only crawls about 200 of them, not the whole list. I do not know why, or how to deal with it; I need to crawl more items from the website.
Another thing I do not understand: how can I see the error log once Scrapy finishes? From the terminal, or do I have to write code to see it? I think logging is enabled by default.
Thanks for your answers.
Update:
The output is below. I do not know why only 2829 items were scraped; there are actually 600 URLs in my start_urls. Yet when I give only 400 URLs in start_urls, it can scrape 6000 items. I expect to scrape almost the whole website of www.yhd.com. Could anyone give any more suggestions?
2014-12-08 12:11:03-0600 [yhd2] INFO: Closing spider (finished)
2014-12-08 12:11:03-0600 [yhd2] INFO: Stored csv feed (2829 items) in myinfoDec.csv
2014-12-08 12:11:03-0600 [yhd2] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 1,
'downloader/request_bytes': 142586,
'downloader/request_count': 476,
'downloader/request_method_count/GET': 476,
'downloader/response_bytes': 2043856,
'downloader/response_count': 475,
'downloader/response_status_count/200': 474,
'downloader/response_status_count/504': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 12, 8, 18, 11, 3, 607101),
'item_scraped_count': 2829,
'log_count/DEBUG': 3371,
'log_count/ERROR': 1,
'log_count/INFO': 14,
'response_received_count': 474,
'scheduler/dequeued': 476,
'scheduler/dequeued/memory': 476,
'scheduler/enqueued': 476,
'scheduler/enqueued/memory': 476,
'start_time': datetime.datetime(2014, 12, 8, 18, 4, 19, 698727)}
2014-12-08 12:11:03-0600 [yhd2] INFO: Spider closed (finished)
Finally I solved the problem....
First, the reason it does not crawl every URL listed in start_urls is that I had a typo in one of them: one of the "http://..." entries was mistakenly written as "ttp://...", with the first 'h' missing. The spider then seems to have stopped looking at the URLs listed after it. Horrified.
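As a guard against this class of typo, a quick sanity check can be run over start_urls before the crawl; this is a hypothetical helper, not part of Scrapy:

# reject entries whose scheme is not http(s), so a typo like "ttp://..."
# fails loudly instead of silently truncating the crawl
from urlparse import urlparse  # Python 3: from urllib.parse import urlparse

def validate_start_urls(start_urls):
    bad = [u for u in start_urls if urlparse(u).scheme not in ('http', 'https')]
    if bad:
        raise ValueError('Malformed start_urls: %r' % bad)

# e.g. call validate_start_urls(MySpider.start_urls) at the top of the
# spider module, or in the spider's __init__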
Second, I solved the log file problem through the configuration panel of PyCharm, which provides a panel showing the log file. By the way, my Scrapy project runs inside the PyCharm IDE. It works great for me. Not an advertisement.
Thanks for all the comments and suggestions.
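Independent of the IDE, Scrapy can also write its log straight to a file via its standard settings or command-line options, which answers the log question without PyCharm (the spider name yhd2 is taken from the output above):

# settings.py -- persist the crawl log to a file
LOG_FILE = 'crawl.log'
LOG_LEVEL = 'ERROR'  # record only errors; the default level is DEBUG

# or, equivalently, from the command line:
#   scrapy crawl yhd2 --logfile crawl.log --loglevel ERROR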