This seems like an old, solved problem (here and here and here), but I am still getting this error. I created my db on Docker, and it worked only once. Before this, I re-created the db, set connect=False, added waits, downgraded pymongo, tried the earlier solutions, etc. I am stuck.
Python 3.8.0, Pymongo 3.9.0
from pymongo import MongoClient
from pprint import pprint

client = MongoClient('mongodb://192.168.1.100:27017/',
                     username='admin',
                     password='psw',
                     authSource='myappdb',
                     authMechanism='SCRAM-SHA-1',
                     connect=False)
db = client['myappdb']
serverStatusResult = db.command("serverStatus")
pprint(serverStatusResult)
and I am getting a ServerSelectionTimeoutError:
Traceback (most recent call last):
File "C:\Users\ME\eclipse2019-workspace\exdjango\exdjango__init__.py",
line 12, in
serverStatusResult=db.command("serverStatus")
File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\database.py",
line 610, in command
with self.client._socket_for_reads(
File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\contextlib.py",
line 113, in __enter__
return next(self.gen)
File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\mongo_client.py",
line 1099, in _socket_for_reads
server = topology.select_server(read_preference)
File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\topology.py",
line 222, in select_server
return random.choice(self.select_servers(selector,
File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\topology.py",
line 182, in select_servers
server_descriptions = self._select_servers_loop(
File "C:\Users\ME\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pymongo\topology.py",
line 198, in _select_servers_loop
raise ServerSelectionTimeoutError(
pymongo.errors.ServerSelectionTimeoutError: 192.168.1.100:27017: timed out
Your connection looks a little misconfigured. Firstly, you have half connection-string format and half parameter format. I'd suggest you stick with one or the other.
Your auth database is usually separate from your actual databases (and it's usually called admin). Check that this is correct.
There's no particular need to specify the authMechanism assuming you are using MongoDB 3.0 or later.
The connect=False is likely a red herring.
So I would try one of the following:
client = MongoClient('mongodb://admin:psw@192.168.1.100:27017/myappdb?authSource=admin')
or
client = MongoClient(host='192.168.1.100',
                     port=27017,
                     username='admin',
                     password='psw',
                     authSource='admin')
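If the timeout persists with either form, it's worth ruling out plain network reachability before debugging auth. A minimal sketch of that check (the host, port, and credentials mirror the answer above; the 3-second timeout is an arbitrary choice): lower serverSelectionTimeoutMS so failures surface quickly, then issue a ping, which MongoDB permits even before authentication.
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Fail fast instead of waiting out the default 30s server selection timeout.
client = MongoClient('mongodb://admin:psw@192.168.1.100:27017/myappdb?authSource=admin',
                     serverSelectionTimeoutMS=3000)
try:
    client.admin.command('ping')  # cheap round trip; allowed pre-auth
    print('MongoDB is reachable')
except ServerSelectionTimeoutError as exc:
    # Usually a Docker port-mapping or firewall issue rather than bad credentials.
    print('Cannot reach MongoDB:', exc)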
Related
I've been using Celery successfully with a Django site on Heroku but it's just started generating the error below, which stops it running. It looks like it's having trouble with postgres, but I'm stumped as to how to fix it, given it's Celery rather than my code that's having the problem (I assume...).
I'm using CloudAMQP as a broker, and my Django settings include:
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
Here's the traceback from the Heroku logs:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 323, in __get__
return obj.__dict__[self.__name__]
KeyError: 'scheduler'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.OperationalError: SSL SYSCALL error: Bad file descriptor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/billiard/process.py", line 292, in _bootstrap
self.run()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 553, in run
self.service.start(embedded_process=True)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 470, in start
humanize_seconds(self.scheduler.max_interval))
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 512, in scheduler
return self.get_scheduler()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 507, in get_scheduler
lazy=lazy)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 151, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 185, in __init__
self.setup_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 158, in setup_schedule
self.install_default_entries(self.schedule)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 251, in schedule
self._schedule = self.all_as_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 164, in all_as_schedule
for model in self.Model.objects.enabled():
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/app/.heroku/python/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: SSL SYSCALL error: Bad file descriptor
I've resolved the issue now... there was a line of my Django code which had caused an Internal Server Error in the past -- I think, early on in Django starting up, it was trying to access a model before the migrations that created the model had run.
I'd resolved that but noticed these "SSL SYSCALL error"s started about the same time. So I removed that line of code, and Celery has started up again.
It could be coincidence. And I don't understand why this fixed things.
Ideally I'd still like to understand what the error above actually means so I'd have a better chance of fixing such a thing in the future.
I had the same error and I solved it.
In my case the problem was a custom AppConfig in one of my django apps.
My folder layout:
my_django_app/
    __init__.py
    apps.py
    ...
I have a file my_django_app/__init__.py like this
default_app_config="my_django_app.apps.MyAppConfig"
and the file my_django_app/apps.py like this
from django.apps import AppConfig

class ChallengeConfig(AppConfig):
    name = 'appConfig'
    verbose_name = "djangoAppConfig"

    def ready(self):
        ...  # custom logic
After removing the line default_app_config="my_django_app.apps.MyAppConfig", celery works again.
If you have something like this, remove your custom AppConfig and use a celery periodic task instead, as sketched below. django_celery_beat does something strange with AppConfig, so a custom AppConfig triggers the problem.
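As a rough sketch of that replacement (the task name, the 60-second interval, and the my_django_app module path are illustrative assumptions, not from the original post):
# tasks.py -- move the logic out of AppConfig.ready() into a task.
from celery import shared_task

@shared_task
def do_custom_logic():
    ...  # the work previously done in AppConfig.ready()

# settings.py -- let celery beat schedule it instead of running it at app load.
CELERY_BEAT_SCHEDULE = {
    'do-custom-logic-every-minute': {
        'task': 'my_django_app.tasks.do_custom_logic',
        'schedule': 60.0,  # seconds
    },
}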
My python requirements:
Django==1.11.29
celery==4.4.7
django_celery_beat==1.6.0
I know this was asked years ago, but this was about the only question out there which related to my problem.
In my case, it was down to CONN_MAX_AGE being set to 600. I reduced it to 0, so the connection is closed at the end of each request (note that, per the docs quoted below, unlimited persistent connections would be None, not 0).
From the docs:
Persistent connections avoid the overhead of re-establishing a connection to the database in each request. They’re controlled by the CONN_MAX_AGE parameter which defines the maximum lifetime of a connection. It can be set independently for each database.
The default value is 0, preserving the historical behavior of closing the database connection at the end of each request. To enable persistent connections, set CONN_MAX_AGE to a positive integer of seconds. For unlimited persistent connections, set it to None.
Django opens a connection to the database when it first makes a database query. It keeps this connection open and reuses it in subsequent requests. Django closes the connection once it exceeds the maximum age defined by CONN_MAX_AGE or when it isn’t usable any longer.
In detail, Django automatically opens a connection to the database whenever it needs one and doesn’t have one already — either because this is the first connection, or because the previous connection was closed.
At the beginning of each request, Django closes the connection if it has reached its maximum age. If your database terminates idle connections after some time, you should set CONN_MAX_AGE to a lower value, so that Django doesn’t attempt to use a connection that has been terminated by the database server. (This problem may only affect very low traffic sites.)
At the end of each request, Django closes the connection if it has reached its maximum age or if it is in an unrecoverable error state. If any database errors have occurred while processing the requests, Django checks whether the connection still works, and closes it if it doesn’t. Thus, database errors affect at most one request; if the connection becomes unusable, the next request gets a fresh connection.
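For reference, the setting lives on each entry in DATABASES in settings.py; a minimal sketch (the engine and credentials are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
        'CONN_MAX_AGE': 0,  # close the connection at the end of each request
    }
}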
I'm trying to get into the new N1QL Queries for Couchbase in Python.
I got my database set up in Couchbase 4.0.0.
My initial try was to retrieve all documents like this:
from couchbase.bucket import Bucket

bucket = Bucket('couchbase://localhost/dafault')
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
for row in bucket.n1ql_query('SELECT * FROM default'):
    print row
But this produces an OperationNotSupportedError:
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 2357, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1777, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/my_user/python_tests/test_n1ql.py", line 9, in <module>
rv = bucket.n1ql_query('CREATE PRIMARY INDEX ON default').execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 215, in execute
for _ in self:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 235, in __iter__
self._start()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/couchbase/n1ql.py", line 180, in _start
self._mres = self._parent._n1ql_query(self._params.encoded)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule n1ql query, C Source=(src/n1ql.c,82)>
Here are the version numbers of everything I use:
Couchbase Server: 4.0.0
couchbase python library: 2.0.2
cbc: 2.5.1
python: 2.7.8
gcc: 4.2.1
Does anyone have an idea what might have gone wrong here? I could not find any solution to this problem so far.
There was another ticket for node.js where the same issue happened. There was a proposal to enable n1ql for the specific bucket first. Is this also needed in python?
It would seem you didn't configure any cluster nodes with the Query or Index services. As such, the error returned is one that indicates no nodes are available.
I also got a similar error while trying to create a primary index.
Create a primary index...
Traceback (most recent call last):
File "post-upgrade-test.py", line 45, in <module>
mgr.n1ql_index_create_primary(ignore_exists=True)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 428, in n1ql_index_create_primary
'', defer=defer, primary=True, ignore_exists=ignore_exists)
File "/usr/local/lib/python2.7/dist-packages/couchbase/bucketmanager.py", line 412, in n1ql_index_create
return IxmgmtRequest(self._cb, 'create', info, **options).execute()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 160, in execute
return [x for x in self]
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 144, in __iter__
self._start()
File "/usr/local/lib/python2.7/dist-packages/couchbase/_ixmgmt.py", line 132, in _start
self._cmd, index_to_rawjson(self._index), **self._options)
couchbase.exceptions.NotSupportedError: <RC=0x13[Operation not supported], Couldn't schedule ixmgmt operation, C Source=(src/ixmgmt.c,98)>
Adding a query and an index node to the cluster solved the issue.
I'm trying to send email using the send_mail call from within a Django application; previously I had used django_ses, but I have hit an issue.
I know the django_ses library (https://pypi.python.org/pypi/django-ses) isn't maintained actively, though it claims to still be widely in use. I've recently upgraded the boto version on the machine to the latest (2.31), and I'm getting a certificate error when trying to send an email (stack trace included below). I've confirmed that returning boto to version 2.1 stops the error, so I'm guessing the two are incompatible. Has anyone managed to patch around the issue?
thanks
Steve
>>> send_mail('Test subject', 'This is the body', 'donotreply@example.com', ['someone@example.com'], fail_silently=False)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/django/core/mail/__init__.py", line 61, in send_mail
File "/usr/local/lib/python2.7/dist-packages/django/core/mail/message.py", line 248, in send
File "/usr/local/lib/python2.7/dist-packages/django_ses-0.6.0-py2.7.egg/django_ses/__init__.py", line 122, in send_messages
rate_limit = self.get_rate_limit()
File "/usr/local/lib/python2.7/dist-packages/django_ses-0.6.0-py2.7.egg/django_ses/__init__.py", line 196, in get_rate_limit
quota_dict = self.connection.get_send_quota()
File "/usr/local/lib/python2.7/dist-packages/boto/ses/connection.py", line 339, in get_send_quota
File "/usr/local/lib/python2.7/dist-packages/boto/ses/connection.py", line 101, in _make_request
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 932, in make_request
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 894, in _mexe
SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
Turns out the problem wasn't a version incompatibility, but where my server was located. From the experimenting I've done, django_ses doesn't play well when not on an EC2 machine. I switched to using the smtp library, which drops in just as easily, and I'm away.
https://github.com/bancek/django-smtp-ssl
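For anyone making the same switch, a minimal sketch of the settings change (the backend path follows that project's README; the host and credentials are placeholders for your SES SMTP details):
# settings.py -- replace the django_ses backend with SMTP over SSL.
EMAIL_BACKEND = 'django_smtp_ssl.SSLEmailBackend'
EMAIL_HOST = 'email-smtp.us-east-1.amazonaws.com'  # your SES SMTP endpoint
EMAIL_PORT = 465
EMAIL_HOST_USER = 'smtp-username'      # placeholder
EMAIL_HOST_PASSWORD = 'smtp-password'  # placeholder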
I had been using Apache Solr for quite some time and only recently started running into some severe issues with it. I'm using it with haystack and a Django project. When I run the query from manage.py shell I'm getting the below:
>>> from haystack.query import SearchQuerySet
>>> emps = SearchQuerySet().filter(django_ct='web.employer').filter(name__icontains='Mi')[:10]
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/haystack/query.py", line 241, in __getitem__
self._fill_cache(start, bound)
File "/usr/local/lib/python2.7/dist-packages/haystack/query.py", line 140, in _fill_cache
results = self.query.get_results(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/__init__.py", line 469, in get_results
self.run(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/solr_backend.py", line 501, in run
results = self.backend.search(final_query, **search_kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/__init__.py", line 47, in wrapper
return func(obj, query_string, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/solr_backend.py", line 202, in search
raw_results = self.conn.search(query_string, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 578, in search
response = self._select(params)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 308, in _select
return self._send_request('get', path)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 293, in _send_request
error_message = self._extract_error(resp)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 372, in _extract_error
reason, full_html = self._scrape_response(resp.headers, resp.content)
File "/usr/local/lib/python2.7/dist-packages/pysolr.py", line 404, in _scrape_response
p_nodes = body_node.cssselect('p')
AttributeError: 'NoneType' object has no attribute 'cssselect'
I tried reinstalling haystack, lxml, cssselect, and pysolr, and I'm still getting these errors. Is there anything else I can try? Thanks for any help!
I also tried reading few other SO questions including this:
XML error object has no attribute 'cssselect'
Seems like the issue is with pysolr. You might find some help here.
I had the same issue persist even after upgrading pysolr and lxml to the latest versions.
It turned out to be because I was not using the haystack-generated schema, which has a few additional fields compared to the default Solr one.
You can confirm if this is the case by looking at your solr logs.
It is an issue with pysolr; it hasn't been fixed as of 3.3.0.
The only alternative would be to override the pysolr code and make adjustments for when Solr returns a response with status != 200.
You can check whether the response has a body attribute or not and adjust accordingly, as in the sketch below.
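A minimal sketch of that kind of override, assuming pysolr 3.x internals (_scrape_response is a private method, so verify it against your installed version):
import pysolr

class PatchedSolr(pysolr.Solr):
    def _scrape_response(self, headers, response):
        # pysolr assumes the HTML error page has a <body> node; guard against
        # responses where lxml finds none (body_node is None -> AttributeError).
        try:
            return super(PatchedSolr, self)._scrape_response(headers, response)
        except AttributeError:
            return 'Unknown error (unparseable Solr response)', response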
I have a function that will read data from a website, process it, and then load it into MongoDB. When I run this without threading it works fine but as soon as I set up celery tasks that just call this one function I frequently get the following error: "OperationFailure: database error: unauthorized db:dbname lock type:-1"
It's somewhat odd because if I run the non-celery version on multiple terminals, I do not get this error at all.
I suspect it has something to do with there not being an open connection to Mongo although in my code I'm opening one up right before every Mongo call.
The exact exception is below:
Task twitter[a974bfcc-d6ca-4baf-b36f-cae9143ce2d9] raised exception: OperationFailure(u'database error: unauthorized db:data lock type:-1 client:68.193.49.9',)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/celery/execute/trace.py", line 36, in trace
return cls(states.SUCCESS, retval=fun(*args, **kwargs))
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/celery/app/task/__init__.py", line 232, in __call__
return self.run(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/celery/app/__init__.py", line 172, in run
return fun(*args, **kwargs)
File "/djangoblog/network/tasks.py", line 40, in twitter
n_twitter.GetTweetsTwitter(user)
File "/djangoblog/network/twitter.py", line 255, in GetTweetsTwitter
id = SaveTweet(user, network, tweet)
File "/djangoblog/network/twitter.py", line 150, in SaveTweet
if mmo.Moment.objects(user=user.id,source_id=id,network=network.id).count() == 0:
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/mongoengine/queryset.py", line 933, in count
return self._cursor.count(with_limit_and_skip=True)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/mongoengine/queryset.py", line 563, in _cursor
self._cursor_obj = self._collection.find(self._query,
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/mongoengine/queryset.py", line 493, in _collection
if self._collection_obj.name not in db.collection_names():
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/database.py", line 361, in collection_names
names = [r["name"] for r in results]
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/cursor.py", line 703, in next
if len(self.__data) or self._refresh():
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/cursor.py", line 666, in _refresh
self.__uuid_subtype))
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/cursor.py", line 628, in __send_message self.__tz_aware)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/pymongo/helpers.py", line 101, in _unpack_response error_object["$err"])
OperationFailure: database error: unauthorized db:data lock type:-1 client:68.193.49.9
Sorry for the formatting, but if you look at the line that starts with mmo.Moment, there's a connection being opened right before that's called.
Doing a bit of research, it looks as if it has something to do with the way threading is handled in PyMongo - http://api.mongodb.org/python/1.5.1/faq.html#how-does-connection-pooling-work-in-pymongo - I may need to start closing the connections, but I'd expect MongoEngine to be doing this.
This is likely due to the fact that you are not calling db.authenticate() when you start the new connection and are using auth on MongoDB.
Regarding the closing of threads, I would recommend making sure you are using connection pooling and letting the driver manage the pools (calling close() or similar manually can lead to a lot of pain).
For more info see the note in the pymongo documentation about using authenticate() in a multi-threaded environment.
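A minimal sketch of that fix, using the legacy PyMongo 1.x API shown in the traceback (the database name comes from the error message; the credentials are placeholders): authenticate on the connection each worker creates, rather than assuming the parent process's authenticated state survives.
from pymongo import Connection  # legacy 1.x API; newer PyMongo uses MongoClient

connection = Connection('localhost', 27017)
db = connection['data']                  # db name taken from the error message
db.authenticate('username', 'password')  # placeholder credentials
# Queries made through this db handle are now authorized in this worker.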