We are currently testing a Django-based project that uses MongoEngine as the persistence layer. MongoEngine is built on PyMongo; we're using version 1.6, and we are running a single-instance MongoDB setup.
What we have noticed is that occasionally, and for about 5 minutes at a time, connections cannot be established to the MongoDB instance. Has anyone come across such behavior? Any tips on how to improve reliability?
We had an issue with AutoReconnect which sounds similar to what you are describing. I ended up monkeypatching pymongo in my <project>/__init__.py file:
from pymongo.cursor import Cursor
from pymongo.errors import AutoReconnect
from time import sleep
import sys
AUTO_RECONNECT_ATTEMPTS = 10
AUTO_RECONNECT_DELAY = 0.1
def auto_reconnect(func):
    """
    Function wrapper to automatically reconnect if AutoReconnect is raised.

    If still failing after AUTO_RECONNECT_ATTEMPTS, raise the exception after
    all. Technically this should be handled every time a mongo query is
    executed so you can gracefully handle the failure appropriately, but this
    intermediary should handle 99% of cases and avoid having to put
    reconnection code all over the place.
    """
    def retry_function(*args, **kwargs):
        attempts = 0
        while True:
            try:
                return func(*args, **kwargs)
            except AutoReconnect as e:
                attempts += 1
                if attempts > AUTO_RECONNECT_ATTEMPTS:
                    raise
                sys.stderr.write(
                    '%s raised [%s] -- AutoReconnecting (#%d)...\n' % (
                        func.__name__, e, attempts))
                sleep(AUTO_RECONNECT_DELAY)
    return retry_function
# monkeypatch: wrap Cursor.__send_message (name-mangled)
Cursor._Cursor__send_message = auto_reconnect(Cursor._Cursor__send_message)
# (may need to wrap some other methods also, we'll see...)
This resolved the issue for us, but you might be describing something different?
Here is another solution that uses subclassing instead of monkey-patching, and handles errors that might be raised while establishing the initial connection or while accessing a database. I subclassed Connection/ReplicaSetConnection and handled AutoReconnect errors raised during instantiation and during any method invocation. You can specify the number of retries and the sleep time between retries in the constructor.
You can see the gist here: https://gist.github.com/2777345
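The gist is roughly along these lines; the sketch below is illustrative only (it wraps a modern MongoClient rather than the old Connection class, and the retry parameters and method names are placeholders), so see the gist for the real implementation:

import time
from pymongo import MongoClient
from pymongo.errors import AutoReconnect

class AutoReconnectingClient(MongoClient):
    """Hypothetical client that retries operations raising AutoReconnect."""
    def __init__(self, *args, retries=10, retry_delay=0.1, **kwargs):
        self._retries = retries
        self._retry_delay = retry_delay
        super().__init__(*args, **kwargs)

    def call_with_retry(self, func, *args, **kwargs):
        # Retry the callable a fixed number of times, sleeping between attempts.
        for attempt in range(self._retries):
            try:
                return func(*args, **kwargs)
            except AutoReconnect:
                time.sleep(self._retry_delay)
        return func(*args, **kwargs)  # final attempt; let the error propagate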
This is what I have:
import requests
import youtube_dl  # in case this matters

class ErrorCatchingTask(Task):
    # Request = CustomRequest
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # If I comment this out, all is well
        r = requests.post(server + "/error_status/")
        ....

@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retries=1)
def process(self, param_1, param_2, param_3):
    ...
    raise IndexError
    ...
The worker will throw the exception and then seemingly spawn a new task with a different task id: Received task: process[{task_id}
Here are a couple of things I've tried:
Importing from celery.worker.request import Request and overriding the on_failure and on_success functions there instead.
app.conf.broker_transport_options = {'visibility_timeout': 99999999999}
@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retries=1)
Turning off DEBUG mode
Setting logging to info
Setting CELERY_IGNORE_RESULT to false (Can I use Python requests with celery?)
Importing requests as apicall to rule out a namespace conflict
Monkey patching requests (Celery + Eventlet + non blocking requests)
Moving ErrorCatchingTask into a separate file
If I don't use any of the hook functions, the worker just throws the exception and stays idle until the next task is scheduled, which is what I expect even when I use the hooks. Is this a bug? I searched through and through on GitHub issues, but couldn't find the same problem. How do you debug a problem like this?
Django 1.11.16
celery 4.2.1
My problem was resolved after I switched to grequests.
In my case, the Celery worker would reschedule as soon as conn.urlopen() was called in requests/adapters.py. Another behavior I observed was that if I had another worker from another project open on the same machine, the infinite rescheduling sometimes stopped. This was probably some locking mechanism, originally intended for another purpose, kicking in.
This led me to suspect that it is indeed a threading issue, and after researching whether the requests library is thread-safe, I found people suggesting different things. In theory, monkey patching should have a similar effect to using grequests, but it is not the same, so just use the grequests or erequests library instead.
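For illustration only, the failure hook after the switch looked roughly like this; it is a sketch rather than the exact code, and the SERVER URL is a placeholder:

import grequests
from celery import Task

SERVER = "http://localhost:8000"  # placeholder endpoint

class ErrorCatchingTask(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Build the request and send it through grequests (gevent-based),
        # so the worker isn't blocked by a plain synchronous requests call.
        pending = grequests.post(SERVER + "/error_status/")
        grequests.map([pending])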
The Celery debugging instructions are here.
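As a starting point, the remote debugger from those instructions can be dropped into a task like this (a minimal sketch using celery.contrib.rdb, reusing the task names from the question):

from celery.contrib import rdb

@app.task(base=ErrorCatchingTask, bind=True)
def process(self, param_1, param_2, param_3):
    # Blocks the worker and prints a host:port you can attach to with telnet.
    rdb.set_trace()
    ...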
We developed a REST API using Django and MongoDB (via the PyMongo driver). The problem is that, on some requests to the API endpoints, the PyMongo cursor returns a partial response containing fewer documents than it should (but still a completely valid JSON document).
Let me explain it with an example of one of our views:
def get_data(key):
    return collection.find({'key': key}, limit=24)

def my_view(request):
    key = request.POST.get('key')
    query = get_data(key)
    res = [app for app in query]
    return JsonResponse({'list': res})
We're sure that there are more than 8000 documents matching the query, but on some calls we get fewer than 24 results (even zero). The first thing we investigated was that we had more than one MongoClient definition in our code. After resolving this, the problem occurred less often, but we still saw it on a lot of calls.
After these investigations, we designed a test in which we made 16 asynchronous requests to the server at the same time. With this approach we could reproduce the problem: of these 16 requests, 6 to 8 returned partial results. We then reduced uWSGI's number of processes to 6 and restarted the server. All results were good, but after applying another heavy load the problem began again. At this point we restarted the uWSGI service and again everything was OK. From this last experiment we now have a clue: when the uWSGI service starts running, everything works correctly, but after a period of time and heavy load the server begins to return partial or empty results again.
Our latest investigation was to run the API using python manage.py with DEBUG=False, and we hit the problem again after a period of time in this setup as well.
We can't figure out what the problem is or how to solve it. One explanation we can think of is that Django closes PyMongo's connections before the cursor has been fully consumed, since the returned (partial) result is still valid JSON.
Our stack is:
nginx (with no cache enabled)
uWSGI
Memcached (disabled during the debugging procedure)
Django (v1.8 on Python 3)
PyMongo (v3.0.3)
Your help is really appreciated.
Update:
Mongo version:
db version v3.0.7
git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd
We are running single mongod instance. No sharding/replicating.
We are creating the connection using this snippet:
con = MongoClient('localhost', 27017)
Update 2
Subject thread in the PyMongo issue tracker.
PyMongo cursors are not thread-safe. Using them the way I did in a multi-threaded environment causes exactly what I've described in the question. Python's list operations, on the other hand, are mostly thread-safe, and changing the snippet like this solves the problem:
def get_data(key):
    return list(collection.find({'key': key}, limit=24))

def my_view(request):
    key = request.POST.get('key')
    query = get_data(key)
    res = [app for app in query]
    return JsonResponse({'list': res})
My very speculative guess is that you are reusing a cursor somewhere in your code. Make sure you are initializing your collection within the view stack itself, and not outside of it.
For example, as written, if you are doing something like:
import ...
import con

collection = con.documents

# blah blah code

def my_view(request):
    key = request.POST.get('key')
    query = collection.find({'key': key}, limit=24)
    res = [app for app in query]
    return JsonResponse({'list': res})
You could end up reusing a cursor. Better to do something like:
import ...
import con

# blah blah code

def my_view(request):
    collection = con.documents
    key = request.POST.get('key')
    query = collection.find({'key': key}, limit=24)
    res = [app for app in query]
    return JsonResponse({'list': res})
EDIT at asker's request for clarification:
The reason you need to define the collection within the view stack, and not when the file loads, is that the collection variable has a cursor, which is basically how the database and your application talk to each other. Cursors do things like keep track of where you are in a long list of data, in addition to a bunch of other stuff, but that's the important part.
When you create the collection cursor outside the view method, it will reuse the cursor on each request if it exists. So, if you make one request, and then another really, really fast right after that (like what happened when you applied high load), the cursor might only be halfway through talking to the database, so some of your data goes to the first request and some to the second. The reason you would get NO data in a request would be if a cursor finished fetching data but hadn't been closed yet, so the next request tried to fetch data from the cursor and there was none left to fetch in the query.
By moving the collection definition (and by association, the cursor definition) into the view stack, you will ALWAYS get a new cursor when you process a new request. You won't get any cross-talk between cursors and different requests, as each request cycle will have its own.
I'm using the djkombu transport for my local development, but I will probably be using amqp (rabbit) in production.
I'd like to be able to iterate over failures of a particular type and resubmit. This would be in the case of something failing on a server or some edge case bug triggered by some new variation in data.
So I could be resubmitting jobs up to 12 hours later after some bug is fixed or a third party site is back up.
My question is: Is there a way to access old failed jobs via the result backend and simply resubmit them with the same params etc?
You can probably access old jobs using:
CELERY_RESULT_BACKEND = "database"
and in your code:
from djcelery.models import TaskMeta
task = TaskMeta.objects.filter(task_id='af3185c9-4174-4bca-0101-860ce6621234')[0]
but I'm not sure you can find the arguments that the task was started with... Maybe something with TaskState...
I've never used it this way. But you might want to consider the task.retry feature?
An example from celery docs:
@task()
def task(*args):
    try:
        some_work()
    except SomeException as exc:
        # Retry in 24 hours.
        raise task.retry(*args, countdown=60 * 60 * 24, exc=exc)
From IRC
<asksol> dpn`: task args and kwargs are not stored with the result
<asksol> dpn`: but you can create your own model and store it there
(for example using the task_sent signal)
<asksol> we don't store anything when the task is sent, only send a
message. but it's very easy to do yourself
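A rough sketch of what asksol describes, storing args/kwargs at send time; this assumes the djcelery-era task_sent signal (newer Celery versions use after_task_publish) and a made-up SentTask model:

# models.py -- hypothetical model for recording dispatched tasks
import json
from django.db import models
from celery.signals import task_sent

class SentTask(models.Model):
    task_id = models.CharField(max_length=255, unique=True)
    name = models.CharField(max_length=255)
    args = models.TextField()
    kwargs = models.TextField()

@task_sent.connect
def record_task(sender=None, task_id=None, task=None, args=None, kwargs=None, **extra):
    # Store enough information to resubmit the task later with the same params.
    SentTask.objects.create(
        task_id=task_id,
        name=task,
        args=json.dumps(args or []),
        kwargs=json.dumps(kwargs or {}),
    )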
This was what I was expecting, but hoped to avoid.
At least I have an answer now :)
I'm working on my first Django project.
I need to connect to a pre-existing key-value store (in this case Kyoto Tycoon) for a one-off task, i.e. I am not talking about the main database used by Django.
Currently, I have something that works, but I don't know if what I'm doing is sensible/optimal.
views.py
from django.http import HttpResponse
from pykt import KyotoTycoon
def get_from_kv(user_input):
    kt = KyotoTycoon()
    kt.open('127.0.0.1', 1978)
    # some code to define the required key
    # my_key = ...
    my_value = kt.get(my_key)
    kt.close()
    return HttpResponse(my_value)
i.e. it opens a new connection to the database every time a user makes a query, then closes the connection again after it has finished.
Or, would something like this be better?
views.py
from django.http import HttpResponse
from pykt import KyotoTycoon
kt = KyotoTycoon()
kt.open('127.0.0.1', 1978)

def get_from_kv(user_input):
    # some code to define the required key
    # my_key = ...
    my_value = kt.get(my_key)
    return HttpResponse(my_value)
In the second approach, will Django only open the connection once when the app is first started? i.e. will all users share the same connection?
Which approach is best?
Opening a connection when it is required is likely the better solution. Otherwise, there is the potential that the connection is no longer open, so you would need to test that it is still open, and re-establish it if it isn't, before continuing anyway.
This also means you could run the queries within a context-manager block, which would auto-close the connection for you even if an unhandled exception occurs.
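For example, a minimal sketch of such a context manager around the pykt client used above (assuming the same open/get/close methods):

from contextlib import contextmanager
from pykt import KyotoTycoon

@contextmanager
def kyoto_connection(host='127.0.0.1', port=1978):
    # Open a fresh connection for this block and always close it,
    # even if the body raises an unhandled exception.
    kt = KyotoTycoon()
    kt.open(host, port)
    try:
        yield kt
    finally:
        kt.close()

# usage in the view:
# with kyoto_connection() as kt:
#     my_value = kt.get(my_key)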
Alternatively, you could have a pool of connections, and just grab one that is not currently in use (I don't know if this would be an issue in this case).
It all depends on how expensive creating connections is, and whether it makes sense to be able to re-use them.
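As an illustration of the pool idea, a very rough sketch (no handling of dead connections, and the pool size is a placeholder):

import queue
from pykt import KyotoTycoon

POOL_SIZE = 4
_pool = queue.Queue()

for _ in range(POOL_SIZE):
    kt = KyotoTycoon()
    kt.open('127.0.0.1', 1978)
    _pool.put(kt)

def get_value(key):
    kt = _pool.get()       # block until a connection is free
    try:
        return kt.get(key)
    finally:
        _pool.put(kt)      # return the connection to the pool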
Introduction
I have the following code which checks to see if a similar model exists in the database, and if it does not it creates the new model:
from django.core.exceptions import ValidationError
from django.db import models
from django.db.models import Q

class BookProfile(models.Model):
    # ...
    def save(self, *args, **kwargs):
        uniqueConstraint = {'book_instance': self.book_instance, 'collection': self.collection}
        # Test for other objects with identical values
        profiles = BookProfile.objects.filter(Q(**uniqueConstraint) & ~Q(pk=self.pk))
        # If none are found create the object, else fail.
        if len(profiles) == 0:
            super(BookProfile, self).save(*args, **kwargs)
        else:
            raise ValidationError('A Book Profile for that book instance in that collection already exists')
I first build my constraints, then search for a model with those values, which I am enforcing must be unique: Q(**uniqueConstraint). In addition, I ensure that if the save method is updating rather than inserting, we do not find this object itself when looking for other similar objects: ~Q(pk=self.pk).
I should mention that I am implementing soft delete (with a modified objects manager that only shows non-deleted objects), which is why I must check for this myself rather than relying on unique_together errors.
Problem
Right, that's the introduction out of the way. My problem is that when multiple identical objects are saved in quick (or near-simultaneous) succession, sometimes both get added, even though the first being added should prevent the second.
I have tested the code in the shell and it succeeds every time I run it. So my assumption is this: say we have two objects being added, Object A and Object B. Object A runs its check when save() is called. Then the process saving Object B gets some time on the processor. Object B runs the same test, but Object A has not yet been added, so Object B is added to the database. Then Object A regains control of the processor; it has already run its test, so even though the identical Object B is now in the database, it adds itself regardless.
My Thoughts
The reason I fear multiprogramming could be involved is that Object A and Object B are each being added through an API save view, so a separate request to the view is made for each save, rather than a single request with multiple sequential saves.
It might be the case that Apache is creating a process for each request, and that is causing the problems I think I am seeing. As you would expect, the problem only occurs sometimes, which is characteristic of multiprogramming or multiprocessing errors.
If this is the case, is there a way to make the test-and-set parts of the save() method a critical section, so that a process switch cannot happen between the test and the set?
Based on what you've described, it seems reasonable to assume that multiple Apache processes are a source of problems. Are you able to replicate if you limit Apache to a single worker process?
Maybe the suggestions in this thread will help: How to lock a critical section in Django?
An alternative approach could be utilizing a queue. You'd just stick your objects to be saved into the queue and have another process doing the actual save. That way you could guarantee that objects were processed sequentially. This wouldn't work well if your application depends on having the object saved by the time the response is returned unless you also had the request processes wait on the result (watching a finished queue for example).
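A toy sketch of that idea within a single process (names are made up; with multiple Apache processes you would want an external queue such as Celery with a single worker instead):

import queue
import threading

save_queue = queue.Queue()

def save_worker():
    # Pull objects off the queue one at a time, so the test-and-set in
    # save() never interleaves with another save.
    while True:
        obj = save_queue.get()
        try:
            obj.save()
        finally:
            save_queue.task_done()

threading.Thread(target=save_worker, daemon=True).start()

# In the API view, instead of calling obj.save() directly:
# save_queue.put(obj)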
Updated
You may find this info useful. Mr. Dumpleton does a much better job of laying out the considerations than I could attempt to summarize here:
http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
http://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines especially the Defining Process Groups section.
http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide Delegation to Daemon Process section
http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango
Find the section of text toward the bottom of the page that begins with:
Now, traditional wisdom in respect of Django has been that it should preferably only be used on single threaded servers. This would mean for Apache using the single threaded 'prefork' MPM on UNIX systems and avoiding the multithreaded 'worker' MPM.
and read until the end of the page.
I have found a solution that I think might work:
import threading

def save(self, *args, **kwargs):
    lock = threading.Lock()
    lock.acquire()
    try:
        # Test and Set Code
    finally:
        lock.release()
It doesn't seem to break the save method like that decorator did, and thus far I have not seen the error again.
Unless anyone can say that this is not a correct solution, I think this works.
Update
The accepted answer was the inspiration for this change.
It seems I was under the impression that locks were some sort of special voodoo exempt from normal logic. Here, lock = threading.Lock() is run on each call, instantiating a new unlocked lock that can always be acquired immediately.
I needed a single central lock for the purpose, but where could that live unless I had a thread running all the time holding it? The answer seemed to be to use the file locks explained in this answer to the Stack Overflow question mentioned in the accepted answer.
The following is that solution modified to suit my situation:
The Code
The following is my modified DjangoLock. I wanted to keep locks relative to the Django root; to do this, I put a custom variable into the settings.py file.
# locks.py
import os
import fcntl
from django.conf import settings

class DjangoLock:
    def __init__(self, filename):
        self.filename = os.path.join(settings.LOCK_DIR, filename)
        self.handle = open(self.filename, 'w')

    def acquire(self):
        fcntl.flock(self.handle, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.handle, fcntl.LOCK_UN)

    def __del__(self):
        self.handle.close()
And now the additional LOCK_DIR settings variable:
# settings.py
import os
PATH = os.path.abspath(os.path.dirname(__file__))
# ...
LOCK_DIR = os.path.join(PATH, 'locks')
That will now put lock files in a folder named locks relative to the root of the Django project. Just make sure you give Apache write access; in my case:
sudo chown www-data locks
And finally the usage is much the same as before:
import locks

def save(self, *args, **kwargs):
    lock = locks.DjangoLock('ClassName')
    lock.acquire()
    try:
        # Test and Set Code
    finally:
        lock.release()
This is now the implementation I am using and it seems to be working really well. Thanks to all who have contributed to the process of arriving at this result.
You need to use synchronization on the save method. I haven't tried this yet, but here's a decorator that can be used to do so.
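The linked decorator isn't reproduced here, but a synchronization decorator along those lines might look like the sketch below. Note that, as the asker's update above points out, a per-process threading lock won't help across separate Apache processes; a file lock or a database-level constraint is needed for that.

import threading
from functools import wraps

_save_lock = threading.Lock()  # one module-level lock shared by all calls

def synchronized(func):
    # Serialise calls to the wrapped method within this process.
    @wraps(func)
    def wrapper(*args, **kwargs):
        with _save_lock:
            return func(*args, **kwargs)
    return wrapper

# usage:
# @synchronized
# def save(self, *args, **kwargs):
#     ...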