Django Celery task keeping global state

I am currently developing a Django application based on django-tenants-schema. You don't need to look into the actual code of the module, but the idea is that it has a global setting for the current database connection defining which schema to use for the application tenant, e.g.
tenant = tenants_schema.get_tenant()
And for setting it:
tenants_schema.set_tenant(xxx)
For some of the tasks, I would like them to remember the global tenant that was selected at instantiation time, e.g. in theory:
class AbstractTask(Task):

    def before_submit(self):
        '''
        Run this method before returning the task future
        '''
        self.run_args['tenant'] = tenants_schema.get_tenant()

    def before_run(self):
        '''
        This method is run before related .run() task method
        '''
        tenants_schema.set_tenant(self.run_args['tenant'])
Is there an elegant way of doing it in celery?

Celery (as of 3.1) has signals you can hook into to do this. You can alter the kwargs that were passed in, and on the other side, undo your alterations before they're given to the actual task:
from celery import shared_task
from celery.signals import before_task_publish, task_prerun, task_postrun
from threading import local

current_tenant = local()

@before_task_publish.connect
def add_tenant_to_task(body=None, **unused):
    body['kwargs']['tenant_middleware.tenant'] = getattr(current_tenant, 'id', None)
    print('sending tenant: {t}'.format(t=current_tenant.id))

@task_prerun.connect
def extract_tenant_from_task(kwargs=None, **unused):
    tenant_id = kwargs.pop('tenant_middleware.tenant', None)
    current_tenant.id = tenant_id
    print('current_tenant.id set to {t}'.format(t=tenant_id))

@task_postrun.connect
def cleanup_tenant(**kwargs):
    current_tenant.id = None
    print('cleaned current_tenant.id')

@shared_task
def get_current_tenant():
    # Here is where you would do work that relied on current_tenant.id being set.
    import time
    time.sleep(1)
    return current_tenant.id
And if you run the task (not showing logging from the worker):
In [1]: current_tenant.id = 1234; ct = get_current_tenant.delay(); current_tenant.id = 5678; ct.get()
sending tenant: 1234
Out[1]: 1234
In [2]: current_tenant.id
Out[2]: 5678
The signals are not called if no message is sent (when you call the task function directly, without delay() or apply_async()). If you want to filter on the task name, it is available as body['task'] in the before_task_publish signal handler, and the task object itself is available in the task_prerun and task_postrun handlers.
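For example, a variant of the publish handler above that only tags one specific task (the task name here is hypothetical):

@before_task_publish.connect
def add_tenant_to_selected_task(body=None, **unused):
    # Only tag the task we care about; 'myapp.tasks.get_current_tenant' is a made-up name.
    if body.get('task') != 'myapp.tasks.get_current_tenant':
        return
    body['kwargs']['tenant_middleware.tenant'] = getattr(current_tenant, 'id', None)

Alternatively, the signal can be connected for a specific sender, e.g. before_task_publish.connect(add_tenant_to_selected_task, sender='myapp.tasks.get_current_tenant'), since for this signal the sender is the name of the task being published.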
I am a Celery newbie, so I can't really tell if this is the "blessed" way of doing "middleware"-type stuff in Celery, but I think it will work for me.

I'm not sure what you mean here: is before_submit executed before the task is called by a client?
In that case I would rather use a with statement here:
from contextlib import contextmanager

@contextmanager
def set_tenant_db(tenant):
    prev_tenant = tenants_schema.get_tenant()
    try:
        tenants_schema.set_tenant(tenant)
        yield
    finally:
        tenants_schema.set_tenant(prev_tenant)

@app.task
def tenant_task(tenant=None):
    with set_tenant_db(tenant):
        do_actions_here()

tenant_task.delay(tenant=tenants_schema.get_tenant())
You can of course create a base task that does this automatically; you could apply the context in Task.__call__, for example, but I'm not sure that saves you much over just using the with statement explicitly.
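For illustration, a rough sketch of such a base task, reusing the set_tenant_db context manager above (the class name and wiring are assumptions, not part of the answer):

class TenantTask(app.Task):
    '''
    Base task that wraps run() in set_tenant_db(), using the tenant
    keyword argument captured by the caller at submit time.
    '''
    def __call__(self, *args, **kwargs):
        with set_tenant_db(kwargs.get('tenant')):
            return super(TenantTask, self).__call__(*args, **kwargs)

@app.task(base=TenantTask)
def tenant_task(tenant=None):
    do_actions_here()

tenant_task.delay(tenant=tenants_schema.get_tenant())

The caller still has to pass the current tenant explicitly; the base class only removes the with block from each task body.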

Related

Why are integration tests failing on updates to model objects when the function is run on Django q-cluster?

I am running some Django integration tests of code that passes some functions to Django-Q for processing. This code essentially updates some fields on a model object.
The app uses signals.py to listen for a post_save change of the status field of the Foo object. Assuming it's not a newly created object and it has the correct status, the Foo.prepare_foo() function is called. This is handled by services.py, which hands it off to the q-cluster. Assuming this executes without error, hooks.py changes the status field of Foo to published, or keeps it at prepare if it fails. If it succeeds, it also sets prepared to True. (I know this sounds convoluted, with overlapping variables; part of the desire to get tests running is to be able to refactor.)
The code runs correctly in all environments, but when I try to run tests to assert that the fields have been updated, they fail.
(If I tweak the code to bypass the cluster and have the functions run in-memory then the tests pass.)
Given this, I'm assuming the issue lies in how I've written the tests, but I cannot diagnose the issue.
tests_prepare_foo.py
import time
from django.test import TestCase, override_settings, TransactionTestCase
from app.models import Foo

class CreateCourseTest(TransactionTestCase):
    reset_sequences = True

    @classmethod
    def setUp(cls):
        cls.foo = Foo(status='draft',
                      prepared=False,
                      )
        cls.foo.save()

    def test_foo_prepared(self):
        self.foo.status = 'prepare'
        self.foo.save()
        time.sleep(15)  # to allow the cluster to finish processing the request
        self.assertEquals(self.foo.prepared, True)
models.py
import uuid
from django.db import models
from model_utils.fields import StatusField
from model_utils import Choices

class Foo(models.Model):
    ref = models.UUIDField(default=uuid.uuid4,
                           editable=False,
                           )
    STATUS = Choices('draft', 'prepare', 'published', 'archived')
    status = StatusField(null=True,
                         blank=True,
                         )
    prepared = models.BooleanField(default=False)

    def prepare_foo(self):
        """
        ...
        do stuff
        """
signals.py
from django.dispatch import receiver
from django.db.models.signals import post_save
from django_q.tasks import async_task
from app.models import Foo

@receiver(post_save, sender=Foo)
def make_foo(sender, instance, **kwargs):
    if not kwargs.get('created', False) and instance.status == 'prepare' and not instance.prepared:
        async_task('app.services.prepare_foo',
                   instance,
                   hook='app.hooks.check_foo_prepared',
                   )
services.py
def prepare_foo(foo):
    foo.prepare_foo()
hooks.py
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def check_foo_prepared(task):
    foo = task.args[0]
    if task.success:
        logger.info("Foo Prepared: Successful")
        foo.status = 'published'
        foo.prepared = True
        foo.save()
        logger.info("Foo Status: %s", foo.status)
        logger.info("Foo Prepared: %s", foo.prepared)
    else:
        logger.info("Foo Prepared: Unsuccessful")
        foo.status = 'draft'
        foo.save()
Finally - logs when the test is run with the cluster on:
q-cluster server
INFO:app.hooks:Foo Prepared: Successful
INFO:app.hooks: Foo Status: published
INFO:app.hooks: Foo Prepared: True
django server
h:m:s [Q] INFO Enqueued 1
F
======
FAIL
self.assertEquals(self.foo.prepared, True)
AssertionError: False != True
I think I'm either missing something obvious in my tests, or something really subtle, but I can't work out which. I've tried setting the cluster to run synchronously (sync=True), and in my test reloading Foo just before the assertion:
self.foo.save()
time.sleep(15)
test_foo = Foo.objects.get(pk=1)
self.assertEquals(test_foo.prepared, True)
But this also fails with
self.assertEquals(test_foo.prepared, True)
AssertionError: False != True
Which leads me to believe that the cluster is not updating the object under test (unlikely), or the assertion is being checked before the cluster has updated the object (more likely).
This is the first time I've written tests that require hand-offs to a cluster, so any pointers, suggestions gratefully received!
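For reference, the synchronous mode mentioned above is normally switched on through Django-Q's Q_CLUSTER setting; a minimal sketch of the settings used by the test run might look like this (the project name is a placeholder):

# settings.py used for the test run -- with 'sync': True, async_task()
# executes the function and its hook inline instead of via the cluster.
Q_CLUSTER = {
    'name': 'myproject',  # placeholder
    'sync': True,
}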

Getting celery task id

I have made something like this:

@app.task
def some_task():
    logger.info(app.current_task.request.id)
    some_func()

def some_func():
    logger.info(app.current_task.request.id)

So I get the normal id inside some_task, but it is None inside some_func. How can I get the real task id?
You could bind the task and pass the request around rather than relying on a global.
@app.task(bind=True)
def some_task(self):
    logger.info(self.request.id)
    some_func(self.request)

def some_func(celery_request=None):
    # celery_request is optional, assuming you're using some_func elsewhere too.
    if celery_request:
        logger.info(celery_request.id)

Psycopg2 & Flask - tying connection to before_request & teardown_appcontext

Cheers guys,
while refactoring my Flask app I got stuck on tying the db connection to @app.before_request and closing it in @app.teardown_appcontext. I am using plain psycopg2 and the app factory pattern.
First I created a function to call within the app factory so I could use @app, as suggested by Miguel Grinberg here:
def create_app(test_config=None):
    app = Flask(__name__, instance_relative_config=True)
    # --
    from shop.db import connect_and_close_db
    connect_and_close_db(app)
    # --
    return app
Then I tried this pattern suggested on http://flask.pocoo.org/docs/1.0/appcontext/#storing-data:
def connect_and_close_db(app):
    @app.before_request
    def get_db_test():
        conn_string = "dbname=testdb user=testuser password=test host=localhost"
        if 'db' not in g:
            g.db = psycopg2.connect(conn_string)
        return g.db

    @app.teardown_appcontext
    def close_connection(exception):
        db = g.pop('db', None)
        if db is not None:
            db.close()
It resulted in:
TypeError: 'psycopg2.extensions.connection' object is not callable
Does anyone have an idea what happened and how to make it work?
Furthermore, I wonder how I would access the connection object to create a cursor once its creation is tied to before_request.
This solution is probably far from perfect, and it's not really DRY. I'd welcome comments, or other answers that build on this.
To implement this with raw psycopg2, you probably need to take a look at the connection pooler. There's also a good guide on how to implement this outside of Flask.
The basic idea is to create your connection pool first. You want this to be established when the Flask application initializes (this could be within the Python interpreter or via a gunicorn worker, of which there may be several; in that case each worker has its own connection pool). I chose to store the returned pool in the config:
from flask import Flask, g, jsonify
import psycopg2
from psycopg2 import pool

app = Flask(__name__)
app.config['postgreSQL_pool'] = psycopg2.pool.SimpleConnectionPool(
    1, 20,
    user="postgres",
    password="very_secret",
    host="127.0.0.1",
    port="5432",
    database="postgres")
Note the first two arguments to SimpleConnectionPool are the min & max connections. That's the number of connections going to your database server, between 1 & 20 in this case.
Next define a get_db function:
def get_db():
if 'db' not in g:
g.db = app.config['postgreSQL_pool'].getconn()
return g.db
The SimpleConnectionPool.getconn() method used here simply returns a connection from the pool, which we assign to g.db and return. This means that when we call get_db() anywhere in the code it returns the same connection, or fetches one from the pool if not present. There's no need for a before_request decorator.
Next, define your teardown function:
@app.teardown_appcontext
def close_conn(e):
    db = g.pop('db', None)
    if db is not None:
        app.config['postgreSQL_pool'].putconn(db)
This runs when the application context is destroyed, and uses SimpleConnectionPool.putconn() to put away the connection.
Finally define a route:
@app.route('/')
def index():
    db = get_db()
    cursor = db.cursor()
    cursor.execute("select 1;")
    result = cursor.fetchall()
    print(result)
    cursor.close()
    return jsonify(result)
This code works for me, tested against Postgres running in a Docker container. A few things which probably should be improved:
This view isn't very DRY. Perhaps you could move some of this into the get_db function so it returns a cursor (see the sketch after this list).
When the Python interpreter exits, you should also find a way to close the connections with app.config['postgreSQL_pool'].closeall().
Although tested, some way of monitoring the pool would be good, so that you could watch pool/db connections under load and make sure the pooler behaves as expected.
In another land, the sqlalchemy.scoped_session documentation explains more things relating to this, with some theory on how its 'sessions' work in relation to requests. They have implemented it in such a way that you can call Session.query('SELECT 1') and it will create the session if it doesn't already exist.
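As an illustration of the first point, a cursor helper could look something like this (the get_cursor name and the commit-on-success behaviour are assumptions, not part of the tested code above):

from contextlib import contextmanager

@contextmanager
def get_cursor():
    # Borrow the pooled connection for the current app context and hand out a cursor.
    db = get_db()
    cursor = db.cursor()
    try:
        yield cursor
        db.commit()  # assumption: commit if the block ran without errors
    finally:
        cursor.close()

The route above would then shrink to roughly:

@app.route('/')
def index():
    with get_cursor() as cursor:
        cursor.execute("select 1;")
        result = cursor.fetchall()
    return jsonify(result)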
EDIT: Here's a gist with your app factory pattern, and sample usage in the comment.
Currently I am using this pattern:
(I'll edit this answer eventually if I come up with a better solution.)
This is the main script in which we use the database. It uses two functions from config: get_db() to get a connection from the pool and put_db() to return the connection to the pool:
from config import get_db, put_db
from threading import Thread
from time import sleep

def select():
    db = get_db()
    sleep(1)
    cursor = db.cursor()
    # Print the select result and the db connection's address in memory,
    # to see if it gets a connection from another address on the second thread.
    cursor.execute("SELECT 'It works %s'", (id(db),))
    print(cursor.fetchone())
    cursor.close()
    put_db(db)

Thread(target=select).start()
Thread(target=select).start()
print('Main thread')
This is config.py:
import sys
import os
import psycopg2
from psycopg2 import pool
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())

def get_db(key=None):
    return getattr(get_db, 'pool').getconn(key)

def put_db(conn, key=None):
    getattr(get_db, 'pool').putconn(conn, key=key)

# We need to initialize the connection pool here in the main thread for everything to work.
# The pool is stored as an attribute on the get_db function object.
try:
    setattr(get_db, 'pool', psycopg2.pool.ThreadedConnectionPool(1, 20, os.getenv("DB")))
    print('Initialized db')
except psycopg2.OperationalError as e:
    print(e)
    sys.exit(0)
And if you are curious, there is an .env file containing the db connection string in the DB environment variable:
DB="dbname=postgres user=postgres password=1234 host=127.0.0.1 port=5433"
(.env file is loaded using dotenv module in config.py)

How to clear Django RQ jobs from a queue?

I feel a bit stupid for asking, but it doesn't appear to be in the documentation for RQ. I have a 'failed' queue with thousands of items in it and I want to clear it using the Django admin interface. The admin interface lists them and allows me to delete and re-queue them individually, but I can't believe that I have to dive into the Django shell to do it in bulk.
What have I missed?
The Queue class has an empty() method that can be accessed like:
import django_rq
q = django_rq.get_failed_queue()
q.empty()
However, in my tests, that only cleared the failed list key in Redis, not the job keys themselves. So your thousands of jobs would still occupy Redis memory. To prevent that from happening, you must remove the jobs individually:
import django_rq

q = django_rq.get_failed_queue()
while True:
    job = q.dequeue()
    if not job:
        break
    job.delete()  # Will delete key from Redis
As for having a button in the admin interface, you'd have to change the django-rq/templates/django-rq/jobs.html template, which extends admin/base_site.html and doesn't seem to give any room for customizing.
The redis-cli allows FLUSHDB, great for my local environment as I generate a bazillion jobs.
Once I have a working Django integration I will update. Just adding my $0.02.
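For anyone wanting the exact invocation, it is simply (the database number is whatever your RQ connection uses; 0 is shown here as an assumption):

redis-cli -n 0 FLUSHDB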
You can empty any queue by name using the following code sample:
import django_rq
queue = "default"
q = django_rq.get_queue(queue)
q.empty()
or even have a Django management command for that:
import django_rq
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument("-q", "--queue", type=str)

    def handle(self, *args, **options):
        q = django_rq.get_queue(options.get("queue"))
        q.empty()
As @augusto-men's method seems not to work anymore, here is another solution:
You can use the raw connection to delete the failed jobs. Just iterate over the rq:job:* keys and check each job's status.
from django_rq import get_connection
from rq.job import Job

# delete failed jobs
con = get_connection('default')
for key in con.keys('rq:job:*'):
    job_id = key.decode().replace('rq:job:', '')
    job = Job.fetch(job_id, connection=con)
    if job.get_status() == 'failed':
        con.delete(key)

con.delete('rq:failed:default')  # reset failed jobs registry
The other answers are outdated since the RQ updates that introduced registries.
Now, you need to do this to loop through and delete failed jobs. This would work for any particular registry as well.
import django_rq
from rq.registry import FailedJobRegistry

failed_registry = FailedJobRegistry('default', connection=django_rq.get_connection())

for job_id in failed_registry.get_job_ids():
    try:
        failed_registry.remove(job_id, delete_job=True)
    except:
        # Failed jobs expire in the queue. There's a
        # chance this will raise NoSuchJobError.
        pass
Source
You can empty a queue from the command line with:
rq empty [queue-name]
Running rq info will list all the queues.

django/celery: Best practices to run tasks on 150k Django objects?

I have to run tasks on approximately 150k Django objects. What is the best way to do this? I am using the Django ORM as the broker. The database backend is MySQL, and it chokes and dies during the task.delay() of all the tasks. Relatedly, I also wanted to kick this off from the submission of a form, but the resulting request produced a very long response time that timed out.
I would also consider using something other than the database as the "broker"; it really isn't suitable for this kind of work.
Still, you can move some of this overhead out of the request/response cycle by launching a task to create the other tasks:
from celery.task import TaskSet, task
from myapp.models import MyModel

@task
def process_object(pk):
    obj = MyModel.objects.get(pk=pk)
    # do something with obj

@task
def process_lots_of_items(ids_to_process):
    return TaskSet(process_object.subtask((id, ))
                       for id in ids_to_process).apply_async()
Also, since you probably don't have 15000 processors to process all of these objects in parallel, you could split the objects into chunks of, say, 100 or 1000:
from itertools import islice
from celery.task import TaskSet, task
from myapp.models import MyModel

def chunks(it, n):
    for first in it:
        yield [first] + list(islice(it, n - 1))

@task
def process_chunk(pks):
    objs = MyModel.objects.filter(pk__in=pks)
    for obj in objs:
        pass  # do something with obj

@task
def process_lots_of_items(ids_to_process):
    return TaskSet(process_chunk.subtask((chunk, ))
                       for chunk in chunks(iter(ids_to_process),
                                           1000)).apply_async()
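As a side note, TaskSet has since been removed from Celery; with current versions the chunked variant would be expressed with group, roughly like this (a sketch assuming Celery 4 or later, reusing the chunks helper defined above):

from celery import group, shared_task
from myapp.models import MyModel

@shared_task
def process_chunk(pks):
    for obj in MyModel.objects.filter(pk__in=pks):
        pass  # do something with obj

@shared_task
def process_lots_of_items(ids_to_process):
    # group() replaces TaskSet; apply_async() sends one message per chunk.
    return group(process_chunk.s(chunk)
                 for chunk in chunks(iter(ids_to_process), 1000)).apply_async()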
Try using RabbitMQ instead.
RabbitMQ is used in a lot of bigger companies and people really rely on it, since it's such a great broker.
Here is a great tutorial on how to get started with it.
I use beanstalkd ( http://kr.github.com/beanstalkd/ ) as the engine. Adding a worker and a task is pretty straightforward for Django if you use django-beanstalkd: https://github.com/jonasvp/django-beanstalkd/
It's very reliable for my usage.
Example of a worker:
import os
import time

from django_beanstalkd import beanstalk_job

@beanstalk_job
def background_counting(arg):
    """
    Do some incredibly useful counting to the value of arg
    """
    value = int(arg)
    pid = os.getpid()
    print("[%s] Counting from 1 to %d." % (pid, value))
    for i in range(1, value + 1):
        print('[%s] %d' % (pid, i))
        time.sleep(1)
To launch a job/worker/task:
from django_beanstalkd import BeanstalkClient
client = BeanstalkClient()
client.call('beanstalk_example.background_counting', '5')
(source extracted from the example app of django-beanstalkd)
Enjoy!