Issue with using django-celery when Django signals are being used to send email - django

I have used the default Django admin panel as my backend. I have a Blogpost model. What I am trying to do: whenever an admin user saves a blog post in the Django admin, I need to send an email to the newsletter subscribers notifying them that there is a new blog on the website.
I have to send mass emails so I am using django-celery. Also, I am using django signals to trigger the send email function.
Right now I am sending the emails without Celery, but it is too slow.
class Subscribers(models.Model):
    email = models.EmailField(unique=True)
    date_subscribed = models.DateField(auto_now_add=True)

    def __str__(self):
        return self.email

    class Meta:
        verbose_name_plural = "Newsletter Subscribers"

# binding signal:
@receiver(post_save, sender=BlogPost)
def send_mails(sender, instance, created, **kwargs):
    subscribers = Subscribers.objects.all()
    if created:
        blog = BlogPost.objects.latest('date_created')
        for subscriber in subscribers:
            send_mail('New Blog Post', f"Checkout our new blog with title {blog.title}",
                      EMAIL_HOST_USER, [subscriber.email],
                      fail_silently=False)
    else:
        return
Using the Celery documentation, I have written the following files.
My celery.py
from __future__ import absolute_import
import os

from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'travel_crm.settings')

app = Celery('travel_crm')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
My __init__.py file:
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
Tasks file from docs:
from time import sleep

from celery import shared_task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@shared_task
def my_first_task(duration):
    subject = 'Celery'
    message = 'My task done successfully'
    receivers = ['receiver_mail@gmail.com']
    is_task_completed = False
    error = ''
    try:
        sleep(duration)
        is_task_completed = True
    except Exception as err:
        error = str(err)
        logger.error(error)
    # send_mail_to is a helper defined elsewhere
    if is_task_completed:
        send_mail_to(subject, message, receivers)
    else:
        send_mail_to(subject, error, receivers)
    return 'first_task_done'
This task doesn't work because I am using a Django signal to trigger the send-email function. How do I move that signal-triggered email sending into tasks.py?
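A minimal sketch of that wiring (adapting the models above; notify_subscribers is an illustrative name, and the answer below applies the same pattern with generic names): the signal handler stays thin and only enqueues a task, so the admin save returns immediately.
# myapp/tasks.py
from celery import shared_task
from django.conf import settings
from django.core.mail import send_mail

from .models import Subscribers

@shared_task
def notify_subscribers(blog_title):
    # Runs in the Celery worker, so saving the BlogPost is not blocked.
    for subscriber in Subscribers.objects.all():
        send_mail('New Blog Post', f"Checkout our new blog with title {blog_title}",
                  settings.EMAIL_HOST_USER, [subscriber.email],
                  fail_silently=False)
# myapp/signals.py
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import BlogPost
from .tasks import notify_subscribers

@receiver(post_save, sender=BlogPost)
def send_mails(sender, instance, created, **kwargs):
    if created:
        # Pass plain data (the title), not the model instance,
        # so the task arguments serialize cleanly.
        notify_subscribers.delay(instance.title)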

I think I understand your question ... I was recently faced with a similar challenge, which included the complexity of a multi-tenant [schema] database [this proved to be an issue with Redis]. I also tried django-celery, but it depends on a much older version of Celery. In addition, I wanted to send mass mail initiated by the model signal post_save ... using EmailMultiAlternatives with 'bcc' and 'reply-to' features.
So now I am using the latest [as of this post] Django, the latest Celery with Redis ... running on macOS localhost with Poetry virtual env & package manager. The following worked for me:
Celery: I spent several hours searching the net for tutorials and advice ... among others, this one added value for me: Celery w Django. It is good practice to dig deeper into Celery anyway if you have not done so already.
Redis: This will depend on your OS and on whether you are developing locally or remotely. The Redis website will guide you through setup. I also tried RabbitMQ but found it [personally] more complex to set up.
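If in doubt whether the broker is reachable, a quick sanity check from a Python shell (assuming the redis-py client is installed) is:
import redis

redis.Redis(host='localhost', port=6379).ping()  # returns True if the server is up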
The code fragments: There are four fragments ... myapp/signals.py, myapp/tasks.py, myproj/celery.py, myproj/settings.py. Disclaimer: I'm a hobby programmer ... more experienced engineers may well improve on my code ... I've done some minor testing and all seems to work.
# myapp/signals.py
from django.contrib.auth import get_user_model
from django.db.models.signals import post_save
from django.dispatch import receiver

from .tasks import task_send_mail

@receiver(post_save, sender=MyModel)
def post_save_handler(sender, instance, **kwargs):
    if instance.some_field:
        recipient_list = list(get_user_model().objects.filter('some filters'))
        from_email = SomeModel.objects.first().site_email
        to_email = SomeModel.objects.first().admin_email
        # hand the actual sending off to the Celery task
        task_send_mail.delay(instance.some_field, instance.someother_field, from_email, to_email, recipient_list)
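Note that a signals module only connects its receivers if it gets imported; the usual Django pattern is to import it from the app's AppConfig.ready(). A minimal sketch, assuming the app is named myapp:
# myapp/apps.py
from django.apps import AppConfig

class MyappConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        from . import signals  # noqa: F401 -- connects post_save_handler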
# myapp/tasks.py
from smtplib import SMTPException

from celery import shared_task
from django.core.mail import EmailMultiAlternatives, get_connection
from django.template.loader import render_to_string
from django.utils.html import strip_tags

@shared_task(name='task_sendmail')
def task_send_mail(some_field, someother_field, from_email, to_email, recipient_list):
    template_name = 'emails/sometemplate.html'
    html_message = render_to_string(template_name, {'body': some_field})  # these variables are added to the email template
    plain_message = strip_tags(html_message)
    subject = f'Do Not Reply : {someother_field}'
    connection = get_connection()
    connection.open()
    message = EmailMultiAlternatives(
        subject,
        plain_message,
        from_email,
        [to_email],
        bcc=recipient_list,
        reply_to=[to_email],
    )
    message.attach_alternative(html_message, "text/html")
    try:
        message.send()
    except SMTPException as e:
        print('There was an error sending email: ', e)
    finally:
        connection.close()
# myproj/celery.py
import os

from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings')

app = Celery('myproj')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django apps.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
# myproj/settings.py
...
## Celery Configuration Options
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_TIMEZONE = "Africa/SomeCity"
CELERY_TASK_TRACK_STARTED = True
CELERY_TASK_TIME_LIMIT = 30 * 60
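With those four fragments in place, run a worker next to the dev server (celery -A myproj worker -l info) and the post_save handler hands the mail job to it instead of blocking the request.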

Related

Flask-Migrate - 'Please edit configuration/connection/logging settings' on 'flask db init'

Have spent literally the past three days trying to figure this out and just can't seem to get it. Have reviewed other posts here and elsewhere regarding the same issue (see here, here, here, among others) countless times and am apparently just too dense to figure it out myself. So here we are, any help would be greatly appreciated.
Project structure:
/project-path/
/app.py
/config.py
/.flaskenv
/app/__init__.py
/app/models.py
/app/routes.py
config.py
import os
from dotenv import load_dotenv

basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))

class Config(object):
    SECRET_KEY = os.environ.get('SECRET_KEY') or 'bleh'
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or 'sqlite:///' + os.path.join(basedir, 'app.db')
    SQLALCHEMY_TRACK_MODIFICATIONS = False
app.py
from app import create_app
app = create_app()
.flaskenv
FLASK_APP=app.py
/app/__init__.py
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from flask_marshmallow import Marshmallow
from config import Config

db = SQLAlchemy()
migrate = Migrate()
ma = Marshmallow()

def create_app(config_class=Config):
    app = Flask(__name__)
    app.config.from_object(config_class)

    from app.models import Borrower
    db.init_app(app)
    migrate.init_app(app, db)

    from app.routes import borrowers_api
    app.register_blueprint(borrowers_api)

    return app
/app/models.py
from app import db, ma
from marshmallow_sqlalchemy import SQLAlchemyAutoSchema

class Borrower(db.Model):
    __tablename__ = 'borrowers'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    borrower_name = db.Column(db.String(100), nullable=False, unique=True, index=True)

    def __init__(self, borrower_name):
        self.borrower_name = borrower_name

class BorrowerSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = Borrower
        include_relationships = True
        load_instance = True  # Optional: deserialize to model instances

borrower_schema = BorrowerSchema()
borrowers_schema = BorrowerSchema(many=True)
/app/routes.py
from flask import Blueprint, request, jsonify
from app.models import Borrower, borrower_schema, borrowers_schema
from app import db

borrowers_api = Blueprint('borrowers_api', __name__)

# Add a borrower
@borrowers_api.route('/borrower', methods=['POST'])
def add_borrower():
    borrower_name = request.get_json(force=True)['borrower_name']
    new_borrower = Borrower(borrower_name)
    db.session.add(new_borrower)
    db.session.commit()
    return borrower_schema.jsonify(new_borrower)

# Get a single borrower by id
@borrowers_api.route('/borrower/<id>', methods=['GET'])
def get_borrower(id):
    borrower = Borrower.query.get(id)
    return borrower_schema.jsonify(borrower)
Yet I continue to run into the following:
(env-name) C:\Users\...\flask-scratchwork\flask-restapi-tryagain>flask db init
Creating directory C:\Users\...\flask-scratchwork\flask-restapi-tryagain\migrations ... done
Creating directory C:\Users\...\flask-scratchwork\flask-restapi-tryagain\migrations\versions ... done
Generating C:\Users\...\flask-scratchwork\flask-restapi-tryagain\migrations\alembic.ini ... done
Generating C:\Users\...\flask-scratchwork\flask-restapi-tryagain\migrations\env.py ... done
Generating C:\Users\...\flask-scratchwork\flask-restapi-tryagain\migrations\README ... done
Generating C:\Users\...\flask-scratchwork\flask-restapi-tryagain\migrations\script.py.mako ... done
Please edit configuration/connection/logging settings in 'C:\\Users\\...\\flask-scratchwork\\flask-restapi-tryagain\\migrations\\alembic.ini' before proceeding.
Where am I going wrong?
Never mind, I think the answer was actually to just proceed with flask db migrate despite:
Please edit configuration/connection/logging settings in 'C:\\Users\\...\\flask-scratchwork\\flask-restapi-tryagain\\migrations\\alembic.ini' before proceeding.
So for anyone else spending hours trying to discern precisely what type of edits flask-migrate wants you to make to the "configuration/connection/logging settings" in alembic.ini ... the answer is seemingly to just proceed with flask db migrate.
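In other words, the message from flask db init is informational and the generated defaults are usable as-is; the usual next steps are flask db migrate -m "initial migration" to generate the first revision and then flask db upgrade to apply it.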

Rabbitmq listener using pika in django

I have a Django application and I want to consume messages from RabbitMQ. I want the listener to start consuming when I start the Django server. I am using the pika library to connect to RabbitMQ. Providing some code example would really help.
First, you need to run your consumer when the Django project starts:
https://docs.djangoproject.com/en/2.0/ref/applications/#django.apps.AppConfig.ready
from django.apps import AppConfig
from django.conf import settings

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        if not settings.IS_ACCEPTANCE_TESTING and not settings.IS_UNITTESTING:
            consumer = AMQPConsuming()
            consumer.daemon = True
            consumer.start()
Then, in any convenient place:
import threading

import pika
from django.conf import settings

class AMQPConsuming(threading.Thread):
    def callback(self, ch, method, properties, body):
        # do something with the message, then acknowledge it
        ch.basic_ack(delivery_tag=method.delivery_tag)

    @staticmethod
    def _get_connection():
        parameters = pika.URLParameters(settings.RABBIT_URL)
        return pika.BlockingConnection(parameters)

    def run(self):
        connection = self._get_connection()
        channel = connection.channel()
        channel.queue_declare(queue='task_queue6')
        print('Hello world! :)')
        channel.basic_qos(prefetch_count=1)
        # pika < 1.0 signature; in pika 1.x use
        # channel.basic_consume(queue='task_queue6', on_message_callback=self.callback)
        channel.basic_consume(self.callback, queue='task_queue6')
        channel.start_consuming()
This will help
http://www.rabbitmq.com/tutorials/tutorial-six-python.html
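One caveat: manage.py runserver with the auto-reloader imports the project twice, so ready() (and therefore the consumer thread) can start in both processes. A common guard, assuming the default reloader, is to check Django's RUN_MAIN environment variable (with --noreload it stays unset, so drop the guard in that case):
import os

def ready(self):
    if os.environ.get('RUN_MAIN') != 'true':
        return  # skip the reloader's watcher process
    ...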

How to get the "full" async result in Celery link_error callback

I have Celery 3.1.18 running with Django 1.6.11 and RabbitMQ 3.5.4, and I am trying to test my async task in a failure state (CELERY_ALWAYS_EAGER=True). However, I cannot get the proper "result" in the error callback. The example in the Celery docs shows:
@app.task(bind=True)
def error_handler(self, uuid):
    result = self.app.AsyncResult(uuid)
    print('Task {0} raised exception: {1!r}\n{2!r}'.format(
        uuid, result.result, result.traceback))
When I do this, my result is still "PENDING", result.result = '', and result.traceback=''. But the actual result returned by my .apply_async call has the right "FAILURE" state and traceback.
My code (basically a Django Rest Framework RESTful endpoint that parses a .tar.gz file and then sends a notification back to the user when the file is done parsing):
views.py:
from producer_main.celery import app as celery_app

@celery_app.task()
def _upload_error_simple(uuid):
    print(uuid)
    result = celery_app.AsyncResult(uuid)
    print(result.backend)
    print(result.state)
    print(result.result)
    print(result.traceback)
    msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
                                                           result.result,
                                                           result.traceback)

class UploadNewFile(APIView):
    def post(self, request, repository_id, format=None):
        try:
            uploaded_file = self.data['files'][self.data['files'].keys()[0]]
            self.path = default_storage.save('{0}/{1}'.format(settings.MEDIA_ROOT,
                                                              uploaded_file.name),
                                             uploaded_file)
            print(type(import_file))
            self.async_result = import_file.apply_async((self.path, request.user),
                                                        link_error=_upload_error_simple.s())
            print('results from self.async_result:')
            print(self.async_result.id)
            print(self.async_result.backend)
            print(self.async_result.state)
            print(self.async_result.result)
            print(self.async_result.traceback)
            return Response()
        except (PermissionDenied, InvalidArgument, NotFound, KeyError) as ex:
            gutils.handle_exceptions(ex)
tasks.py:
from producer_main.celery import app
from utilities.general import upload_class

@app.task
def import_file(path, user):
    """Asynchronously import a course."""
    upload_class(path, user)
celery.py:
"""
As described in
http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
"""
from __future__ import absolute_import
import os
import logging
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'producer_main.settings')
from django.conf import settings
log = logging.getLogger(__name__)
app = Celery('producer') # pylint: disable=invalid-name
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) # pragma: no cover
#app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
My backend is configured as such:
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = False
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'redis://localhost'
CELERY_RESULT_PERSISTENT = True
CELERY_IGNORE_RESULT = False
When I run my unittest for the link_error state, I get:
Creating test database for alias 'default'...
<class 'celery.local.PromiseProxy'>
130ccf13-c2a0-4bde-8d49-e17eeb1b0115
<celery.backends.redis.RedisBackend object at 0x10aa2e110>
PENDING
None
None
results from self.async_result:
130ccf13-c2a0-4bde-8d49-e17eeb1b0115
None
FAILURE
Non .zip / .tar.gz file passed in.
Traceback (most recent call last):
So the task results are not available in my _upload_error_simple() method, but they are available from the returned self.async_result variable...
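(A likely contributor: with CELERY_ALWAYS_EAGER = True the task runs inline and returns an in-memory EagerResult; nothing is written to the Redis result backend, so looking the task up again by id with AsyncResult yields a fresh, and therefore PENDING, result.)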
I could not get the link and link_error callbacks to work, so I finally had to use the on_failure and on_success task methods described in the docs and this SO question. My tasks.py then looks like:
from celery import Task

class ErrorHandlingTask(Task):
    abstract = True

    def on_failure(self, exc, task_id, targs, tkwargs, einfo):
        msg = 'Import of {0} raised exception: {1!r}'.format(targs[0].split('/')[-1],
                                                             str(exc))

    def on_success(self, retval, task_id, targs, tkwargs):
        msg = "Upload successful. You may now view your course."

@app.task(base=ErrorHandlingTask)
def import_file(path, user):
    """Asynchronously import a course."""
    upload_class(path, user)
You appear to have _upload_error() as a bound method of your class - this is probably not what you want. Try making it a stand-alone task:
@celery_app.task(bind=True)
def _upload_error(self, uuid):
    result = celery_app.AsyncResult(uuid)
    msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
                                                           result.result,
                                                           result.traceback)

class Whatever(object):
    ....
    self.async_result = import_file.apply_async((self.path, request.user),
                                                link=self._upload_success.s(
                                                    "Upload finished."),
                                                link_error=_upload_error.s())
In fact, there's no need for the self parameter since it's not used, so you could just do this:
@celery_app.task()
def _upload_error(uuid):
    result = celery_app.AsyncResult(uuid)
    msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
                                                           result.result,
                                                           result.traceback)
Note the absence of bind=True and self.
Be careful with UUID instances!
If you try to get the status of a task with an id that is a UUID instance rather than a string, you will only ever get a PENDING status.
from uuid import UUID
from celery.result import AsyncResult
task_id = UUID('d4337c01-4402-48e9-9e9c-6e9919d5e282')
print(AsyncResult(task_id).state)
# PENDING
print(AsyncResult(str(task_id)).state)
# SUCCESS
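The result backend stores task metadata keyed by the string form of the id, and an id it has never seen simply reports PENDING, so coerce the UUID with str() before the lookup.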

Why does Flask-Security Cause a new KVSession Record for Each Request?

I'm trying out Flask-KVSession as an alternative session implementation for a Flask web site. I've created a test website (see Code 1 below). When I run this, I can store values into the session by navigating between the various resources in my web browser. This works correctly. Also, when I look at the sessions table in the resulting SQLite database, I see a single record that was used to store this session the entire time.
Then I try to add Flask-Security to this (see Code 2 below). After running this site (making sure to first delete the existing test.db sqlite file), I am brought to the login prompt and I log in. Then I proceed to do the same thing of jumping back and forth between the resources. I get the same results.
The problem is that when I look in the sqlitebrowser sessions table, there are 8 records. It turns out a new session record was created for EACH request that was made.
Why does a new session record get created for each request when using Flask-Security? Why isn't the existing session updated like it was before?
Code 1 (KVSession without Flask-Security)
import os
from flask import Flask, session

app = Flask(__name__)
app.secret_key = os.urandom(64)

#############
# SQLAlchemy
#############
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy(app)
DB_DIR = os.path.dirname(os.path.abspath(__file__))
DB_URI = 'sqlite:////{0}/test.db'.format(DB_DIR)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URI

@app.before_first_request
def create_user():
    db.create_all()

############
# KVSession
############
from simplekv.db.sql import SQLAlchemyStore
from flask.ext.kvsession import KVSessionExtension

store = SQLAlchemyStore(db.engine, db.metadata, 'sessions')
kvsession = KVSessionExtension(store, app)

@app.route('/a')
def a():
    session['last'] = 'a'
    return 'Thank you for visiting A!'

@app.route('/b')
def b():
    session['last'] = 'b'
    return 'Thank you for visiting B!'

@app.route('/c')
def c():
    return 'You last visited "{0}"'.format(session['last'])

app.run(debug=True)
Code 2 (KVSession WITH Flask-Security)
import os
from flask import Flask, session

app = Flask(__name__)
app.secret_key = os.urandom(64)

#############
# SQLAlchemy
#############
from flask.ext.sqlalchemy import SQLAlchemy

db = SQLAlchemy(app)
DB_DIR = os.path.dirname(os.path.abspath(__file__))
DB_URI = 'sqlite:////{0}/test.db'.format(DB_DIR)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URI

###########
# Security
###########
# This import needs to happen after SQLAlchemy db is created above
from flask.ext.security import (
    Security, SQLAlchemyUserDatastore, current_user,
    UserMixin, RoleMixin, login_required
)

# Define models
roles_users = db.Table('roles_users',
                       db.Column('user_id', db.Integer(), db.ForeignKey('user.id')),
                       db.Column('role_id', db.Integer(), db.ForeignKey('role.id')))

class Role(db.Model, RoleMixin):
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(80), unique=True)
    description = db.Column(db.String(255))

class User(db.Model, UserMixin):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(255), unique=True)
    password = db.Column(db.String(255))
    active = db.Column(db.Boolean())
    confirmed_at = db.Column(db.DateTime())
    roles = db.relationship('Role', secondary=roles_users,
                            backref=db.backref('users', lazy='dynamic'))

user_datastore = SQLAlchemyUserDatastore(db, User, Role)
security = Security(app, user_datastore)

@app.before_first_request
def create_user():
    db.create_all()
    user_datastore.create_user(email='test@example.com', password='password')
    db.session.commit()

############
# KVSession
############
from simplekv.db.sql import SQLAlchemyStore
from flask.ext.kvsession import KVSessionExtension

store = SQLAlchemyStore(db.engine, db.metadata, 'sessions')
kvsession = KVSessionExtension(store, app)

@app.route('/a')
@login_required
def a():
    session['last'] = 'a'
    return 'Thank you for visiting A!'

@app.route('/b')
@login_required
def b():
    session['last'] = 'b'
    return 'Thank you for visiting B!'

@app.route('/c')
@login_required
def c():
    return 'You last visited "{0}"'.format(session['last'])

app.run(debug=True)
Version Info
Python 2.7.3
Flask==0.9
Flask-KVSession==0.3.2
Flask-Login==0.1.3
Flask-Mail==0.8.2
Flask-Principal==0.3.5
Flask-SQLAlchemy==0.16
Flask-Security==1.6.3
SQLAlchemy==0.8.1
Turns out this is related to a known problem with flask-login (which flask-security uses) when flask-login is used with a session storage library like KVSession.
Basically, KVSession needs to update the database with the new session information whenever data in the session is created or modified. And in the sample above, this happens correctly: the first time I hit a page, the session is created. After that, the existing session is updated.
However, in the background the browser sends a cookie-less request to my web server looking for my favicon. Therefore, flask is handling a request to /favicon.ico. This request (or any other request that would 404) is still handled by flask. This means that flask-login will look at the request and try to do its magic.
It so happens that flask-login doesn't TRY to put anything into the session, but it still LOOKS like the session has been modified as far as KVSession is concerned. Because it LOOKS like the session is modified, KVSession updates the database. The following is code from flask-login:
def _update_remember_cookie(self, response):
    operation = session.pop("remember", None)
    ...
The _update_remember_cookie method is called during the request lifecycle. Although session.pop will not change the session if the session doesn't have the "remember" key (which in this case it doesn't), KVSession still sees a pop and assumes that the session changes.
The issue for flask-login provides the simple bug fix, but it has not been pushed into flask-login. It appears that the maintainer is looking for a complete rewrite, and will implement it there.
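Until such a rewrite lands, one stopgap in the spirit of the fix from that issue (a hypothetical monkey-patch; only the method name is taken from the flask-login source quoted above) is to skip the pop when the key is absent, so KVSession never sees a phantom modification:
from flask import session
import flask_login

_original = flask_login.LoginManager._update_remember_cookie

def _patched(self, response):
    # Only touch the session when there is actually something to pop.
    if 'remember' in session:
        return _original(self, response)
    return response

flask_login.LoginManager._update_remember_cookie = _patched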

Django Celerybeat PeriodicTask running far more than expected

I'm struggling with Django, Celery, djcelery & PeriodicTasks.
I've created a task to pull a report for Adsense to generate a live stat report. Here is my task:
import datetime
import httplib2
import logging

from apiclient.discovery import build
from celery.task import PeriodicTask
from django.contrib.auth.models import User
from oauth2client.django_orm import Storage

from .models import Credential, Revenue

logger = logging.getLogger(__name__)

class GetReportTask(PeriodicTask):
    run_every = datetime.timedelta(minutes=2)

    def run(self, *args, **kwargs):
        scraper = Scraper()
        scraper.get_report()

class Scraper(object):
    TODAY = datetime.date.today()
    YESTERDAY = TODAY - datetime.timedelta(days=1)

    def get_report(self, start_date=YESTERDAY, end_date=TODAY):
        logger.info('Scraping Adsense report from {0} to {1}.'.format(
            start_date, end_date))
        user = User.objects.get(pk=1)
        storage = Storage(Credential, 'id', user, 'credential')
        credential = storage.get()
        if credential is not None and credential.invalid is False:
            http = httplib2.Http()
            http = credential.authorize(http)
            service = build('adsense', 'v1.2', http=http)
            reports = service.reports()
            report = reports.generate(
                startDate=start_date.strftime('%Y-%m-%d'),
                endDate=end_date.strftime('%Y-%m-%d'),
                dimension='DATE',
                metric='EARNINGS',
            )
            data = report.execute()
            for row in data['rows']:
                date = row[0]
                revenue = row[1]
                try:
                    record = Revenue.objects.get(date=date)
                except Revenue.DoesNotExist:
                    record = Revenue()
                record.date = date
                record.revenue = revenue
                record.save()
        else:
            logger.error('Invalid Adsense Credentials')
I'm using Celery & RabbitMQ. Here are my settings:
# Celery/RabbitMQ
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "myuser"
BROKER_PASSWORD = "****"
BROKER_VHOST = "myvhost"
CELERYD_CONCURRENCY = 1
CELERYD_NODES = "w1"
CELERY_RESULT_BACKEND = "amqp"
CELERY_TIMEZONE = 'America/Denver'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
import djcelery
djcelery.setup_loader()
At first glance everything seems to work, but after turning on the logger and watching it run, I found that it runs the task at least four times in a row - sometimes more. It also seems to run every minute instead of every two minutes. I've tried changing run_every to use a crontab, but I get the same results.
I'm starting celerybeat using supervisor. Here is the command I use:
python manage.py celeryd -B -E -c 1
Any ideas as to why it's not working as expected?
Oh, and one more thing: after the day changes, it continues to use the date range it first ran with. So as days progress it continues to get stats for the day the task started running, unless I run the task manually at some point, in which case it changes to the date I last ran it manually. Can someone tell me why this happens?
Consider creating a separate queue with one worker process and a fixed rate limit for this type of task, and add the tasks to this new queue instead of running them directly from celerybeat. That should help you figure out whether the problem is with celerybeat or whether your tasks are running longer than expected.
@task(queue='create_report', rate_limit='0.5/m')
def create_report():
    scraper = Scraper()
    scraper.get_report()

class GetReportTask(PeriodicTask):
    run_every = datetime.timedelta(minutes=2)

    def run(self, *args, **kwargs):
        create_report.delay()
In settings.py:
CELERY_ROUTES = {
    'myapp.tasks.create_report': {'queue': 'create_report'},
}
Start an additional celery worker that will handle the tasks in your queue:
celery worker -c 1 -Q create_report -n create_report.local
Problem 2: your YESTERDAY and TODAY variables are set at class level, so they are evaluated only once, when the class is first loaded, and never again; that is why the date range goes stale as days pass.
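A minimal sketch of that fix (keeping the Scraper interface above; the None defaults are an illustrative choice) is to compute the dates inside the method, so each run gets fresh values:
class Scraper(object):
    def get_report(self, start_date=None, end_date=None):
        # Default arguments are evaluated per call here, not once at import time.
        end_date = end_date or datetime.date.today()
        start_date = start_date or end_date - datetime.timedelta(days=1)
        ...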