I have an API built with Django REST Framework.
The API is for user signup. After signup, the API sends a verification email to the user, but sending the email takes a little time, so I want to send it in the background.
What should be the approach for this requirement?
Here is the approach to achieve what you want.
Install Celery.
Create a celery.py file in your project folder, where your settings.py file is located (recommended but not necessary), and paste the following code into it.
Replace example with your project name:
from celery import Celery
import os
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'example.settings')
app = Celery('example')
app.config_from_object(settings, namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Add these lines to your settings.py:
CELERY_BROKER_URL = 'redis://{}:{}'.format(REDIS_SERVER_HOST, REDIS_SERVER_PORT)
CELERY_RESULT_BACKEND = 'redis://{}:{}'.format(REDIS_SERVER_HOST, REDIS_SERVER_PORT)
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
Make sure the Redis server is running, and assign appropriate values to the REDIS_SERVER_HOST and REDIS_SERVER_PORT variables.
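For example, these two variables could be defined near the top of settings.py. Reading them from the environment is just one option, not something the original answer prescribes:

# settings.py -- a minimal sketch; the environment-variable names are assumptions
import os

REDIS_SERVER_HOST = os.environ.get('REDIS_SERVER_HOST', '127.0.0.1')
REDIS_SERVER_PORT = os.environ.get('REDIS_SERVER_PORT', '6379')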
Open the __init__.py file of your project directory and paste this code:
from .celery import app as celery_app
__all__ = ['celery_app']
Create a tasks.py file in your app directory and write a function that sends an email.
For example:
from example import celery_app
from django.core.mail import send_mail

@celery_app.task
def send_celery_email(recipient_list):
    # your actual mail function
    send_mail("subject", "message", from_email='test@gmail.com', recipient_list=recipient_list)
Start your Celery worker using:
celery -A example worker --loglevel=info
Call the task from your views as a normal function and pass the required arguments:
from .tasks import send_celery_email

send_celery_email.delay(recipient_list=[])
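For context, here is a rough sketch of how the task could be queued from a DRF signup view. The APIView and UserSignupSerializer below are hypothetical and not part of the original answer:

# views.py -- a minimal sketch; UserSignupSerializer is a hypothetical serializer
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .serializers import UserSignupSerializer  # hypothetical import path
from .tasks import send_celery_email


class SignupView(APIView):
    def post(self, request):
        serializer = UserSignupSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        user = serializer.save()
        # .delay() queues the task on the broker; the HTTP response returns
        # immediately while the Celery worker sends the email in the background.
        send_celery_email.delay(recipient_list=[user.email])
        return Response(serializer.data, status=status.HTTP_201_CREATED)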
Note: this is just a roadmap of the workflow; the actual code may vary according to your requirements and Celery version.
Also check out the documentation.
I'm running a Flask app that runs several Celery tasks (with Redis as the backend) and sometimes caches API calls with Flask-Caching. It will run on Heroku, although at the moment I'm running it locally. I'm trying to figure out whether there's a way to reuse my various config variables for Redis access, mainly in case Heroku changes the credentials, moves Redis to another server, etc. Currently I'm repeating the same Redis credentials in several places.
From my .env file:
CACHE_REDIS_URL = "redis://127.0.0.1:6379/1"
REDBEAT_REDIS_URL = "redis://127.0.0.1:6379/1"
CELERY_BROKER_URL = "redis://127.0.0.1:6379/1"
RESULT_BACKEND = "redis://127.0.0.1:6379/1"
From my config.py file:
import os
from pathlib import Path

basedir = os.path.abspath(os.path.dirname(__file__))

class Config(object):
    # non-Redis values are above and below these items
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "redis://127.0.0.1:6379/0")
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", "redis://127.0.0.1:6379/0")
    CELERY_RESULT_BACKEND = RESULT_BACKEND  # because of the deprecated value
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", "redis://127.0.0.1:6379/0")
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", "redis://127.0.0.1:6379/0")
In extensions.py:
from celery import Celery
from src.cache import cache

celery = Celery()

def register_extensions(app, worker=False):
    cache.init_app(app)

    # load celery config
    celery.config_from_object(app.config)

    if not worker:
        # register celery-irrelevant extensions
        pass
In my __init__.py:
import os

from flask import Flask, jsonify, request, current_app

from src.extensions import register_extensions
from config import Config

def create_worker_app(config_class=Config):
    """Minimal App without routes for celery worker."""
    app = Flask(__name__)
    app.config.from_object(config_class)

    register_extensions(app, worker=True)

    return app
from my worker.py file:
from celery import Celery
from celery.schedules import schedule
from redbeat import RedBeatSchedulerEntry as Entry

from . import create_worker_app

# load several tasks from other files here

def create_celery(app):
    celery = Celery(
        app.import_name,
        backend=app.config["RESULT_BACKEND"],
        broker=app.config["CELERY_BROKER_URL"],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

flask_app = create_worker_app()
celery = create_celery(flask_app)

# call the tasks, passing app=celery as a parameter
This all works fine, locally (I've tried to remove code that isn't relevant to the Celery configuration). I haven't finished deploying to Heroku yet because I remembered that when I install Heroku Data for Redis, it creates a REDIS_URL setting that I'd like to use.
I've been trying to change my config.py values to use REDIS_URL instead of the other things they use, but every time I try to run my celery tasks the connection fails unless I have distinct env values as shown in my config.py above.
What I'd like to have in config.py would be this:
import os
from pathlib import Path

basedir = os.path.abspath(os.path.dirname(__file__))

class Config(object):
    REDIS_URL = os.environ.get("REDIS_URL", "redis://127.0.0.1:6379/0")
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", REDIS_URL)
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", REDIS_URL)
    CELERY_RESULT_BACKEND = RESULT_BACKEND
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", REDIS_URL)
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", REDIS_URL)
When I try this, remove all of the values from .env except for REDIS_URL, and then try to run one of my Celery tasks, the task never runs. The Celery worker appears to start correctly, and the Flask-Caching requests run correctly (these run directly within the application rather than using the worker). The task never appears as received in the worker's debug logs, and eventually the server request times out.
Is there anything I can do to reuse REDIS_URL with Celery in this way? If I can't, is there anything Heroku expects me to do to maintain the credentials/server path/etc. for where it is serving Redis for Celery, when I'm using the same Redis instance for several purposes like this?
By running my Celery worker with the -E flag, as in celery -A src.worker:celery worker -S redbeat.RedBeatScheduler --loglevel=INFO -E, I was able to figure out that my error was happening because Flask's instance of Celery, in gunicorn, was not able to access the config values for Celery that the worker was using.
What I've done to try to resolve this appears to have worked.
In extensions.py, instead of configuring Celery, I've done this, removing all other mentions of Celery:
from celery import Celery
celery = Celery('scraper') # a temporary name
Then, on the same level, I created a celery.py:
from celery import Celery
from flask import Flask

from src import extensions

def configure_celery(app):
    TaskBase = extensions.celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    extensions.celery.conf.update(
        broker_url=app.config['CELERY_BROKER_URL'],
        result_backend=app.config['RESULT_BACKEND'],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    extensions.celery.Task = ContextTask
    return extensions.celery
In worker.py, I'm doing:
from celery import Celery
from celery.schedules import schedule

from . import create_worker_app
from src.celery import configure_celery

flask_app = create_worker_app()
celery = configure_celery(flask_app)
I'm doing a similar thing in app.py:
from src.celery import configure_celery
app = create_app()
configure_celery(app)
As far as I can tell, this doesn't change how the worker behaves at all, but it allows me to access the tasks, via blueprint endpoints, in the browser.
I found this technique in this article and its accompanying GitHub repo.
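For illustration, a blueprint endpoint that triggers one of the tasks might look roughly like this; the blueprint, route, and scrape_site task below are hypothetical and not taken from the article or repo:

# routes.py -- a minimal sketch; scrape_site is a hypothetical task
from flask import Blueprint, jsonify

from src.tasks import scrape_site  # hypothetical task module

tasks_bp = Blueprint("tasks", __name__)


@tasks_bp.route("/scrape", methods=["POST"])
def start_scrape():
    # queue the task on the Celery worker and return its id to the caller
    result = scrape_site.delay()
    return jsonify({"task_id": result.id}), 202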
I am using Celery beat to perform a task that is supposed to be executed at a specific time. I was trying to execute it now by changing the time, just to see if it works correctly. What I have noticed is that it sends the task correctly when I run a fresh command, that is celery -A jgs beat -l INFO, but if I then change the time in the schedule section to two or three minutes from now and run the above command again, beat does not send the task. Then I noticed something strange: if I go to the admin area and delete all the other old tasks that were created in the crontab table, and then run the command again, it sends the task to the worker again.
The tasks are being traced by the worker correctly, and the Celery worker itself is working correctly. Below is the code that I wrote to perform the task.
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
from celery.schedules import crontab
from django.utils import timezone
from datetime import timezone

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'jgs.settings')

app = Celery('jgs')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.conf.enable_utc = False
app.conf.update(timezone='Asia/Kolkata')
# app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
#                 CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
app.config_from_object('django.conf:settings', namespace='CELERY')

# Celery beat settings
app.conf.beat_schedule = {
    'send-expiry-email-everyday': {
        'task': 'control.tasks.send_expiry_mail',
        'schedule': crontab(hour=1, minute=5),
    }
}

# Load task modules from all registered Django apps.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
control/tasks.py
from celery import shared_task
from django.core.mail import message, send_mail, EmailMessage
from django.conf import settings
from django.template.loader import render_to_string
from datetime import datetime, timedelta
from account.models import CustomUser
from home.models import Contract

@shared_task
def send_expiry_mail():
    template = render_to_string('expiry_email.html')
    email = EmailMessage(
        'Registration Successfull',  # subject
        template,  # body
        settings.EMAIL_HOST_USER,  # from email
        ['emaiid@gmail.com'],  # recipient list
    )
    email.fail_silently = False
    email.content_subtype = 'html'  # without this the HTML will get rendered as plain text
    email.send()
    return "Done"
settings.py
############# CELERY SETTINGS #######################
CELERY_BROKER_URL = 'redis://127.0.0.1:6379'
# CELERY_BROKER_URL = os.environ['REDIS_URL']
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/Kolkata'
CELERY_RESULT_BACKEND = 'django-db'
# CELERY BEAT CONFIGURATIONS
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
Commands that I am using:
For the worker:
celery -A jgs.celery worker --pool=solo -l info
For beat:
celery -A jgs beat -l INFO
Please correct me where I am going wrong or what I am writing wrong; I am completely in the beginner phase of this async part.
I am really sorry if my sentences above were confusing.
I am trying to deploy an app on Heroku and I am using Celery and Redis to manage background tasks. I currently have a background task that collects data via FTP and puts it in the database. I also have a loading page that periodically refreshes until the task completes. However, I cannot retrieve the list of active tasks (inspect from celery.task.control returns None). I tried running this locally, and I can see that Celery receives the task (in the terminal). I can also see that Celery connects to Redis at the correct port during startup.
I have tried reinstalling several libraries, and ensuring that all variables in the settings.py file were set properly. I also tried checking the value of os.environ['REDIS_URL'], and it is correct.
relevant code from settings.py
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": os.environ['REDIS_URL'],
    }
}

CELERY_BROKER_URL = os.environ['REDIS_URL']
CELERY_RESULT_BACKEND = os.environ['REDIS_URL']
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
celery.py:
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'genome.settings')
os.environ.setdefault('REDIS_URL', 'redis://localhost:6379/0')

app = Celery('genome_app')
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
(in the app's views.py)
from celery.task.control import inspect
...
i = inspect()
active_tasks = list(i.active().values())[0]
This raises:
AttributeError: 'NoneType' object has no attribute 'values'
from celery.task.control import inspect

i = inspect()
dictfile = i.active()
properties = []

# if you want to get the args and task id as a new dict
for dictele in dictfile:
    for dictloop in dictfile[dictele]:
        details = {}  # build a fresh dict per task so earlier entries are not overwritten
        jobid = dictloop['args']
        taskid = dictloop['id']
        jobid = jobid.replace("('", "")
        jobid = jobid.replace("',)", '')
        details["jobid"] = jobid
        details["taskid"] = taskid
        properties.append(details)

print(properties)
You can build your own task manager list from the details above.
I have been having the same problem for a while now. It seems, though, that the devs are aware of it (https://github.com/celery/kombu/issues/1081). I have found that forcing an older version of kombu (4.5.0 seems to work for me now) makes it work again for the time being.
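For example, with pip the pin could be applied like this (version taken from the answer above; adjust or remove it once the upstream issue is fixed):
pip install kombu==4.5.0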
I'm trying to create my first Celery task. The task will send the same e-mail every minute to the same person.
According to the documentation, I created my first task in my project:
from __future__ import absolute_import, unicode_literals
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_message():
    to = ['test@test.com', ]
    send_mail('TEST TOPIC',
              'TEST MESSAGE',
              'test@test.com',
              to)
Then, in my project folder, I add the celery.py file, which looks like this:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings
from celery.schedules import crontab

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app_rama.settings')

app = Celery('app_rama')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings')

# Load task modules from all registered Django app configs.
app.autodiscover_tasks(settings.INSTALLED_APPS)

app.conf.beat_schedule = {
    'send-message-every-single-minute': {
        'task': 'app.tasks.send_message',
        'schedule': crontab(),  # change to `crontab(minute=0, hour=0)` if you want it to run daily at midnight
    },
}
Then, in the __init__.py file of my project, I added:
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
__all__ = ('celery_app',)
And the last thing I try to do is run the command:
celery -A app_rama worker -l info
And then I receive the following error:
[2019-06-27 16:01:26,750: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [WinError 10061]
I tried many solutions from the forum, but I did not find the correct one.
Adding the following setting to my settings.py file did not help either:
CELERY_BROKER_URL = 'amqp://guest:guest@localhost:5672//'
How can I solve this error so that my task works in the background of the application?
Your Celery broker is probably misconfigured. Read the "Using RabbitMQ" document to find out how to set up RabbitMQ properly (I assume you want to use RabbitMQ, as you have the "amqp" protocol in your example).
I recommend learning Celery with Redis, as it is easier to set up and manage. Then, once you learn the basics, you may decide to move to RabbitMQ or some other supported broker.
Also, verify that your RabbitMQ server is running properly. If you use Windows, make sure no software on it prevents user processes from connecting to localhost:5672.
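For a rough idea of what a Redis-based setup could look like, the broker settings might be something like the following. This is a minimal sketch assuming a local Redis on the default port and the namespace='CELERY' configuration shown in the Django examples above, not the asker's actual configuration:

# settings.py -- a minimal sketch, assuming a local Redis on the default port
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'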
I am using Django Celery (4.2). I add some tasks from my Django view, and I also want to get the task results asynchronously in a separate process, but when I try to get a result, I get an error.
My full steps are as follows:
django celery config:
proj/settings/celery.py
import os
from celery import Celery
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Nuwa.settings.development')
app = Celery('Nuwa')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
proj/settings.py
CELERY_BROKER_URL = f'redis://{REDIS["HOST"]}:{REDIS["PORT"]}'
CELERY_RESULT_BACKEND = f'redis://{REDIS["HOST"]}:{REDIS["PORT"]}'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
proj/settings/__init__.py
from .celery import app as celery_app
__all__ = ('celery_app', )
Call the Celery task in a Django view:
result = port_scan.delay(target)
redis_conn.sadd(celery_task_set_key, result.task_id)
In this step, I store the task_id in a Redis set for future use.
Get the task result:
redis_conn = redis.Redis(host=REDIS_HOST, port=REDIS_PORT)
celery_tasks = redis_conn.smembers('celery-tasks')
for task_id in celery_tasks:
    print(task_id)
    celery_result = AsyncResult(task_id)
    print(celery_result.state)
When I try to get the result, I get this error:
AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
I tried some solutions found by searching Google and Stack Overflow, but they didn't work.
I had the same issue when we upgraded from Celery 3.x to 4.x. I tried various solutions, but the simplest is just to use set_default. So, in celery.py, you need to add this:
app.set_default()
This makes sure that calls to AsyncResult(task_id) use the fully configured/bootstrapped version of the Celery app (i.e. the one that uses your CELERY_RESULT_BACKEND setting), instead of the original/default version (which uses the DisabledBackend).
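For reference, a minimal sketch of where the call could sit, based on the celery.py shown in the question (an illustration, not the asker's exact file):

# proj/settings/celery.py -- sketch based on the question's file
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Nuwa.settings.development')

app = Celery('Nuwa')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.set_default()  # make this app the current default so AsyncResult() uses its result backend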