Django: Simple Celery task not working

I'm new to Celery. I have a task that is not working and I don't know why. I'm using RabbitMQ. Here is my code:
In settings.py:
BROKER_URL = "amqp://guest@localhost//"
tasks.py:
from celery.decorators import task
from celery.utils.log import get_task_logger
from hisoka.models import FeralSpirit, Fireball
logger = get_task_logger(__name__)
@task
def test_task():
    fireball = Fireball.objects.last()
    feral_spirit = FeralSpirit.objects.filter(fireball=fireball).last()
    counters = feral_spirit.increase_counter()
    logger.info("%s counters: %s", feral_spirit, counters)
The task is just a test; it increases a counter that is a field of the FeralSpirit model. It works correctly if I call the function directly, without delay().
views.py
class FireballDetail(ListView):
    def get_queryset(self, *args, **kwargs):
        test_task.delay()
        ...
I have a RabbitMQ server running (or at least it looks like it) in one terminal and the Django development server in another. Am I missing something obvious? I have a celery.py and a modified __init__.py, exactly following the documentation.

Most probably your Celery worker is not running. Try:
celery -A {project_name} worker --loglevel=info -Q {queue_name}
Substitute project_name and queue_name; if you haven't configured any custom queues you can drop -Q entirely (the default queue is named celery).
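If the worker starts but the task still never runs, it's worth comparing your celery.py and __init__.py against the documented layout. A minimal sketch of that layout, assuming a placeholder project name myproject and reading the settings without a CELERY_ namespace to match the BROKER_URL style above:

# myproject/celery.py -- sketch following the "First steps with Django" docs
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings')   # reads BROKER_URL etc. from settings.py
app.autodiscover_tasks()

# myproject/__init__.py -- make sure the Celery app is loaded when Django starts
from .celery import app as celery_app
__all__ = ('celery_app',)

When started from the directory that contains this package, the worker should list your registered tasks in its startup banner; if test_task is missing there, the worker will never receive it.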

Related

Reuse of Celery configuration values for Heroku and local Flask

I'm running a Flask app that runs several Celery tasks (with Redis as the backend) and sometimes caches API calls with Flask-Caching. It will run on Heroku, although at the moment I'm running it locally. I'm trying to figure out if there's a way to reuse my various config variables for Redis access. Mainly in case Heroku changes the credentials, moves Redis to another server, etc. Currently I'm reusing the same Redis credentials in several ways.
From my .env file:
CACHE_REDIS_URL = "redis://127.0.0.1:6379/1"
REDBEAT_REDIS_URL = "redis://127.0.0.1:6379/1"
CELERY_BROKER_URL = "redis://127.0.0.1:6379/1"
RESULT_BACKEND = "redis://127.0.0.1:6379/1"
From my config.py file:
import os
from pathlib import Path
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    # non redis values are above and below these items
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "redis://127.0.0.1:6379/0")
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", "redis://127.0.0.1:6379/0")
    CELERY_RESULT_BACKEND = RESULT_BACKEND  # because of the deprecated value
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", "redis://127.0.0.1:6379/0")
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", "redis://127.0.0.1:6379/0")
In extensions.py:
from celery import Celery
from src.cache import cache
celery = Celery()
def register_extensions(app, worker=False):
    cache.init_app(app)

    # load celery config
    celery.config_from_object(app.config)

    if not worker:
        # register extensions the celery worker does not need
        pass
In my __init__.py:
import os
from flask import Flask, jsonify, request, current_app
from src.extensions import register_extensions
from config import Config
def create_worker_app(config_class=Config):
    """Minimal app without routes for the celery worker."""
    app = Flask(__name__)
    app.config.from_object(config_class)
    register_extensions(app, worker=True)
    return app
From my worker.py file:
from celery import Celery
from celery.schedules import schedule
from redbeat import RedBeatSchedulerEntry as Entry
from . import create_worker_app
# load several tasks from other files here
def create_celery(app):
    celery = Celery(
        app.import_name,
        backend=app.config["RESULT_BACKEND"],
        broker=app.config["CELERY_BROKER_URL"],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

flask_app = create_worker_app()
celery = create_celery(flask_app)

# call the tasks, passing app=celery as a parameter
This all works fine locally (I've tried to remove code that isn't relevant to the Celery configuration). I haven't finished deploying to Heroku yet because I remembered that when I install Heroku Data for Redis, it creates a REDIS_URL setting that I'd like to use.
I've been trying to change my config.py values to use REDIS_URL instead of the separate variables, but every time I try to run my Celery tasks the connection fails unless I have distinct env values as shown in my config.py above.
What I'd like to have in config.py would be this:
import os
from pathlib import Path
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    REDIS_URL = os.environ.get("REDIS_URL", "redis://127.0.0.1:6379/0")
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", REDIS_URL)
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", REDIS_URL)
    CELERY_RESULT_BACKEND = RESULT_BACKEND
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", REDIS_URL)
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", REDIS_URL)
When I try this, and when I remove all of the values from .env except for REDIS_URL and then try to run one of my Celery tasks, the task never runs. The Celery worker appears to run correctly, and the Flask-Caching requests run correctly (these run directly within the application rather than using the worker). It never appears as a received task in the worker's debug logs, and eventually the server request times out.
Is there anything I can do to reuse REDIS_URL with Celery in this way? If I can't, is there something Heroku expects me to do to maintain the credentials/server path/etc. for where it serves Redis for Celery, when I'm using the same Redis instance for several purposes like this?
By running my Celery worker with the -E flag, as in celery -A src.worker:celery worker -S redbeat.RedBeatScheduler --loglevel=INFO -E, I was able to figure out that my error was happening because Flask's instance of Celery, in gunicorn, was not able to access the config values for Celery that the worker was using.
What I've done to try to resolve this appears to have worked.
In extensions.py, instead of configuring Celery, I've done this, removing all other mentions of Celery:
from celery import Celery
celery = Celery('scraper') # a temporary name
Then, on the same level, I created a celery.py:
from celery import Celery
from flask import Flask
from src import extensions
def configure_celery(app):
    TaskBase = extensions.celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    extensions.celery.conf.update(
        broker_url=app.config['CELERY_BROKER_URL'],
        result_backend=app.config['RESULT_BACKEND'],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    extensions.celery.Task = ContextTask
    return extensions.celery
In worker.py, I'm doing:
from celery import Celery
from celery.schedules import schedule
from src.celery import configure_celery
from . import create_worker_app  # as in the earlier worker.py

flask_app = create_worker_app()
celery = configure_celery(flask_app)
I'm doing a similar thing in app.py:
from src.celery import configure_celery
app = create_app()
configure_celery(app)
As far as I can tell, this doesn't change how the worker behaves at all, but it allows me to access the tasks via blueprint endpoints in the browser.
I found this technique in this article and its accompanying GitHub repo
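For illustration only, a minimal sketch of what such a blueprint endpoint might look like; the blueprint name, route, and task import path here are hypothetical, not taken from the original code:

# routes.py -- hypothetical blueprint that triggers a task from the browser
from flask import Blueprint, jsonify
from src.tasks import some_task  # hypothetical task module

bp = Blueprint("tasks", __name__, url_prefix="/tasks")

@bp.route("/run-some-task")
def run_some_task():
    result = some_task.delay()              # enqueue on the worker
    return jsonify({"task_id": result.id})  # return immediately with the task id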

celery beat using flask task issue

I would like to run cron jobs from Flask using Celery, but I have an issue with the Celery beat schedule: it seems that my task is never loaded, and I don't know where to look for the issue.
This is where I define my Flask app, in views.py:
from flask import Flask
from celery.schedules import crontab

app = Flask(__name__)
app.config.update(
    CELERY_BROKER_URL='redis://localhost:6379',
    CELERY_RESULT_BACKEND='redis://localhost:6379',
    CELERY_BEAT_SCHEDULE={
        'task-number-one': {
            'task': 'app.tasks.test',
            'schedule': crontab(minute="*"),
        }
    },
    CELERY_IMPORTS=('app.tasks',),
    CELERY_TASK_RESULT_EXPIRES=30,
    CELERY_TIMEZONE='UTC',
)
and this is where my Celery object is created, in tasks.py:
from celery import Celery
from app.views import app
from celery import shared_task
def make_celery(app):
    celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'],
                    broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery_app = make_celery(app)

@celery_app.task()
def test():
    logger = test.get_logger()
    logger.info("Hello")
views.py and tasks.py are in the same directory, which is called app.
When I launch the celery worker, everything in its output looks normal. But when I launch celery beat, it seems my task is never sent by the schedule, and I don't know where to check.
Can you help me on this?
Best
I believe celery beat tasks need to be configured after the @app.on_after_configure.connect signal is sent at the earliest. You should be able to do the following in your tasks.py file:
from celery.schedules import crontab

celery_app.conf.CELERYBEAT_SCHEDULE = {
    "test-every-minute": {
        "task": "tasks.test",
        "schedule": crontab(minute="*"),
    },
}
Alternatively you can use this decorator syntax if your task is defined in the same file as your celery application instance.
@celery_app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(5, test_two.s(), name='test-every-5')
If your tasks are defined in a separate module, you can use the @app.on_after_finalize.connect decorator after importing your tasks.
@app.on_after_finalize.connect
def setup_periodic_tasks(sender, **kwargs):
    from app.tasks import test
    sender.add_periodic_task(10.0, test.s(), name='test-every-10')
Celery Beat Entry Docs

Celery + Django not working at the same time

I have a Django 2.0 project that is working fine; it's integrated with Celery 4.1.0. I am using jQuery to send AJAX requests to the backend, but I just realized they load endlessly due to some issue with Celery.
Celery Settings (celery.py)
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'converter.settings')
app = Celery('converter', backend='amqp', broker='amqp://guest@localhost//')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Celery Tasks (tasks.py)
from __future__ import absolute_import, unicode_literals
from celery import shared_task
@shared_task(time_limit=300)
def add(number1, number2):
    return number1 + number2
Django View (views.py)
class AddAjaxView(JSONResponseMixin, AjaxResponseMixin, View):
    def post_ajax(self, request, *args, **kwargs):
        url = request.POST.get('number', '')
        task = tasks.convert.delay(url, client_ip)
        result = AsyncResult(task.id)
        data = {
            'result': result.get(),
            'is_ready': True,
        }
        if result.successful():
            return self.render_json_response(data, status=200)
When I send an AJAX request to the Django app it loads endlessly, but when I terminate the Django server and run celery -A demoproject worker --loglevel=info, that is when my tasks run.
Question
How do I automate this so that when I run my Django project, my Celery tasks run automatically when I send an AJAX request?
In a development environment you have to run the Celery worker manually, since it does not run in the background automatically, in order to process the jobs in the queue. So if you want a smooth workflow, you need both the Django development server and the Celery worker running. As stated in the documentation:
In a production environment you’ll want to run the worker in the background as a daemon - see Daemonization - but for testing and development it is useful to be able to start a worker instance by using the celery worker manage command, much as you’d use Django’s manage.py runserver:
celery -A proj worker -l info
You can read their documentation for daemonization.
http://docs.celeryproject.org/en/latest/userguide/daemonizing.html
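Note also that result.get() in the view above blocks the request until the task finishes, which is why the page loads endlessly when no worker is consuming the queue. A rough, hypothetical sketch of a non-blocking variant (the view names, URL wiring, and the myapp import path are illustrative, not from the question):

# views.py -- hypothetical non-blocking variant: enqueue the task, return its id,
# and let the client-side jQuery poll a status endpoint until the result is ready.
from celery.result import AsyncResult
from django.http import JsonResponse

from myapp import tasks   # hypothetical path to the tasks.py shown above

def start_add(request):
    task = tasks.add.delay(2, 3)        # returns immediately once enqueued
    return JsonResponse({'task_id': task.id})

def add_status(request, task_id):
    result = AsyncResult(task_id)
    data = {'is_ready': result.ready()}
    if result.successful():
        data['result'] = result.get()   # safe here: the result has already arrived
    return JsonResponse(data)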

Can't start the worker for Running celery with Flask

I am following the example given in the following url to run celery with Flask:
http://flask.pocoo.org/docs/0.12/patterns/celery/
I followed everything word for word. The only difference is that my make_celery function is created under the following hierarchy:
package1/
    CeleryObjCreator.py
CeleryObjCreator.py has the make_celery function in the CeleryObjectHelper class, as follows:
from celery import Celery
class CeleryObjectHelper:
    def make_celery(self, app):
        celery = Celery(app.import_name, backend=app.config['CELERY_RESULT_BACKEND'],
                        broker=app.config['CELERY_BROKER_URL'])
        celery.conf.update(app.config)
        TaskBase = celery.Task

        class ContextTask(TaskBase):
            abstract = True

            def __call__(self, *args, **kwargs):
                with app.app_context():
                    return TaskBase.__call__(self, *args, **kwargs)

        celery.Task = ContextTask
        return celery
Now, I am facing problems with starting the celery worker.
At the end of the article, it suggests starting the celery worker as follows:
$ celery -A your_application.celery worker
In my case, I am using <> for the your_application string, which doesn't work and gives the following error:
ImportError: No module named 'package1.celery'
So I am not sure what the value of the your_application string should be here to start the celery worker.
EDIT
As suggested by Nour Chawich, I did try running the Flask app from the command line; my server does come up successfully.
Also, since app is a directory in my project structure that contains app.py, in the app.py code I replaced app = Flask(__name__) with flask_app = Flask(__name__) to separate out the variable names.
But when I try to start the celery worker using the command
celery -A app.celery worker --loglevel=info
it is not able to resolve the following import that I have in my code:
import app.myPackage as myPackage
It throws the following error:
ImportError: No module named 'app'
So I am really not sure what is going on here. Any ideas?

How to access the orm with celery tasks?

I'm trying to flip a boolean flag for particular types of objects in my database using SQLAlchemy + Celery beat. But how do I access my ORM from the tasks.py file?
from datetime import timedelta

from flask import current_app
from models import Book
from celery.decorators import periodic_task
from application import create_celery_app

celery = create_celery_app()
# Create celery: http://flask.pocoo.org/docs/0.10/patterns/celery/

# This task works fine
@celery.task
def celery_send_email(to, subject, template):
    with current_app.app_context():
        msg = Message(
            subject,
            recipients=[to],
            html=template,
            sender=current_app.config['MAIL_DEFAULT_SENDER']
        )
        return mail.send(msg)

# This fails
@periodic_task(name='release_flag', run_every=timedelta(seconds=10))
def release_flag():
    with current_app.app_context():  # <-- fails on this line
        books = Book.query.all()     # <-- fails here too
        for book in books:
            book.read = True
            book.save()
I'm using the following celery beat command to run this:
celery -A tasks worker -l INFO --beat
But I'm getting the following error:
raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context
Which points back to the with current_app.app_context() line
If I remove the current_app.app_context() line I will get the following error:
RuntimeError: application not registered on db instance and no application bound to current context
Is there a particular way to access the Flask-SQLAlchemy ORM from Celery tasks? Or is there a better approach to what I'm trying to do?
So far the only workaround that works is to add the following line after db.init_app(app) in my application factory:
db.app = app
I was following this repo to create my celery app https://github.com/mattupstate/overholt/blob/master/overholt/factory.py
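For context, a rough sketch of where that workaround sits in an application factory; the function and extension names follow the usual Flask factory pattern, not necessarily the linked repo:

# application.py -- sketch of an app factory with the workaround applied
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

def create_app():
    app = Flask(__name__)
    app.config.from_object('config')
    db.init_app(app)
    db.app = app   # workaround: bind the db to this app outside a request context
    return app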
You're getting that error because current_app requires an app context to work, but you're trying to use it to set up an app context. You need to use the actual app to set up the context, then you can use current_app.
with app.app_context():
    # do stuff that requires the app context
Or you can use a pattern such as the one described in the Flask docs to subclass celery.Task so it knows about the app context by default.
from celery import Celery

def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(app)
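With that in place, a periodic task can touch the ORM without wrapping anything in current_app.app_context() itself, because ContextTask pushes the context for every call. A rough sketch against the Book model from the question; the task name, the db import location, and the commit call are illustrative assumptions:

# tasks.py -- sketch: the ContextTask base pushes the app context automatically
from models import Book, db   # assumes the models module also exposes the db object

@celery.task(name='release_flag')
def release_flag():
    for book in Book.query.all():   # works: we are already inside app.app_context()
        book.read = True
    db.session.commit()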