Reuse of Celery configuration values for Heroku and local Flask

I'm running a Flask app that runs several Celery tasks (with Redis as the backend) and sometimes caches API calls with Flask-Caching. It will run on Heroku, although at the moment I'm running it locally. I'm trying to figure out if there's a way to reuse my various config variables for Redis access, mainly in case Heroku changes the credentials, moves Redis to another server, etc. Currently I'm repeating the same Redis credentials in several places.
From my .env file:
CACHE_REDIS_URL = "redis://127.0.0.1:6379/1"
REDBEAT_REDIS_URL = "redis://127.0.0.1:6379/1"
CELERY_BROKER_URL = "redis://127.0.0.1:6379/1"
RESULT_BACKEND = "redis://127.0.0.1:6379/1"
From my config.py file:
import os
from pathlib import Path
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    # non-redis values are above and below these items
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "redis://127.0.0.1:6379/0")
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", "redis://127.0.0.1:6379/0")
    CELERY_RESULT_BACKEND = RESULT_BACKEND  # because of the deprecated value
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", "redis://127.0.0.1:6379/0")
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", "redis://127.0.0.1:6379/0")
In extensions.py:
from celery import Celery
from src.cache import cache
celery = Celery()
def register_extensions(app, worker=False):
    cache.init_app(app)
    # load celery config
    celery.config_from_object(app.config)
    if not worker:
        # register celery-irrelevant extensions
        pass
In my __init__.py:
import os
from flask import Flask, jsonify, request, current_app
from src.extensions import register_extensions
from config import Config
def create_worker_app(config_class=Config):
    """Minimal app without routes for the celery worker."""
    app = Flask(__name__)
    app.config.from_object(config_class)
    register_extensions(app, worker=True)
    return app
From my worker.py file:
from celery import Celery
from celery.schedules import schedule
from redbeat import RedBeatSchedulerEntry as Entry
from . import create_worker_app
# load several tasks from other files here
def create_celery(app):
    celery = Celery(
        app.import_name,
        backend=app.config["RESULT_BACKEND"],
        broker=app.config["CELERY_BROKER_URL"],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery
flask_app = create_worker_app()
celery = create_celery(flask_app)
# call the tasks, passing app=celery as a parameter
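For context, scheduling one of those tasks with RedBeat looks roughly like this (the entry name and task path here are made up for illustration):

# register a periodic task with RedBeat, passing app=celery explicitly
interval = schedule(run_every=60)  # seconds
entry = Entry("poll-api", "src.tasks.poll_api", interval, app=celery)
entry.save()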
This all works fine locally (I've tried to remove code that isn't relevant to the Celery configuration). I haven't finished deploying to Heroku yet because I remembered that when I install Heroku Data for Redis, it creates a REDIS_URL setting that I'd like to use.
I've been trying to change my config.py values to fall back on REDIS_URL instead of their separate settings, but every time I try to run my Celery tasks the connection fails unless I keep the distinct env values shown in my config.py above.
What I'd like to have in config.py would be this:
import os
from pathlib import Path
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
    REDIS_URL = os.environ.get("REDIS_URL", "redis://127.0.0.1:6379/0")
    CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", REDIS_URL)
    RESULT_BACKEND = os.environ.get("RESULT_BACKEND", REDIS_URL)
    CELERY_RESULT_BACKEND = RESULT_BACKEND
    CACHE_REDIS_URL = os.environ.get("CACHE_REDIS_URL", REDIS_URL)
    REDBEAT_REDIS_URL = os.environ.get("REDBEAT_REDIS_URL", REDIS_URL)
When I try this, remove all of the values from .env except REDIS_URL, and then try to run one of my Celery tasks, the task never runs. The Celery worker appears to start correctly, and the Flask-Caching requests work correctly (these run directly within the application rather than through the worker). The task never appears as received in the worker's debug logs, and eventually the server request times out.
Is there anything I can do to reuse REDIS_URL with Celery in this way? If not, is there anything Heroku expects me to do to maintain the credentials/server path/etc. for the Redis it serves to Celery, when I'm using the same Redis instance for several purposes like this?

By running my Celery worker with the -E flag, as in celery -A src.worker:celery worker -S redbeat.RedBeatScheduler --loglevel=INFO -E, I was able to figure out that the error was happening because Flask's instance of Celery, under gunicorn, was not able to access the Celery config values that the worker was using.
What I've done to try to resolve this appears to have worked.
In extensions.py, instead of configuring Celery, I've done this, removing all other mentions of Celery:
from celery import Celery
celery = Celery('scraper') # a temporary name
Then, on the same level, I created a celery.py:
from celery import Celery
from flask import Flask
from src import extensions
def configure_celery(app):
    TaskBase = extensions.celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    extensions.celery.conf.update(
        broker_url=app.config["CELERY_BROKER_URL"],
        result_backend=app.config["RESULT_BACKEND"],
        redbeat_redis_url=app.config["REDBEAT_REDIS_URL"],
    )
    extensions.celery.Task = ContextTask
    return extensions.celery
In worker.py, I'm doing:
from celery import Celery
from celery.schedules import schedule
from src import create_worker_app
from src.celery import configure_celery

flask_app = create_worker_app()
celery = configure_celery(flask_app)
I'm doing a similar thing in app.py:
from src import create_app  # assuming create_app lives next to create_worker_app in src/__init__.py
from src.celery import configure_celery

app = create_app()
configure_celery(app)
As far as I can tell, this doesn't change how the worker behaves at all, but it allows me to access the tasks, via blueprint endpoints, in the browser.
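For example, an endpoint can now enqueue work like this (the blueprint and task names are made up for illustration):

from flask import Blueprint, jsonify
from src.tasks import my_task  # hypothetical task module

jobs = Blueprint("jobs", __name__)

@jobs.route("/run-my-task", methods=["POST"])
def run_my_task():
    # .delay() works here because configure_celery(app) gave this
    # process the same broker settings the worker uses
    result = my_task.delay()
    return jsonify({"task_id": result.id}), 202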
I found this technique in this article and its accompanying GitHub repo.
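For the Heroku deployment itself, a Procfile along these lines should start both processes from the same codebase (a sketch assuming app.py lives in src/; the worker command is the one I used above, minus the -E debug flag):

web: gunicorn src.app:app
worker: celery -A src.worker:celery worker -S redbeat.RedBeatScheduler --loglevel=INFO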

Related

Correctly Setup Celery with Flask-Application Factory Pattern/Gunicorn/Nginx/Supervisor

I have a task of updating every single row of a MySQL table, but it's super slow. I rarely need to do it, and only when I change something fundamental, but I thought this would be a great chance to learn about multithreading. However, all the examples and tutorials online go over some things and not others, and I'm struggling to piece all the information together.
I know I need to make a celery process, I just don't know if I'm doing it right. A lot of tutorials talk about dockerizing a redis environment without explaining how to do it, so I thought I'd come here for some real human-to-human interaction to maybe help me feel less stupid about this. Here's my code so far:
/website/__init__.py
from flask import Flask, appcontext_popped, render_template
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, UserMixin, login_user, login_required, logout_user, current_user
from flask_migrate import Migrate
from flask_wtf import CSRFProtect
import logging
import celery
#Path Math
import sys
import os
from . import config
db: SQLAlchemy = SQLAlchemy()
migrate = Migrate()
csrf = CSRFProtect()
celery: celery.Celery
DB_NAME = "main"
def create_app(name):
    # Flask instance
    app = Flask(__name__)
    app.config.from_object(config.ProdTestConfig)
    # logging stuff
    # Database
    db.init_app(app)
    migrate.init_app(app, db)
    csrf.init_app(app)
    global celery
    celery = make_celery(app)
    with app.app_context():
        db.create_all()
    # Models and Blueprints here
    from .helper_functions import migration_handling as mgh
    # where you will find the thing I need to run async
    app.before_first_request(mgh.run_back_check)
    # log manager stuff
    # error page handling
    return app

def make_celery(app):
    # use a local name that doesn't shadow the imported celery module;
    # reusing the name 'celery' here would raise UnboundLocalError
    celery_app = celery.Celery(
        app.import_name,
        backend=app.config['CELERY_RESULT_BACKEND'],
        broker=app.config['CELERY_BROKER_URL']
    )
    celery_app.conf.update(app.config)

    class ContextTask(celery_app.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery_app.Task = ContextTask
    return celery_app
I've read that some other setups seem to fit a bit better, like using:
celery = Celery(__name__, broker=Config.CELERY_BROKER_URL, result_backend=Config.RESULT_BACKEND)
Then in create_app() they run celery.conf.update(app.config); I've sketched what I think that looks like after the task function below. The issue with this is that I don't know how to set up a redis server on my linode machine hosting the site or on my personal windows machine. I have redis pip installed. This is how the function I'm trying to run async looks:
@celery.task(name='app.tasks.campaign_pay_out_process')
def campaign_pay_out_process():
    '''
    Process every campaign's pay.
    '''
    campaign: Campaigns
    for campaign in Campaigns.query.filter_by():
        campaign.process_pay()
    db.session.commit()
    current_app.logger.info('Done Campaign Pay Out Processing')
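And here's my sketch of the alternative wiring mentioned above (just my understanding of it, assuming CELERY_BROKER_URL and CELERY_RESULT_BACKEND get added to my Config classes below):

from celery import Celery
from flask import Flask

from . import config

# created once at import time instead of inside create_app()
celery = Celery(
    __name__,
    broker=config.ProdTestConfig.CELERY_BROKER_URL,
    backend=config.ProdTestConfig.CELERY_RESULT_BACKEND,
)

def create_app(name):
    app = Flask(__name__)
    app.config.from_object(config.ProdTestConfig)
    celery.conf.update(app.config)  # pick up any remaining settings
    return app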
I'm running gunicorn off of supervisor because restarting is super easy, and ridding my life of super long linux commands to start a process has been great. I know this is the command for celery: celery -A celery_worker.celery worker --pool=solo --loglevel=info, and I'd love to know how to include that in my workflow. Here's my supervisor config:
[program:paymentwebapp]
directory=/home/sai/paymentWebApp
command=/home/sai/paymentWebApp/venv/bin/gunicorn --workers 1 --threads 3 wsgi:app
user=sai
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/paymentwebapp/paymentwebapp.err.log
stdout_logfile=/var/log/paymentwebapp/paymentwebapp.out.log
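From what I've read, the worker would get its own program block next to gunicorn, reusing the celery command above (paths copied from my gunicorn block; I haven't verified this yet). On a Debian/Ubuntu Linode, the redis server itself can be installed with sudo apt install redis-server.

[program:celeryworker]
directory=/home/sai/paymentWebApp
command=/home/sai/paymentWebApp/venv/bin/celery -A celery_worker.celery worker --pool=solo --loglevel=info
user=sai
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stderr_logfile=/var/log/paymentwebapp/celeryworker.err.log
stdout_logfile=/var/log/paymentwebapp/celeryworker.out.log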
Here's my flask config right now:
from os import environ, path
from dotenv import load_dotenv
DB_NAME = "main"
class Config:
    """Base config."""
    #SESSION_COOKIE_NAME = environ.get('SESSION_COOKIE_NAME')
    MAX_CONTENT_LENGTH = 16*1000*1000
    RECEIPT_FOLDER = '../uploads/receipts'
    IMPORT_FOLDER = 'uploads/imports'
    UPLOAD_FOLDER = 'uploads'
    EXPORT_FOLDER = '/uploads/exports'
    UPLOAD_EXTENSIONS = ['.jpg', '.png', '.pdf', '.csv', '.xls', '.xlsx']
    STATIC_FOLDER = 'static'
    TEMPLATES_FOLDER = 'templates'

class ProdConfig(Config):
    basedir = path.abspath(path.dirname(__file__))
    load_dotenv('/home/sai/.env')
    env_dict = dict(environ)
    FLASK_ENV = 'production'
    DEBUG = False
    TESTING = False
    SQLALCHEMY_DATABASE_URI = environ.get('PROD_DATABASE_URI')
    SECRET_KEY = environ.get('SECRET_KEY')
    SERVER_NAME = environ.get('SERVER_NAME')
    SESSION_COOKIE_SECURE = True
    WTF_CSRF_TIME_LIMIT = 600
    #Uploads

class DevConfig(Config):
    basedir = path.abspath(path.dirname(__file__))
    # raw string so the backslashes in the Windows path aren't treated as escapes
    load_dotenv(r'C:\saiscripts\intercept_branch\Payment Web App Project\.env')
    env_dict = dict(environ)
    FLASK_ENV = 'development'
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = environ.get('DEV_DATABASE_URI')
    SECRET_KEY = environ.get('SECRET_KEY')

class ProdTestConfig(DevConfig):
    '''
    Developer config settings but production database server
    '''
    SQLALCHEMY_DATABASE_URI = environ.get('PROD_DATABASE_URI')

if __name__ == '__main__':
    print(environ.get('SQLALCHEMY_DATABASE_URI'))
This is where I copied some code from a tutorial because I'm supposed to make a celery worker:
#!/usr/bin/env python
import os
#from app import create_app, celery
from website import create_app
app = create_app()
app.app_context().push()
from website import celery
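If I understand the tutorial right, this file exists so the celery command has an importable app to load, meaning the worker would be started from the project root with the same command as above: celery -A celery_worker.celery worker --pool=solo --loglevel=info.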

Django process running tasks instead of Celery

I'm running an app with Celery+redis for asynchronous tasks.
I managed to get Celery to see the list of tasks. However, my tasks aren't executed by Celery workers but by the Django process instead.
I tried invoking the tasks with .delay() and .apply_async() without success (actually, in these cases the call to the task blocks indefinitely and nothing is shown in the logs).
I might be missing something very basic but cannot see where.
Relevant settings follow:
settings.py
CELERY_REDIS_DB = os.environ.get("REDIS_DB_CELERY", 0)
CELERY_REDIS_HOST = os.getenv("REDIS_HOSTNAME", "redis")
CELERY_REDIS_PORT = 6379
CELERY_RESULT_BACKEND = BROKER_URL = (
    f'redis://{CELERY_REDIS_HOST}:{CELERY_REDIS_PORT}/{CELERY_REDIS_DB}'
)
CELERY_TIMEZONE = TIME_ZONE
CELERY_RESULT_EXPIRES = 5 * 60 * 60
celery.py
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'maat.settings')
app = Celery('maat')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
tasks.py
from celery import shared_task

@shared_task
def remove_task(
    environment=None,
    payload=None,
    **kwargs,
):
    LOGGER.info('Received task, now removing')
    ...
views.py
class MyClass(
    LoginRequiredMixin,
    PermissionRequiredMixin,
    SingleObjectMixin,
    APIView
):
    return_403 = True
    model = models.Environment
    slug_field = 'name'
    slug_url_kwarg = 'environment'

    def delete(self, request, environment):
        tasks.remove_task(environment=environment, payload=request.data)
Django 3.2
celery[redis] 5.1.2
redis 3.5.3
django-celery-results 2.0.1
Assuming celery.py lives somewhere that is accessible to views.py, the problem is likely that, because you are using a shared task, Django does not know how to communicate with the Celery broker without more information.
In order to do that, we can follow the advice from our helpful friends at stack overflow and do something like the following in your view:
from path.to.celery import app
app.set_default()
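Putting that together, a sketch of what the wiring might look like (maat.celery is taken from your settings module name; the helper function is illustrative):

from maat.celery import app
app.set_default()

from . import tasks

def trigger_remove(environment, payload):
    # .delay() hands the task to the broker; calling remove_task()
    # directly would execute it inline in the Django process
    result = tasks.remove_task.delay(environment=environment, payload=payload)
    return result.id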

Celery doesn't run task

Help me, please, to understand what I am doing wrong. Celery doesn't run my task.
Settings.py
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
proj/celery.py
from __future__ import absolute_import
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
__init__.py
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ['celery_app']
Code
@shared_task
def generate(instance, sender, **kwargs):
    for i in CK_PROGRAM_NAME:
        program_kf = i[0]
        ck = instance.dk*program_kf
        program_name = i[1]
        program_obj = Program.objects.get(name=program_name)
        foodprogram_generator(instance, ck, program_kf, program_obj, sender, **kwargs)
    return

@receiver(post_save, sender=LeadUser)
def leaduser_foodprogram_post_save(instance, sender, **kwargs):
    generate.delay(instance, sender, **kwargs)
    return
Worker is run by: celery -A proj worker --loglevel=INFO
The logic is:
after client_object is created, the post_save signal fires leaduser_foodprogram_post_save, which adds generate() to the queue
I can't see the result, so I think it is not run.
Without celery everything works properly.
Thanks for your answers!
A couple of things:
* config_from_object with a namespace might strip that prefix from the variables, so you might not get what you want as a configuration;
* when you use a shared task, you need to make sure you are calling the task from the configured celery app, as the main point of using shared tasks is to actually share tasks between different apps. Have a look at the set_default function on the celery app object; just calling that on the celery setup should make a difference.
Anyway, the best way to check is to put rdb in there and inspect the celery app: check the configuration, and if the broker is not set, the second point explained in my previous comment should get you going.
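For example, a quick way to drop into the worker with Celery's built-in remote debugger (the task body here is just for illustration):

from celery import current_app, shared_task
from celery.contrib import rdb

@shared_task
def debug_probe():
    conf = current_app.conf  # inspect broker_url / result_backend here
    rdb.set_trace()          # pauses the worker; the log prints a telnet address to connect to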
Thanks guys for your answers. There was no specific problem, but I rechecked everything and, following this article, got my task running.

Celery + Django not working at the same time

I have a Django 2.0 project that is working fine. It's integrated with Celery 4.1.0, and I am using jQuery to send ajax requests to the backend, but I just realized it's loading endlessly due to some issues with celery.
Celery Settings (celery.py)
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'converter.settings')
app = Celery('converter', backend='amqp', broker='amqp://guest@localhost//')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
Celery Tasks (tasks.py)
from __future__ import absolute_import, unicode_literals
from celery import shared_task
@shared_task(time_limit=300)
def add(number1, number2):
    return number1 + number2
Django View (views.py)
class AddAjaxView(JSONResponseMixin, AjaxResponseMixin, View):
    def post_ajax(self, request, *args, **kwargs):
        url = request.POST.get('number', '')
        task = tasks.convert.delay(url, client_ip)
        result = AsyncResult(task.id)
        data = {
            'result': result.get(),
            'is_ready': True,
        }
        if result.successful():
            return self.render_json_response(data, status=200)
When I send an ajax request to the Django app it loads endlessly, but when I terminate the Django server and run celery -A demoproject worker --loglevel=info, that's when my tasks run.
Question
How do I automate this so that when I run my Django project, my celery tasks will run automatically when I send an ajax request?
If you are in a development environment, you have to run the celery worker manually, as it does not run automatically in the background, in order to process the jobs in the queue. So if you want a flawless workflow, you need both the Django default server and the celery worker running. As stated in the documentation:
In a production environment you’ll want to run the worker in the background as a daemon - see Daemonization - but for testing and development it is useful to be able to start a worker instance by using the celery worker manage command, much as you’d use Django’s manage.py runserver:
celery -A proj worker -l info
You can read their documentation for daemonization.
http://docs.celeryproject.org/en/latest/userguide/daemonizing.html
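Concretely, during development that means two terminals (project name taken from the command in your question):

# terminal 1: the Django dev server
python manage.py runserver

# terminal 2: the celery worker consuming the queue
celery -A demoproject worker --loglevel=info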

Django. Simple Celery task not working

I'm new to Celery. I have a task that is not working and I don't know why. I'm using RabbitMQ. Here is my code:
In settings.py:
BROKER_URL = "amqp://guest@localhost//"
tasks.py:
from celery.decorators import task
from celery.utils.log import get_task_logger
from hisoka.models import FeralSpirit, Fireball
logger = get_task_logger(__name__)
@task
def test_task():
    fireball = Fireball.objects.last()
    feral_spirit = FeralSpirit.objects.filter(fireball=fireball).last()
    counters = feral_spirit.increase_counter()
    logger.info(f"{feral_spirit} counters: {counters}")
The task is just a test; it is designed to increase a counter that is a field of the FeralSpirit model. It works correctly if I don't call the function with delay().
views.py
class FireballDetail(ListView):
    def get_queryset(self, *args, **kwargs):
        test_task.delay()
        ...
I have a rabbitmq server running correctly (or at least it looks like that) on one terminal and the django localhost server on another terminal. Am I missing something obvious? I have a celery.py and a modified __init__ file, exactly following the documentation.
Most probably your celery worker is not running. Try
celery -A {project_name} worker --loglevel=info -Q {queue_name}
substituting the values of project_name and queue_name. Note that the default queue in Celery is named celery.
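For example, with the project in this question the command would presumably be (project name inferred from the hisoka import above, so adjust as needed):

celery -A hisoka worker --loglevel=info

The startup banner lists the registered tasks, so test_task should appear there, and each test_task.delay() from the view will then produce a "Received task" line in the worker log.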