Django Celery Beat Import/KeyError Issue

I am using Django 1.6 with Celery. I have a task running on a schedule. Everything looks correct, but I get this import error when Celery beat runs:
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'utc': True, 'chord': None, 'args': [], 'retries': 0, 'expires': None, 'task': 'bot_data.tasks.get_unanswered_threads', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': '6143f259-721b-4984-99ae-790c00633271'} (233b)
Traceback (most recent call last):
  File "/home/one/.virtualenvs/bot/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 455, in on_task_received
    strategies[name](message, body,
KeyError: 'bot_data.tasks.get_unanswered_threads'
base.py:
from datetime import timedelta
CELERYBEAT_SCHEDULE = {
    'get-unanswered-threads--every-15-seconds': {
        'task': 'bot_data.tasks.get_unanswered_threads',
        'schedule': timedelta(seconds=15),
        'args': (),
    },
}
CELERY_TIMEZONE = 'UTC'
from bot_data/tasks.py, get_unanswered_threads:
@task()
def get_unanswered_threads():
    slug = 'forums/threads/unanswered.json?PageSize=100'
    thread_batch = []

Never mind, I had the task path bot_data.tasks.get_unanswered_threads wrong.

The docs specify the decorator as @shared_task instead of @task.
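For reference, a minimal sketch of the docs-style definition with @shared_task (the file path is assumed from the task name in the schedule above):

# bot_data/tasks.py
from celery import shared_task

@shared_task
def get_unanswered_threads():
    # Build the request path and collect the batch (body from the question)
    slug = 'forums/threads/unanswered.json?PageSize=100'
    thread_batch = []
    return thread_batch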

Related

Django app with celery, tasks are always "PENDING". How can I fix this bug?

I have the following Celery settings in settings.py:
CELERY_BROKER_URL = "amqp://admin:admin2017@localhost"
CELERY_IMPORTS = ("web.task",)
When I use the form to submit a task to Celery, I see the state is always PENDING.
Screen cap of pending tasks
The following code is used in models.py (I also have a tasks.py):
class AnaysisStatus(models.IntegerChoices):
    PENDING = 1
    COMPLETED = 2
    FAILED = 0

class Analysis(models.Model):
    STATUS_CHOICES = ((1, "PENDING"), (2, "COMPLETED"), (0, "FAILED"))
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    status = models.IntegerField(choices=AnaysisStatus.choices, null=True)
    created_at = models.DateTimeField(auto_now_add=True, null=True)
    file = models.FileField(null=True)
    data = models.JSONField(null=True)
I'm very new to celery and django so any help is greatly appreciated.
Edit: I installed RabbitMQ locally, set virtual host permissions, and started a worker. I now see this error:
The full contents of the message headers:
{'lang': 'py', 'task': 'web.task.switch', 'id': '250f7475-5186-4f68-a8ac-cb19802221cd', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'group_index': None, 'retries': 0, 'timelimit': [None, None], 'root_id': '250f7475-5186-4f68-a8ac-cb19802221cd', 'parent_id': None, 'argsrepr': "('admin-1652754818.sol', '0.4.24', 24)", 'kwargsrepr': '{}', 'origin': 'gen19316@MacBook-Air.hitronhub.home', 'ignore_result': False}
The delivery info for this task is:
{'consumer_tag': 'None4', 'delivery_tag': 1, 'redelivered': False, 'exchange': '', 'routing_key': 'celery'}
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.9/site-packages/celery/worker/consumer/consumer.py", line 591, in on_task_received
    strategy = strategies[type_]
KeyError: 'web.task.switch'
error - unregistered task
Edit: I'd greatly appreciate any help with this. I've tried everything over the last 72 hours and am getting desperate for any information that might point me in the right direction.
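For the unregistered-task error above, the usual cause is that the worker process never imports the module that defines the task, so 'web.task.switch' is missing from its registry. A minimal sketch of the standard Django wiring, assuming a project package named proj (the project name here is a guess, not from the question):

# proj/celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')  # assumed path

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
# Autodiscovery looks for a tasks.py in each installed app; a module named
# web/task.py (singular) is only picked up if CELERY_IMPORTS lists it.
app.autodiscover_tasks()

A quick check is to start the worker with celery -A proj worker -l info and confirm that web.task.switch appears in the [tasks] banner it prints at startup.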

Celery Django Body Encoding

Hi, does anyone know how the body of a Celery JSON message is encoded before it is entered in the queue (I use Redis in my case)?
{'body': 'W1sic2hhd25AdWJ4LnBoIiwge31dLCB7fSwgeyJjYWxsYmFja3MiOiBudWxsLCAiZXJyYmFja3MiOiBudWxsLCAiY2hhaW4iOiBudWxsLCAiY2hvcmQiOiBudWxsfV0=',
'content-encoding': 'utf-8',
'content-type': 'application/json',
'headers': {'lang': 'py',
'task': 'export_users',
'id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
'shadow': None,
'eta': None,
'expires': None,
'group': None,
'retries': 0,
'timelimit': [None, None],
'root_id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
'parent_id': None,
'argsrepr': "('<email#example.com>', {})",
'kwargsrepr': '{}',
'origin': 'gen187209#ubuntu'},
'properties': {'correlation_id': '6e506f75-628e-4aa1-9703-c0185c8b3aaa',
'reply_to': '403f7314-384a-30a3-a518-65911b7cba5c',
'delivery_mode': 2,
'delivery_info': {'exchange': '', 'routing_key': 'celery'},
'priority': 0,
'body_encoding': 'base64',
'delivery_tag': 'dad6b5d3-c667-473e-a62c-0881a7349684'}}
Just for background: I have a Node.js project which needs to trigger my Celery (Django) tasks. The background tasks are all in the Django app, but the trigger and the details will come from the Node.js app.
Thanks in advance.
It may just be simpler to use the Node.js Celery client
https://github.com/mher/node-celery/blob/master/celery.js
to invoke a Celery task from Node.js.
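On the encoding question itself: the 'body' field is just the JSON payload, base64-encoded, as the 'body_encoding': 'base64' property says. A minimal sketch of decoding it, assuming message is the dict shown above:

import base64
import json

# message is the raw dict pulled from the Redis queue
body = json.loads(base64.b64decode(message['body']))

# Under Celery's message protocol v2 the decoded body is a 3-element list:
# [args, kwargs, embed], where embed holds callbacks/errbacks/chain/chord.
args, kwargs, embed = body
print(args, kwargs, embed)

So a Node.js producer has to write the same structure, a JSON array of args, kwargs, and the embed map, base64-encoded into 'body', which is the kind of detail the node-celery client linked above handles for you.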

Flask env config.py doesn't find database: unable to get repr for class 'flask_sqlalchemy.SQLAlchemy'

I created a Flask app with a Postgres DB in the cloud, configured via a .env file. When I run the server, it seems it cannot find the database.
__init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from config import Config

def init_app():
    """Construct core Flask application with embedded Dash app."""
    app = Flask(__name__, instance_relative_config=False)
    app.config.from_object(Config())
    db = SQLAlchemy(app)
    migrate = Migrate(app, db)
    with app.app_context():
        # Import parts of our core Flask app
        from . import routes
        return app  # WSGI.py below calls init_app() and expects the app back
config.py:
import os
from dotenv import dotenv_values

basedir = os.path.abspath(os.path.dirname(__file__))
configuration = dotenv_values(".env")

class Config(object):
    DEBUG = False
    TESTING = False
    CSRF_ENABLED = True
    SECRET_KEY = 'this-really-needs-to-be-changed'
    SQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL']
After step-by-step debugging, app.config looks like this:
<Config {'ENV': 'development', 'DEBUG': False, 'TESTING': False, 'PROPAGATE_EXCEPTIONS': None, 'PRESERVE_CONTEXT_ON_EXCEPTION': None, 'SECRET_KEY': 'this-really-needs-to-be-changed', 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'USE_X_SENDFILE': False, 'SERVER_NAME': None, 'APPLICATION_ROOT': '/', 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_SECURE': False, 'SESSION_COOKIE_SAMESITE': None, 'SESSION_REFRESH_EACH_REQUEST': True, 'MAX_CONTENT_LENGTH': None, 'SEND_FILE_MAX_AGE_DEFAULT': datetime.timedelta(seconds=43200), 'TRAP_BAD_REQUEST_ERRORS': None, 'TRAP_HTTP_EXCEPTIONS': False, 'EXPLAIN_TEMPLATE_LOADING': False, 'PREFERRED_URL_SCHEME': 'http', 'JSON_AS_ASCII': True, 'JSON_SORT_KEYS': True, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'JSONIFY_MIMETYPE': 'application/json', 'TEMPLATES_AUTO_RELOAD': None, 'MAX_COOKIE_SIZE': 4093, 'CSRF_ENABLED': True, 'SQLALCHEMY_DATABASE_URI': 'DATABASE_URL=postgres://mqyl:XXXXXXXXXXXXXXXXX@queenie.db.XXXXXX.com:5432/rulXXXX', 'SQLALCHEMY_BINDS': None, 'SQLALCHEMY_NATIVE_UNICODE': None, 'SQLALCHEMY_ECHO': False, 'SQLALCHEMY_RECORD_QUERIES': None, 'SQLALCHEMY_POOL_SIZE': None, 'SQLALCHEMY_POOL_TIMEOUT': None, 'SQLALCHEMY_POOL_RECYCLE': None, 'SQLALCHEMY_MAX_OVERFLOW': None, 'SQLALCHEMY_COMMIT_ON_TEARDOWN': False, 'SQLALCHEMY_TRACK_MODIFICATIONS': None, 'SQLALCHEMY_ENGINE_OPTIONS': {}}>
WSGI.py:
from application import init_app

app = init_app()

if __name__ == "__main__":
    app.run(debug=True)
Trace error: there is no traceback; only in debug mode, the value of db = SQLAlchemy(app) shows as:
db: Unable to get repr for class 'flask_sqlalchemy.SQLAlchemy'
Found the solution:
DATABASE_URL=postgres:// is deprecated.
I should use DATABASE_URL=postgresql:// instead.
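In practice SQLAlchemy 1.4+ removed the postgres:// alias entirely, so if the hosting provider still hands out the old scheme, the URI can be rewritten when it is read. A minimal sketch against the config.py above:

import os

class Config(object):
    # SQLAlchemy 1.4+ only accepts postgresql://; rewrite the deprecated scheme
    SQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL'].replace(
        'postgres://', 'postgresql://', 1)

Note that the config dump above also shows the literal DATABASE_URL= prefix inside SQLALCHEMY_DATABASE_URI, which suggests the environment variable's value included its own key, so the .env line is worth checking as well.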

Models_committed not firing a function on model commit - Flask-SQLAlchemy

I'm trying to get a signal, models_committed, to fire a function when my models are committed. Currently it just does a standard print(), but I can't get the function to fire. I tried the decorator method and the models_committed.connect(func, app) method.
What I'm expecting to happen
I commit some data to my database (into a model), then signal_thing() (located in __init__.py) prints 'hello - is this working' to the flask run console.
What is actually happening
Data is committed to the database (it shows up in my web app) but nothing is printed to the console; it seems signal_thing() does not fire.
I can't find much information about how to get signals working properly with Flask.
__init__.py
from flask import Flask
from config import Config
from flask_sqlalchemy import SQLAlchemy, models_committed, before_models_committed

def signal_thing(sender, changes, **kwargs):
    print('hello - is this working?')
    sender.print('hello - is this working')

models_committed.connect(signal_thing, app)
before_models_committed.connect(signal_thing, app)
decorator method
@models_commited.connect(app)
def signal_thing(sender, changes, **kwargs):
    print('hello - is this working?')
    sender.print('hello this worked')
config:
SQLALCHEMY_TRACK_MODIFICATIONS is set to True.
<Config {'ENV': 'production', 'DEBUG': False, 'TESTING': False, 'PROPAGATE_EXCEPTIONS': None, 'PRESERVE_CONTEXT_ON_EXCEPTION': None, 'SECRET_KEY': 'shh', 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'USE_X_SENDFILE': False, 'SERVER_NAME': None, 'APPLICATION_ROOT': '/', 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_SECURE': False, 'SESSION_COOKIE_SAMESITE': None, 'SESSION_REFRESH_EACH_REQUEST': True, 'MAX_CONTENT_LENGTH': None, 'SEND_FILE_MAX_AGE_DEFAULT': datetime.timedelta(seconds=43200), 'TRAP_BAD_REQUEST_ERRORS': None, 'TRAP_HTTP_EXCEPTIONS': False, 'EXPLAIN_TEMPLATE_LOADING': False, 'PREFERRED_URL_SCHEME': 'http', 'JSON_AS_ASCII': True, 'JSON_SORT_KEYS': True, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'JSONIFY_MIMETYPE': 'application/json', 'TEMPLATES_AUTO_RELOAD': None, 'MAX_COOKIE_SIZE': 4093, 'SQLALCHEMY_DATABASE_URI': 'sqlite:///C:\\Users\\\\ZigBot\\app.db', 'SQLALCHEMY_TRACK_MODIFICATIONS': True}>
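Two details commonly bite here: the blinker package must be installed for Flask signals to work at all, and the decorator form of a blinker signal is connect_via(sender), not connect(sender). A minimal sketch under those assumptions:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy, models_committed

app = Flask(__name__)
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True  # required for this signal
db = SQLAlchemy(app)

@models_committed.connect_via(app)  # decorator form; .connect(app) is not a decorator
def signal_thing(sender, changes, **kwargs):
    # changes is a list of (model_instance, operation) tuples,
    # where operation is 'insert', 'update' or 'delete'
    print('hello - is this working?', changes)

Note also the spelling in the decorator attempt above: models_commited (missing a t) would raise a NameError rather than silently do nothing, so the .connect(signal_thing, app) path is the one worth debugging.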

Why don't Celery tasks work asynchronously?

I am trying to run the basic debug_task from Celery asynchronously, but it always runs synchronously.
I have created a new project with django-cookiecutter template.
I made sure that redis is working and all env variables are valid.
I launch celery, and when it is ready to receive tasks, I launch the console (shell_plus) and invoke the task asynchronously.
In [1]: from project.taskapp.celery import debug_task
In [2]: debug_task.delay()
Request: <Context: {'id': '87b4d96e-9708-4ab2-873e-0118b30f7a6b', 'retries': 0, 'is_eager': True, 'logfile': None, 'loglevel': 0, 'hostname': 'hostname', 'callbacks': None, 'errbacks': None, 'headers': None, 'delivery_info': {'is_eager': True}, 'args': (), 'called_directly': False, 'kwargs': {}}>
Out[2]: <EagerResult: 87b4d96e-9708-4ab2-873e-0118b30f7a6b>
As you can see, the param is_eager == True, so it ran synchronously.
I also tried calling the task as debug_task.apply_async().
Here are the settings from the cookiecutter template for Celery:
import os
from celery import Celery
from django.apps import apps, AppConfig
from django.conf import settings

if not settings.configured:
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.local')

app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')

class CeleryAppConfig(AppConfig):
    name = 'project.taskapp'
    verbose_name = 'Celery Config'

    def ready(self):
        installed_apps = [app_config.name for app_config in apps.get_app_configs()]
        app.autodiscover_tasks(lambda: installed_apps, force=True)

@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
As many commenters have pointed out, turn off eager processing when you configure Celery:
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.conf.task_always_eager = False
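Because config_from_object is called with namespace='CELERY', the same switch can also live in Django settings; the cookiecutter local settings are a common place for eager mode to be forced on so tasks run inline during development. A minimal sketch, with the path assumed from the DJANGO_SETTINGS_MODULE above:

# config/settings/local.py
# With namespace='CELERY', Celery strips the prefix and reads task_always_eager
CELERY_TASK_ALWAYS_EAGER = False
CELERY_TASK_EAGER_PROPAGATES = False

After changing it, restart both the worker and the shell, since the setting is read when the Celery app is configured.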