I'm not sure why it throws this KeyError.
My project/celery.py:
import os
from celery import Celery
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
My project/__init__.py:
from .celery import app as celery_app
__all__ = ('celery_app',)
My app/tasks.py
from celery import shared_task
from celery.schedules import crontab
from project.celery import app
@shared_task
def my_test():
    print('Celery from task says Hi')

app.conf.beat_schedule = {
    'my-task-every-2-minutes': {
        'task': 'my_test',
        'schedule': crontab(minute='*/1'),
    },
}
When I run the command: celery -A project beat -l info
I can see the task being triggered every minute:
[2022-12-22 12:38:00,005: INFO/MainProcess] Scheduler: Sending due task my-task-every-2-minutes (celery_app.my_test)
When I run celery -A project worker -l info
and the trigger fires, I get KeyError: my_test.
It seems that the worker cannot find the task under that key,
but according to its startup info everything looks fine:
-------------- celery@192.168.1.131 v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- macOS-10.13.6-x86_64-i386-64bit 2022-12-22 12:43:22
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: project:0x10c757610
- ** ---------- .> transport: redis://localhost:6379//
- ** ---------- .> results: redis://localhost:6379/
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. project.celery.debug_task
. user_statistic_status.tasks.my_test
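For what it's worth, the [tasks] list above shows the task registered under its full dotted path, user_statistic_status.tasks.my_test, while the beat schedule sends the bare name my_test, which is the key the worker then fails to look up. A minimal sketch of the two usual ways to make the names agree (assuming the module path shown in the worker output above):

from celery import shared_task
from celery.schedules import crontab
from project.celery import app

# Option 1: point the schedule at the registered task name
app.conf.beat_schedule = {
    'my-task-every-2-minutes': {
        'task': 'user_statistic_status.tasks.my_test',
        'schedule': crontab(minute='*/1'),
    },
}

# Option 2: give the task an explicit short name so 'my_test' resolves
@shared_task(name='my_test')
def my_test():
    print('Celery from task says Hi')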
I am building a web application that requires some long-running tasks, which run on AWS ECS with Celery as a distributed task queue. The problem I am facing is that my Celery worker running on ECS is not receiving tasks from SQS, even though it appears to be connected to it.
The following are the logs from the ECS task.
/usr/local/lib/python3.8/site-packages/celery/platforms.py:797: RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(RuntimeWarning(ROOT_DISCOURAGED.format(
-------------- celery@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal v5.0.1 (singularity)
--- ***** -----
-- ******* ---- Linux-4.14.252-195.483.amzn2.x86_64-x86_64-with-glibc2.2.5 2021-12-14 06:39:58
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: emptive_portal:0x7fbfda752310
- ** ---------- .> transport: sqs://XXXXXXXXXXXXXXXX:**@localhost//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> emptive-celery2.fifo exchange=sync(direct) key=emptive-celery.fifo
[tasks]
. import_export_celery.tasks.run_export_job
. import_export_celery.tasks.run_import_job
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
2021-12-14 06:39:58 [INFO] Connected to sqs://XXXXXXXXXXXXXXXXX:**@localhost//
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
2021-12-14 06:39:58 [INFO] celery@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal ready.
To be noted, I have run the same container that I deployed to ECS locally, on the same machine as the Django web server that sends the tasks. That Celery worker has no problem receiving tasks.
I have also tried giving ecsTaskExecutionRole full permissions to SQS, but that doesn't seem to change anything. Any help would be appreciated.
EDIT: I forgot to show my Celery broker config in Django's settings.py:
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
# SQS CONFIG
CELERY_BROKER_URL = "sqs://%s:%s#" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-2',
    'polling_interval': 20,
}
CELERY_RESULT_BACKEND = None
CELERY_ENABLE_REMOTE_CONTROL = False
CELERY_SEND_EVENTS = False
So I finally fixed this. The issue was a really stupid one on my part. :) I just had to rename BROKER_TRANSPORT_OPTIONS to CELERY_BROKER_TRANSPORT_OPTIONS in the Celery config.
New config:
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
# SQS CONFIG
CELERY_BROKER_URL = "sqs://%s:%s#" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-2',
    'polling_interval': 20,
}
CELERY_RESULT_BACKEND = None
CELERY_ENABLE_REMOTE_CONTROL = False
CELERY_SEND_EVENTS = False
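For context on why the rename matters: in the standard Django integration (an assumption here, since this project's celery.py is not shown), the app is configured with a settings namespace, and only settings carrying that prefix are read. A minimal sketch:

from celery import Celery

app = Celery('project')
# With namespace='CELERY', Django settings named CELERY_* are mapped onto
# Celery's own options (CELERY_BROKER_TRANSPORT_OPTIONS -> broker_transport_options).
# A bare BROKER_TRANSPORT_OPTIONS is simply ignored, so the region and
# polling_interval never reached the SQS transport.
app.config_from_object('django.conf:settings', namespace='CELERY')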
I have Django 3.2.7, Celery 5.2.1, and Redis 3.5.3.
I have the following Celery settings (REDIS_PASSWORD is an environment variable):
CELERY_BROKER_URL = f'redis://:{REDIS_PASSWORD}@redis:6379/4'
CELERY_BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}
CELERY_RESULT_BACKEND = f'redis://:{REDIS_PASSWORD}@redis:6379/1'
CELERY_ACCEPT_CONTENT = ['application/json']
But when I start my docker-compose app, it shows me:
celery | --- ***** -----
celery | -- ******* ---- Linux-5.11.0-34-generic-x86_64-with-glibc2.31 2021-09-16 10:20:11
celery | - *** --- * ---
celery | - ** ---------- [config]
celery | - ** ---------- .> app: project:0x7f5cd0df0880
celery | - ** ---------- .> transport: redis://redis:6379/0 <==== NO CHANGES HERE
celery | - ** ---------- .> results: redis://:**@redis:6379/1
celery | - *** --- * --- .> concurrency: 16 (prefork)
celery | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
celery | --- ***** -----
How can I set the broker URL correctly?
The problem was solved by removing the environment params from the docker-compose file.
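As a sketch of how a docker-compose environment entry can win over the settings above (hypothetical; the actual compose file and settings module are not shown), the usual culprit is a settings value that falls back to an environment variable, so a stale variable injected by docker-compose silently takes precedence:

import os

# Hypothetical settings.py pattern: if docker-compose sets CELERY_BROKER_URL
# in the container environment, that value is used and the Redis-with-password
# fallback below is never applied.
REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD', '')
CELERY_BROKER_URL = os.environ.get(
    'CELERY_BROKER_URL',
    f'redis://:{REDIS_PASSWORD}@redis:6379/4',
)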
I'm trying to set up Celery to talk to AWS SQS. Celery prints the log below, and after that the container goes down and starts up again in a loop. Log as below:
trim...
[2020-08-05 09:32:35,715: DEBUG/MainProcess] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
[2020-08-05 09:32:35,798: DEBUG/MainProcess] Changing event name from before-call.apigateway to before-call.api-gateway
[2020-08-05 09:32:35,799: DEBUG/MainProcess] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
[2020-08-05 09:32:35,801: DEBUG/MainProcess] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
[2020-08-05 09:32:35,801: DEBUG/MainProcess] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
[2020-08-05 09:32:35,802: DEBUG/MainProcess] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
[2020-08-05 09:32:35,802: DEBUG/MainProcess] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
[2020-08-05 09:32:35,805: DEBUG/MainProcess] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
[2020-08-05 09:32:35,805: DEBUG/MainProcess] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
[2020-08-05 09:32:35,806: DEBUG/MainProcess] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
[2020-08-05 09:32:35,806: DEBUG/MainProcess] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
[2020-08-05 09:32:35,806: DEBUG/MainProcess] Setting config variable for region to {'eu-west-1'}
[2020-08-05 09:32:35,808: DEBUG/MainProcess] Loading JSON file: /usr/local/lib/python3.8/site-packages/botocore/data/endpoints.json
[2020-08-05 09:32:35,815: DEBUG/MainProcess] Event choose-service-name: calling handler <function handle_service_name_alias at 0x7fed0d144940>
[2020-08-05 09:32:35,899: DEBUG/MainProcess] Loading JSON file: /usr/local/lib/python3.8/site-packages/botocore/data/sqs/2012-11-05/service-2.json
[2020-08-05 09:32:35,902: DEBUG/MainProcess] Event creating-client-class.sqs: calling handler <function add_generate_presigned_url at 0x7fed096053a0>
[2020-08-05 09:32:36,907: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
-------------- celery@ip-10-10-12-215.eu-central-1.compute.internal v4.4.6 (cliffs)
--- ***** -----
-- ******* ---- Linux-4.14.177-139.254.amzn2.x86_64-x86_64-with-glibc2.2.5 2020-08-05 09:32:35
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: app_name:0x7fed0d6f18e0
- ** ---------- .> transport: sqs://aws_access_key:**@localhost//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. app_name.tasks.check_new_users
. app_name.tasks.send_mail_for_new_user
. app_name.tasks.test_task
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
Settings in Django are as follows (the variables are taken from SSM):
AWS_REGION = os.getenv("AWS_REGION")
BROKER_URL = os.getenv("BROKER_URL", "amqp://app_name-rabbitmq//")
BROKER_TRANSPORT = 'sqs'
BROKER_TRANSPORT_OPTIONS = {
    "region": {AWS_REGION},
    "polling_interval": 60,
    "queue_name_prefix": "prod-"
}
CELERY_BROKER_URL = BROKER_URL
CELERY_BROKER_TRANSPORT_OPTIONS = BROKER_TRANSPORT_OPTIONS
CELERY_ACCEPT_CONTENT = ['json', 'yaml']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ENABLE_REMOTE_CONTROL = False
CELERY_SEND_EVENTS = False
CELERY_RESULT_BACKEND = None
RESULT_BACKEND = None
CELERY_IMPORTS = ("app_name.tasks",)
CELERYBEAT_SCHEDULE = {
    "check_new_users": {"task": "tasks.app_name.check_new_users", "schedule": crontab(hour="9,17", minute=0,)},
}
Has anyone had a situation like that and could help?
The IAM role has SQS full access for the time being.
EDIT: If any other detail is needed, please let me know.
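One detail worth flagging in the settings above (an observation, not a confirmed fix): "region": {AWS_REGION} is a Python set literal, which matches the log line Setting config variable for region to {'eu-west-1'} earlier in the output. The SQS transport expects the region as a plain string, roughly:

BROKER_TRANSPORT_OPTIONS = {
    "region": AWS_REGION,          # plain string, not the one-element set {AWS_REGION}
    "polling_interval": 60,
    "queue_name_prefix": "prod-",
}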
I'm trying to make Django work with django-celery. I created my project in a virtualenv, used the template https://github.com/xenith/django-base-template, and the following versions:
Django==1.6.5
celery==3.1.11
django-celery==3.1.10
My Celery settings in settings/local.py:
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://django:pas****@10.0.1.17:5672/myvhost'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
CELERY_ALWAYS_EAGER = True
I have one periodic task, "Update_All_Feeds". When I start celery beat, everything seems to work fine and the task is executed every 10 seconds.
python manage.py celery beat
celery beat v3.1.11 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://django@10.0.1.17:5672/myvhost
. loader -> djcelery.loaders.DjangoLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]#%INFO
. maxinterval -> now (0s)
[2014-05-31 20:53:16,544: INFO/MainProcess] beat: Starting...
[2014-05-31 20:53:16,544: INFO/MainProcess] Writing entries...
[2014-05-31 20:53:16,669: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:19,031: WARNING/MainProcess] /home/phosting/python/django/polskifeed/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py:903: RuntimeWarning: DateTimeField FeedItem.pub_date received a naive datetime (2014-05-31 19:21:49) while time zone support is active.
RuntimeWarning)
[2014-05-31 20:53:19,081: INFO/MainProcess] Writing entries...
[2014-05-31 20:53:26,675: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:36,682: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:46,688: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:56,695: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
But starting this with celeryd does nothing:
python manage.py celeryd -l DEBUG
-------------- celery@czterykaty v3.1.11 (Cipater)
---- **** -----
--- * *** * -- Linux-2.6.32-bpo.5-xen-amd64-x86_64-with-debian-7.0
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: default:0x13bb510 (djcelery.loaders.DjangoLoader)
- ** ---------- .> transport: amqp://django@10.0.1.17:5672/myvhost
- ** ---------- .> results: djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. update_all_feeds
. update_feeds
[2014-05-31 20:58:55,353: DEBUG/MainProcess] | Worker: Starting Hub
[2014-05-31 20:58:55,354: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,354: DEBUG/MainProcess] | Worker: Starting Pool
[2014-05-31 20:58:55,672: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,675: DEBUG/MainProcess] | Worker: Starting Consumer
[2014-05-31 20:58:55,676: DEBUG/MainProcess] | Consumer: Starting Connection
[2014-05-31 20:58:55,739: DEBUG/MainProcess] Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2014 GoPivotal, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'connection.blocked': True, u'authentication_failure_close': True, u'basic.nack': True, u'per_consumer_qos': True, u'consumer_priorities': True, u'consumer_cancel_notify': True, u'publisher_confirms': True}, u'cluster_name': u'rabbit@czterykaty.luser.nl', u'platform': u'Erlang/OTP', u'version': u'3.3.1'}, mechanisms: [u'PLAIN', u'AMQPLAIN'], locales: [u'en_US']
[2014-05-31 20:58:55,741: DEBUG/MainProcess] Open OK!
[2014-05-31 20:58:55,744: INFO/MainProcess] Connected to amqp://django@10.0.1.17:5672/myvhost
[2014-05-31 20:58:55,744: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,744: DEBUG/MainProcess] | Consumer: Starting Events
[2014-05-31 20:58:55,791: DEBUG/MainProcess] Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2014 GoPivotal, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'connection.blocked': True, u'authentication_failure_close': True, u'basic.nack': True, u'per_consumer_qos': True, u'consumer_priorities': True, u'consumer_cancel_notify': True, u'publisher_confirms': True}, u'cluster_name': u'rabbit@czterykaty.luser.nl', u'platform': u'Erlang/OTP', u'version': u'3.3.1'}, mechanisms: [u'PLAIN', u'AMQPLAIN'], locales: [u'en_US']
[2014-05-31 20:58:55,795: DEBUG/MainProcess] Open OK!
[2014-05-31 20:58:55,797: DEBUG/MainProcess] using channel_id: 1
[2014-05-31 20:58:55,800: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:55,802: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,802: DEBUG/MainProcess] | Consumer: Starting Mingle
[2014-05-31 20:58:55,803: INFO/MainProcess] mingle: searching for neighbors
[2014-05-31 20:58:55,805: DEBUG/MainProcess] using channel_id: 1
[2014-05-31 20:58:55,807: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:57,266: INFO/MainProcess] mingle: all alone
[2014-05-31 20:58:57,266: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,267: DEBUG/MainProcess] | Consumer: Starting Gossip
[2014-05-31 20:58:57,268: DEBUG/MainProcess] using channel_id: 2
[2014-05-31 20:58:57,270: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:57,282: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,282: DEBUG/MainProcess] | Consumer: Starting Heart
[2014-05-31 20:58:57,285: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,286: DEBUG/MainProcess] | Consumer: Starting Tasks
[2014-05-31 20:58:57,299: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,299: DEBUG/MainProcess] | Consumer: Starting Control
[2014-05-31 20:58:57,300: DEBUG/MainProcess] using channel_id: 3
[2014-05-31 20:58:57,303: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:57,311: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,311: DEBUG/MainProcess] | Consumer: Starting event loop
[2014-05-31 20:58:57,315: WARNING/MainProcess] celery@czterykaty ready.
[2014-05-31 20:58:57,316: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2014-05-31 20:58:57,317: DEBUG/MainProcess] basic.qos: prefetch_count->16
Periodic tasks are set up through the djcelery admin interface, and the tasks list there is also empty. This is my first experience with celery and django-celery, so I'm not sure what is wrong.
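One of the settings quoted at the top of this question is worth a second look before the fix below (an observation consistent with the adjusted settings, not a verified diagnosis):

# CELERY_ALWAYS_EAGER = True makes apply_async()/delay() run the task
# synchronously in the calling process instead of publishing it to the broker,
# so a separate celeryd worker never receives anything to execute.
CELERY_ALWAYS_EAGER = True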
I managed to get this working.
I adjusted my settings in settings/local.py as follows (notably without CELERY_ALWAYS_EAGER and the database result backend):
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://django:django123@10.0.1.17:5672/myvhost'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
Started celeryd with the -E and -B flags (-E enables task events, -B embeds the beat scheduler in the worker):
python manage.py celeryd -l INFO -E -B
And for monitoring events, it was necessary to start celerycam, which stores snapshots of the worker events in the django-celery database so they show up in the admin:
python manage.py celerycam