Installation
I am using Django 1.4 and Celery 3.0.13 with RabbitMQ 3.0.4; the backend database is SQLite.
Celery was installed with pip install django-celery.
Settings
In settings.py:
# For django-celery
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://user:pwd@sd5:5672/8086'
# ... and 'djcelery' is added to INSTALLED_APPS
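For reference, the task exercised below lives in utils/weixin/tasks.py. Its actual body isn't shown in the question; a minimal sketch in the djcelery-era idiom would be:
# utils/weixin/tasks.py -- minimal sketch; the real implementation isn't shown
from celery import task

@task()
def celery_add(x, y):
    return x + y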
Running
After setting up the database with South, I started rabbitmq-server and ran manage.py celery worker --loglevel=debug.
I could see that the connection was established:
-------------- celery@sd5 v3.0.16 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://utils@sd5:5672/8086
- ** ---------- . app: default:0x8a5106c (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. utils.weixin.tasks.celery_add
[2013-03-19 19:50:00,460: WARNING/MainProcess] celery@sd5 ready.
[2013-03-19 19:50:00,483: INFO/MainProcess] consumer: Connected to amqp://utils@sd5:5672/8086.
[2013-03-19 19:50:00,498: DEBUG/MainProcess] consumer: Ready to accept tasks!
And in rabbit@sd5.log:
=INFO REPORT==== 19-Mar-2013::19:50:00 ===
accepting AMQP connection <0.1655.0> (127.0.0.1:50087 -> 127.0.0.1:5672)
Problem
Then I run my task utils.weixin.tasks.celery_add in manage.py shell:
>>> from utils.weixin.tasks import celery_add
>>> result = celery_add.delay(1,3)
>>> result.ready()
False
>>> result.get()
hangs here forever...
And nothing shows up in the Celery worker log or the RabbitMQ log; no 'received task' line, etc.
It seems that calling the task does not communicate with the worker.
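A first sanity check (a hedged sketch; it assumes nothing beyond the import above) is to confirm in manage.py shell that the client-side app points at the same broker the worker connected to, and to give get() a timeout so it fails fast instead of hanging:
>>> from utils.weixin.tasks import celery_add
>>> celery_add.app.conf.BROKER_URL  # should match the worker's "broker:" banner line
>>> result = celery_add.delay(1, 3)
>>> result.get(timeout=10)  # raises TimeoutError if no worker ever consumes the task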
Question
What should I do to find out what I have done incorrectly, and how should I fix it?
Appreciated!
Related
My OS is Windows. The backend is Django, the frontend is React, and the database is PostgreSQL (managed with pgAdmin). All containers run, but the backend cannot find the entrypoint script, even though it is indeed there. I have tried the instructions from Stack Overflow questions with similar issues, but none of them fixed it. Can anyone help me with this? The log output and related details are attached below.
/app/entrypoint.sh: 4: ./wait-for-postgres.sh: not found
51 static files copied to '/app/static', 142 post-processed.
No changes detected
Operations to perform:
Apply all migrations: accounts, auth, campaigns, cases, common, contacts, contenttypes, django_celery_beat, django_celery_results, django_ses, emails, events, invoices, leads, opportunity, planner, sessions, tasks, teams
Running migrations:
No migrations to apply.
Installed 1 object(s) from 1 fixture(s)
Installed 1 object(s) from 1 fixture(s)
Installed 1 object(s) from 1 fixture(s)
-------------- celery@47bc8292147e v5.2.0b3 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-debian-11.5 2022-11-09 20:04:14
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: crm:0x7fa12453a080
- ** ---------- .> transport: redis://redis:6379//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. accounts.tasks.send_email
. accounts.tasks.send_email_to_assigned_user
. accounts.tasks.send_scheduled_emails
. campaigns.tasks.send_email
. campaigns.tasks.send_email_to_subscribed_contact
. cases.tasks.send_email_to_assigned_user
. common.tasks.resend_activation_link_to_user
. common.tasks.send_email_to_new_user
. common.tasks.send_email_to_reset_password
. common.tasks.send_email_user_delete
. common.tasks.send_email_user_mentions
. common.tasks.send_email_user_status
. contacts.tasks.send_email_to_assigned_user
. events.tasks.send_email
. invoices.tasks.create_invoice_history
. invoices.tasks.send_email
. invoices.tasks.send_invoice_email
. invoices.tasks.send_invoice_email_cancel
. leads.tasks.create_lead_from_file
. leads.tasks.send_email
. leads.tasks.send_email_to_assigned_user
. leads.tasks.send_lead_assigned_emails
. leads.tasks.update_leads_cache
. opportunity.tasks.send_email_to_assigned_user
. send_scheduled_email_campaigns
. teams.tasks.remove_users
. teams.tasks.update_team_users
/usr/local/lib/python3.6/site-packages/celery/platforms.py:830: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
uid=uid, euid=euid, gid=gid, egid=egid,
[2022-11-09 20:04:14,723: INFO/MainProcess] Connected to redis://redis:6379//
[2022-11-09 20:04:14,736: INFO/MainProcess] mingle: searching for neighbors
[2022-11-09 20:04:15,759: INFO/MainProcess] mingle: all alone
[2022-11-09 20:04:15,773: WARNING/MainProcess] /usr/local/lib/python3.6/site-packages/celery/fixups/django.py:204: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
leak, never use this setting in production environments!''')
[2022-11-09 20:04:15,773: INFO/MainProcess] celery@47bc8292147e ready.
/usr/local/lib/python3.6/site-packages/celery/platforms.py:830: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
uid=uid, euid=euid, gid=gid, egid=egid,
[2022-11-09 20:04:15,793: INFO/Beat] beat: Starting...
Tried
changed CRLF to LF
double-quoted ./wait-for-postgres.sh
removed $PWD: in the docker-compose volumes
changed #!/bin/sh to #!/bin/bash
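For what it's worth, a "not found" from /bin/sh for a file that plainly exists usually means either that the interpreter named in the script's shebang is missing from the image or that the file still has CRLF endings inside the container. A sketch that rules out both at build time (paths assumed from the log above):
# Dockerfile: normalize line endings and make the script executable
RUN sed -i 's/\r$//' /app/wait-for-postgres.sh && chmod +x /app/wait-for-postgres.sh

# entrypoint.sh: invoke it through an interpreter known to exist in the image
sh /app/wait-for-postgres.sh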
I'm using Celery 4.0.0 with RabbitMQ as the message broker in a Django 1.9 project, using django-celery-results as the results backend. I'm new to Celery and RabbitMQ. The Python version is 2.7.5.
After following the instructions in the Celery docs for configuring and using Celery with Django, and before adding any real tasks, I tried a simple task call from the Django shell (manage.py shell), sending the debug_task defined in the Celery docs.
The task is sent OK, and looking at the RabbitMQ queue I can see that a new message has arrived in the correct queue on the correct virtual host.
I run the worker and it looks like it starts OK; it then reaches the event loop and does nothing. No error is shown, either in the worker output or in the RabbitMQ logs.
On the other hand, celery status on the same machine reports that there are no active nodes.
I'm probably missing something here, but I don't know what it could be.
Don't know if this is relevant, but when I use 'celery purge' to clear the message queue, it finds the message and purges it.
Celery configuration settings as added to django settings.py:
CELERY_BROKER_URL = 'amqp://user1:passwd1@rabbithost:5672/exp'
CELERY_TIMEZONE = TIME_ZONE # Using django's TZ
CELERY_TASK_TRACK_STARTED = True
CELERY_RESULT_BACKEND = 'django-db'
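These CELERY_-prefixed names only take effect if the app module reads them with that namespace. The canonical project/celery.py from the Celery 4 docs, reproduced here as a sketch of the setup the question says it followed:
# project/celery.py -- canonical app module from the Celery 4 / Django docs
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project')
# read the CELERY_* keys from Django settings, matching the prefix used above
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))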
Task invocation in django shell:
>>> from project.celery import debug_task
>>> debug_task
<@task: project.celery.debug_task of project:0x23cad10>
>>> r = debug_task.delay()
>>> r
<AsyncResult: 33031998-4cd8-4dfe-8e9d-bda9398525bb>
>>> r.status
u'PENDING'
Celery worker invocation:
% celery -A project worker -l info -Q celery
-------------- celery@super9 v4.0.0 (latentcall)
---- **** -----
--- * *** * -- Linux-3.10.0-327.4.5.el7.x86_64-x86_64-with-centos-7.2.1511-Core 2016-11-24 18:15:27
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: project:0x25931d0
- ** ---------- .> transport: amqp://user1:**@rabbithost:5672/exp
- ** ---------- .> results:
- *** --- * --- .> concurrency: 24 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. project.celery.debug_task
[2016-11-24 18:15:28,984: INFO/MainProcess] Connected to amqp://user1:**@rabbithost:5672/exp
[2016-11-24 18:15:29,009: INFO/MainProcess] mingle: searching for neighbors
[2016-11-24 18:15:30,035: INFO/MainProcess] mingle: all alone
/dir/project/devel/python/devel-1.9-centos7/lib/python2.7/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-11-24 18:15:30,072: WARNING/MainProcess] /dir/project/devel/python/devel-1.9-centos7/lib/python2.7/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-11-24 18:15:30,073: INFO/MainProcess] celery@super9 ready.
Checking rabbitmq queue:
% rabbitmqctl list_queues -p exp
Listing queues ...
celery 1
Celery status invocation while the worker is "ready":
% celery -A project status
Error: No nodes replied within time constraint.
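The "no nodes replied" result can be probed a little further (a sketch; same project and vhost as above) by pinging with a longer timeout and checking that the broadcast (pidbox) exchanges used by remote control actually exist on the vhost:
% celery -A project inspect ping --timeout 10
% rabbitmqctl list_exchanges -p exp | grep pidbox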
Thanks.
I've set up djcelery along with Redis.
I can see the activity in Redis:
redis-cli MONITOR
1454060863.881506 [0 [::1]:59091] "INFO"
1454060863.883295 [0 [::1]:59093] "MULTI"
1454060863.883314 [0 [::1]:59093] "LLEN" "celery"
1454060863.883319 [0 [::1]:59093] "LLEN" "celery\x06\x163"
1454060863.883323 [0 [::1]:59093] "LLEN" "celery\x06\x166"
1454060863.883326 [0 [::1]:59093] "LLEN" "celery\x06\x169"
1454060863.883331 [0 [::1]:59093] "EXEC"
1454060863.883704 [0 [::1]:59093] "SADD" "_kombu.binding.celery" "celery\x06\x16\x06\x16celery"
1454060863.884054 [0 [::1]:59093] "SMEMBERS" "_kombu.binding.celery"
1454060863.884421 [0 [::1]:59093] "LPUSH" "celery" "{\"body\": \"gAJ9cQEoVQdleHBpcmVzcQJOVQN1dGNxA4hVBGFyZ3NxBEsESwSGcQVVBWNob3JkcQZOVQljYWxsYmFja3NxB05VCGVycmJhY2tzcQhOVQd0YXNrc2V0cQlOVQJpZHEKVSQwZGQwZmJmZC0zYTQ0LTQxMDMtOTNiOC01NmI4ZmFjNjE0MDJxC1UHcmV0cmllc3EMSwBVBHRhc2txDVUPdXRpbHMudGFza3MuYWRkcQ5VCXRpbWVsaW1pdHEPTk6GVQNldGFxEE5VBmt3YXJnc3ERfXESdS4=\", \"headers\": {}, \"content-type\": \"application/x-python-serialize\", \"properties\": {\"body_encoding\": \"base64\", \"correlation_id\": \"0dd0fbfd-3a44-4103-93b8-56b8fac61402\", \"reply_to\": \"b9f02cee-e562-3e86-90aa-683d205d060c\", \"delivery_info\": {\"priority\": 0, \"routing_key\": \"celery\", \"exchange\": \"celery\"}, \"delivery_mode\": 2, \"delivery_tag\": \"b44aafc9-e6aa-4cd8-8a70-201d895cbd7f\"}, \"content-encoding\": \"binary\"}"
1454060872.513147 [0 [::1]:59204] "GET" "celery-task-meta-0dd0fbfd-3a44-4103-93b8-56b8fac61402"
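One detail worth noting in the MONITOR output above: the [0 ...] prefix on every command means the activity is happening in Redis database 0, while the settings below point the broker and result backend at database 2. A quick check (a sketch) of where the message actually sits:
redis-cli -n 0 LLEN celery
redis-cli -n 2 LLEN celery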
But there are no logs in Celery:
python manage.py celeryd -l debug
[2016-01-29 10:14:51,073: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2016-01-29 10:14:51,075: DEBUG/MainProcess] | Worker: Building graph...
[2016-01-29 10:14:51,075: DEBUG/MainProcess] | Worker: New boot order: {Timer, Hub, Queues (intra), Pool, Autoscaler, StateDB, Autoreloader, Beat, Consumer}
[2016-01-29 10:14:51,084: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2016-01-29 10:14:51,085: DEBUG/MainProcess] | Consumer: Building graph...
[2016-01-29 10:14:51,097: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Events, Heart, Agent, Mingle, Gossip, Tasks, Control, event loop}
-------------- celery@Kumars-MacBook-Pro-2.local v3.1.20 (Cipater)
---- **** -----
--- * *** * -- Darwin-14.5.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: default:0x109177ed0 (djcelery.loaders.DjangoLoader)
- ** ---------- .> transport: redis://localhost:6379/2
- ** ---------- .> results: redis://localhost:6379/2
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. utils.tasks.add
[2016-01-29 10:14:51,113: DEBUG/MainProcess] | Worker: Starting Pool
python manage.py celerycam
-> evcam: Taking snapshots with djcelery.snapshot.Camera (every 1.0 secs.)
[2016-01-29 09:58:10,377: INFO/MainProcess] Connected to redis://localhost:6379/2
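Notably, the worker's debug log above stops at "Worker: Starting Pool" and never reaches "Connected to redis://..." or "Ready to accept tasks!". A low-risk experiment (a sketch; -P solo runs the worker without forking a pool, sidestepping prefork problems on OS X):
python manage.py celeryd -l debug -P solo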
Here are my settings file and the task:
settings.py
# Celery settings
djcelery.setup_loader()
BROKER_URL = 'redis://localhost:6379/2'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/2'
INSTALLED_APPS = (
...
'djcelery',
)
tasks.py
from celery import task

@task()
def add(x, y):
    return x + y
When I run the task, its state is always 'PENDING'. Any idea on this?
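A quick way to separate "the task is never consumed" from "the result is never stored" (a hedged sketch; utils.tasks.add matches the worker's task list above):
>>> from utils.tasks import add
>>> r = add.delay(4, 4)
>>> r.get(timeout=10)  # TimeoutError means the worker never consumed the message;
...                    # a returned 8 means only result reporting is broken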
I am trying to configure a Django project to use Celery (I am using Django 1.3 on Debian Squeeze).
I installed django-celery (2.3.3) and then followed these instructions.
My django-celery settings are the following:
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
When I try to launch the celery worker server with...
$ python manage.py celeryd -l info
I get the following output, with a "Consumer: Connection Error: [Errno 111]" at the end:
/home/thomas/virtualenv/ULYSSE/lib/python2.6/site-packages/djcelery/loaders.py:84: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn("Using settings.DEBUG leads to a memory leak, never "
[2011-09-20 12:14:00,645: WARNING/MainProcess]
-------------- celery@debian v2.3.3
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://guest@localhost:5672//
- ** ---------- . loader: djcelery.loaders.DjangoLoader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 1
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
[Tasks]
. competitions.tasks.add
[2011-09-20 12:14:00,788: INFO/PoolWorker-1] child process calling self.run()
[2011-09-20 12:14:00,795: WARNING/MainProcess] celery@debian has started.
[2011-09-20 12:14:00,809: ERROR/MainProcess] Consumer: Connection Error: [Errno 111] Connection refused. Trying again in 2 seconds...
Apparently my settings are read correctly (cf. the Configuration section in the output) and the worker process starts correctly ("celery@debian has started").
I cannot figure out why this "Consumer: Connection Error: [Errno 111]" error happens...
Does this have to do with the BROKER_USER and BROKER_PASSWORD settings?
I tried different user/password settings (my account, the root account...) but I always get the same error. Do BROKER_USER and BROKER_PASSWORD refer to an OS user, a database user, or a "broker" user?
How can I get rid of this Connection Error?
Looks like RabbitMQ isn't installed or running. Can you check this? On Ubuntu:
apt-get install rabbitmq-server
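To verify whether the broker is actually listening (a sketch; standard Debian/Ubuntu service commands):
$ sudo rabbitmqctl status              # fails fast if the broker isn't running
$ sudo service rabbitmq-server start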
I am trying to set up Django with Celery so I can send bulk emails in the background.
I am a little confused about how the different components play into Celery. Do I need to use RabbitMQ? Can I just use django-kombu to run Celery? (http://ask.github.com/celery/tutorials/otherqueues.html#django-database)
I started with "First Steps with Django" in the django-celery docs (http://django-celery.readthedocs.org/en/latest/getting-started/first-steps-with-django.html), but when I get to "Running the celery worker server" this happens:
$ python manage.py celeryd -l info
[2011-09-02 18:35:00,150: WARNING/MainProcess]
-------------- celery@Sauls-MacBook.local v2.3.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqplib://guest@localhost:5672/
- ** ---------- . loader: djcelery.loaders.DjangoLoader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 2
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
[Tasks]
. tasks.add
[2011-09-02 18:35:00,213: INFO/PoolWorker-2] child process calling self.run()
[2011-09-02 18:35:00,214: INFO/PoolWorker-1] child process calling self.run()
[2011-09-02 18:35:00,229: WARNING/MainProcess] celery@Sauls-MacBook.local has started.
[2011-09-02 18:35:00,276: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 2 seconds...
[2011-09-02 18:35:02,283: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 4 seconds...
Then I have to quit the process...
As I can see from your configuration, you didn't set the transport correctly; Celery is trying to use amqplib to connect to a broker like RabbitMQ:
broker: amqplib://guest@localhost:5672/
You should set the broker backend in settings.py like this:
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
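For completeness, django-kombu of that era also needs its app installed and its tables created before the database transport will work (a sketch based on the standard django-kombu setup):
# settings.py
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"

INSTALLED_APPS = (
    # ...
    'djcelery',
    'djkombu',  # provides the message table used by DatabaseTransport
)

# then create the broker tables:
#   $ python manage.py syncdb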