Celery with Django & Redis not working

I've set up djcelery along with Redis.
I can see the activity in Redis:
redis-cli MONITOR
1454060863.881506 [0 [::1]:59091] "INFO"
1454060863.883295 [0 [::1]:59093] "MULTI"
1454060863.883314 [0 [::1]:59093] "LLEN" "celery"
1454060863.883319 [0 [::1]:59093] "LLEN" "celery\x06\x163"
1454060863.883323 [0 [::1]:59093] "LLEN" "celery\x06\x166"
1454060863.883326 [0 [::1]:59093] "LLEN" "celery\x06\x169"
1454060863.883331 [0 [::1]:59093] "EXEC"
1454060863.883704 [0 [::1]:59093] "SADD" "_kombu.binding.celery" "celery\x06\x16\x06\x16celery"
1454060863.884054 [0 [::1]:59093] "SMEMBERS" "_kombu.binding.celery"
1454060863.884421 [0 [::1]:59093] "LPUSH" "celery" "{\"body\": \"gAJ9cQEoVQdleHBpcmVzcQJOVQN1dGNxA4hVBGFyZ3NxBEsESwSGcQVVBWNob3JkcQZOVQljYWxsYmFja3NxB05VCGVycmJhY2tzcQhOVQd0YXNrc2V0cQlOVQJpZHEKVSQwZGQwZmJmZC0zYTQ0LTQxMDMtOTNiOC01NmI4ZmFjNjE0MDJxC1UHcmV0cmllc3EMSwBVBHRhc2txDVUPdXRpbHMudGFza3MuYWRkcQ5VCXRpbWVsaW1pdHEPTk6GVQNldGFxEE5VBmt3YXJnc3ERfXESdS4=\", \"headers\": {}, \"content-type\": \"application/x-python-serialize\", \"properties\": {\"body_encoding\": \"base64\", \"correlation_id\": \"0dd0fbfd-3a44-4103-93b8-56b8fac61402\", \"reply_to\": \"b9f02cee-e562-3e86-90aa-683d205d060c\", \"delivery_info\": {\"priority\": 0, \"routing_key\": \"celery\", \"exchange\": \"celery\"}, \"delivery_mode\": 2, \"delivery_tag\": \"b44aafc9-e6aa-4cd8-8a70-201d895cbd7f\"}, \"content-encoding\": \"binary\"}"
1454060872.513147 [0 [::1]:59204] "GET" "celery-task-meta-0dd0fbfd-3a44-4103-93b8-56b8fac61402"
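As a quick cross-check, the queue length can also be read from Python; a minimal sketch, assuming redis-py is installed and using db=2 to match the BROKER_URL from the settings below:
import redis

# db=2 matches redis://localhost:6379/2 in settings.py
r = redis.StrictRedis(host='localhost', port=6379, db=2)
print(r.llen('celery'))  # a count > 0 means tasks are queued but never consumed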
But there is no corresponding activity in the Celery worker:
python manage.py celeryd -l debug
[2016-01-29 10:14:51,073: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2016-01-29 10:14:51,075: DEBUG/MainProcess] | Worker: Building graph...
[2016-01-29 10:14:51,075: DEBUG/MainProcess] | Worker: New boot order: {Timer, Hub, Queues (intra), Pool, Autoscaler, StateDB, Autoreloader, Beat, Consumer}
[2016-01-29 10:14:51,084: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2016-01-29 10:14:51,085: DEBUG/MainProcess] | Consumer: Building graph...
[2016-01-29 10:14:51,097: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Events, Heart, Agent, Mingle, Gossip, Tasks, Control, event loop}
-------------- celery@Kumars-MacBook-Pro-2.local v3.1.20 (Cipater)
---- **** -----
--- * *** * -- Darwin-14.5.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: default:0x109177ed0 (djcelery.loaders.DjangoLoader)
- ** ---------- .> transport: redis://localhost:6379/2
- ** ---------- .> results: redis://localhost:6379/2
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. utils.tasks.add
[2016-01-29 10:14:51,113: DEBUG/MainProcess] | Worker: Starting Pool
python manage.py celerycam
-> evcam: Taking snapshots with djcelery.snapshot.Camera (every 1.0 secs.)
[2016-01-29 09:58:10,377: INFO/MainProcess] Connected to redis://localhost:6379/2
Here are my settings file and the task:
settings.py
# Celery settings
import djcelery
djcelery.setup_loader()

BROKER_URL = 'redis://localhost:6379/2'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/2'

INSTALLED_APPS = (
    ...
    'djcelery',
)
tasks.py
from celery import task

@task()
def add(x, y):
    return x + y
When I run the task, its state is always 'PENDING'. Any idea on this?
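For reference, this is roughly how the task is called and its state checked; a minimal sketch, where only utils.tasks.add comes from the worker output above and the polling is illustrative:
from utils.tasks import add

result = add.delay(4, 4)       # pushes a message onto the 'celery' list in Redis
print(result.state)            # stays 'PENDING' while no worker consumes the queue
print(result.get(timeout=10))  # raises TimeoutError if the task never runs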

Related

Docker can not find /app/entrypoint.sh: 4: ./wait-for-postgres.sh: not found

My OS is Windows. The backend is Django, the frontend is React, and the database is PostgreSQL (administered with pgAdmin). All containers run, but the backend cannot find the entrypoint script even though it is indeed there. I tried the instructions from Stack Overflow answers to similar issues, but none of them fixed it. Can anyone help me with this? I have attached the log output and the related files here.
/app/entrypoint.sh: 4: ./wait-for-postgres.sh: not found
51 static files copied to '/app/static', 142 post-processed.
No changes detected
Operations to perform:
Apply all migrations: accounts, auth, campaigns, cases, common, contacts, contenttypes, django_celery_beat, django_celery_results, django_ses, emails, events, invoices, leads, opportunity, planner, sessions, tasks, teams
Running migrations:
No migrations to apply.
Installed 1 object(s) from 1 fixture(s)
Installed 1 object(s) from 1 fixture(s)
Installed 1 object(s) from 1 fixture(s)
-------------- celery@47bc8292147e v5.2.0b3 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-debian-11.5 2022-11-09 20:04:14
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: crm:0x7fa12453a080
- ** ---------- .> transport: redis://redis:6379//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. accounts.tasks.send_email
. accounts.tasks.send_email_to_assigned_user
. accounts.tasks.send_scheduled_emails
. campaigns.tasks.send_email
. campaigns.tasks.send_email_to_subscribed_contact
. cases.tasks.send_email_to_assigned_user
. common.tasks.resend_activation_link_to_user
. common.tasks.send_email_to_new_user
. common.tasks.send_email_to_reset_password
. common.tasks.send_email_user_delete
. common.tasks.send_email_user_mentions
. common.tasks.send_email_user_status
. contacts.tasks.send_email_to_assigned_user
. events.tasks.send_email
. invoices.tasks.create_invoice_history
. invoices.tasks.send_email
. invoices.tasks.send_invoice_email
. invoices.tasks.send_invoice_email_cancel
. leads.tasks.create_lead_from_file
. leads.tasks.send_email
. leads.tasks.send_email_to_assigned_user
. leads.tasks.send_lead_assigned_emails
. leads.tasks.update_leads_cache
. opportunity.tasks.send_email_to_assigned_user
. send_scheduled_email_campaigns
. teams.tasks.remove_users
. teams.tasks.update_team_users
/usr/local/lib/python3.6/site-packages/celery/platforms.py:830: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
uid=uid, euid=euid, gid=gid, egid=egid,
[2022-11-09 20:04:14,723: INFO/MainProcess] Connected to redis://redis:6379//
[2022-11-09 20:04:14,736: INFO/MainProcess] mingle: searching for neighbors
[2022-11-09 20:04:15,759: INFO/MainProcess] mingle: all alone
[2022-11-09 20:04:15,773: WARNING/MainProcess] /usr/local/lib/python3.6/site-packages/celery/fixups/django.py:204: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
[2022-11-09 20:04:15,773: INFO/MainProcess] celery@47bc8292147e ready.
/usr/local/lib/python3.6/site-packages/celery/platforms.py:830: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
uid=uid, euid=euid, gid=gid, egid=egid,
[2022-11-09 20:04:15,793: INFO/Beat] beat: Starting...
Things I have tried (see also the check after this list):
change CRLF to LF
double-quote ./wait-for-postgres.sh
remove $PWD: in the docker-compose volumes
change #!/bin/sh to #!/bin/bash
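Since CRLF endings are the most common cause of this exact "not found" message, it may be worth verifying what the built image actually contains; a minimal Python check run inside the container against the script named in the error above:
# CRLF endings make the kernel look for an interpreter literally named
# "/bin/sh\r", which surfaces as a misleading "not found" error.
with open('wait-for-postgres.sh', 'rb') as f:
    first_line = f.readline()

print(first_line)           # b'#!/bin/sh\r\n' would confirm CRLF endings
print(b'\r' in first_line)  # True -> the CRLF-to-LF conversion didn't stick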

Celery receives tasks from RabbitMQ but does not execute them

I have a Django project and have set up Celery + RabbitMQ to run heavy tasks asynchronously. When I call the task, the RabbitMQ admin shows the task, and Celery prints that the task was received, but the task is not executed.
Here is the task's code:
@app.task
def dummy_task():
    print("I'm Here")
    User.objects.create(username="User1")
    return "User1 Created!"
In this view I send the task to celery:
def task_view(request):
    result = dummy_task.delay()
    return render(request, 'display_progress.html', context={'task_id': result.task_id})
I run celery with this command:
$ celery -A proj worker -l info --concurrency=2 --without-gossip
This is output of running Celery:
-------------- celery@DESKTOP-8CHJOEG v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Windows-10-10.0.19044-SP0 2022-08-22 10:10:04
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: proj:0x23322847880
- ** ---------- .> transport: amqp://navid:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. proj.celery.debug_task
. entitymatching.tasks.create_and_learn_machine
. entitymatching.tasks.dummy_task
[2022-08-22 10:10:04,068: INFO/MainProcess] Connected to amqp://navid:**@127.0.0.1:5672//
[2022-08-22 10:10:04,096: INFO/MainProcess] mingle: searching for neighbors
[2022-08-22 10:10:04,334: INFO/SpawnPoolWorker-1] child process 6864 calling self.run()
[2022-08-22 10:10:04,335: INFO/SpawnPoolWorker-2] child process 12420 calling self.run()
[2022-08-22 10:10:05,134: INFO/MainProcess] mingle: all alone
[2022-08-22 10:10:05,142: WARNING/MainProcess] C:\Users\Navid\PycharmProjects\proj\venv\lib\site-packages\celery\fixups\django.py:203: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
[2022-08-22 10:10:05,142: INFO/MainProcess] celery@DESKTOP-8CHJOEG ready.
[2022-08-22 10:10:05,143: INFO/MainProcess] Task entitymatching.tasks.dummy_task[97f8a2eb-0006-4d53-ba6a-7b9f8649c84a] received
[2022-08-22 10:10:05,144: INFO/MainProcess] Task entitymatching.tasks.dummy_task[17190479-0784-46b1-8dc6-870ead41e9c6] received
[2022-08-22 10:11:36,384: INFO/MainProcess] Task proj.celery.debug_task[af3d633f-7b9a-4441-b375-9ce217a40ab3] received
But "I'm Here" is not printed, and User1 is not created.
RabbitMQ shows that there are 3 "unacked" messages in the queue.
You did not provide enough info, but I think the problem is with your worker pool: the default prefork pool is known to be unreliable on Windows.
Try adding
--pool=solo
at the end of your run command, so it becomes:
celery -A proj worker -l info --concurrency=2 --without-gossip --pool=solo
In production, though, I recommend using gevent as your pool:
celery -A proj worker -l info --concurrency=2 --without-gossip --pool=gevent
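If you'd rather not pass the flag on every invocation, the pool can also be pinned in the app configuration; a minimal sketch using Celery's worker_pool setting, assuming 'proj' is the app name from the question:
# proj/celery.py -- pinning the pool in code instead of on the command line
from celery import Celery

app = Celery('proj')
app.conf.worker_pool = 'solo'  # same effect as the --pool=solo worker flag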

Django 1.6 + RabbitMQ 3.2.3 + Celery 3.1.9 - why does my celery worker die with: WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV)

This seems to address a very similar issue, but doesn't give me quite enough insight: https://github.com/celery/billiard/issues/101. It sounds like it might be a good idea to try a non-SQLite database...
I have a straightforward celery setup with my django app. In my settings.py file I set a task to run as follows:
CELERYBEAT_SCHEDULE = {
    'sync_database': {
        'task': 'apps.data.tasks.celery_sync_database',
        'schedule': timedelta(minutes=5)
    }
}
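The task body isn't shown in the question; for orientation, a hypothetical sketch of apps/data/tasks.py, where only the dotted task path comes from the schedule above and the rest is assumed:
# apps/data/tasks.py -- hypothetical body; the real sync logic isn't shown
from celery import shared_task

@shared_task
def celery_sync_database():
    # Heavy ORM work would run here. With SQLite this is the point where a
    # prefork child can segfault, since the file-based driver copes badly
    # with connections inherited across fork().
    pass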
I have followed the instructions here: http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
I am able to open two new terminal windows and run celery processes as follows:
ONE - the celery beat process which is required for scheduled tasks and will put the task on the queue:
PROMPT> celery -A myproj beat
celery beat v3.1.9 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://myproj@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2014-02-20 16:15:20,085: INFO/MainProcess] beat: Starting...
[2014-02-20 16:15:20,086: INFO/MainProcess] Writing entries...
[2014-02-20 16:15:20,143: INFO/MainProcess] DatabaseScheduler: Schedule changed.
[2014-02-20 16:15:20,143: INFO/MainProcess] Writing entries...
[2014-02-20 16:20:20,143: INFO/MainProcess] Scheduler: Sending due task sync_database (apps.data.tasks.celery_sync_database)
[2014-02-20 16:20:20,161: INFO/MainProcess] Writing entries...
TWO - the celery worker, which should take the task off the queue and run it:
PROMPT> celery -A myproj worker -l info
-------------- celery@Jons-MacBook.local v3.1.9 (Cipater)
---- **** -----
--- * *** * -- Darwin-13.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: myproj:0x1105a1050
- ** ---------- .> transport: amqp://myproj@localhost:5672//
- ** ---------- .> results: djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. apps.data.tasks.celery_sync_database
. myproj.celery.debug_task
[2014-02-20 16:15:29,402: INFO/MainProcess] Connected to amqp://myproj@127.0.0.1:5672//
[2014-02-20 16:15:29,419: INFO/MainProcess] mingle: searching for neighbors
[2014-02-20 16:15:30,440: INFO/MainProcess] mingle: all alone
[2014-02-20 16:15:30,474: WARNING/MainProcess] celery@Jons-MacBook.local ready.
When the task gets sent, however, it appears that about 50% of the time the worker runs the task and the other 50% of the time I get the following error:
[2014-02-20 16:35:20,159: INFO/MainProcess] Received task: apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25]
[2014-02-20 16:36:54,561: ERROR/MainProcess] Process 'Worker-4' pid:19500 exited with exitcode -11
[2014-02-20 16:36:54,580: ERROR/MainProcess] Task apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/Users/jon/dev/vpe/VAN/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
I am developing on a Macbook Pro running Mavericks.
Celery version 3.1.9
RabbitMQ 3.2.3
Django 1.6
Note that I am using django-celery 3.1.9 and have the djcelery app enabled.
When I switched from SQLite to PostgreSQL the problem disappeared.
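For anyone following along, the fix amounts to repointing Django's DATABASES setting at PostgreSQL; a minimal sketch with placeholder credentials (engine name as of Django 1.6):
# settings.py -- hypothetical values; substitute your own credentials
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproj',
        'USER': 'myproj',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}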

django-celery consumer cannot receive tasks

Installation
I am using Django (1.4) and Celery (3.0.13) with RabbitMQ (v3.0.4); the backend DB is SQLite.
Celery was installed by pip install django-celery
Setting
In settings.py:
# For django-celery
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://user:pwd@sd5:5672/8086'
### and adding 'djcelery' to INSTALLED_APPS
Running
After setting up the database with South, I start rabbitmq-server and run manage.py celery worker --loglevel=debug.
I could see the connection was established:
-------------- celery@sd5 v3.0.16 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://utils@sd5:5672/8086
- ** ---------- . app: default:0x8a5106c (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. utils.weixin.tasks.celery_add
[2013-03-19 19:50:00,460: WARNING/MainProcess] celery@sd5 ready.
[2013-03-19 19:50:00,483: INFO/MainProcess] consumer: Connected to amqp://utils@sd5:5672/8086.
[2013-03-19 19:50:00,498: DEBUG/MainProcess] consumer: Ready to accept tasks!
And in rabbit@sd5.log:
=INFO REPORT==== 19-Mar-2013::19:50:00 ===
accepting AMQP connection <0.1655.0> (127.0.0.1:50087 -> 127.0.0.1:5672)
Problem
Then I run my task utils.weixin.tasks.celery_add in manage.py shell:
>>> from utils.weixin.tasks import celery_add
>>> result = celery_add.delay(1,3)
>>> result.ready()
False
>>> result.get()
hangs here forever...
And nothing shows up in the celery worker's log or the rabbitmq log: no 'Received task' line or anything like it.
It seems that calling the task does not communicate with the worker at all.
Question
What should I do to find out what I have done incorrectly, and how should I fix it?
Appreciated!
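One way to narrow this down is to check whether the shell process and the worker agree on the broker and on the registered tasks; a minimal sketch using the Celery 3.x inspect API, run from manage.py shell:
# Compares what this process publishes to with what the worker consumes
from celery import current_app

print(current_app.connection().as_uri())  # broker URL the shell actually uses
inspector = current_app.control.inspect()
print(inspector.registered())  # tasks each connected worker knows about
print(inspector.active())      # tasks currently executing, if any
If registered() returns None here, the shell and the worker are not connected to the same broker/vhost at all, which would explain why result.get() hangs.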

Celery in Django (RabbitMQ vs. Django Database)

I am trying to set up Django with Celery so I can send bulk emails in the background.
I am a little confused about how the different components play into Celery. Do I need to use RabbitMQ? Can I just use "django-kombu" to run Celery? (http://ask.github.com/celery/tutorials/otherqueues.html#django-database)
I started with "First Steps with Django" in the django-celery docs (http://django-celery.readthedocs.org/en/latest/getting-started/first-steps-with-django.html), but when I get to "Running the celery worker server" this happens:
$ python manage.py celeryd -l info
[2011-09-02 18:35:00,150: WARNING/MainProcess]
-------------- celery@Sauls-MacBook.local v2.3.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqplib://guest@localhost:5672/
- ** ---------- . loader: djcelery.loaders.DjangoLoader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 2
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
[Tasks]
. tasks.add
[2011-09-02 18:35:00,213: INFO/PoolWorker-2] child process calling self.run()
[2011-09-02 18:35:00,214: INFO/PoolWorker-1] child process calling self.run()
[2011-09-02 18:35:00,229: WARNING/MainProcess] celery@Sauls-MacBook.local has started.
[2011-09-02 18:35:00,276: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 2 seconds...
[2011-09-02 18:35:02,283: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 4 seconds...
Then I have to quit the process...
As I can see from your configuration, you didn't set the transport correctly; Celery is trying to use amqplib to connect to a broker like RabbitMQ:
broker: amqplib://guest@localhost:5672/
You should set the broker backend in settings.py like this:
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
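For completeness, a fuller sketch of the django-kombu route from that era: djkombu must be installed and added to INSTALLED_APPS so that syncdb can create the message table it uses as the queue (names per the old django-kombu docs):
# settings.py -- Celery 2.x-era setup using the Django database as the broker
import djcelery
djcelery.setup_loader()

BROKER_BACKEND = "djkombu.transport.DatabaseTransport"

INSTALLED_APPS = (
    # ...
    'djcelery',
    'djkombu',  # provides the queue table used in place of RabbitMQ
)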