Celery receives tasks from RabbitMQ, but does not execute them - Django

I have a Django project and have set up Celery + RabbitMQ to run heavy tasks asynchronously. When I call the task, the RabbitMQ admin shows the task, Celery prints that the task was received, but the task is not executed.
Here is the task's code:
@app.task
def dummy_task():
    print("I'm Here")
    User.objects.create(username="User1")
    return "User1 Created!"
In this view I send the task to celery:
def task_view(request):
    result = dummy_task.delay()
    return render(request, 'display_progress.html', context={'task_id': result.task_id})
I run celery with this command:
$ celery -A proj worker -l info --concurrency=2 --without-gossip
This is output of running Celery:
 -------------- celery@DESKTOP-8CHJOEG v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Windows-10-10.0.19044-SP0 2022-08-22 10:10:04
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         proj:0x23322847880
- ** ---------- .> transport:   amqp://navid:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
  . proj.celery.debug_task
  . entitymatching.tasks.create_and_learn_machine
  . entitymatching.tasks.dummy_task

[2022-08-22 10:10:04,068: INFO/MainProcess] Connected to amqp://navid:**@127.0.0.1:5672//
[2022-08-22 10:10:04,096: INFO/MainProcess] mingle: searching for neighbors
[2022-08-22 10:10:04,334: INFO/SpawnPoolWorker-1] child process 6864 calling self.run()
[2022-08-22 10:10:04,335: INFO/SpawnPoolWorker-2] child process 12420 calling self.run()
[2022-08-22 10:10:05,134: INFO/MainProcess] mingle: all alone
[2022-08-22 10:10:05,142: WARNING/MainProcess] C:\Users\Navid\PycharmProjects\proj\venv\lib\site-packages\celery\fixups\django.py:203: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('''Using settings.DEBUG leads to a memory
[2022-08-22 10:10:05,142: INFO/MainProcess] celery@DESKTOP-8CHJOEG ready.
[2022-08-22 10:10:05,143: INFO/MainProcess] Task entitymatching.tasks.dummy_task[97f8a2eb-0006-4d53-ba6a-7b9f8649c84a] received
[2022-08-22 10:10:05,144: INFO/MainProcess] Task entitymatching.tasks.dummy_task[17190479-0784-46b1-8dc6-870ead41e9c6] received
[2022-08-22 10:11:36,384: INFO/MainProcess] Task proj.celery.debug_task[af3d633f-7b9a-4441-b375-9ce217a40ab3] received
But "I'm Here" is not printed, and User1 is not created.
RabbitMQ shows that there are 3 "unack" messages in the queue:

You did not provide enough info, but I think the problem is with your worker pool.
Try adding
--pool=solo
to the end of your run command, so it becomes:
celery -A proj worker -l info --concurrency=2 --without-gossip --pool=solo
In production, though, I recommend using gevent as your pool:
celery -A proj worker -l info --concurrency=2 --without-gossip --pool=gevent
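If you want to confirm that the worker actually consumes tasks after restarting it with a different pool, a quick check from python manage.py shell can help. This is only a sketch; it assumes your Celery app object is importable as proj.celery.app (which the [tasks] list above suggests, but is not shown in the question):

# Minimal sketch, assuming the Celery app object lives at proj.celery.app.
from proj.celery import app

insp = app.control.inspect()
print(insp.registered())  # tasks each online worker knows about
print(insp.reserved())    # tasks received but not yet executed
print(insp.active())      # tasks currently executing

With the prefork pool misbehaving you would expect tasks to pile up in reserved(); with the solo pool they should move to active() and complete.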

Related

Celery task blocked in Django view with an AWS SQS broker

I am trying to run a celery task in a Django view using my_task.delay(). However, the task is never executed and the code is blocked on that line and the view never renders. I am using AWS SQS as a broker with an IAM user with full access to SQS.
What am I doing wrong?
Running celery and Django
I am running celery like this:
celery -A app worker -l info
And I am starting my Django server locally in another terminal using:
python manage.py runserver
The celery command outputs:
-------------- celery@LAPTOP-02019EM6 v4.1.0 (latentcall)
---- **** -----
--- * *** * -- Windows-10-10.0.16299 2018-02-07 13:48:18
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: app:0x6372c18
- ** ---------- .> transport: sqs://**redacted**:**@localhost//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF
--- ***** -----
-------------- [queues]
.> my-queue exchange=my-queue(direct) key=my-queue
[tasks]
. app.celery.debug_task
. counter.tasks.my_task
[2018-02-07 13:48:19,262: INFO/MainProcess] Starting new HTTPS connection (1): sa-east-1.queue.amazonaws.com
[2018-02-07 13:48:19,868: INFO/SpawnPoolWorker-1] child process 20196 calling self.run()
[2018-02-07 13:48:19,918: INFO/SpawnPoolWorker-4] child process 19984 calling self.run()
[2018-02-07 13:48:19,947: INFO/SpawnPoolWorker-3] child process 16024 calling self.run()
[2018-02-07 13:48:20,004: INFO/SpawnPoolWorker-2] child process 19572 calling self.run()
[2018-02-07 13:48:20,815: INFO/MainProcess] Connected to sqs://**redacted**:**@localhost//
[2018-02-07 13:48:20,930: INFO/MainProcess] Starting new HTTPS connection (1): sa-east-1.queue.amazonaws.com
[2018-02-07 13:48:21,307: WARNING/MainProcess] c:\users\nicolas\anaconda3\envs\djangocelery\lib\site-packages\celery\fixups\django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-02-07 13:48:21,311: INFO/MainProcess] celery@LAPTOP-02019EM6 ready.
views.py
from .tasks import my_task

def index(request):
    print('New request')  # This is called
    my_task.delay()
    # Never reaches here
    return HttpResponse('test')
tasks.py
...
@shared_task
def my_task():
    print('Task ran successfully')  # never prints anything
settings.py
My configuration is the following:
import djcelery
djcelery.setup_loader()
CELERY_BROKER_URL = 'sqs://'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'sa-east-1',
}
CELERY_BROKER_USER = '****************'
CELERY_BROKER_PASSWORD = '***************************'
CELERY_TASK_DEFAULT_QUEUE = 'my-queue'
Versions:
I use the following versions of Django and Celery:
Django==2.0.2
django-celery==3.2.2
celery==4.1.0
Thanks for your help!
A bit late, but maybe you are still interested. I got Celery with Django and SQS running and don't see any errors in your code. Maybe you missed something in the celery.py file? Here is my code for comparison.
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoappname.settings')
# do not use namespace because default amqp broker would be called
app = Celery('lsaweb')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()
Have you also checked if SQS is getting messages (try polling in the SQS administration area)?
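For completeness, the Django/Celery docs pair that celery.py with an __init__.py in the same package so the app is loaded whenever Django starts and @shared_task binds to it. A minimal sketch, assuming the package is named djangoappname as in the snippet above:

# djangoappname/__init__.py -- minimal sketch; the package name is assumed from the snippet above.
from .celery import app as celery_app

__all__ = ('celery_app',)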

Celery worker does not consume messages

I'm using Celery 4.0.0 with RabbitMQ as the message broker in a Django 1.9 project, with django-celery-results as the result backend. I'm new to Celery and RabbitMQ. The Python version is 2.7.5.
After following the instructions in the Celery docs for configuring and using Celery with Django, and before adding any real tasks, I tried a simple task call from the Django shell (manage.py shell), sending the debug_task defined in the Celery docs.
The task is sent OK, and looking at the RabbitMQ queue, I can see a new message has arrived in the correct queue on the correct virtual host.
I run the worker and it looks like it starts OK, then it reaches the event loop and does nothing. No error is shown, neither in the worker output nor in the RabbitMQ logs.
On the other hand, celery status on the same machine reports that there are no active nodes.
I'm probably missing something here, but I don't know what it can be.
Don't know if this is relevant, but when I use 'celery purge' to clear the message queue, it finds the message and purges it.
Celery configuration settings as added to django settings.py:
CELERY_BROKER_URL = 'amqp://user1:passwd1@rabbithost:5672/exp'
CELERY_TIMEZONE = TIME_ZONE # Using django's TZ
CELERY_TASK_TRACK_STARTED = True
CELERY_RESULT_BACKEND = 'django-db'
Task invocation in django shell:
>>> from project.celery import debug_task
>>> debug_task
<@task: project.celery.debug_task of project:0x23cad10>
>>> r = debug_task.delay()
>>> r
<AsyncResult: 33031998-4cd8-4dfe-8e9d-bda9398525bb>
>>> r.status
u'PENDING'
Celery worker invocation:
% celery -A project worker -l info -Q celery
-------------- celery@super9 v4.0.0 (latentcall)
---- **** -----
--- * *** * -- Linux-3.10.0-327.4.5.el7.x86_64-x86_64-with-centos-7.2.1511-Core 2016-11-24 18:15:27
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: project:0x25931d0
- ** ---------- .> transport: amqp://user1:**@rabbithost:5672/exp
- ** ---------- .> results:
- *** --- * --- .> concurrency: 24 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. project.celery.debug_task
[2016-11-24 18:15:28,984: INFO/MainProcess] Connected to amqp://user1:**@rabbithost:5672/exp
[2016-11-24 18:15:29,009: INFO/MainProcess] mingle: searching for neighbors
[2016-11-24 18:15:30,035: INFO/MainProcess] mingle: all alone
/dir/project/devel/python/devel-1.9-centos7/lib/python2.7/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-11-24 18:15:30,072: WARNING/MainProcess] /dir/project/devel/python/devel-1.9-centos7/lib/python2.7/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2016-11-24 18:15:30,073: INFO/MainProcess] celery@super9 ready.
Checking rabbitmq queue:
% rabbitmqctl list_queues -p exp
Listing queues ...
celery 1
Celery status invocation while the worker is "ready":
% celery -A project status
Error: No nodes replied within time constraint.
Thanks.

Django 1.6 + RabbitMQ 3.2.3 + Celery 3.1.9 - why does my celery worker die with: WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV)

This seems to address a very similar issue, but doesn't give me quite enough insight: https://github.com/celery/billiard/issues/101 Sounds like it might be a good idea to try a non-SQLite database...
I have a straightforward celery setup with my django app. In my settings.py file I set a task to run as follows:
CELERYBEAT_SCHEDULE = {
    'sync_database': {
        'task': 'apps.data.tasks.celery_sync_database',
        'schedule': timedelta(minutes=5)
    }
}
I have followed the instructions here: http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
I am able to open two new terminal windows and run celery processes as follows:
ONE - the celery beat process which is required for scheduled tasks and will put the task on the queue:
PROMPT> celery -A myproj beat
celery beat v3.1.9 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://myproj@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]#%INFO
. maxinterval -> now (0s)
[2014-02-20 16:15:20,085: INFO/MainProcess] beat: Starting...
[2014-02-20 16:15:20,086: INFO/MainProcess] Writing entries...
[2014-02-20 16:15:20,143: INFO/MainProcess] DatabaseScheduler: Schedule changed.
[2014-02-20 16:15:20,143: INFO/MainProcess] Writing entries...
[2014-02-20 16:20:20,143: INFO/MainProcess] Scheduler: Sending due task sync_database (apps.data.tasks.celery_sync_database)
[2014-02-20 16:20:20,161: INFO/MainProcess] Writing entries...
TWO - the celery worker, which should take the task off the queue and run it:
PROMPT> celery -A myproj worker -l info
-------------- celery@Jons-MacBook.local v3.1.9 (Cipater)
---- **** -----
--- * *** * -- Darwin-13.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: myproj:0x1105a1050
- ** ---------- .> transport: amqp://myproj@localhost:5672//
- ** ---------- .> results: djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. apps.data.tasks.celery_sync_database
. myproj.celery.debug_task
[2014-02-20 16:15:29,402: INFO/MainProcess] Connected to amqp://myproj@127.0.0.1:5672//
[2014-02-20 16:15:29,419: INFO/MainProcess] mingle: searching for neighbors
[2014-02-20 16:15:30,440: INFO/MainProcess] mingle: all alone
[2014-02-20 16:15:30,474: WARNING/MainProcess] celery@Jons-MacBook.local ready.
When the task gets sent, however, it appears that about 50% of the time the worker runs the task and the other 50% of the time I get the following error:
[2014-02-20 16:35:20,159: INFO/MainProcess] Received task: apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25]
[2014-02-20 16:36:54,561: ERROR/MainProcess] Process 'Worker-4' pid:19500 exited with exitcode -11
[2014-02-20 16:36:54,580: ERROR/MainProcess] Task apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/Users/jon/dev/vpe/VAN/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
I am developing on a Macbook Pro running Mavericks.
Celery version 3.1.9
RabbitMQ 3.2.3
Django 1.6
Note that I am using django-celery 3.1.9 and have the djcelery app enabled.
When I switched from SQLite to PostgreSQL the problem disappeared.
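For reference, the switch amounts to pointing Django's DATABASES setting at PostgreSQL instead of SQLite. A hypothetical sketch only; the database name, user and password are placeholders, not values from the question:

# settings.py -- hypothetical sketch; name, user and password are placeholders.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'myproj',
        'USER': 'myproj_user',
        'PASSWORD': 'change-me',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}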

Django/Celery Quickstart example not working (worker is not executing any tasks)

I'm using Django/Celery Quickstart... or, how I learned to stop using cron and love celery, and it seems the jobs are getting queued, but never run.
tasks.py:
from celery.task.schedules import crontab
from celery.decorators import periodic_task

# this will run every minute, see http://celeryproject.org/docs/reference/celery.task.schedules.html#celery.task.schedules.crontab
@periodic_task(run_every=crontab(hour="*", minute="*", day_of_week="*"))
def test():
    print "firing test task"
So I run celery:
bash-3.2$ sudo manage.py celeryd -v 2 -B -s celery -E -l INFO
/scratch/software/python/lib/celery/apps/worker.py:166: RuntimeWarning: Running celeryd with superuser privileges is discouraged!
'Running celeryd with superuser privileges is discouraged!'))
-------------- celery@myserver v3.0.12 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: django://localhost//
- ** ---------- . app: default:0x12120290 (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events: ON
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. GotPatch.tasks.test
[2012-12-12 11:58:37,118: INFO/Beat] Celerybeat: Starting...
[2012-12-12 11:58:37,163: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 11:58:37,249: WARNING/MainProcess] /scratch/software/python/lib/djcelery/loaders.py:132: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn("Using settings.DEBUG leads to a memory leak, never "
[2012-12-12 11:58:37,348: WARNING/MainProcess] celery@myserver ready.
[2012-12-12 11:58:37,352: INFO/MainProcess] consumer: Connected to django://localhost//.
[2012-12-12 11:58:37,700: INFO/MainProcess] child process calling self.run()
[2012-12-12 11:58:37,857: INFO/MainProcess] child process calling self.run()
[2012-12-12 11:59:00,229: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 12:00:00,017: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 12:01:00,020: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 12:02:00,024: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
The tasks are indeed getting queued:
python manage.py shell
>>> from kombu.transport.django.models import Message
>>> Message.objects.count()
234
And the count increases over time:
>>> Message.objects.count()
477
There are no lines in the log file that seem to indicate the task is being executed. I'm expecting something like:
[... INFO/MainProcess] Task myapp.tasks.test[39d57f82-fdd2-406a-ad5f-50b0e30a6492] succeeded in 0.00423407554626s: None
Any suggestions how to diagnose / debug this?
I'm new to celery as well, but from the comments on the link you provided, it looks like there was an error in the tutorial. One of the comments points out:
At this command
sudo ./manage.py celeryd -v 2 -B -s celery -E -l INFO
You must add "-I tasks" to load the tasks.py file ...
Did you try that?
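An equivalent way to make the worker import the module is through settings rather than the command line. A small sketch, assuming the module is importable as "tasks" on the Python path (this is just an alternative to the -I flag, not something from the tutorial):

# settings.py -- alternative to passing "-I tasks" on the command line;
# assumes the module is importable as "tasks".
CELERY_IMPORTS = ("tasks",)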
You should check that you specify the BROKER_URL parameter inside Django's settings.py:
BROKER_URL = 'django://'
You should also check that your timezones in Django, MySQL and Celery match.
It helped me.
P.S.:
[... INFO/MainProcess] Task myapp.tasks.test[39d57f82-fdd2-406a-ad5f-50b0e30a6492] succeeded in 0.00423407554626s: None
This line means that your task was scheduled (not executed!).
Please check your config and I hope that it helps you.
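Putting that answer's two checks in one place, a hypothetical settings.py sketch (the timezone value is a placeholder, not taken from the question):

# settings.py -- hypothetical sketch combining both checks above.
BROKER_URL = 'django://'   # use the Django database transport as the broker
TIME_ZONE = 'UTC'          # placeholder; keep Django, MySQL and Celery on the same zone
CELERY_TIMEZONE = TIME_ZONE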
I hope someone can learn from my experience hacking on this.
After setting everything up according to the tutorial, I noticed that when I call
add.delay(4,5)
nothing happens. The worker did not receive the task (nothing was printed on stderr).
The problem was with the RabbitMQ installation. It turns out the default free disk space requirement is 1GB, which was way too much for my VM.
What put me on track was reading the RabbitMQ log file.
To find it I had to stop and start the RabbitMQ server:
sudo rabbitmqctl stop
sudo rabbitmq-server
RabbitMQ prints the log file location to the screen. In the file I noticed this:
=WARNING REPORT==== 14-Mar-2017::13:57:41 ===
disk resource limit alarm set on node rabbit#supporttip.
**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************
I then followed the instructions here in order to reduce the free disk limit:
Rabbitmq ignores configuration on Ubuntu 12
As a baseline I used the config file from git
https://github.com/rabbitmq/rabbitmq-server/blob/stable/docs/rabbitmq.config.example
The change itself:
{disk_free_limit, "50MB"}
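For context, in the classic rabbitmq.config format used by that example file, the setting sits inside the rabbit section. A minimal sketch of just that part (not the full example file):

%% rabbitmq.config -- minimal sketch showing where the setting goes.
[
  {rabbit, [
    {disk_free_limit, "50MB"}
  ]}
].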

Celery in Django (RabbitMQ vs. Django Database)

I am trying to set up Django with Celery so I can send bulk emails in the background.
I am a little confused about how the different components play into Celery. Do I need to use RabbitMQ? Can I just use "django-kombu" to run Celery? (http://ask.github.com/celery/tutorials/otherqueues.html#django-database)
I started with "First Steps with Django" in the django-celery docs (http://django-celery.readthedocs.org/en/latest/getting-started/first-steps-with-django.html), but when I get to "Running the celery worker server" this happens:
$ python manage.py celeryd -l info
[2011-09-02 18:35:00,150: WARNING/MainProcess]
-------------- celery@Sauls-MacBook.local v2.3.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqplib://guest@localhost:5672/
- ** ---------- . loader: djcelery.loaders.DjangoLoader
- ** ---------- . logfile: [stderr]#INFO
- ** ---------- . concurrency: 2
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
[Tasks]
. tasks.add
[2011-09-02 18:35:00,213: INFO/PoolWorker-2] child process calling self.run()
[2011-09-02 18:35:00,214: INFO/PoolWorker-1] child process calling self.run()
[2011-09-02 18:35:00,229: WARNING/MainProcess] celery@Sauls-MacBook.local has started.
[2011-09-02 18:35:00,276: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 2 seconds...
[2011-09-02 18:35:02,283: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 4 seconds...
Then I have to quit the process...
As I can see from your configuration, you didn't set the transport correctly; Celery is trying to use amqplib to connect to a broker like RabbitMQ:
broker: amqplib://guest@localhost:5672/
You should set the broker backend in settings.py like this:
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
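A minimal sketch of the surrounding settings.py pieces, assuming the django-kombu package is installed; adding it to INSTALLED_APPS and creating its tables is the usual django-kombu setup, not something stated in the question:

# settings.py -- minimal sketch; assumes the django-kombu package is installed.
INSTALLED_APPS += ('djkombu',)   # provides the database-backed queue tables
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
# afterwards run:  python manage.py syncdb   (creates the queue tables)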