I am trying to set up Django with Celery so I can send bulk emails in the background.
I am a little confused about how the different components fit into Celery. Do I need to use RabbitMQ? Can I just use "django-kombu" to run Celery? (http://ask.github.com/celery/tutorials/otherqueues.html#django-database)
I started with "First Steps with Django" in the django-celery docs (http://django-celery.readthedocs.org/en/latest/getting-started/first-steps-with-django.html), but when I get to "Running the celery worker server" this happens:
$ python manage.py celeryd -l info
[2011-09-02 18:35:00,150: WARNING/MainProcess]
-------------- celery@Sauls-MacBook.local v2.3.1
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqplib://guest@localhost:5672/
- ** ---------- . loader: djcelery.loaders.DjangoLoader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 2
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
[Tasks]
. tasks.add
[2011-09-02 18:35:00,213: INFO/PoolWorker-2] child process calling self.run()
[2011-09-02 18:35:00,214: INFO/PoolWorker-1] child process calling self.run()
[2011-09-02 18:35:00,229: WARNING/MainProcess] celery@Sauls-MacBook.local has started.
[2011-09-02 18:35:00,276: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 2 seconds...
[2011-09-02 18:35:02,283: ERROR/MainProcess] Consumer: Connection Error: [Errno 61] Connection refused. Trying again in 4 seconds...
Then I have to quit the process...
As I can see from your configuration, you didn't set the transport correctly; Celery is trying to use amqplib to connect to a broker like RabbitMQ:
broker: amqplib://guest@localhost:5672/
You should set the broker backend in settings.py like this:
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
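To make this work end to end, django-kombu itself has to be installed and registered; a minimal sketch of the relevant settings (assuming the django-kombu package from that era):

# settings.py sketch for the database-backed transport
import djcelery
djcelery.setup_loader()

INSTALLED_APPS = (
    # ... your other apps ...
    'djcelery',
    'djkombu',  # provides the database-backed queue tables
)

# Route Celery messages through the Django database instead of an AMQP broker
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"

After adding 'djkombu', run python manage.py syncdb so its message table is created, then start the worker again.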
Related
I have a Django project and have set up Celery + RabbitMQ to run heavy tasks asynchronously. When I call the task, the RabbitMQ admin shows it and Celery prints that the task is received, but the task is never executed.
Here is the task's code:
@app.task
def dummy_task():
    print("I'm Here")
    User.objects.create(username="User1")
    return "User1 Created!"
In this view I send the task to celery:
def task_view(request):
    result = dummy_task.delay()
    return render(request, 'display_progress.html', context={'task_id': result.task_id})
I run celery with this command:
$ celery -A proj worker -l info --concurrency=2 --without-gossip
This is the output of running Celery:
-------------- celery@DESKTOP-8CHJOEG v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Windows-10-10.0.19044-SP0 2022-08-22 10:10:04
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: proj:0x23322847880
- ** ---------- .> transport: amqp://navid:**@localhost:5672//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery

[tasks]
. proj.celery.debug_task
. entitymatching.tasks.create_and_learn_machine
. entitymatching.tasks.dummy_task
[2022-08-22 10:10:04,068: INFO/MainProcess] Connected to amqp://navid:**@127.0.0.1:5672//
[2022-08-22 10:10:04,096: INFO/MainProcess] mingle: searching for neighbors
[2022-08-22 10:10:04,334: INFO/SpawnPoolWorker-1] child process 6864 calling self.run()
[2022-08-22 10:10:04,335: INFO/SpawnPoolWorker-2] child process 12420 calling self.run()
[2022-08-22 10:10:05,134: INFO/MainProcess] mingle: all alone
[2022-08-22 10:10:05,142: WARNING/MainProcess] C:\Users\Navid\PycharmProjects\proj\venv\lib\site-packages\celery\fixups\django.py:203: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('''Using settings.DEBUG leads to a memory
[2022-08-22 10:10:05,142: INFO/MainProcess] celery@DESKTOP-8CHJOEG ready.
[2022-08-22 10:10:05,143: INFO/MainProcess] Task entitymatching.tasks.dummy_task[97f8a2eb-0006-4d53-ba6a-7b9f8649c84a] received
[2022-08-22 10:10:05,144: INFO/MainProcess] Task entitymatching.tasks.dummy_task[17190479-0784-46b1-8dc6-870ead41e9c6] received
[2022-08-22 10:11:36,384: INFO/MainProcess] Task proj.celery.debug_task[af3d633f-7b9a-4441-b375-9ce217a40ab3] received
But "I'm Here" is not printed, and User1 is not created.
RabbitMQ shows that there are 3 "unacked" messages in the queue.
You did not provide enough info, but I think you have a problem with your worker pool: on Windows the default prefork pool is known to receive tasks without ever executing them, since Celery no longer officially supports Windows.
Try adding
--pool=solo
at the end of your run command, so it looks like:
celery -A proj worker -l info --concurrency=2 --without-gossip --pool=solo
(The solo pool runs tasks in the main process, so --concurrency is effectively ignored here.)
In production, though, I recommend using gevent as your pool:
celery -A proj worker -l info --concurrency=2 --without-gossip --pool=gevent
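Note that the gevent pool is not bundled with Celery itself; assuming a standard pip-based environment, install it first:
pip install gevent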
I am trying to run a celery task in a Django view using my_task.delay(). However, the task is never executed, the code blocks on that line, and the view never renders. I am using AWS SQS as a broker with an IAM user with full access to SQS.
What am I doing wrong?
Running celery and Django
I am running celery like this:
celery -A app worker -l info
And I am starting my Django server locally in another terminal using:
python manage.py runserver
The celery command outputs:
-------------- celery@LAPTOP-02019EM6 v4.1.0 (latentcall)
---- **** -----
--- * *** * -- Windows-10-10.0.16299 2018-02-07 13:48:18
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: app:0x6372c18
- ** ---------- .> transport:   sqs://**redacted**:**@localhost//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF
--- ***** -----
-------------- [queues]
.> my-queue exchange=my-queue(direct) key=my-queue
[tasks]
. app.celery.debug_task
. counter.tasks.my_task
[2018-02-07 13:48:19,262: INFO/MainProcess] Starting new HTTPS connection (1): sa-east-1.queue.amazonaws.com
[2018-02-07 13:48:19,868: INFO/SpawnPoolWorker-1] child process 20196 calling self.run()
[2018-02-07 13:48:19,918: INFO/SpawnPoolWorker-4] child process 19984 calling self.run()
[2018-02-07 13:48:19,947: INFO/SpawnPoolWorker-3] child process 16024 calling self.run()
[2018-02-07 13:48:20,004: INFO/SpawnPoolWorker-2] child process 19572 calling self.run()
[2018-02-07 13:48:20,815: INFO/MainProcess] Connected to sqs://**redacted**:**#localhost//
[2018-02-07 13:48:20,930: INFO/MainProcess] Starting new HTTPS connection (1): sa-east-1.queue.amazonaws.com
[2018-02-07 13:48:21,307: WARNING/MainProcess] c:\users\nicolas\anaconda3\envs\djangocelery\lib\site-packages\celery\fixups\django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2018-02-07 13:48:21,311: INFO/MainProcess] celery#LAPTOP-02019EM6 ready.
views.py
from django.http import HttpResponse
from .tasks import my_task

def index(request):
    print('New request')  # This is called
    my_task.delay()
    # Never reaches here
    return HttpResponse('test')
tasks.py
...
@shared_task
def my_task():
    print('Task ran successfully')  # never prints anything
settings.py
My configuration is the following:
import djcelery
djcelery.setup_loader()
CELERY_BROKER_URL = 'sqs://'
CELERY_BROKER_TRANSPORT_OPTIONS = {
    'region': 'sa-east-1',
}
CELERY_BROKER_USER = '****************'
CELERY_BROKER_PASSWORD = '***************************'
CELERY_TASK_DEFAULT_QUEUE = 'my-queue'
Versions:
I use the following version of Django and Celery:
Django==2.0.2
django-celery==3.2.2
celery==4.1.0
Thanks for your help!
A bit late, but maybe you are still interested. I got Celery with Django and SQS running and don't see any errors in your code. Maybe you missed something in the celery.py file? Here is my code for comparison.
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoappname.settings')
# do not use namespace because default amqp broker would be called
app = Celery('lsaweb')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()
Have you also checked if SQS is getting messages (try polling in the SQS administration area)?
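One more thing to compare, in case it is missing on your side: the project package's __init__.py should also load the app, so that @shared_task decorators bind to it when Django starts. Mine looks like this:

# djangoappname/__init__.py
from .celery import app as celery_app

__all__ = ('celery_app',)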
This seems to address a very similar issue, but doesn't give me quite enough insight: https://github.com/celery/billiard/issues/101 Sounds like it might be a good idea to try a non-SQLite database...
I have a straightforward celery setup with my django app. In my settings.py file I set a task to run as follows:
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'sync_database': {
        'task': 'apps.data.tasks.celery_sync_database',
        'schedule': timedelta(minutes=5),
    },
}
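For completeness, the task that path points at is defined in apps/data/tasks.py roughly like this (the actual sync logic is omitted here):

# apps/data/tasks.py
from celery import shared_task

@shared_task
def celery_sync_database():
    # ... synchronize the database ...
    pass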
I have followed the instructions here: http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
I am able to open two new terminal windows and run celery processes as follows:
ONE - the celery beat process, which is required for scheduled tasks and puts the task on the queue:
PROMPT> celery -A myproj beat
celery beat v3.1.9 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://myproj@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]#%INFO
. maxinterval -> now (0s)
[2014-02-20 16:15:20,085: INFO/MainProcess] beat: Starting...
[2014-02-20 16:15:20,086: INFO/MainProcess] Writing entries...
[2014-02-20 16:15:20,143: INFO/MainProcess] DatabaseScheduler: Schedule changed.
[2014-02-20 16:15:20,143: INFO/MainProcess] Writing entries...
[2014-02-20 16:20:20,143: INFO/MainProcess] Scheduler: Sending due task sync_database (apps.data.tasks.celery_sync_database)
[2014-02-20 16:20:20,161: INFO/MainProcess] Writing entries...
TWO - the celery worker, which should take the task off the queue and run it:
PROMPT> celery -A myproj worker -l info
-------------- celery@Jons-MacBook.local v3.1.9 (Cipater)
---- **** -----
--- * *** * -- Darwin-13.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: myproj:0x1105a1050
- ** ---------- .> transport: amqp://myproj@localhost:5672//
- ** ---------- .> results: djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. apps.data.tasks.celery_sync_database
. myproj.celery.debug_task
[2014-02-20 16:15:29,402: INFO/MainProcess] Connected to amqp://myproj@127.0.0.1:5672//
[2014-02-20 16:15:29,419: INFO/MainProcess] mingle: searching for neighbors
[2014-02-20 16:15:30,440: INFO/MainProcess] mingle: all alone
[2014-02-20 16:15:30,474: WARNING/MainProcess] celery@Jons-MacBook.local ready.
When the task gets sent, however, it appears that about 50% of the time the worker runs the task and the other 50% of the time I get the following error:
[2014-02-20 16:35:20,159: INFO/MainProcess] Received task: apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25]
[2014-02-20 16:36:54,561: ERROR/MainProcess] Process 'Worker-4' pid:19500 exited with exitcode -11
[2014-02-20 16:36:54,580: ERROR/MainProcess] Task apps.data.tasks.celery_sync_database[960bcb6c-d6a5-4e32-8267-cfbe2b411b25] raised unexpected: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)
Traceback (most recent call last):
File "/Users/jon/dev/vpe/VAN/lib/python2.7/site-packages/billiard/pool.py", line 1168, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
I am developing on a MacBook Pro running Mavericks.
Celery version 3.1.9
RabbitMQ 3.2.3
Django 1.6
Note that I am using django-celery 3.1.9 and have the djcelery app enabled.
When I switched from SQLite to PostgreSQL the problem disappeared.
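For reference, the switch itself is just the DATABASES setting; a minimal sketch (database name and credentials are placeholders, not from the original post):

# settings.py: point Django at PostgreSQL instead of SQLite
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # psycopg2 backend (Django 1.6 era)
        'NAME': 'myproj',        # placeholder database name
        'USER': 'myproj_user',   # placeholder credentials
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

SQLite does not cope well with concurrent access from multiple forked worker processes, which would be consistent with the intermittent SIGSEGV above.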
Installation
I am using Django (1.4) and Celery (3.0.13) with RabbitMQ (v3.0.4); the backend DB is SQLite.
Celery was installed by pip install django-celery
Setting
In setting.py:
# For django-celery
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://user:pwd@sd5:5672/8086'
### and adding 'djcelery' to INSTALLED_APPS
Running
After setting up the database with South, I start rabbitmq-server and manage.py celery worker --loglevel=debug.
I could see the connection was established:
-------------- celery@sd5 v3.0.16 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://utils@sd5:5672/8086
- ** ---------- . app: default:0x8a5106c (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. utils.weixin.tasks.celery_add
[2013-03-19 19:50:00,460: WARNING/MainProcess] celery@sd5 ready.
[2013-03-19 19:50:00,483: INFO/MainProcess] consumer: Connected to amqp://utils@sd5:5672/8086.
[2013-03-19 19:50:00,498: DEBUG/MainProcess] consumer: Ready to accept tasks!
And in rabbit#sd5.log:
=INFO REPORT==== 19-Mar-2013::19:50:00 ===
accepting AMQP connection <0.1655.0> (127.0.0.1:50087 -> 127.0.0.1:5672)
Problem
Then I run my task utils.weixin.tasks.celery_add in manage.py shell:
>>> from utils.weixin.tasks import celery_add
>>> result = celery_add.delay(1,3)
>>> result.ready()
False
>>> result.get()
hangs here forever...
And nothing shows up in the Celery worker's log or in RabbitMQ's log, no 'received task' or anything.
It seems that calling the task does not communicate with the worker.
Question
What should I do to find out what has been done incorrectly, and how should I fix it?
Appreciated!
I am trying to configure a Django project to use Celery (I am using Django 1.3 on Debian Squeeze)
I installed django-celery (2.3.3) and then followed these instructions.
My django celery settings are the following:
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
When I try to launch the celery worker server with...
$ python manage.py celeryd -l info
I get the following output, with a "Consumer: Connection Error: [Errno 111]" at the end:
/home/thomas/virtualenv/ULYSSE/lib/python2.6/site-packages/djcelery/loaders.py:84: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn("Using settings.DEBUG leads to a memory leak, never "
[2011-09-20 12:14:00,645: WARNING/MainProcess]
-------------- celery@debian v2.3.3
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: amqp://guest@localhost:5672//
- ** ---------- . loader: djcelery.loaders.DjangoLoader
- ** ---------- . logfile: [stderr]@INFO
- ** ---------- . concurrency: 1
- ** ---------- . events: OFF
- *** --- * --- . beat: OFF
-- ******* ----
--- ***** ----- [Queues]
-------------- . celery: exchange:celery (direct) binding:celery
[Tasks]
. competitions.tasks.add
[2011-09-20 12:14:00,788: INFO/PoolWorker-1] child process calling self.run()
[2011-09-20 12:14:00,795: WARNING/MainProcess] celery@debian has started.
[2011-09-20 12:14:00,809: ERROR/MainProcess] Consumer: Connection Error: [Errno 111] Connection refused. Trying again in 2 seconds...
Apparently, my settings are correctly read (cf. the Configuration section in the output) and the worker process is correctly started ("celery@debian has started").
I cannot figure out why this "Consumer: Connection Error: [Errno 111]" error happens...
Does it have to do with the BROKER_USER and BROKER_PASSWORD settings?
I tried different settings for user/password (my account, root account...) but I always get the same error. Do BROKER_USER and BROKER_PASSWORD refer to an OS user, a database user, or a "broker" user?
How can I get rid of this Connection Error?
Looks like RabbitMQ isn't installed or running. Can you check this?
apt-get install rabbitmq-server
on Ubuntu
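If it is already installed, you can also verify that the broker is actually up before starting the worker again (assuming the standard Debian/Ubuntu service tooling):

sudo rabbitmqctl status             # prints node info when the broker is running
sudo service rabbitmq-server start  # starts the broker if it is stopped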