Here is how I start Celery periodic tasks. First I run this command:
celery worker -A my_project.celery
And after that this command:
celery -A my_project beat -l info -S django
After executing these two commands in two different terminal tabs, my celery beat periodic tasks start running. If I don't run one of the described commands, my periodic tasks do not run. My question is: is there any way to start Celery with a single command, or even better with the runserver command?
Your method of using Celery is correct. You can use the -B / --beat option to start beat and the worker with a single command:
# This will start worker AND beat process
celery worker --app=my_project -l INFO --beat -S django
But do not use this in production; see this note in the Celery docs (http://docs.celeryproject.org/en/latest/reference/celery.bin.worker.html):
-B is meant to be used for development purposes. For production environment, you need to start celery beat separately.
A few notes: 1) I think there is no way to run Celery and runserver together (and I honestly think it's not a good idea); 2) I see the django-celery tag in your question. That is an old and deprecated way of integrating Django and Celery:
THIS PROJECT IS ONLY REQUIRED IF YOU WANT TO USE DJANGO RESULT BACKEND
AND ADMIN INTEGRATION (Source: https://github.com/celery/django-celery)
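For reference, the supported integration nowadays is a plain celery.py module inside the project package; here is a minimal sketch, assuming your project package is my_project as in the question:

# my_project/celery.py -- minimal sketch of the standard (non django-celery) setup
import os

from celery import Celery

# Point Celery at the Django settings module before the app is created
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")

app = Celery("my_project")
# Read any CELERY_*-prefixed settings from Django's settings.py
app.config_from_object("django.conf:settings", namespace="CELERY")
# Discover tasks.py modules in all installed Django apps
app.autodiscover_tasks()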
Related
I am building a Django app with Celery. I tried writing a docker-compose file without a separate container for the worker. In my Dockerfile for Django, the entrypoint runs both the Celery worker and the Django app:
...
python manage.py migrate
celery -A api worker -l INFO --detach
python manage.py runserver 0.0.0.0:8000
Celery starts in this order, but the Django runserver never does. I have seen tutorials that separate the Django container from the worker container (or vice versa), but I do not see an explanation for this separation. I also observed that the two Python containers (django, worker) share the same volume. How can Celery add tasks if it has a different environment from Django? In my mind there would be two Django apps (the same volume) in two containers, only one running runserver and the other running the Celery worker. I do not understand the separation.
You should aim to set up your containers to run only a single foreground process in each container, and no background processes. Even in this simple example, there are two obvious advantages: if the Celery worker fails, you can restart a standalone container, but it's invisible to Docker as a background process; and you can separately read the docker logs of the Web server and background worker without having them intertwined. At larger scale you can imagine wanting to run different numbers of Django and Celery containers depending on your load.
To make this work it's important that the entrypoint script not run the program directly. It is passed the (possibly overridden) container command as arguments, and you can use a special shell construct to run that:
#!/bin/sh
./manage.py migrate
exec "$#"
In the Dockerfile, declare both the ENTRYPOINT and a default CMD to run, say, the Web server:
ENTRYPOINT ["./entrypoint.sh"] # probably unchanged, must be JSON array syntax
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]
In a Compose setup, you can run multiple containers off the same image, but override the command: for the Celery worker:
version: '3.8'
services:
  web:
    build: .
    ports: ['8000:8000']
    environment:
      REDIS_HOST: redis
  worker:
    build: .
    command: celery -A api worker -l INFO
    environment:
      REDIS_HOST: redis
  redis:
    image: redis
The main application communicates with the worker via a queue in Redis (or another store), so there's no need for them to be in the same container.
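The REDIS_HOST value from the Compose file only helps if the Django settings actually read it; here is a minimal sketch, assuming the broker URL is built from that environment variable (the setting names are the usual Celery ones, the variable name comes from the Compose file above):

# settings.py (sketch): both the web and worker containers get REDIS_HOST
# injected by docker-compose and build the same broker URL from it.
import os

REDIS_HOST = os.environ.get("REDIS_HOST", "localhost")
CELERY_BROKER_URL = f"redis://{REDIS_HOST}:6379/0"
CELERY_RESULT_BACKEND = CELERY_BROKER_URL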
As the Celery documentation mentions:
Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task the client adds a message to the queue, the broker then delivers that message to a worker.
Meaning the communication between the client (Django) and the worker (Celery) is done through a message queue. Hence it does not matter whether the workers and clients are in separate containers or even on separate machines. If the client can access the message queue (for example in Redis or RabbitMQ) and the worker can pop tasks from that queue, it will always work.
About the docker-compose part, there is no single standard for keeping Celery and Django together or separating them. You can put them in the same container or not; it is up to you and the requirements of the project. If you are using two containers, then they need to share volumes, because of the source code and any other data needed for executing tasks.
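To make the "shared queue" point concrete, here is a hedged sketch of a task and its caller; the module and task names are illustrative, not taken from the question:

# api/tasks.py (illustrative) -- the worker container imports and executes this
from celery import shared_task

@shared_task
def send_welcome_email(user_id):
    # ... send the email here ...
    return user_id

# Somewhere in a Django view (web container), calling .delay() only serializes
# the task name and arguments onto the broker queue; a worker picks it up later:
# send_welcome_email.delay(user_id=42)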
I am trying to extend my Django app with Celery crontab functionality. For this purpose I created a celery.py file where I put the code shown in the official documentation.
Here is the code from project/project/celery.py:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
Then inside my project/settings.py file I specify the Celery-related settings as follows:
from celery.schedules import crontab

CELERY_TIMEZONE = "Europe/Moscow"
CELERYBEAT_SHEDULE = {
    'test_beat_tasks': {
        'task': 'webhooks.tasks.adding',
        'schedule': crontab(minute='*/1'),
    },
}
Then I run the worker and celery beat in the same terminal with:
celery -A project worker -B
But nothing happened; I didn't see my celery beat task printing any output, while I expected my task webhooks.tasks.adding to execute.
Then I decided to check that the Celery configs were applied. For this purpose, in python manage.py shell I inspected the celery.app.conf object:
# import the app from the project.celery module
from project import celery
# then examine the app's configuration
celery.app.conf
And inside the huge output of Celery config values I saw that the timezone is set to None.
As I understand it, my problem is that the app initiated in project/celery.py is ignoring my project/settings.py CELERY_TIMEZONE and CELERY_BEAT_SCHEDULE settings, but why? What am I doing wrong? Please guide me.
After spending so much time researching this problem, I found that my mistake was in how I ran the worker and celery beat. Running the worker the way I did, it wouldn't show the task executing in the terminal. To see whether the task is executing, I should run it as follows: celery -A project worker -B -l INFO, or use DEBUG instead of INFO if you want more detailed output. Hope it helps someone.
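For reference, when config_from_object is called with namespace='CELERY', the setting names Celery looks for are CELERY_TIMEZONE and CELERY_BEAT_SCHEDULE; here is a minimal sketch of the settings in that form:

# settings.py (sketch) -- with namespace='CELERY' these exact names are read
from celery.schedules import crontab

CELERY_TIMEZONE = "Europe/Moscow"
CELERY_BEAT_SCHEDULE = {
    'test_beat_tasks': {
        'task': 'webhooks.tasks.adding',
        'schedule': crontab(minute='*/1'),
    },
}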
I don't think it's a very new question; I just could not find the right answer. I am trying to use Celery for background tasks while implementing a backend with the Django REST Framework. I have a Redis server.
Celery is working as expected with
celery worker -A my_project --loglevel=info
However, it does not work if I stop this command. How do I keep it running? I have found a blog post using supervisor. I just want to know the standard (as well as easier) way to do this.
What you should do is go for Docker and use docker-compose for your services. But if you're just testing stuff:
$ nohup celery worker -A my_project --loglevel=info &
& is used to take the process to the background, you can recall it using fg, suspend it to bg using Ctrl + Z, nohup makes sure that celery will remain functioning even if you close the ssh session.
Edit: The only drawback of this method is that if the process exits, you'll have to invoke it again. In a production environment, you should go for Docker with docker-compose.
I have a Django 1.6 app running on the Heroku cedar-14 platform. I send tasks to be performed by a worker via Celery 3.1.18 with the redis bundle and the Redis To Go (Micro) add-on. I have very few tasks and they complete in 3-5 seconds.
I send my tasks to the worker via Celery's delay() method. Unfortunately, the worker doesn't log reception of the task until 10-15 minutes later, after which the task completes in the expected 3-5 seconds. What might be causing this delay?
I don't see this delay on my development server and I have been using this app for a couple of years now -- this delay is a recent development. I did recently upgrade to Heroku's cedar-14 from cedar and to a larger Redis To Go plan, but I am uncertain if this delay is associated with these changes.
Update
It looks as if the task has to wait some time before a worker dyno runs it. In the past, a worker started running a task immediately when it was submitted.
This is the Procfile I use:
web: newrelic-admin run-program python manage.py run_gunicorn -b "0.0.0.0:$PORT" -t 30 -w 3
worker: newrelic-admin run-program celery -A myapp.celery:CELERY_APP -E -B --loglevel=INFO worker
So the question becomes: how do I return to the behavior where submitting a Celery task causes it to run immediately?
I suspect this issue applies: https://github.com/celery/celery/issues/2296
I'm developing a web application with Django which uses Celery to process asynchronous tasks, especially for transactional emails.
One of my email tasks is scheduled with the ETA option, but it's executed multiple times in parallel, resulting in a mail chain, which is very annoying. I can't figure out exactly why.
I checked my Django code twice and I'm sure that the task is published only once.
I'm using Redis as a broker/backend result.
My Celery daemon is hosted on Heroku and launched via this command:
python manage.py celeryd -E -B --loglevel=INFO
Thanks for your help.
EDIT: I found a valid solution here, thanks to a guy on the #celery IRC channel: http://loose-bits.com/2010/10/distributed-task-locking-in-celery.html
Have you checked the Ensuring a task is only executed one at a time docs?
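Here is a rough sketch of the cache-lock approach those docs and the linked article describe, assuming Django's cache is backed by a shared store such as Redis; the task and lock-key names are illustrative:

# tasks.py (sketch) -- only one copy of the task body runs for a given key
from celery import shared_task
from django.core.cache import cache

LOCK_EXPIRE = 60 * 10  # seconds; should exceed the longest expected run time

@shared_task(bind=True)
def send_transactional_email(self, email_id):
    lock_id = f"send-email-lock-{email_id}"
    # cache.add() is atomic: it only succeeds if the key does not exist yet,
    # so a duplicate delivery of the same task returns immediately.
    if not cache.add(lock_id, "locked", LOCK_EXPIRE):
        return
    try:
        pass  # ... actually build and send the email here ...
    finally:
        cache.delete(lock_id)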