Celery: how to stop tasks being processed on a development machine - Django

I have my production and development machines set up to use Celery with Django. Everything works. My issue is that when I boot up my development machine it starts consuming tasks, and if I have a bug (which I often do in my development version) this can give me lots of headaches.
I still want my development machine to send tasks to be consumed, but how do I stop it from processing tasks and leave that to my production server only?
PS: I don't want to have to remove the Celery settings on my development machine, as I still want my dev machine to send tasks to be consumed, if that makes sense.
These are my current settings...
Celery
BROKER_URL = 'amqp://admin:xyz@54.111.111.11:5672//'  # amqp://user:password@host:port/vhost
CELERY_SEND_TASK_ERROR_EMAILS = True
I'm using RabbitMQ.

You can try:
celery purge
And if you are using django-celery you can do:
python manage.py celery purge
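On newer Celery versions, where the app is passed explicitly with -A, the equivalent would be something along these lines (proj is a placeholder for your project's Celery app module):
celery -A proj purge
Keep in mind that purge only discards the messages currently sitting in the queue; it does not stop a running worker from consuming new ones.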

Related

Where to run Celery on AWS

In my Django web app I am running a calculation which takes 2-10 minutes, and the AWS server times out with a 504 error.
It seems that the best way to fix this problem is to implement a worker and offload the calculation. Through some research, it seems that Celery with a Redis server (or maybe AWS SQS instead?) is the best fit for Django.
However, I only see tutorials for instances that are run locally. The Redis server is hosted by Railway and Celery is run in a separate terminal from Django (on the same local machine). I'm curious if/where Celery and Redis should be running on AWS.
This answer says you should try to run Celery as a daemon in the background: AWS Elastic Beanstalk uses supervisord already to run some daemon processes, so you can leverage that to run celeryd and avoid creating a custom AMI for this; it works nicely for me. But don't the Celery and Redis servers still need to run somehow?
Where does the Celery server run?
How do I start it?
How can I leverage supervisord to run daemon processes? The documentation isn't helping me very much with AWS integration
You can configure a Procfile to run multiple processes (the main Django app, the Celery worker, and Celery beat) in parallel, as documented here:
web: <command to start your django app>
celery: celery -A <path_to_celery_app> worker
celery_beat: celery -A <path_to_celery_app> beat
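For example, assuming a project whose Celery application is defined in myproject/celery.py (myproject is a placeholder, not a name from the question), a concrete Procfile might look roughly like this:
web: gunicorn myproject.wsgi --log-file -
celery: celery -A myproject worker --loglevel=info
celery_beat: celery -A myproject beat --loglevel=info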

How to deploy a django project to google cloud with celery workers?

So I have a Django project for which I installed Celery and Heroku Redis, and I use Google Cloud for deployment. Everything works fine locally, but I need my Celery workers to run on the website 24/7. I looked into Supervisor and installed it too. I start Supervisor from my command line, and the Celery workers run as long as Supervisor is running. But there is a problem: I cannot keep my PC on all the time, and when I shut it down, Supervisor stops too. I haven't figured out Cloud Tasks either. Lastly, I read some information about Kubernetes and Celery. Is it possible to use Celery with Kubernetes, and how can I set up a Kubernetes + Celery + Django deployment?
You need to run your Django server with Gunicorn, your Redis instance as a separate service, and your Celery worker as a third service.
Alternatively, if you want a single container instance (a Pod in k8s), you can set up a supervisor to run Gunicorn and your Celery worker inside the same pod.
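A minimal supervisord sketch for that single-container variant could look something like this (myproject, /app, and the port are assumptions for illustration):
[program:gunicorn]
command=gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
directory=/app
autostart=true
autorestart=true

[program:celeryworker]
command=celery -A myproject worker --loglevel=info
directory=/app
autostart=true
autorestart=true
Redis itself would still run as its own service (for example a managed Redis instance) and be pointed to via the broker URL in your Django settings.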

Django+Celery+RabbitMQ, Does Celery need separate server in production?

I have recently learned to integrate Celery + RabbitMQ on my local machine, with all three (Django, Celery, and RabbitMQ) running as separate servers.
In production, should I run Celery and RabbitMQ on a separate server?
I have searched for Celery hosting options but couldn't find useful links.
I usually use PythonAnywhere to host my Django apps.
I found this answer (https://stackoverflow.com/a/46105723/10515390) useful, but it didn't fully clarify things for me.

Celery tasks are executed by different program binaries

I have a Django web application that executes tasks via celery. It is run with a combination of Apache, uWSGI, and Redis. For some reason, one of the tasks is being executed by the uWSGI server and the other is executed by the Python interpreter. This is causing permissions issues as uWSGI doesn't run as the same user as Python does.
What could cause the tasks to be run by different programs? Is there a setting somewhere?
Turns out, I needed to call the task with .delay() to get the Celery daemon to execute the task instead of uWSGI.
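For anyone hitting the same thing: calling the task function directly runs it synchronously inside whatever process made the call (the uWSGI worker here), while .delay() only enqueues a message that the Celery worker process then picks up and executes. A minimal sketch, with send_report as a hypothetical task:
# myapp/tasks.py
from celery import shared_task

@shared_task
def send_report(report_id):
    ...  # the actual work

# in a Django view or elsewhere:
from myapp.tasks import send_report

send_report(42)        # runs in-process, i.e. inside uWSGI
send_report.delay(42)  # runs in the Celery worker (daemon)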

Heroku, Django and celery on RabbitMQ

I'm building a Django project on Heroku.
I understand that gunicorn is recommended as the web server, so I need an event-loop type of worker, and I use gevent for that.
It seems that gevent monkey patching does most of the work for me so I can have concurrency, but how am I supposed to connect to RabbitMQ without real threads or jamming the whole loop?
I am baffled by this since Heroku themselves recommend gunicorn, celery and RabbitMQ but I don't see how all of these work together.
Do you understand that celery and gunicorn are used for different purposes?
Gunicorn is the web server responding to requests made by users, serving them web pages or JSON data.
Celery is an asynchronous task manager, i.e. it lets you run arbitrary Python code irrespective of web requests to your server.
Do you understand this distinction?
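To make the split concrete: on Heroku the web dyno runs gunicorn (with gevent workers) and a separate worker dyno runs the Celery worker, which is the process that holds the long-lived RabbitMQ connection and executes the tasks. The web process only publishes messages (e.g. via task.delay()), a short call that, with gevent monkey patching applied, won't block the event loop. A sketch of such a Procfile (myproject is a placeholder):
web: gunicorn myproject.wsgi --worker-class gevent
worker: celery -A myproject worker --loglevel=info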