django-celery process goes to 100% and tasks are not executed

I am using the latest version of django-celery with a RabbitMQ worker on Ubuntu 12.04 Server. I started having problems with celery tasks about a month ago and I cannot figure out what the problem is. I run celery in production with supervisord. I don't know why, but there are moments when some of the processes that run celery go to 100% CPU usage and stay there until I restart celery and kill the existing processes. When this happens the worker does not receive any more tasks and nothing is executed until I restart celery.
The command with which I run celery from supervisord is:
django-admin.py celeryd -v 2 -B -s celery -E -l INFO
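For context, a minimal supervisord program block wrapping this command might look like the following sketch; the program name, directory and user are placeholders, not taken from the question:
[program:celery]
command=django-admin.py celeryd -v 2 -B -s celery -E -l INFO
directory=/path/to/project        ; placeholder project directory
user=celeryuser                   ; placeholder unprivileged user
autostart=true
autorestart=true
stopwaitsecs=600                  ; give long-running tasks time to finish on stop
stopasgroup=true                  ; signal the whole process group, not just the parent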
Thank you for your help.

Related

Starting RabbitMQ and Celery for Django when launching Amazon Linux 2 instance on Elastic Beanstalk

I have been trying to set up celery to run tasks for my Django application on Amazon Linux 2. Everything worked on AL1, but things have changed.
When I SSH into the instance I can get everything running properly; however, the commands run on deployment do not work.
I have tried this in my .platform/hooks/postdeploy directory:
How to upgrade Django Celery App from Elastic Beanstalk Amazon Linux 1 to Amazon Linux 2
However, that does not seem to work. I have container commands to install EPEL, Erlang and RabbitMQ as the broker, and they seem to work.
In a comment on that answer, @Jota suggests:
"No, ideally it should go in the Procfile file in the root of your repository. Just write celery_worker: celery worker -A my_django_app.settings.celery.app --concurrency=1 --loglevel=INFO -n worker.%%h. Super simple."
However, would I include the entire script in the Procfile or just the line:
celery_worker: celery worker -A my_django_app.settings.celery.app --concurrency=1 --loglevel=INFO -n worker.%%h
This seems to suggest it would be just that one command:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html
Then where would the script be, if not in the Procfile with the above line?
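For reference, a minimal Procfile at the repository root would hold just that entry alongside Beanstalk's web process; the gunicorn line and WSGI path below are illustrative assumptions, while the worker line is the one from the quoted comment:
web: gunicorn --bind :8000 my_django_app.wsgi:application
celery_worker: celery worker -A my_django_app.settings.celery.app --concurrency=1 --loglevel=INFO -n worker.%%h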

Django: Do I have to restart celery beat, celery worker and Django gunicorn when new changes are uploaded to the production server

I have a production server running a Django application.
The Django server is run using gunicorn and nginx:
pipenv run gunicorn --workers=1 --threads=50 --bind 0.0.0.0:8888 boiler.wsgi:application
The celery worker is run using:
pipenv run celery -A boiler worker
Celery beat is run using:
pipenv run celery -A boiler beat
Now I have updated my models and a few views on the production server (i.e. pulled some changes from GitHub).
In order to reflect the changes, should I restart all three (celery beat, the celery worker and the Django gunicorn server),
or is restarting only the celery worker and gunicorn sufficient,
or is restarting gunicorn alone sufficient?
If you have made changes to any code that in one way or another affects the celery tasks, then yes, you should restart the celery worker. If you are not sure, a safe bet is to restart. And since celery beat tracks the scheduling of periodic tasks, you should also restart it if you restart the workers. Of course, you should ensure there are no tasks currently running, or properly kill them, before restarting. You can monitor the tasks using Flower.
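A sketch of one way to apply that, assuming the processes were started as shown in the question; the gunicorn PID file path is an assumption, since the command above does not set one:
pkill -TERM -f "celery -A boiler worker"   # warm shutdown: lets running tasks finish
pkill -TERM -f "celery -A boiler beat"
kill -HUP $(cat /path/to/gunicorn.pid)     # gunicorn reloads its workers (and your code) on HUP
pipenv run celery -A boiler worker &
pipenv run celery -A boiler beat &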

Executing two celery workers from one line

I am working on a university project within a team, where I mostly worked on the frontend and some basic Django models, so I am not very familiar with django-celery and I also did not set it up. At first we were using one celery worker, but I had to add another so that I could finish a user story.
I am currently running two workers, one per terminal, like this:
exec celery -A my_proj --concurrency=1 worker
exec celery -A my_proj --concurrency=1 worker -B -Q notification
While I run those two, my project works, but I need them to start from one line. So:
How do I get those two into one line for a script?
So far I've tried something along these lines:
exec celery multi start celery -A my_proj --concurrency=1 notification -A my_proj --concurrency=1 -B -Q notification
But it stops my project from functioning.
Any help is appreciated, thank you!
Solution
celery multi start 2 -A my_proj -c=1 -B:2 -Q:2 notification
The above starts 2 workers, with the 2nd worker processing the notification queue and running embedded celery beat.
Explanation
You can run the following to see the commands resulting from this:
celery multi show 2 -A my_proj -c=1 -B:2 -Q:2 notification
Output:
celery worker -A my_proj -c=1
celery worker -A my_proj -c=1 -B -Q notification
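For completeness, the same pair can later be stopped with celery multi as well; a sketch, assuming the default pid file locations in the same working directory:
celery multi stopwait 2 -A my_proj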
Alternatively, try running the two commands as one line, backgrounding the first worker:
celery -A my_proj --concurrency=1 worker & celery -A my_proj --concurrency=1 worker -B -Q notification

Celery worker exits with the shell

I use Celery with Django on a Debian server.
I connect to the server with PuTTY over SSH and start the celery worker with the following command:
celery -A django-project worker
Now I would like to close PuTTY, but then the celery worker apparently exits.
Why, and what can I do to keep the celery worker running?
Start celery daemonized:
celery multi start worker1 -A django-project
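The worker exits because it is a child of the SSH session and is sent SIGHUP when that session closes. If you prefer not to daemonize, keeping it alive with nohup (or a terminal multiplexer such as tmux) also works; the log path below is only an example:
nohup celery -A django-project worker > /home/me/celery-worker.log 2>&1 &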

Django celery WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL) error

What is the best practice for running celery as a daemon in a production virtualenv? I use the following in the local environment, where it works perfectly and receiving tasks works as expected. But in production it always gets stuck at:
WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL) error
I use the following configuration in local and in production:
/etc/default/celeryd:
CELERY_BIN="path/to/celery/bin"
CELERY_APP="myproj"
CELERYD_CHDIR="home/myuser/project/myproj"
CELERYD_OPTS="--time-limit=300 --concurrency=4"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="myuser"
CELERYD_GROUP="myuser"
CELERY_CREATE_DIRS=1
/etc/init.d/celeryd: the generic celeryd init script shipped with Celery.
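Since the goal is a production virtualenv, CELERY_BIN would normally point at the celery executable inside that virtualenv rather than a system-wide install; the path below is purely illustrative:
CELERY_BIN="/home/myuser/myproj-venv/bin/celery"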
Package & OS version info:
Ubuntu == 16.04.2
Celery == 4.1.0
rabbitmq == 3.5.7
django == 2.0.1
I also run these commands when setting celery up as a daemon:
sudo chown -R root:root /var/log/celery/
sudo chown -R root:root /var/run/celery/
sudo update-rc.d celeryd defaults
sudo update-rc.d celeryd enable
sudo /etc/init.d/celeryd start
Here is my Django settings.py configuration for celery:
CELERY_BROKER_URL = 'amqp://localhost'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_RESULT_BACKEND = 'db+sqlite:///results.sqlite'
CELERY_TASK_SERIALIZER = 'json'
I need expert advice on getting the celery daemon to work correctly in a production virtualenv. Thanks in advance!
Unless you've created a separate vhost & user for RabbitMQ, set CELERY_BROKER_URL to amqp://guest@localhost//.
Also, rather than root, you should set the owner of /var/log/celery/ and /var/run/celery/ to "myuser", as you have set in your celeryd config.
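A sketch of the corrected ownership commands, using the user from the config above:
sudo chown -R myuser:myuser /var/log/celery/
sudo chown -R myuser:myuser /var/run/celery/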
I guess this can be a symptom of OOM. I was deploying a Celery backend in a Docker container ("it worked on my machine", but not in the cluster). I allocated more RAM to the task and no longer have the problem.
I got this error due to an out-of-memory condition recorded in
/var/log/kern.log
I have TensorFlow running in one of my tasks, which needs additional computational power, but my physical memory (RAM) is not sufficient to handle that much load. It is strange that there is no log in celery other than the SIGKILL 9 error, but the kernel log helped me fix it.
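A quick way to confirm the OOM killer was involved, using the same kernel log mentioned above:
grep -i "killed process" /var/log/kern.log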