How to run a Celery worker on AWS Elastic Beanstalk? (Django)

Versions:
Django 1.9.8
celery 3.1.23
django-celery 3.1.17
Python 2.7
I'm trying to run my Celery worker on AWS Elastic Beanstalk, using Amazon SQS as the Celery broker.
Here is my settings.py:
INSTALLED_APPS += ('djcelery',)
import djcelery
djcelery.setup_loader()
BROKER_URL = "sqs://%s:%s@" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
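(For the SQS transport, the AWS region may also need to be set explicitly depending on the kombu/boto version; a minimal sketch of the extra setting, with the region name only as an example:)
BROKER_TRANSPORT_OPTIONS = {'region': 'eu-west-1'}  # example region; use the region your queue lives in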
When I run the command below in a terminal, it starts the worker on my local machine. I've also created a few tasks, and they execute correctly. How can I do the same on AWS EB?
python manage.py celery worker --loglevel=INFO
I found this question on Stack Overflow.
It says I should add a Celery config to the .ebextensions folder, which executes the script after deployment. But it doesn't work, and I'd appreciate any help. After installing supervisor, I didn't do anything else with it; maybe that's what I'm missing.
Here is the script.
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      command=/opt/python/run/venv/bin/celery worker --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      ; priority=998
      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
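A quick way to see what supervisor actually did with the worker on the instance, using the program name and log path from the script above:
supervisorctl -c /opt/python/etc/supervisord.conf status celeryd
tail -n 50 /var/log/celery-worker.log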
Logs from EB: it looks like the worker starts, but it still doesn't execute my tasks.
-------------------------------------
/opt/python/log/supervisord.log
-------------------------------------
2016-08-02 10:45:27,713 CRIT Supervisor running as root (no user in config file)
2016-08-02 10:45:27,733 INFO RPC interface 'supervisor' initialized
2016-08-02 10:45:27,733 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2016-08-02 10:45:27,733 INFO supervisord started with pid 2726
2016-08-02 10:45:28,735 INFO spawned: 'httpd' with pid 2812
2016-08-02 10:45:29,737 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:14,684 INFO stopped: httpd (exit status 0)
2016-08-02 10:47:15,689 INFO spawned: 'httpd' with pid 4092
2016-08-02 10:47:16,727 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:23,701 INFO spawned: 'celeryd' with pid 4208
2016-08-02 10:47:23,854 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:47:24,858 INFO spawned: 'celeryd' with pid 4214
2016-08-02 10:47:35,067 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:52:36,240 INFO stopped: httpd (exit status 0)
2016-08-02 10:52:37,245 INFO spawned: 'httpd' with pid 4460
2016-08-02 10:52:38,278 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:52:45,677 INFO stopped: celeryd (exit status 0)
2016-08-02 10:52:46,682 INFO spawned: 'celeryd' with pid 4514
2016-08-02 10:52:46,860 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:52:47,865 INFO spawned: 'celeryd' with pid 4521
2016-08-02 10:52:58,054 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:55:03,135 INFO stopped: httpd (exit status 0)
2016-08-02 10:55:04,139 INFO spawned: 'httpd' with pid 4745
2016-08-02 10:55:05,173 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:55:13,143 INFO stopped: celeryd (exit status 0)
2016-08-02 10:55:14,147 INFO spawned: 'celeryd' with pid 4857
2016-08-02 10:55:14,316 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:55:15,321 INFO spawned: 'celeryd' with pid 4863
2016-08-02 10:55:25,518 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)

I forgot to add an answer after solving this, so here is how I fixed it.
I created a new file, 99-celery.config, in my .ebextensions folder.
In this file I added the code below, and it works perfectly.
(Don't forget to change the project name in the command= line; mine is molocate_eb.)
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      ; Set full path to celery program if using virtualenv
      command=/opt/python/current/app/molocate_eb/manage.py celery worker --loglevel=INFO
      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10
      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600
      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true
      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998
      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
      then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
Edit: In case of a supervisor error on AWS, just make sure that:
You're using Python 2, not Python 3, since supervisor did not support Python 3 at the time.
You've added supervisor to your requirements.txt.
If it still errors (happened to me once), just 'Rebuild Environment' and it will probably work.
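For example, the relevant lines in requirements.txt might look roughly like this (the Celery pins are just the versions from this question; the supervisor pin is an example of a Python 2 compatible release):
celery==3.1.23
django-celery==3.1.17
supervisor==3.3.1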

You can use supervisor to run Celery; that will run Celery as a daemon process. The program section of the supervisor config looks roughly like this:
[program:celery]
; directory where the Django project lies
directory=/path/to/your/django/project
; command that runs celery
command=python manage.py celery worker --loglevel=INFO
stderr_logfile=/var/log/supervisord/celery-stderr.log
stdout_logfile=/var/log/supervisord/celery-stdout.log
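To pick up the new program, the conf file has to be saved somewhere supervisord includes (for example /etc/supervisor/conf.d/celery.conf on a stock install) and then loaded; a minimal sketch:
supervisorctl reread
supervisorctl update
supervisorctl start celery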

Related

How to configure Celery to run as systemd service with a Django application served by Gunicorn?

I followed the official Celery documentation on configuring Celery to work with Django (Python 3) and RabbitMQ. I already have a systemd service that starts my Django application with Gunicorn, and NGINX is used as a reverse proxy.
Now I need to daemonize Celery itself based on the official documentation, but my current settings don't seem to work properly: my application is not recognized, and I get the error below when starting the Celery systemd service:
# systemctl start celery-my_project
# journalctl -xe
Error:
Unable to load celery application
The module celery-my_project.celery was not found
Failed to start Celery daemon
As a test, I got rid of all the systemd/Gunicorn/NGINX pieces and simply started my virtualenv/Django application and a Celery worker manually; Celery tasks are properly detected by the worker:
celery -A my_project worker -l debug
How to properly configure systemd unit so that I can daemonize Celery?
Application service (systemd unit)
[Unit]
Description=My Django Application
After=network.target
[Service]
User=myuser
Group=mygroup
WorkingDirectory=/opt/my_project/
ExecStart=/opt/my_project/venv/bin/gunicorn --workers 3 --log-level debug --bind unix:/opt/my_project/my_project/my_project.sock my_project.wsgi:application
[Install]
WantedBy=multi-user.target
Celery service (systemd unit)
[Unit]
Description=Celery daemon
After=network.target
[Service]
Type=forking
User=celery
Group=mygroup
EnvironmentFile=/etc/celery/celery-my_project.conf
WorkingDirectory=/opt/my_project
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
Celery service configuration file (systemd EnvironmentFile)
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/opt/my_project/venv/bin/celery"
# App instance to use
CELERY_APP="my_project"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="DEBUG"
Django project layout
# Project root: /opt/my_project
my_project/
    manage.py
    my_project/
        __init__.py
        settings.py
        celery.py
    my_app/
        tasks.py
        forms.py
        models.py
        urls.py
        views.py
    venv/
__init__.py
from .celery import app as celery_app
__all__ = ('celery_app',)
celery.py
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE','my_project.settings')
app = Celery('my_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
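One way to narrow this down is to reproduce by hand exactly what the unit runs, with the same environment file, user and working directory (a sketch, assuming the paths above):
# Load the same environment the unit uses, then start a worker in the foreground
set -a; . /etc/celery/celery-my_project.conf; set +a
cd /opt/my_project
sudo -u celery "$CELERY_BIN" -A "$CELERY_APP" worker --loglevel=DEBUG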

Running celeryd-worker on elastic-beanstalk

I am trying to get Celery set up on Elastic Beanstalk to run periodic tasks. I have everything running except the celeryd-worker on Elastic Beanstalk. It deploys fine and registers tasks, but when I set up a periodic task, either in celery.py or through django-celery-beat, it doesn't execute. I pulled the logs from EB and got:
/opt/python/log/supervisord.log
-------------------------------------
2019-09-20 16:32:50,831 INFO spawned: 'celeryd-worker' with pid 712
2019-09-20 16:32:53,929 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:32:54,930 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 16:41:43,327 INFO stopped: celeryd-beat (exit status 0)
2019-09-20 16:41:44,333 INFO spawned: 'celeryd-beat' with pid 1664
2019-09-20 16:41:54,442 INFO success: celeryd-beat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2019-09-20 16:41:54,689 INFO spawned: 'celeryd-worker' with pid 1670
2019-09-20 16:41:57,844 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:41:58,847 INFO spawned: 'celeryd-worker' with pid 1676
2019-09-20 16:42:02,037 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:04,045 INFO spawned: 'celeryd-worker' with pid 1711
2019-09-20 16:42:05,271 INFO stopped: httpd (exit status 0)
2019-09-20 16:42:05,275 INFO spawned: 'httpd' with pid 1717
2019-09-20 16:42:07,241 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-09-20 16:42:07,898 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:11,041 INFO spawned: 'celeryd-worker' with pid 1841
2019-09-20 16:42:11,344 INFO stopped: celeryd-beat (exit status 0)
2019-09-20 16:42:12,349 INFO spawned: 'celeryd-beat' with pid 1845
2019-09-20 16:42:14,527 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:15,529 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 16:42:22,426 INFO success: celeryd-beat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2019-09-20 16:42:22,673 INFO spawned: 'celeryd-worker' with pid 1852
2019-09-20 16:42:25,760 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:26,763 INFO spawned: 'celeryd-worker' with pid 1857
2019-09-20 16:42:29,890 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:31,897 INFO spawned: 'celeryd-worker' with pid 1872
2019-09-20 16:42:35,010 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:38,360 INFO spawned: 'celeryd-worker' with pid 1879
2019-09-20 16:42:41,484 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 16:42:42,485 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 17:01:45,639 INFO spawned: 'celeryd-worker' with pid 3075
2019-09-20 17:01:46,392 INFO stopped: celeryd-beat (exit status 0)
2019-09-20 17:01:47,397 INFO spawned: 'celeryd-beat' with pid 3080
2019-09-20 17:01:47,924 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:01:49,695 INFO spawned: 'celeryd-worker' with pid 3085
2019-09-20 17:01:51,339 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:01:53,343 INFO spawned: 'celeryd-worker' with pid 3089
2019-09-20 17:01:54,890 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:01:57,894 INFO spawned: 'celeryd-worker' with pid 3094
2019-09-20 17:01:57,895 INFO success: celeryd-beat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2019-09-20 17:01:58,386 INFO stopped: celeryd-worker (terminated by SIGTERM)
2019-09-20 17:01:59,391 INFO spawned: 'celeryd-worker' with pid 3099
2019-09-20 17:02:00,957 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:02:01,958 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 17:02:03,508 INFO stopped: httpd (exit status 0)
2019-09-20 17:02:03,513 INFO spawned: 'httpd' with pid 3144
2019-09-20 17:02:04,678 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-09-20 17:02:08,612 INFO stopped: celeryd-beat (exit status 0)
2019-09-20 17:02:09,617 INFO spawned: 'celeryd-beat' with pid 3268
2019-09-20 17:02:20,058 INFO success: celeryd-beat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2019-09-20 17:02:20,306 INFO spawned: 'celeryd-worker' with pid 3274
2019-09-20 17:02:21,860 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:02:22,863 INFO spawned: 'celeryd-worker' with pid 3278
2019-09-20 17:02:24,429 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:02:26,437 INFO spawned: 'celeryd-worker' with pid 3291
2019-09-20 17:02:27,995 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:02:31,970 INFO spawned: 'celeryd-worker' with pid 3296
2019-09-20 17:02:33,535 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:02:34,537 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 17:17:05,637 INFO spawned: 'celeryd-worker' with pid 4245
2019-09-20 17:17:07,195 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:17:08,197 INFO spawned: 'celeryd-worker' with pid 4250
2019-09-20 17:17:09,750 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:17:11,757 INFO spawned: 'celeryd-worker' with pid 4268
2019-09-20 17:17:12,996 INFO stopped: httpd (exit status 0)
2019-09-20 17:17:13,667 INFO spawned: 'httpd' with pid 4274
2019-09-20 17:17:14,000 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:17:15,001 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-09-20 17:17:17,749 INFO spawned: 'celeryd-worker' with pid 4356
2019-09-20 17:17:19,939 INFO stopped: celeryd-worker (terminated by SIGTERM)
2019-09-20 17:17:20,944 INFO spawned: 'celeryd-worker' with pid 4400
2019-09-20 17:17:22,486 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:17:23,005 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 17:28:31,137 INFO stopped: celeryd-beat (exit status 0)
2019-09-20 17:28:32,142 INFO spawned: 'celeryd-beat' with pid 5121
2019-09-20 17:28:42,799 INFO success: celeryd-beat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2019-09-20 17:28:43,049 INFO spawned: 'celeryd-worker' with pid 5127
2019-09-20 17:28:44,655 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:28:45,745 INFO spawned: 'celeryd-worker' with pid 5145
2019-09-20 17:28:46,959 INFO stopped: httpd (exit status 0)
2019-09-20 17:28:46,963 INFO spawned: 'httpd' with pid 5151
2019-09-20 17:28:47,955 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:28:48,957 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-09-20 17:28:49,961 INFO spawned: 'celeryd-worker' with pid 5221
2019-09-20 17:28:52,555 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:28:53,967 INFO stopped: celeryd-beat (exit status 0)
2019-09-20 17:28:54,972 INFO spawned: 'celeryd-beat' with pid 5278
2019-09-20 17:28:55,975 INFO spawned: 'celeryd-worker' with pid 5282
2019-09-20 17:28:57,801 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:28:58,803 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
2019-09-20 17:29:05,811 INFO success: celeryd-beat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2019-09-20 17:29:06,058 INFO spawned: 'celeryd-worker' with pid 5304
2019-09-20 17:29:07,616 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:29:08,618 INFO spawned: 'celeryd-worker' with pid 5309
2019-09-20 17:29:10,142 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:29:12,149 INFO spawned: 'celeryd-worker' with pid 5322
2019-09-20 17:29:13,708 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:29:17,450 INFO spawned: 'celeryd-worker' with pid 5326
2019-09-20 17:29:18,998 INFO exited: celeryd-worker (exit status 1; not expected)
2019-09-20 17:29:20,000 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
It appears that the celery beat process is starting/restarting just fine, so the problem must be with the celery worker? I am very new to this, so I'm not sure where the problem is.
.ebextensions/files/celery_configuration.txt
#!/usr/bin/env bash
# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
# Create celery configuration script
celeryconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A backend -P solo --loglevel=INFO -n worker.%%h
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A backend --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | tee /opt/python/etc/celery.conf
# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | tee -a /opt/python/etc/supervisord.conf
echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
fi
# Reread the supervisord config
supervisorctl -c /opt/python/etc/supervisord.conf reread
# Update supervisord in cache without restarting all services
supervisorctl -c /opt/python/etc/supervisord.conf update
# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
02-python.config
packages:
  yum:
    libcurl-devel: []
container_commands:
  01_upgrade_pip_for_venv:
    command: "/opt/python/run/venv/bin/pip install --upgrade pip"
  04_celery_tasks:
    command: "cat .ebextensions/files/celery_configuration.txt > /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh && chmod 744 /opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
  05_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from celery.schedules import crontab
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')
app = Celery('backend')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
app.conf.beat_schedule = {
    'task_every_5_seconds': {  # name of the scheduler entry
        'task': 'add_new_task',  # task name which we have created in tasks.py
        'schedule': 10.0,  # set the period of running (seconds)
    },
}
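For the 'add_new_task' entry above to resolve, a task has to be registered under exactly that name; a minimal sketch of what tasks.py would need to contain (the body is only an illustration):
from celery import shared_task

@shared_task(name='add_new_task')
def add_new_task():
    # placeholder body; the real task logic goes here
    print('add_new_task ran')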
Here are a couple of things to check:
The start command for celery beat looks wrong if you are using Celery v4. According to the docs, you should be using something like the following (see the sketch after this list):
/opt/python/run/venv/bin/celery beat -A backend --loglevel=INFO --workdir=/tmp --scheduler django_celery_beat.schedulers:DatabaseScheduler --pidfile /tmp/celerybeat.pid
Make sure that you only have one beat process running across all of your Beanstalk instances (i.e., one beat instance per broker URL).
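With that in mind, the celeryd-beat program section generated by the script above would end up looking roughly like this (a sketch, assuming Celery v4 with django-celery-beat installed; the environment= line is still filled in by the deploy script as before):
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A backend --loglevel=INFO --workdir=/tmp --scheduler django_celery_beat.schedulers:DatabaseScheduler --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10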

Errors adding Environment Variables to NodeJS Elastic Beanstalk

My configuration worked up until yesterday. I have added the nginx NodeJS https redirect extension from AWS. Now, when I try to add a new Environment Variable through the Elastic Beanstalk configuration, I get this error:
[Instance: i-0364b59cca36774a0] Command failed on instance. Return code: 137 Output: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 27395 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}. Hook /opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
When I look at the eb-activity.log, I see this error:
[2018-02-18T17:24:58.762Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity...
[2018-02-18T17:24:58.939Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError)
caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)
What am I doing wrong? And what has changed recently, since this worked fine when I changed an environment variable a couple of months ago?
I had this problem as well and Amazon acknowledged the error in the documentation. This is a working restart script that you can use in your .ebextensions config file.
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
service nginx stop exits with status 137 (Killed).
Your script starts with: #!/bin/bash -xe
The parameter -e makes the script exit immediately whenever something exits with a non-zero status.
If you want to continue the execution, you need to catch the exit status (137).
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      set +e
      service nginx stop
      exitStatus=$?
      if [ $exitStatus -ne 0 ] && [ $exitStatus -ne 137 ]
      then
        exit $exitStatus
      fi
      set -e
    fi
    service nginx start
The order of events looks like this to me:
Create a post-deploy hook to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
Run a container command to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
Run the post-deploy hook, which tries to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
So it doesn't seem surprising to me that the post-deploy script fails as the file you are trying to delete probably doesn't exist.
I would try one of two things:
Move the deletion of the temporary conf file from the container command to the 99_kill_default_nginx.sh script, then remove the whole container command section.
Remove the line rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf from the 99_kill_default_nginx.sh script.
/sbin/status nginx seems not to work anymore. I updated the script to use service nginx status:
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=$(service nginx status)
    if [[ "$status" =~ "running" ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
And the faulty script is STILL in Amazon's docs... I wonder when they are going to fix it; it's been long enough already.

How can I set `celeryd` and `celerybeat` in Docker Compose?

I have a task that updates every minute.
This is my Dockerfile for the Django application.
FROM python:3-onbuild
COPY ./ /
EXPOSE 8000
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --noinput
ENTRYPOINT ["python3", "manage.py", "celeryd"]
ENTRYPOINT ["python3", "manage.py", "celerybeat"]
ENTRYPOINT ["/app/start.sh"]
This is my docker-compose.yml.
version: "3"
services:
  nginx:
    image: nginx:latest
    container_name: nginx_airport
    ports:
      - "8080:8080"
    volumes:
      - ./:/app
      - ./nginx:/etc/nginx/conf.d
      - ./static:/app/static
    depends_on:
      - web
  rabbit:
    hostname: rabbit_airport
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=asdasdasd
    ports:
      - "5673:5672"
  web:
    build: ./
    container_name: django_airport
    volumes:
      - ./:/app
      - ./static:/app/static
    expose:
      - "8080"
    links:
      - rabbit
    depends_on:
      - rabbit
This is the latest log output from my running containers.
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:30 ===
rabbit_1 | Starting RabbitMQ 3.6.12 on Erlang 19.2.1
rabbit_1 | Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbit_1 | Licensed under the MPL. See http://www.rabbitmq.com/
rabbit_1 |
rabbit_1 | RabbitMQ 3.6.12. Copyright (C) 2007-2017 Pivotal Software, Inc.
rabbit_1 | ## ## Licensed under the MPL. See http://www.rabbitmq.com/
rabbit_1 | ## ##
rabbit_1 | ########## Logs: tty
rabbit_1 | ###### ## tty
rabbit_1 | ##########
rabbit_1 | Starting broker...
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:30 ===
rabbit_1 | node : rabbit@rabbit_airport
rabbit_1 | home dir : /var/lib/rabbitmq
rabbit_1 | config file(s) : /etc/rabbitmq/rabbitmq.config
rabbit_1 | cookie hash : grcK4ii6UVUYiLRYxWUffw==
rabbit_1 | log : tty
rabbit_1 | sasl log : tty
rabbit_1 | database dir : /var/lib/rabbitmq/mnesia/rabbit@rabbit_airport
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Memory high watermark set to 3145 MiB (3298503884 bytes) of 7864 MiB (8246259712 bytes) total
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Enabling free disk space monitoring
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Disk free limit set to 50MB
rabbit_1 |
rabbit_1 | =INFO REPORT==== 29-Sep-2017::11:45:31 ===
rabbit_1 | Limiting to approx 1048476 file handles (943626 sockets)
Everything is okay except my Celery task is not running.
EDIT:
#!/bin/bash
# PENDING: From the source here,
# http://tutos.readthedocs.io/en/latest/source/ndg.html it says that it is a
# common practice to have a specific user to handle the webserver.
SCRIPT=$(readlink -f "$0")
DJANGO_SETTINGS_MODULE=airport.settings
DJANGO_WSGI_MODULE=airport.wsgi
NAME="airport"
NUM_WORKERS=3
if [ "$BASEDIR" = "/" ]
then
BASEDIR=""
else
BASEDIR=$(dirname "$SCRIPT")
fi
if [ "$BASEDIR" = "/" ]
then
VENV_BIN="venv/bin"
SOCKFILE="run/gunicorn.sock"
else
VENV_BIN=${BASEDIR}"/venv/bin"
SOCKFILE=${BASEDIR}"/run/gunicorn.sock"
fi
SOCKFILEDIR="$(dirname "$SOCKFILE")"
VENV_ACTIVATE=${VENV_BIN}"/activate"
VENV_GUNICORN=${VENV_BIN}"/gunicorn"
# Activate the virtual environment.
# Only set this for virtual environment.
#cd $BASEDIR
#source $VENV_ACTIVATE
# Set environment variables.
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$PYTHONPATH:$BASEDIR
# Create run directory if they does not exists.
#test -d $SOCKFILEDIR || mkdir -p $SOCKFILEDIR
# Start Gunicorn!
# Programs meant to be run under supervisor should not daemonize themselves
# (do not use --daemon).
#
# Set this for virtual environment.
#exec ${VENV_GUNICORN} ${DJANGO_WSGI_MODULE}:application \
# --bind=unix:$SOCKFILE \
# --name $NAME \
# --workers $NUM_WORKERS
# For non-virtual environment.
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
--bind=unix:$SOCKFILE \
--name $NAME \
--workers $NUM_WORKERS
Your entrypoints are overriding one another. The last entrypoint is the only one that will run. You can try to build and run the following to be sure.
FROM alpine
ENTRYPOINT ["echo", "1"]
ENTRYPOINT ["echo", "2"]
As described in the Docker docs, to start multiple services per container you can wrap the start commands in a wrapper script and run that script via CMD in the Dockerfile.
wrapper.sh
#!/bin/bash
# Start the worker and the beat scheduler in the background, then run the
# main start script in the foreground so the container stays alive
# (adjust the start.sh path if it lives somewhere else in your image).
python3 manage.py celeryd &
python3 manage.py celerybeat &
exec ./app/start.sh
FROM python:3-onbuild
COPY ./ /
EXPOSE 8000
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --noinput
ADD wrapper.sh wrapper.sh
CMD ./wrapper.sh
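Alternatively, and arguably more idiomatic for Compose, the worker and beat processes can run as separate services that reuse the same image, so no wrapper script is needed at all. A sketch of the extra entries to add under services: in the compose file above, assuming the same build context and rabbit service:
worker:
  build: ./
  command: python3 manage.py celeryd
  depends_on:
    - rabbit
beat:
  build: ./
  command: python3 manage.py celerybeat
  depends_on:
    - rabbit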

How to configure django-celery in heroku server

In my local environment I used Celery for scheduled tasks, and it works there with Redis as the broker.
Now I want to configure django-celery on a Heroku server.
I tried the heroku-redis add-on in my Heroku app and added this to my settings.py:
r = redis.from_url(os.environ.get("REDIS_URL"))
BROKER_URL = redis.from_url(os.environ.get("REDIS_URL"))
CELERY_RESULT_BACKEND = os.environ.get('REDIS_URL')
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Canada/Eastern'
redis_url = urlparse.urlparse(os.environ.get('REDIS_URL'))
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": "{0}:{1}".format(redis_url.hostname, redis_url.port),
        "OPTIONS": {
            "PASSWORD": redis_url.password,
            "DB": 0,
        }
    }
}
After that, in my Procfile I added:
web: gunicorn bizbii.wsgi --log-file -
worker : celery workder -A tasks.app -l INFO
python manage.py celeryd -v 2 -B -s celery -E -l INFO
But the task still does not run.
After that I ran the log command, and it returned:
2016-07-30T08:53:19+00:00 app[heroku-redis]: source=REDIS sample#active-connections=1 sample#load-avg-1m=0.07 sample#load-avg-5m=0.075 sample#load-avg-15m=0.07 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664876.0kB sample#memory-free=13426732.0kB sample#memory-cached=460140kB sample#memory-redis=299616bytes sample#hit-rate=1 sample#evicted-keys=0
After that I created a one-off dyno with this command:
heroku run bash -a bizbii2
and ran the following command:
python manage.py celeryd -v 2 -B -s celery -E -l INFO
It returned an error like:
[2016-08-03 08:23:26,506: ERROR/Beat] beat: Connection error: [Errno 111] Connection refused. Trying again in 8.0 seconds...
[2016-08-03 08:23:26,843: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 8.00 seconds...
Please give me a suggestion on how to deploy Celery on a Heroku server.
I had this exact problem. I updated my Procfile with the following line and the error is gone:
worker: celery -A TASKFILE worker -B --loglevel=info
Replace TASKFILE with, for example, proj.celery or proj.tasks, depending on where you put your Celery app and tasks.
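For completeness: the broker settings in the question also need to be plain URL strings rather than redis client objects, which may be why the worker fell back to the default amqp://guest@127.0.0.1:5672// broker seen in the error above. A minimal sketch of the relevant settings.py lines, assuming the REDIS_URL config var set by heroku-redis:
import os

# Celery expects URL strings here, not a redis.Redis client object
BROKER_URL = os.environ.get('REDIS_URL')
CELERY_RESULT_BACKEND = os.environ.get('REDIS_URL')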