My goal is to run sudo inside one of the programs:
[program:doStaff]
command=sudo python manage.py doStaff
autostart=true
autorestart=true
stderr_logfile=/var/log/doStaff.err.log
stdout_logfile=/var/log/doStaff.out.log
Here is [unix_http_server] from supervisord.conf:
[unix_http_server]
file=/var/run/supervisor.sock
chmod=0770
Tried to set the supervisord user to root inside supervisord.conf but it didn't help.
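For reference, the pattern being attempted usually looks like the sketch below: supervisord itself is started as root, and the program is pinned to a user with the program-level user option (per-program user switching only works when supervisord runs as root). The user= line here is the assumption being tested:

[program:doStaff]
command=python manage.py doStaff   ; no sudo; supervisord already provides the privileges
user=root                          ; only honored when supervisord itself runs as root
autostart=true
autorestart=true
stderr_logfile=/var/log/doStaff.err.log
stdout_logfile=/var/log/doStaff.out.log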
I have spent the past few days implementing Channels into my Django app in order to use Channel's websocket support. My Django project is written in Python 3.4 and I am using Daphne and Channel's redis backend.
I have been able to get everything functioning locally by wrapping Supervisord in a Python2 virtualenv and using it to run scripts that start Daphne/Redis/workers inside a Python3 virtualenv, but have had no success deploying to our Elastic Beanstalk (Python 3.4) environment.
Is there any way to set up my EB deployment configs to run Supervisor in a Python2 virtualenv, like I can locally? If not, how can I go about getting Daphne, redis, and my workers up and running on EB deploy? I am willing to switch process managers if necessary, but have found Supervisor's syntax to be easier to understand/implement than Circus, and am not aware of any other viable alternatives.
With this configuration, I am able to successfully deploy to my EB environment and ssh into it, but Supervisor fails to start every process. If I start Supervisor manually to check on the processes, supervisorctl status gives me FATAL "Exited too quickly (process log may have details)" for everything I try to initialize. The logs are empty.
Channels backend config:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "ROUTING": "<app>.routing.channel_routing",
        "CONFIG": {
            "hosts": [
                os.environ.get('REDIS_URL', 'redis://localhost:6379')
            ],
        },
    },
}
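One quick sanity check for this config is to confirm the Redis endpoint behind REDIS_URL is actually reachable from the instance; a sketch assuming redis-cli is installed and the default host/port from the fallback above:

redis-cli -h localhost -p 6379 ping   # should print PONG if Redis is up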
asgi.py:
import os
from channels.asgi import get_channel_layer
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<app>.settings")
channel_layer = get_channel_layer()
supervisor conf (rest of the conf file was left default):
[program:Redis]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_redis.sh
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/redis.out.log
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_daphne.sh
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_worker.sh
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
.ebextensions/channels.config:
container_commands:
  01_start_supervisord:
    command: "sh /supervisord/start_supervisor.sh"
start_supervisor.sh:
#!/usr/bin/env bash
virtualenv -p /usr/bin/python2.7 /tmp/senv
source /tmp/senv/bin/activate
sudo pip install supervisor
sudo /usr/local/bin/supervisord -c /opt/python/current/app/<app>/supervisord.conf
supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status
start_redis.sh:
#!/usr/bin/env bash
sudo wget http://download.redis.io/releases/redis-3.2.8.tar.gz
sudo tar xzf redis-3.2.8.tar.gz
cd redis-3.2.8
sudo make
source /opt/python/run/venv/bin/activate
sudo src/redis-server
start_daphne.sh:
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <app>.asgi:channel_layer
start_worker.sh:
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate
python manage.py runworker
I was loosely following this guide, but since it was written for a Python 2 EB environment, it is really only good for the ALB setup and the base Supervisor configuration.
Thank you guys for reading this, and please let me know if I can provide anything else by way of code/output etc.
So, thanks to the logging advice from Berlin's answer and a virtual environment tweak suggested by the AWS support team (I forwarded this question along to them), I was finally able to get this to work.
First off, I ended up completely removing Redis from Supervisor, and instead chose to run an ElastiCache Redis instance which I then connected to my EB instance. I don't think this is the only way to handle this, but it was the best route for my implementation.
I then stopped using the pre-existing start_supervisor.sh script and instead added a command to the channels.config ebextension that creates the script and adds it to EB's post-deployment operations. This was necessary because .ebextensions config files are run during deployment but do not live past environment creation (this may not be entirely correct, but for the sake of this solution that is how I think of them), so even though my script was mostly correct, the Supervisor process it started would just die as soon as deployment was done.
So my .ebextensions/channels.config is now:
container_commands:
  01_create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      python --version > /tmp/version_check.txt
      sudo pip install supervisor
      /usr/local/bin/supervisord -c /opt/python/current/app/<app>/supervisord.conf
      supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status
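To verify the hook actually ran, one option is to ssh into the instance after a deploy and query Supervisor with the same config file the hook used:

sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/<app>/supervisord.conf status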
This alone was enough to get Supervisor running on EB deployment, but I had to make some more changes to get Daphne and my Django workers to stay alive:
start_daphne.sh:
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <app>.asgi:channel_layer
start_worker.sh:
#!/usr/bin/env bash
source /opt/python/run/venv/bin/activate && source /opt/python/current/env
python manage.py runworker
Adding && source /opt/python/current/env to the virtualenv activation command was suggested to me by AWS support, since environment variables are not pulled into virtualenvs automatically; this was causing Daphne and the workers to die on creation with ImportErrors.
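A quick way to confirm that fix is behaving is to source both files exactly as the start scripts do and check that the EB environment variables become visible (REDIS_URL is just the variable from this particular setup):

source /opt/python/run/venv/bin/activate && source /opt/python/current/env
env | grep REDIS_URL   # should print the Redis endpoint configured in EB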
I also made some changes to my supervisord.conf file:
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/<app>/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log
You said the logs are empty, which makes this hard to debug. Make sure you have logging set up in the main Supervisor config file, /etc/supervisord.conf, then see what the errors are and share them:
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
Also add error logging to each of the programs in your Supervisor conf file, then see what the errors are and share them:
command=sh /opt/python/current/app/<app>/start_redis.sh --log-file /path/to/your/logs/start_redis.log
stdout_logfile=/tmp/redis.out.log
stderr_logfile=/tmp/redis.err.log
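If the log files still come up empty, another option is to run supervisord in the foreground with its nodaemon flag so startup errors print straight to the terminal (path as in the question's setup):

sudo /usr/local/bin/supervisord -n -c /opt/python/current/app/<app>/supervisord.conf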
I wrote a long and detailed Google Doc on how to do this; I'm linking to it here:
https://docs.google.com/document/d/1naZsjO05Pa6uB9D-26eCeUVROZhz_MKkL8GI2knGLeo/edit?usp=sharing
For any moderators: the reason I'm not typing it all out here is that the guide is long and has a lot of images. I figured it was worth sharing, as it will help someone.
Everyone, I am trying to deploy a Django project using nginx and gunicorn, but when I run
sudo service gunicorn start
it says:
start: Job failed to start
Here is my setup.
The gunicorn.conf file (/etc/init/gunicorn.conf):
description "Gunicorn application server handling myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid user
setgid www-data
chdir /home/parallels/books/mysite
exec myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/parallels/books/mysite/mysite.sock mysite.wsgi:application
When I run
gunicorn --bind 0.0.0.0:8000 mysite.wsgi:application
I can see mysite running
but when I try
sudo service gunicorn start
it fails.
I was just following the instructions here.
Thank you very much to anyone who can reply.
I know this is too late, but in any case:
In your gunicorn.conf I see setuid user, but there is user parallels in your prompt (this depends on your bash settings, but it is probably the name of the current user).
There is chdir /home/parallels/books/mysite and then exec myprojectenv/bin/gunicorn, and I guess your gunicorn location is not /home/parallels/books/mysite/myprojectenv/bin/gunicorn, so you need to use the appropriate path to run gunicorn.
Check the gunicorn log file to find out the problem. It should be at /var/log/upstart/gunicorn.log.
You need to update setuid user to your user name, and also fix the path.
Check the exec command and its parameters; if there is any problem there, it will fail.
In my case I had missed bin after the venv (virtualenv) path, i.e.:
exec myprojectenv/bin/gunicorn --workers 3 --bind ...
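Putting these fixes together, a corrected /etc/init/gunicorn.conf might look like the sketch below; the absolute gunicorn path and the parallels user are assumptions based on the prompt and paths shown in the question:

description "Gunicorn application server handling myproject"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid parallels
setgid www-data
chdir /home/parallels/books/mysite
exec /home/parallels/books/mysite/myprojectenv/bin/gunicorn --workers 3 --bind unix:/home/parallels/books/mysite/mysite.sock mysite.wsgi:application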
I followed Setting up Django and your web server with uWSGI and nginx, and I successfully ran
sudo uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data
as mentioned in the Emperor mode section. Then I tried to run uWSGI as a daemon. Following Running uWSGI via Upstart, I tried this routine:
sudo mkdir /etc/init/uwsgi.conf
sudo vim /etc/init/uwsgi.conf
This is the uwsgi.conf file:
description "uWSGI Emperor"
start on runlevel [2345]
stop on runlevel [06]
respawn
exec uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data
I don't know what else I have missed, or whether there are errors in the configuration file. Either way, it doesn't work.
Edit /etc/rc.local and add:
/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data --master
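On Ubuntu the stock /etc/rc.local ends with exit 0, so the line has to go before that; a sketch of the resulting file:

#!/bin/sh -e
# rc.local: executed at the end of each multiuser runlevel
/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data --master
exit 0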
I am starting gunicorn by calling python manage.py run_gunicorn (inside a virtualenv).
How can I achieve to restart gunicorn after my Ubuntu 12.04 server rebooted?
You can use supervisor to launch your app on startup and restart on crashes.
Install supervisor and create a config file /etc/supervisor/conf.d/your_app.conf with content like this:
[program:your_app]
directory=/path/to/app/working/dir
command=/path/to/virtualenv_dir/bin/python /path/to/manage_py/manage.py run_gunicorn
user=your_app_user
autostart=true
autorestart=true
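After creating the file, Supervisor has to be told to pick it up; the standard supervisorctl sequence is:

sudo supervisorctl reread            # scan conf.d for new/changed config files
sudo supervisorctl update            # apply the changes and start your_app
sudo supervisorctl status your_app   # confirm the process is RUNNING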
Since I'm on Ubuntu and like to work with tools already included in the distro, I used Upstart to start gunicorn after booting the machine.
I put the following code into /etc/init/django_sf_flavour.conf:
description "Run Custom Script"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/USER/bin/start_gunicorn.sh
Which executes this file (/home/USER/bin/start_gunicorn.sh) after booting:
#!/bin/bash
set -e
# MY_PROJ_ROOT and MY_PROJ_PATH are placeholders for the project's real paths,
# and LOGDIR must be set in the environment before the directory check below
cd MY_PROJ_ROOT
source venv/bin/activate
test -d "$LOGDIR" || mkdir -p "$LOGDIR"
exec python MY_PROJ_PATH/manage.py run_gunicorn
This is based on j7nn7k's answer, but with a few changes.
1 - Create a .conf file in the /etc/init dir to run with Upstart:
cd /etc/init
nano django_sf_flavour.conf
2 - Put the lines below in the django_sf_flavour.conf file and save it:
description "Run Custom Script"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/start_gunicorn.sh
3 - Create a start_gunicorn.sh file in the home dir with the following lines:
cd /home
nano start_gunicorn.sh
4 - Put this code in it and save it:
#!/bin/bash
set -e
cd mysite
source myenv/bin/activate
test -d $LOGDIR || mkdir -p $LOGDIR
exec gunicorn --bind 0.0.0.0:80 mysite.wsgi:application
5 - Make start_gunicorn.sh executable:
cd /home
chmod 775 start_gunicorn.sh
** extra **
Check the syntax of django_sf_flavour.conf with this command:
init-checkconf /etc/init/django_sf_flavour.conf
The output should look like this:
File /etc/init/django_sf_flavour.conf: syntax ok
You can check the Upstart log file for problems if needed:
cat /var/log/upstart/django_sf_flavour.log
Test your django_sf_flavour.conf file like this, without rebooting:
service django_sf_flavour start
Test your bash file start_gunicorn.sh like this:
cd /home
bash start_gunicorn.sh
Check the state of django_sf_flavour:
initctl list | grep django_sf_flavour
I'm running celery with Django and it works great in development. But now I want to make it live on my production server, and I am running into some issues.
My setup is as follows:
Ubuntu
Nginx
Vitualenv
Upstart
Gunicorn
Django
I'm not sure how to start celery with Django when launching it with Upstart, or where it logs to.
I'm starting Django like this:
~$ cd /var/www/webapps/minamobime_app
~$ source ../bin/activate
exec /var/www/webapps/bin/gunicorn_django -w $NUM_WORKERS \
--user=$USER --group=$GROUP --bind=$IP:$PORT --log-level=debug \
--log-file=$LOGFILE 2>>$LOGFILE
How do I start celery?
exec python manage.py celeryd -E -l info -c 2
Consider configuring celery as a daemon. For logging, specify:
CELERYD_LOG_FILE="/var/log/celery/%n.log"
where %n will be replaced by the node name.
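For example, with celery's generic init scripts, the daemon settings live in /etc/default/celeryd; a minimal sketch, where every path and the app name are assumptions adapted to the setup in the question:

# /etc/default/celeryd
CELERYD_NODES="worker1"
CELERY_BIN="/var/www/webapps/bin/celery"          # celery binary inside the virtualenv
CELERY_APP="minamobime_app"
CELERYD_CHDIR="/var/www/webapps/minamobime_app"   # project directory
CELERYD_OPTS="-E -l info -c 2"                    # mirrors the flags above
CELERYD_LOG_FILE="/var/log/celery/%n.log"         # %n = node name
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"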
You can install supervisor using apt-get, and then add the following to a file named celeryd.conf (or any name you wish) in the /etc/supervisor/conf.d folder (create the conf.d folder if it is not present):
; ==================================
; celery worker supervisor example
; ==================================
[program:celery]
; Set full path to celery program if using virtualenv
command=/home/<path to env>/env/bin/celery -A <appname> worker -l info
;enter the directory in which you want to run the app
directory=/home/<path to the app>
user=nobody
numprocs=1
stdout_logfile=/home/<path to the log file>/worker.log
stderr_logfile=/home/<path to the log file>/worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 1000
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Also add the following lines to /etc/supervisor/supervisord.conf:
[include]
files = /etc/supervisor/conf.d/*.conf
Now start supervisor by typing supervisord in a terminal, and celery will start automatically according to the settings you made above.
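To confirm the worker really came up, you can ask both supervisor and celery itself (run the latter from the project directory, inside the virtualenv; <appname> as in the config above):

sudo supervisorctl status celery
celery -A <appname> status   # lists responding worker nodes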
You can run:
python manage.py celery worker
This will work if you have djcelery in your INSTALLED_APPS.