I have an upstart script as follows:
# Ubuntu upstart file at /etc/init/wso2am.conf
#!upstart
description "wso2am"
pre-start script
mkdir -p /var/log/wso2am/
end script
respawn
respawn limit 15 5
start on runlevel [2345]
stop on runlevel [06]
script
# Not sure why $HOME is needed, but we found that it is:
export JAVA_HOME="/usr/lib/jvm/jdk1.8.0_111"
#exec /usr/local/bin/node $JAVA_HOME/node/notify.js 13002 >> /var/log/node.log 2>&1
end script
And my systemd service file is also created as follows:
# this is /usr/lib/systemd/system/wso2am.service
# (or /lib/systemd/system/wso2am.service dependent on
# your linux distribution flavor )
[Unit]
Description=wso2am server daemon
Documentation=https://docs.wso2.com/
After==network.target wso2am.service
[Service]
# see man systemd.service
User=tel
Group=tel
TimeoutStartSec=0
Type=simple
KillMode=process
ExecStart= /bin/bash -lc '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh --start'
RemainAfterExit=true
ExecStop = /bin/bash -lc '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh --stop'
StandardOutput=journal
Restart = always
RestartSec=2
[Install]
WantedBy=default.target
I try to kill the process (wso2am):
ps -ef | grep wso2am
kill -9 <process_id>
But the process does not automatically respawn/restart. How can I check the auto-respawn mechanism in Ubuntu?
You can achieve this through systemd with your wso2am.service file modified as follows.
[Unit]
Description=wso2am server daemon
Documentation=https://docs.wso2.com/
After=network.target
[Service]
ExecStart=/bin/sh -c '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh start'
ExecStop=/bin/sh -c '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh stop'
# systemd has no ExecRestart directive; ExecReload can run the script's restart action
ExecReload=/bin/sh -c '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh restart'
PIDFile=/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/wso2carbon.pid
User=tel
Group=tel
Type=forking
Restart=always
RestartSec=2
StartLimitInterval=60s
StartLimitBurst=3
StandardOutput=journal
[Install]
WantedBy=multi-user.target
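After saving the modified unit file, reload systemd and (re)start the service so the new settings take effect:
sudo systemctl daemon-reload
sudo systemctl enable wso2am.service
sudo systemctl restart wso2am.service
systemctl status wso2am.service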
Now, to find the wso2am process, use the command below:
ps -ef | grep java
Then pick the PID for the wso2am java process and kill it.
kill -9 <wso2_server_PID>
Immediately run
ps -ef | grep java
again and you will see that the process is gone. Within 2 seconds (as we have specified RestartSec=2), the WSO2 server process will be back up and running with a different PID. This confirms that the WSO2 instance respawns on failure.
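You can also watch the kill and the automatic restart in the journal instead of grepping ps:
journalctl -u wso2am.service -f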
This tutorial covers deployment with Channels 1.x. However, it does not work with Channels 2.x. The failing part is the daemon script, which is as follows:
files:"/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
# Get django environment variables
djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
djangoenv=${djangoenv%?}
# Create daemon configuration script
daemonconf="[program:daphne]
; Set full path to channels program if using virtualenv
command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 <your_project>.asgi:channel_layer
directory=/opt/python/current/app
user=ec2-user
numprocs=1
stdout_logfile=/var/log/stdout_daphne.log
stderr_logfile=/var/log/stderr_daphne.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$djangoenv
[program:worker]
; Set full path to program if using virtualenv
command=/opt/python/run/venv/bin/python manage.py runworker
directory=/opt/python/current/app
user=ec2-user
numprocs=1
stdout_logfile=/var/log/stdout_worker.log
stderr_logfile=/var/log/stderr_worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$djangoenv"
# Create the supervisord conf script
echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf
# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
then
echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
fi
# Reread the supervisord config
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# Update supervisord in cache without restarting all services
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update
# Start/Restart processes through supervisord
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart worker
After deployment, AWS shows two errors in the logs: daphne: No such process and worker: No such process.
How should this script be changed so that it runs on Channels 2.x as well?
Thanks
I had the same error. In my case, the reason was that the supervisor process that runs these additional scripts wasn't picking up the Daphne process because of this line of code:
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
This checks if the supervisord.conf file has [include] present and adds the daemon process ONLY if there is no [include].
In my case I had a
[include]
celery.conf
in my supervisord file, which prevented this Daphne script from adding the daemon.conf.
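You can confirm what supervisord actually registered by querying it with the same config file the deploy script uses:
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf avail
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status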
There are a few things you can do:
If you have another script creating a .conf file, combine these into one using the same include logic
Rewrite the include logic to check for daemon.conf specifically (see the sketch after this list)
Manually add daemon.conf to the supervisord.conf by SSHing into your EC2 instance
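A minimal sketch of rewriting the include logic, assuming the same paths as the deploy script: it only appends a new [include] section when none exists, and otherwise tacks daemon.conf onto the existing files line.
if ! grep -Fq "daemon.conf" /opt/python/etc/supervisord.conf
then
  if grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
    # [include] already exists (e.g. for celery.conf): extend its files line
    sudo sed -i '/^files[ ]*[:=]/ s/$/ daemon.conf/' /opt/python/etc/supervisord.conf
  else
    echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
    echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
  fi
fi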
I have the following script that was given to help me run uWSGI in emperor mode.
https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-centos-7
I have tweaked it slightly, as shown below:
[Unit]
Description=uWSGI Emperor service
[Service]
ExecStartPre=/bin/bash -c 'mkdir -p /run/uwsgi; chown vagrant:www-data /run/uwsgi'
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
I needed to rewrite this for Ubuntu 14.04, so I wrote the following into /etc/init/uwsgi.conf:
description "uWSGI Emperor service"
## equivalent to wantedby=multi-user.target
start on runlevel [2345]
# create a directory needed by the daemon
pre-start exec /bin/bash -c 'mkdir -p /run/uwsgi; chown vagrant:www-data /run/uwsgi'
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
respawn
kill signal SIGQUIT
expect stop
What was expected:
/run/uwsgi will be created with ownership vagrant:www-data
it will have 2 .sock files, called djangoonpy2.sock and djangoonpy3.sock
What I actually saw:
/run/uwsgi was created with ownership vagrant:www-data
it only has 1 .sock file, called djangoonpy2.sock
I am running Ubuntu 14.04 in a Vagrant virtual machine.
Please advise.
I am starting gunicorn by calling python manage.py run_gunicorn (inside a virtualenv).
How can I get gunicorn to restart after my Ubuntu 12.04 server reboots?
You can use supervisor to launch your app on startup and restart it on crashes.
Install supervisor and create a config file /etc/supervisor/conf.d/your_app.conf with content like this:
[program:your_app]
directory=/path/to/app/working/dir
command=/path/to/virtualenv_dir/bin/python /path/to/manage_py/manage.py run_gunicorn
user=your_app_user
autostart=true
autorestart=true
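After saving the file, reload supervisor so it picks up the new program:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status your_app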
Since I'm on Ubuntu and like to work with tools already included in the distro, I used Upstart to start gunicorn after booting the machine.
I put the following code into /etc/init/django_sf_flavour.conf:
description "Run Custom Script"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/USER/bin/start_gunicorn.sh
This executes the following file (/home/USER/bin/start_gunicorn.sh) after booting:
#!/bin/bash
set -e
# MY_PROJ_ROOT and MY_PROJ_PATH are placeholders for your project's paths
cd MY_PROJ_ROOT
source venv/bin/activate
# LOGDIR must be set (e.g. exported earlier in this script) before this line
test -d $LOGDIR || mkdir -p $LOGDIR
exec python MY_PROJ_PATH/manage.py run_gunicorn
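Once both files are in place (and start_gunicorn.sh is executable), you can start the job and check it without rebooting:
sudo start django_sf_flavour
sudo status django_sf_flavour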
This is based on j7nn7k's answer, but with a few changes.
1 - Create a .conf file in the /etc/init directory to run with Upstart:
cd /etc/init
nano django_sf_flavour.conf
2 - Put the lines below in the django_sf_flavour.conf file and save it:
description "Run Custom Script"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 10 5
exec /home/start_gunicorn.sh
3 - Create a start_gunicorn.sh file in the /home directory with the following lines:
cd /home
nano start_gunicorn.sh
4 - Put this code in it and save it:
#!/bin/bash
set -e
cd mysite
source myenv/bin/activate
# LOGDIR must be set before this line
test -d $LOGDIR || mkdir -p $LOGDIR
exec gunicorn --bind 0.0.0.0:80 mysite.wsgi:application
5 - Make start_gunicorn.sh executable:
cd /home
chmod 775 start_gunicorn.sh
** extra **
Check the syntax of django_sf_flavour.conf with this command:
init-checkconf /etc/init/django_sf_flavour.conf
The output should look like this:
File /etc/init/django_sf_flavour.conf: syntax ok
You can see problems in the Upstart log file if needed:
cat /var/log/upstart/django_sf_flavour.log
Test your django_sf_flavour.conf file like this, without rebooting:
service django_sf_flavour start
Test your bash file "start_gunicorn.sh" like this:
cd /home
bash start_gunicorn.sh
Check the state of django_sf_flavour:
initctl list | grep django_sf_flavour
I have an nginx configuration specific to a project I'm currently working on (Django, to be precise).
It looks like the "right" way to start nginx on Ubuntu is
sudo /etc/init.d/nginx start
However, I want to supply a custom configuration file. Normally I'd do this in the following way:
sudo nginx -c /my/project/config/nginx.conf
Looking at the init.d/nginx file, it doesn't look like the start command passes in any arguments, so I can't do
sudo /etc/init.d/nginx start -c /my/project/config/nginx.conf
What's the best way to solve my problem?
init.d is not the right supervisor to use on Ubuntu anymore; you should use Upstart instead. Put this in /etc/init/nginx.conf and you will be able to start and stop it with sudo start nginx and sudo stop nginx:
description "nginx http daemon"
author "George Shammas"
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]
env DAEMON=/usr/local/nginx/sbin/nginx -c /my/project/config/nginx.conf
env PID=/usr/local/nginx/logs/nginx.pid
expect fork
respawn
respawn limit 10 5
pre-start script
    $DAEMON -t
    status=$?
    if [ $status -ne 0 ]; then
        exit $status
    fi
end script
exec $DAEMON
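With the job file saved, the standard Upstart commands apply; reload sends SIGHUP, which nginx treats as a request to re-read its configuration:
sudo start nginx
sudo status nginx
sudo reload nginx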
I really enjoy using upstart. I currently have upstart jobs to run different gunicorn instances in a number of virtualenvs. However, the 2-3 examples I found for Celery upstart scripts on the interwebs don't work for me.
So, with the following variables, how would I write an Upstart job to run django-celery in a virtualenv?
Path to Django Project:
/srv/projects/django_project
Path to this project's virtualenv:
/srv/environments/django_project
Path to celery settings is the Django project settings file (django-celery):
/srv/projects/django_project/settings.py
Path to the log file for this Celery instance:
/srv/logs/celery.log
For this virtual env, the user:
iamtheuser
and the group:
www-data
I want to run the Celery daemon with celerybeat, so the command I want to pass to django-admin.py (or manage.py) is:
python manage.py celeryd -B
It'll be even better if the script starts after the gunicorn job starts, and stops when the gunicorn job stops. Let's say the file for that is:
/etc/init/gunicorn.conf
You may need to add some more configuration, but this is an upstart script I wrote for starting django-celery as a particular user in a virtualenv:
start on started mysql
stop on stopping mysql
exec su -s /bin/sh -c 'exec "$0" "$@"' user -- /home/user/project/venv/bin/python /home/user/project/django_project/manage.py celeryd
respawn
It works great for me.
I know that it looks ugly, but it appears to be the current 'proper' technique for running upstart jobs as unprivileged users, based on this superuser answer.
I thought that I would have had to do more to get it to work inside of the virtualenv, but calling the python binary inside the virtualenv is all it takes.
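If you would rather have it follow the gunicorn job from the question (/etc/init/gunicorn.conf) instead of mysql, the same pattern works; a minimal sketch, assuming the job is simply named gunicorn:
start on started gunicorn
stop on stopping gunicorn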
Here is my working config using the newer systemd, running on Ubuntu 16.04 LTS. Celery is in a virtualenv. The app is Python/Flask.
Systemd file: /etc/systemd/system/celery.service
You'll want to change the user and paths.
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=nick
Group=nick
EnvironmentFile=-/home/nick/myapp/server_configs/celery_env.conf
WorkingDirectory=/home/nick/myapp
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
Environment file (referenced above): /home/nick/myapp/server_configs/celery_env.conf
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/nick/myapp/venv/bin/celery"
# App instance to use
CELERY_APP="myapp.tasks"
# How to call manage.py
CELERYD_MULTI="multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
# and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"
To automatically create the log and run folders with the correct permissions for your user, create a file in /usr/lib/tmpfiles.d. I was having trouble with the /var/run/celery folder being deleted on reboot, after which Celery could not start properly.
My /usr/lib/tmpfiles.d/celery.conf file:
d /var/log/celery 2775 nick nick -
d /var/run/celery 2775 nick nick -
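(If you would rather not wait for a reboot, the same entries can usually be applied right away by pointing systemd-tmpfiles at the file created above.)
sudo systemd-tmpfiles --create /usr/lib/tmpfiles.d/celery.conf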
To enable: sudo systemctl enable celery.service
Now you'll need to reboot your system for the /var/log/celery and /var/run/celery folders to be created. You can check to see if celery started after rebooting by checking the logs in /var/log/celery.
To restart celery: sudo systemctl restart celery.service
Debugging: tail -f /var/log/syslog and try restarting celery to see what the error is. It could be related to the backend or other things.
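The unit's own journal is often the quickest place to look as well:
sudo journalctl -u celery.service -e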
Hope this helps!