I currently use the following script to launch my Django FCGI servers:
#!/bin/bash
MYAPP=$1
PIDFILE=/var/run/${MYAPP}_fcgi.pid
SOCKET=/var/django/${MYAPP}/socket.sock
MANAGESCRIPT=/var/django/${MYAPP}/manage.py
# Maximum requests for a child to service before expiring
#MAXREQ=
# Spawning method - prefork or threaded
#METHOD=
# Maximum number of children to have idle
MAXSPARE=2
# Minimum number of children to have idle
MINSPARE=1
# Maximum number of children to spawn
MAXCHILDREN=3
cd "`dirname $0`"
function failure () {
STATUS=$?;
echo; echo "Failed $1 (exit code ${STATUS}).";
exit ${STATUS};
}
function start_server () {
$MANAGESCRIPT runfcgi socket=$SOCKET pidfile=$PIDFILE \
${MAXREQ:+maxrequests=$MAXREQ} \
${METHOD:+method=$METHOD} \
${MAXSPARE:+maxspare=$MAXSPARE} \
${MINSPARE:+minspare=$MINSPARE} \
${MAXCHILDREN:+maxchildren=$MAXCHILDREN} \
${DAEMONISE:+daemonize=True}
touch $SOCKET
chown www-data:www-data $SOCKET
chmod 755 $SOCKET
}
function stop_server () {
if [ -f "$PIDFILE" ]
then
kill `cat $PIDFILE` || failure "Server was not running."
rm $PIDFILE
fi
}
DAEMONISE=$3
case "$2" in
start)
echo -n "Starting fcgi: "
[ -e $PIDFILE ] && { echo "PID file exists."; exit; }
start_server || failure "starting fcgi"
echo "Done."
;;
stop)
echo -n "Stopping fcgi: "
[ -e $PIDFILE ] || { echo "No PID file found."; exit; }
stop_server
echo "Done."
;;
restart)
echo -n "Restarting fcgi: "
[ -e $PIDFILE ] || { echo -n "No PID file found..."; }
stop_server
start_server || failure "restarting fcgi"
echo "Done."
;;
*)
echo "Usage: $0 {start|stop|restart} [--daemonise]"
;;
esac
exit 0
Which I manually call like this:
/var/django/server.sh mysite start
This works fine, but when my hosting company reboots our server it leaves me with two issues:
I don't have an automated way to launch multiple sites.
I end up with a mysite_fcgi.pid file existing but no associated process.
So I have two questions:
How can I launch a list of sites (stored in a plain text file) automatically on startup? i.e. call /var/django/server.sh mysite1 start then /var/django/server.sh myothersite start?
How can I get rid of the .pid file if the process doesn't exist and attempt to start the server as normal?
Create an init script and assign it to the appropriate runlevel.
You would need to implement this (the stale .pid file handling) in the startup/init script that you write in step 1.
Or, use a process manager like supervisord, which takes care of all these concerns.
Here is an example fcgi-program configuration from the supervisord documentation.
[fcgi-program:fcgiprogramname]
command=/usr/bin/example.fcgi
socket=unix:///var/run/supervisor/%(program_name)s.sock
process_name=%(program_name)s_%(process_num)02d
numprocs=5
priority=999
autostart=true
autorestart=unexpected
startsecs=1
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=chrism
redirect_stderr=true
stdout_logfile=/a/path
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stderr_logfile=/a/path
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
environment=A=1,B=2
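Once a section like this is in your supervisord.conf (adjust the paths and program name to your setup), these stock supervisorctl commands will pick it up without restarting supervisord:
supervisorctl reread    # re-read config files and report what changed
supervisorctl update    # apply the changes (start the new fcgi group)
supervisorctl status    # confirm the processes are running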
How can I launch a list of sites (stored in a plain text file) automatically on startup?
In general, your OS provides a file where you can hook your commands at startup. For example, Arch Linux uses rc.local, Gentoo uses either /etc/local.start or /etc/local.d/*.start, and Debian requires you to write an init script - which is basically a script that takes "start" or "stop" as an argument and lives in /etc/init.d or /etc/rc.d, depending on the distribution ...
You can use some bash code like this:
for site in $(</path/to/text/file); do
/var/django/server.sh "$site" start
done
How can I get rid of the .pid file if the process doesn't exist and attempt to start the server as normal?
if [[ -f $PIDFILE ]]; then # if pidfile exists
if [[ ! -d /proc/$(<$PIDFILE)/ ]]; then # if it contains a non running proc
unlink $PIDFILE # delete the pidfile
fi
fi
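Putting the two together, here is a minimal sketch of a startup script you could hook into your init system. It assumes a hypothetical list file /var/django/sites.txt with one site name per line, reuses your server.sh, and clears stale pidfiles with the /proc check above:
#!/bin/bash
SITES_FILE=/var/django/sites.txt    # hypothetical: one site name per line

for site in $(<"$SITES_FILE"); do
    PIDFILE=/var/run/${site}_fcgi.pid
    # drop a pidfile left behind by an unclean reboot
    if [[ -f $PIDFILE && ! -d /proc/$(<"$PIDFILE") ]]; then
        unlink "$PIDFILE"
    fi
    /var/django/server.sh "$site" start
done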
Related
I'm working on adding Django Channels to my Elastic Beanstalk environment, but running into trouble configuring supervisord. Specifically, in /.ebextensions I have a file channels.config with this code:
container_commands:
01_copy_supervisord_conf:
command: "cp .ebextensions/supervisord/supervisord.conf /opt/python/etc/supervisord.conf"
02_reload_supervisord:
command: "supervisorctl -c /opt/python/etc/supervisord.conf reload"
This errors on the 2nd command with the following error message, shown via the Elastic Beanstalk CLI:
Command failed on instance. Return code: 1 Output: error: <class
'FileNotFoundError'>, [Errno 2] No such file or directory:
file: /opt/python/run/venv/local/lib/python3.4/site-
packages/supervisor/xmlrpc.py line: 562.
container_command 02_reload_supervisord in
.ebextensions/channels.config failed.
My guess would be that supervisor didn't install correctly, but because command 1 copies the files without an error, I'm led to think supervisor is indeed installed and the issue is with the container command. Has anyone implemented supervisor in an AWS environment who can see where I'm going wrong?
You should be careful about Python versions and exact installation paths.
Here is how I did it; maybe it can help.
packages:
yum:
python27-setuptools: []
container_commands:
01-supervise:
command: ".ebextensions/supervise.sh"
Here is the supervise.sh
#!/bin/bash
if [ "${SUPERVISE}" == "enable" ]; then
export HOME="/root"
export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
easy_install supervisor
cat <<'EOB' > /etc/init.d/supervisord
# Source function library
. /etc/rc.d/init.d/functions
# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
. /etc/sysconfig/supervisord
fi
# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0
start() {
echo -n $"Starting $prog: "
daemon --pidfile=${pidfile} $supervisord $OPTIONS
RETVAL=$?
echo
if [ $RETVAL -eq 0 ]; then
touch ${lockfile}
$supervisorctl $OPTIONS status
fi
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}
reload() {
echo -n $"Reloading $prog: "
LSB=1 killproc -p $pidfile $supervisord -HUP
RETVAL=$?
echo
if [ $RETVAL -eq 7 ]; then
failure $"$prog reload"
else
$supervisorctl $OPTIONS status
fi
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${pidfile} $supervisord
RETVAL=$?
[ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
;;
restart)
restart
;;
condrestart|try-restart)
if status -p ${pidfile} $supervisord >&/dev/null; then
stop
start
fi
;;
force-reload|reload)
reload
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
RETVAL=2
esac
exit $RETVAL
EOB
chmod +x /etc/init.d/supervisord
cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe@indeed.com>
# original work
# Erwan Queffelec <erwan.queffelec@gmail.com>
# adjusted to new LSB-compliant init script
# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars
# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid
# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord
# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl
# How long should we wait before forcefully killing the supervisord process ?
#STOP_TIMEOUT=60
# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB
mkdir -p /var/run/supervisord/
chown webapp: /var/run/supervisord/
cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:process-ipn-api-gpsfsoft]
command = --- the command that you want to run ---
directory = /var/app/current/
user = webapp
autorestart = true
startsecs = 0
numprocs = 10
process_name = --- the process name that you want ---
EOB
# this is now a little tricky, not officially documented, so might break but it is the cleanest solution
# first before the "flip" is done (e.g. switch between ondeck vs current) lets stop supervisord
echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
# then right after the webserver is reloaded, we can start supervisord again
echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
PS: You have to define SUPERVISE as enable in the Elastic Beanstalk environment variables to get this to run.
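One way to set that variable (you can also do it from the Elastic Beanstalk console; putting it in another .ebextensions file is just an example) is an option_settings block:
option_settings:
  aws:elasticbeanstalk:application:environment:
    SUPERVISE: enable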
I'm new to deploying, so this is probably a rookie mistake, but here goes.
I have a Rails 4 app that I'm deploying to a Linux server using a combination of Capistrano, Unicorn, and Nginx. The deploy script runs fine and the app is now reachable at the desired IP, so that's great. The thing is, a) Unicorn doesn't restart upon deployment (at least, the PIDs don't change) and b) not surprisingly, the new changes aren't reflected in the available app. I don't seem to be able to do anything other than completely stopping and restarting unicorn in order to refresh it. If I do this, then the changes are picked up, but this process is obviously not ideal.
Manually, if I run kill -s HUP $UNICORN_PID then the pids of the workers change but not the master, and changes aren't picked up (which, apparently they are supposed to be); using USR2 appears to have no effect on the current processes.
Here's the unicorn init script I'm using, based on suggestions from other stack overflow questions with similar problems:
#!/bin/sh
set -e
USAGE="Usage: $0 <start|stop|restart|upgrade|rotate|force-stop>"
# app settings
USER="deploy"
APP_NAME="app_name"
APP_ROOT="/path/to/$APP_NAME"
ENV="production"
# environment settings
PATH="/home/$USER/.rbenv/shims:/home/$USER/.rbenv/bin:$PATH"
CMD="cd $APP_ROOT/current && bundle exec unicorn -c config/unicorn.rb -E $ENV -D"
PID="$APP_ROOT/shared/pids/unicorn.pid"
OLD_PID="$PID.oldbin"
TIMEOUT=${TIMEOUT-60}
# make sure the app exists
cd $APP_ROOT || exit 1
sig () {
test -s "$PID" && kill -$1 `cat $PID`
}
oldsig () {
test -s $OLD_PID && kill -$1 `cat $OLD_PID`
}
case $1 in
start)
sig 0 && echo >&2 "Already running" && exit 0
echo "Starting $APP_NAME"
su - $USER -c "$CMD"
;;
stop)
echo "Stopping $APP_NAME"
sig QUIT && exit 0
echo >&2 "Not running"
;;
force-stop)
echo "Force stopping $APP_NAME"
sig TERM && exit 0
echo >&2 "Not running"
;;
restart|reload)
sig HUP && echo "reloaded $APP_NAME" && exit 0
echo >&2 "Couldn't reload, starting '$CMD' instead"
run "$CMD"
;;
upgrade)
if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
then
n=$TIMEOUT
while test -s $OLD_PID && test $n -ge 0
do
printf '.' && sleep 1 && n=$(( $n - 1 ))
done
echo
if test $n -lt 0 && test -s $OLD_PID
then
echo >&2 "$OLD_PID still exists after $TIMEOUT seconds"
exit 1
fi
exit 0
fi
echo >&2 "Couldn't upgrade, starting '$CMD' instead"
su - $USER -c "$CMD"
;;
rotate)
sig USR1 && echo rotated logs OK && exit 0
echo >&2 "Couldn't rotate logs" && exit 1
;;
*)
echo >&2 $USAGE
exit 1
;;
esac
Using this script, start and stop work as expected, but reload/restart do nothing (they print the expected output but don't change the running pids) and upgrade fails. According to the error log, it's because the first master is still running (ArgumentError: Already running on PID: $PID).
And here's my unicorn.rb:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
pid "#{app_path}/../../shared/pids/unicorn.pid"
# listen
listen "#{app_path}/../../shared/sockets/unicorn.sock", :backlog => 64
# logging
stderr_path "#{app_path}/../../shared/log/unicorn.stderr.log"
stdout_path "#{app_path}/../../shared/log/unicorn.stdout.log"
# workers
worker_processes 3
# use correct Gemfile on restarts
before_exec do |server|
ENV['BUNDLE_GEMFILE'] = "#{working_directory}/Gemfile"
end
# preload
preload_app false
before_fork do |server, worker|
old_pid = "#{app_path}/shared/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
# someone else did our job for us
end
end
end
after_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
end
end
Any help is very much appreciated, thanks!
It is hard to say for certain, since I haven't encountered this particular issue before, but my hunch is that this is your problem:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
Every time you deploy, Capistrano creates a new directory for your app at the location releases/<timestamp>. It then updates a current symlink to point at this latest release directory.
In your case, you may mistakenly be telling Unicorn to use releases/<timestamp> as its working_directory. (SSH to the server and check the contents of unicorn.rb to be certain.) Instead, what you should do is point to current. That way you don't have to stop and cold start unicorn to get it to see the new working directory.
# Since "current" is a symlink to the current release,
# Unicorn will always see the latest code.
working_directory "/var/www/my-app/current"
I suggest rewriting your unicorn.rb so that you aren't using relative paths. Instead hard-code the absolute paths to current and shared. It is OK to do this because those paths will remain the same for every release.
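For example, a minimal sketch (assuming /var/www/my-app is your deploy root; substitute your real path):
# unicorn.rb - absolute paths, so every release resolves to the same locations
app_root = "/var/www/my-app"
working_directory "#{app_root}/current"
pid "#{app_root}/shared/pids/unicorn.pid"
listen "#{app_root}/shared/sockets/unicorn.sock", :backlog => 64
stderr_path "#{app_root}/shared/log/unicorn.stderr.log"
stdout_path "#{app_root}/shared/log/unicorn.stdout.log"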
The line
ENV="production"
looks extremely suspicious to me. I suspect that it wants to be
RAILS_ENV="production"
Without this, won't Rails wake up not knowing which environment it is in?
I cannot run django-celery in daemon mode: it reports Received unregistered task of type 'djcelery_email_send', while it works when run with python manage.py celery worker. Please help.
I am running Celery 3.1, with django-celery and django-celery-email, on Ubuntu Server 14.04. My Django settings are in proj/settings/production.py.
django-celery is daemonised with the generic init.d init script.
/etc/init.d/celeryd is downloaded from https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd and it is as below:
#!/bin/sh -e
# ============================================
# celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts
### BEGIN INIT INFO
# Provides: celeryd
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#
#
# To implement separate init scripts, copy this script and give it a different
# name:
# I.e., if my new application, "little-worker" needs an init, I
# should just use:
#
# cp /etc/init.d/celeryd /etc/init.d/little-worker
#
# You can then configure this by manipulating /etc/default/little-worker.
#
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
echo "Error: This program can only be used by the root user."
echo " Unprivileged users must use the 'celery multi' utility, "
echo " or 'celery worker --detach'."
exit 1
fi
origin_is_runlevel_dir () {
set +e
dirname $0 | grep -q "/etc/rc.\.d"
echo $?
}
# Can be a runlevel symlink (e.g. S02celeryd)
if [ $(origin_is_runlevel_dir) -eq 0 ]; then
SCRIPT_FILE=$(readlink "$0")
else
SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}
# Make sure executable configuration script is owned by root
_config_sanity() {
local path="$1"
local owner=$(ls -ld "$path" | awk '{print $3}')
local iwgrp=$(ls -ld "$path" | cut -b 6)
local iwoth=$(ls -ld "$path" | cut -b 9)
if [ "$(id -u $owner)" != "0" ]; then
echo "Error: Config script '$path' must be owned by root!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with mailicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change ownership of the script:"
echo " $ sudo chown root '$path'"
exit 1
fi
if [ "$iwoth" != "-" ]; then # S_IWOTH
echo "Error: Config script '$path' cannot be writable by others!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
if [ "$iwgrp" != "-" ]; then # S_IWGRP
echo "Error: Config script '$path' cannot be writable by group!"
echo
echo "Resolution:"
echo "Review the file carefully and make sure it has not been "
echo "modified with malicious intent. When sure the "
echo "script is safe to execute with superuser privileges "
echo "you can change the scripts permissions:"
echo " $ sudo chmod 640 '$path'"
exit 1
fi
}
if [ -f "$CELERY_DEFAULTS" ]; then
_config_sanity "$CELERY_DEFAULTS"
echo "Using config script: $CELERY_DEFAULTS"
. "$CELERY_DEFAULTS"
fi
# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
CELERY_APP_ARG="--app=$CELERY_APP"
fi
CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}
# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
CELERYD_PID_FILE="$DEFAULT_PID_FILE"
CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
CELERY_CREATE_LOGDIR=1
fi
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}
export CELERY_LOADER
if [ -n "$2" ]; then
CELERYD_OPTS="$CELERYD_OPTS $2"
fi
CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi
check_dev_null() {
if [ ! -c /dev/null ]; then
echo "/dev/null is not a character device!"
exit 75 # EX_TEMPFAIL
fi
}
maybe_die() {
if [ $? -ne 0 ]; then
echo "Exiting: $* (errno $?)"
exit 77 # EX_NOPERM
fi
}
create_default_dir() {
if [ ! -d "$1" ]; then
echo "- Creating default directory: '$1'"
mkdir -p "$1"
maybe_die "Couldn't create directory $1"
echo "- Changing permissions of '$1' to 02755"
chmod 02755 "$1"
maybe_die "Couldn't change permissions for $1"
if [ -n "$CELERYD_USER" ]; then
echo "- Changing owner of '$1' to '$CELERYD_USER'"
chown "$CELERYD_USER" "$1"
maybe_die "Couldn't change owner of $1"
fi
if [ -n "$CELERYD_GROUP" ]; then
echo "- Changing group of '$1' to '$CELERYD_GROUP'"
chgrp "$CELERYD_GROUP" "$1"
maybe_die "Couldn't change group of $1"
fi
fi
}
check_paths() {
if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
create_default_dir "$CELERYD_LOG_DIR"
fi
if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
create_default_dir "$CELERYD_PID_DIR"
fi
}
create_paths() {
create_default_dir "$CELERYD_LOG_DIR"
create_default_dir "$CELERYD_PID_DIR"
}
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
_get_pidfiles () {
# note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}
_get_pids() {
found_pids=0
my_exitcode=0
for pidfile in $(_get_pidfiles); do
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
my_exitcode=1
else
found_pids=1
echo "$pid"
fi
done
if [ $found_pids -eq 0 ]; then
echo "${SCRIPT_NAME}: All nodes down"
exit $my_exitcode
fi
}
_chuid () {
su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}
start_workers () {
if [ ! -z "$CELERYD_ULIMIT" ]; then
ulimit $CELERYD_ULIMIT
fi
_chuid $* start $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
dryrun () {
(C_FAKEFORK=1 start_workers --verbose)
}
stop_workers () {
_chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers () {
_chuid restart $CELERYD_NODES $DAEMON_OPTS \
--pidfile="$CELERYD_PID_FILE" \
--logfile="$CELERYD_LOG_FILE" \
--loglevel="$CELERYD_LOG_LEVEL" \
$CELERY_APP_ARG \
$CELERYD_OPTS
}
kill_workers() {
_chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers_graceful () {
echo "WARNING: Use with caution in production"
echo "The workers will attempt to restart, but they may not be able to."
local worker_pids=
worker_pids=`_get_pids`
[ "$one_failed" ] && exit 1
for worker_pid in $worker_pids; do
local failed=
kill -HUP $worker_pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
one_failed=true
else
echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
check_status () {
my_exitcode=0
found_pids=0
local one_failed=
for pidfile in $(_get_pidfiles); do
if [ ! -r $pidfile ]; then
echo "${SCRIPT_NAME} down: no pidfiles found"
one_failed=true
break
fi
local node=`basename "$pidfile" .pid`
local pid=`cat "$pidfile"`
local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
echo "bad pid file ($pidfile)"
one_failed=true
else
local failed=
kill -0 $pid 2> /dev/null || failed=true
if [ "$failed" ]; then
echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
one_failed=true
else
echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
fi
fi
done
[ "$one_failed" ] && exit 1 || exit 0
}
case "$1" in
start)
check_dev_null
check_paths
start_workers
;;
stop)
check_dev_null
check_paths
stop_workers
;;
reload|force-reload)
echo "Use restart"
;;
status)
check_status
;;
restart)
check_dev_null
check_paths
restart_workers
;;
graceful)
check_dev_null
restart_workers_graceful
;;
kill)
check_dev_null
kill_workers
;;
dryrun)
check_dev_null
dryrun
;;
try-restart)
check_dev_null
check_paths
restart_workers
;;
create-paths)
check_dev_null
create_paths
;;
check-paths)
check_dev_null
check_paths
;;
*)
echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
exit 64 # EX_USAGE
;;
esac
exit 0
/etc/default/celeryd is as follows:
CELERYD_NODES="proj"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples):
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
export DJANGO_SETTINGS_MODULE='proj.settings.production'
export PYTHONPATH='$PYTHONPATH:/home/ubuntu/proj'
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/ubuntu/virtualenv/proj/bin/celery"
# App instance to use
# comment out this line if you don't use an app
#CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/proj/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
Error log snippet from /var/log/celery/proj.log:
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2016-06-07 07:22:10,699: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2016-06-07 07:22:10,708: INFO/MainProcess] mingle: searching for neighbors
[2016-06-07 07:22:11,715: INFO/MainProcess] mingle: all alone
[2016-06-07 07:22:11,722: WARNING/MainProcess] proj@ip-172-31-20-158 ready.
[2016-06-07 07:22:44,464: ERROR/MainProcess] Received unregistered task of type 'djcelery_email_send'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see for more information.
The full contents of the message body was:
{'errbacks': None, 'callbacks': None, 'retries': 0, 'taskset': None, 'expires': None, 'timelimit': (None, None), 'args': ([{'cc': [], 'to': ['cjh@yahoo.com', '649858321@qq.com', 'xu_jordan@163.com', '123456@qq.com'], 'alternatives': [], 'bcc': [], 'attachments': [], 'reply_to': [], 'headers': {}, 'subject': 'A show: i is on the way', 'from_email': 'no-reply@angelscity.co', 'body': 'i'}], {}), 'eta': None, 'task': 'djcelery_email_send', 'chord': None, 'kwargs': {}, 'id': '9b108b79-4ac9-4039-bca0-486c0241930f', 'utc': True} (587b)
Traceback (most recent call last):
File "/home/ubuntu/virtualenv/proj/lib/python3.4/site-packages/celery/worker/consumer.py", line 456, in on_task_received
strategies[name](message, body,
KeyError: 'djcelery_email_send'
I finally moved to supervisord and it works.
Here are the details for anybody who meets the same problem.
Install supervisor: sudo apt-get install supervisor (the Debian/Ubuntu package is named supervisor). There are other ways to install it, such as easy_install, but make sure your easy_install is for Python 2.x: supervisor only supports Python 2.4+, not Python 3.
Add celeryd.conf to /etc/supervisor/conf.d as follows:
[program:celery]
;command=<python path> <manage.py path> celery worker --settings=proj.settings
command=/home/username/virtualenvs/projenv/bin/python /home/username/proj/manage.py celery worker --settings=proj.settings
environment=PYTHONPATH='/home/username/virtualenvs/projenv'
directory=/home/username/virtualenvs/projenv
user=username
numprocs=1
stdout_logfile=/var/log/celeryd.log
stderr_logfile=/var/log/celeryd.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Reread and update supervisor: sudo supervisorctl reread && sudo supervisorctl update
Restart celery: sudo supervisorctl restart celery
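If it is useful, you can confirm the worker is up afterwards; the program name here matches the [program:celery] section above:
sudo supervisorctl status celery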
I've set up StrongLoop on an EC2 instance.
Everything is running well. I can access the API explorer.
I use StrongLoop Arc Composer to discover models in the local MySQL db and make them public. I can see the exposed models in the model-config.json file in my app server folder.
But the explorer is not refreshing. I can't see the new models in the explorer. The only solution I've found is to reboot the whole server, but I can't imagine that is the only way. Does anyone have a clue?
Thanks,
So the easiest solution I found is to kill the process and then relaunch it using the following command:
nohup service slc-initd start
Noting that my slc-initd is the following script in my init.d folder (no credit to me for this script):
#!/usr/bin/env bash
# chkconfig: 345 99 01
# description: startup of slc loopback
NAME="Init.d SLC"
NODE_BIN_DIR="/usr/bin"
NODE_PATH="/usr/lib/node_modules"
APPLICATION_DIRECTORY="/home/ec2-user/dev/mpos"
#APPLICATION_START="src/cluster-worker.js"
PIDFILE="/var/run/initd-example.pid"
LOGFILE="/var/log/slc-initd.log"
start() {
echo "Starting $NAME"
echo "cd $APPLICATION_DIRECTORY"
cd $APPLICATION_DIRECTORY
echo "slc run --pid $PIDFILE --log $LOGFILE"
slc run --pid $PIDFILE --log $LOGFILE
RETVAL=$?
}
stop() {
if [ -f $PIDFILE ]; then
echo "Shutting down $NAME"
echo "cd $APPLICATION_DIRECTORY"
cd $APPLICATION_DIRECTORY
echo "slc runctl stop"
slc runctl stop
# No need to get rid of the pidfile, slc does that for us.
RETVAL=$?
else
echo "$NAME is not running."
RETVAL=0
fi
}
restart() {
if [ -f $PIDFILE ]; then
echo "Restarting $NAME"
echo "cd $APPLICATION_DIRECTORY"
cd $APPLICATION_DIRECTORY
echo "slc runctl restart"
slc runctl restart
else
echo "$NAME isn't currently running. Starting from scratch ..."
start
fi
}
status() {
echo "Status for $NAME:"
cd $APPLICATION_DIRECTORY
slc runctl status
RETVAL=$?
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status
;;
restart)
restart
;;
*)
echo "Usage: {start|stop|status|restart}"
exit 1
;;
esac
exit $RETVAL
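For reference, since the script carries a chkconfig header, installing and enabling it on a Red Hat-style system typically looks like this (the service name slc-initd is just the one used above):
sudo cp slc-initd /etc/init.d/slc-initd
sudo chmod +x /etc/init.d/slc-initd
sudo chkconfig --add slc-initd
sudo chkconfig slc-initd on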
I have a problem with the stack trace (backtrace) printed by a daemon created for a C++ application. The service follows the /etc/init.d/ init-script architecture, and I have seen several services use the same approach to deliver applications as services on Linux. In this case, when I stop my service with service <name> stop, the program's stack trace is printed on the console. This backtrace contains the uncleaned memory map of the program.
I just need to avoid printing this on the terminal and save it to a separate file until I fix the memory issues in the program.
EDITED:
Can this be done by adding more commands to the init script?
Is there any compilation option to avoid printing the program stack trace after stopping the program?
Can a block of C/C++ source code be used to handle this?
Below is my /etc/init.d init script.
# Source function library.
. /etc/rc.d/init.d/functions
cscored=${CSCORED-/opt/application-name/bin/cscore &}
prog=cscored
pidfile=${PIDFILE-/var/run/cscored/cscored.pid}
lockfile=${LOCKFILE-/var/lock/subsys/cscored}
RETVAL=0
STOP_TIMEOUT=${STOP_TIMEOUT-10}
start() {
echo -n $"Starting $prog: "
daemon $cscored
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch ${lockfile}
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
killproc $cscored
RETVAL=$?
echo
[ $RETVAL = 0 ] && rm -f ${lockfile} ${pidfile}
}
reload() {
echo -n $"Reloading $prog: "
if ! $cscored -t >&/dev/null; then
RETVAL=6
echo $"not reloading due to configuration syntax error"
failure $"not reloading $cscored due to configuration syntax error"
else
# Force LSB behaviour from killproc
LSB=1 killproc -p ${pidfile} $cscored -HUP
RETVAL=$?
if [ $RETVAL -eq 7 ]; then
failure $"cscored shutdown"
fi
fi
echo
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status $cscored
RETVAL=$?
;;
restart)
stop
start
;;
condrestart|try-restart)
if status $cscored >&/dev/null; then
stop
start
fi
;;
force-reload|reload)
reload
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload|status|fullstatus|graceful|help|configtest}"
RETVAL=2
esac
exit $RETVAL
You can redirect the console output to another file, either by using a shell redirection/pipe in the init script or by calling dup() in the program itself.
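For instance, here is a sketch of the shell-redirection variant applied to your init script. Because the daemon inherits its file descriptors from start(), the redirect belongs there rather than in stop(). The log path is an example, and whether the backtrace actually reaches these descriptors depends on how your program emits it:
start() {
    echo -n $"Starting $prog: "
    # the daemon inherits these descriptors, so anything it prints at
    # shutdown (including the abort backtrace) lands in the log file
    daemon $cscored >> /var/log/cscored-console.log 2>&1
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch ${lockfile}
    return $RETVAL
}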