Password Prompt not showing for sudo command in Capistrano3 - ruby-on-rails-4

I am running cap production deploy:setup_config and get the following output:
DEBUG [659d52b0] Running /usr/bin/env [ ! -d /home/deployer/.rbenv/versions/2.1.2 ] on xxx.xxx.xxx.xxx
DEBUG [659d52b0] Command: [ ! -d /home/deployer/.rbenv/versions/2.1.2 ]
DEBUG [659d52b0] Finished in 2.413 seconds with exit status 1 (failed).
DEBUG [2ea020b0] Running /usr/bin/env [ -f /etc/nginx/sites-enabled/default ] on xxx.xxx.xxx.xxx
DEBUG [2ea020b0] Command: [ -f /etc/nginx/sites-enabled/default ]
DEBUG [2ea020b0] Finished in 0.507 seconds with exit status 1 (failed).
No default Nginx Virtualhost to remove
INFO [e0a045fc] Running /usr/bin/env mkdir -p /home/deployer/apps/managewise_production/shared/config on xxx.xxx.xxx.xxx
DEBUG [e0a045fc] Command: ( RBENV_ROOT=/home/deployer/.rbenv RBENV_VERSION=2.1.2 /usr/bin/env mkdir -p /home/deployer/apps/managewise_production/shared/config )
INFO [e0a045fc] Finished in 0.405 seconds with exit status 0 (successful).
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/nginx.conf 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/nginx.conf 100.0%
INFO copying: #<StringIO:0x007fad45824cc8> to: /home/deployer/apps/managewise_production/shared/config/nginx.conf
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/database.example.yml 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/database.example.yml 100.0%
INFO copying: #<StringIO:0x007fad448839e0> to: /home/deployer/apps/managewise_production/shared/config/database.example.yml
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/log_rotation 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/log_rotation 100.0%
INFO copying: #<StringIO:0x007fad4413b2f0> to: /home/deployer/apps/managewise_production/shared/config/log_rotation
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/monit 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/monit 100.0%
INFO copying: #<StringIO:0x007fad46816898> to: /home/deployer/apps/managewise_production/shared/config/monit
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/unicorn.rb 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/unicorn.rb 100.0%
INFO copying: #<StringIO:0x007fad46ab7a20> to: /home/deployer/apps/managewise_production/shared/config/unicorn.rb
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh 100.0%
INFO copying: #<StringIO:0x007fad46a4f7e0> to: /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh
INFO [a1c87854] Running /usr/bin/env chmod +x /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh on xxx.xxx.xxx.xxx
DEBUG [a1c87854] Command: ( RBENV_ROOT=/home/deployer/.rbenv RBENV_VERSION=2.1.2 /usr/bin/env chmod +x /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh )
INFO [a1c87854] Finished in 0.483 seconds with exit status 0 (successful).
INFO [30551413] Running /usr/bin/env whoami && echo ${PATH} on xxx.xxx.xxx.xxx
DEBUG [30551413] Command: whoami && echo ${PATH}
INFO [30551413] Finished in 0.552 seconds with exit status 0 (successful).
DEBUG [30551413] deployer
DEBUG [30551413] /home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/bin:/bin:/usr/bin:/home/deployer/bin
INFO [30551413] Finished in 0.552 seconds with exit status 0 (successful).
INFO [60b61747] Running /usr/bin/env sudo ln -nfs /home/deployer/apps/managewise_production/shared/config/nginx.conf /etc/nginx/sites-enabled/ on xxx.xxx.xxx.xxx
DEBUG [60b61747] Command: ( RBENV_ROOT=/home/deployer/.rbenv RBENV_VERSION=2.1.2 /usr/bin/env sudo ln -nfs /home/deployer/apps/managewise_production/shared/config/nginx.conf /etc/nginx/sites-enabled/ )
DEBUG [60b61747] [sudo] password for deployer:
Now when I enter the password, it is echoed to the terminal, and the command doesn't even work.
The task is as follows: lib/capistrano/tasks/setup_config.cap
namespace :deploy do
  task :setup_config do
    on roles(:app) do
      # make the config dir
      execute :mkdir, "-p #{shared_path}/config"
      full_app_name = fetch(:full_app_name)
      # config files to be uploaded to shared/config, see the
      # definition of smart_template for details of operation.
      # Essentially looks for #{filename}.erb in deploy/#{full_app_name}/
      # and if it isn't there, falls back to deploy/#{shared}. Generally
      # everything should be in deploy/shared with params which differ
      # set in the stage files
      config_files = fetch(:config_files)
      config_files.each do |file|
        smart_template file
      end
      # which of the above files should be marked as executable
      executable_files = fetch(:executable_config_files)
      executable_files.each do |file|
        execute :chmod, "+x #{shared_path}/config/#{file}"
      end
      # symlink stuff which should be... symlinked
      symlinks = fetch(:symlinks)
      symlinks.each do |symlink|
        execute "whoami && echo ${PATH}"
        sudo "ln -nfs #{shared_path}/config/#{symlink[:source]} #{sub_strings(symlink[:link])}"
      end
    end
  end
end
In deploy.rb I have the following regarding ssh:
set :pty, true
set :ssh_options, { forward_agent: true, :keys => %w(/home/SMD/.ssh/id_rsa)}
set :use_sudo, false
set :default_shell, '/bin/bash --login'
I can't figure out what the problem is.
I'm also concerned about the failed checks, e.g.:
DEBUG[659d52b0] Command: [ ! -d /home/deployer/.rbenv/versions/2.1.2 ]
DEBUG[659d52b0] Finished in 2.413 seconds with exit status 1 (failed).
Thanks in advance.

Try adding something like this to your deploy.rb file (or the corresponding deploy environment file):
set :password, ask('Server password', nil)
server 'xxx.xxx.xx.xx', user: 'root', port: 22, password: fetch(:password), roles: %w{web app db}, primary: true
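If you control the server, another common approach (not part of the original answer; a minimal sketch assuming your deploy user is called deployer and that ln is the only privileged command the task needs, as in the output above) is to grant passwordless sudo for exactly that command, so no prompt appears at all:

# Hypothetical /etc/sudoers.d/deployer -- edit it safely with:
#   sudo visudo -f /etc/sudoers.d/deployer
# Lets the deployer user run ln via sudo without a password, which is
# all the symlink step of deploy:setup_config uses.
deployer ALL=(ALL) NOPASSWD: /bin/ln

With a rule like that in place, the sudo call in the task runs non-interactively, so neither the echoed password nor the :pty setting matters for that step.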

Related

How to set environment variables in AWS CodeBuild for Windows 2019 at build time?

It seems that in AWS CodeBuild, variables are not propagated between commands in the Windows 2019 environment.
With this buildspec.yml:
version: 0.2
env:
  variables:
    MY_VAR_0: $(git log -n 1 --date=short --pretty=format:%cd_%h)
phases:
  build:
    commands:
      - $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
      - Get-ChildItem Env:MY_VAR_*
      # build commands here
artifacts:
  name: $MY_VAR_0
I get the following in the logs:
[Container] 2020/12/14 11:41:27 Entering phase BUILD
[Container] 2020/12/14 11:41:27 Running command $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
[Container] 2020/12/14 11:41:27 Running command Get-ChildItem Env:MY_VAR_*
Name Value
---- -----
MY_VAR_0 $(git log -n 1 --date=short --pretty=format:%...
[Container] 2020/12/14 11:41:28 Phase complete: BUILD State: SUCCEEDED
The problems here are:
MY_VAR_0 is set to the literal string $(git log ... and not to the output of the command.
MY_VAR_1 is not propagated to the following commands in phases.build.commands.
And of course my artifacts end up in the wrong place.
Up to now, the only way I have found to work around this is:
version: 0.2
phases:
  build:
    commands:
      - |
        $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
        $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
        Get-ChildItem Env:MY_VAR_*
        # first build command here
      - |
        $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
        $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
        Get-ChildItem Env:MY_VAR_*
        # second build command here
artifacts:
  name: $(git log -n 1 --date=short --pretty=format:%cd_%h)
with the following log:
[Container] 2020/12/14 12:25:18 Entering phase BUILD
[Container] 2020/12/14 12:25:18 Running command $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
$Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
Get-ChildItem Env:MY_VAR_*
# first build command here
Name Value
---- -----
MY_VAR_1 2020-12-14_eccfb77
MY_VAR_0 2020-12-14_eccfb77
[Container] 2020/12/14 12:25:19 Running command $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
$Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
Get-ChildItem Env:MY_VAR_*
# second build command here
Name Value
---- -----
MY_VAR_1 2020-12-14_eccfb77
MY_VAR_0 2020-12-14_eccfb77
[Container] 2020/12/14 12:25:20 Phase complete: BUILD State: SUCCEEDED
What I do not like in this approach is that I have to repeat the code computing the MY_VAR_* values at the beginning of each build command. (And no, I do not consider a single huge multiline build command feasible.) Moreover, the same code has to be repeated in artifacts.name.
Questions:
How do I propagate environment variables computed at build time between different phases.*.commands?
Why is $(...) expanded in artifacts.name but not in env.variables.MY_VAR_0?
Answer to #1:
It looks like setting environment variables in the Env: drive silently fails. However, values set with the local variable syntax $Foo = "bar" do persist between commands as well as between phases.
For example, in the following:
version: 0.2
env:
  shell: powershell.exe
phases:
  pre_build:
    commands:
      - $foo = "bar"
      - echo $foo
  build:
    commands:
      - echo $foo
both echo commands print bar.
Answer to #2:
artifacts.name is always evaluated by the (Unix) shell, whereas when setting a variable in commands you are in either cmd.exe or powershell.exe, depending on what env.shell is set to in the buildspec. Reference
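Since artifacts.name is expanded by a Unix shell, one way to preview what the artifact will be named is to run the same substitution locally in bash (a hypothetical sanity check, not something CodeBuild requires):

# Mirrors the shell expansion CodeBuild applies to artifacts.name
name="$(git log -n 1 --date=short --pretty=format:%cd_%h)"
echo "$name"   # e.g. 2020-12-14_eccfb77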

Running supervisord in AWS Environment

I'm working on adding Django Channels to my Elastic Beanstalk environment, but running into trouble configuring supervisord. Specifically, in /.ebextensions I have a file channels.config with this code:
container_commands:
  01_copy_supervisord_conf:
    command: "cp .ebextensions/supervisord/supervisord.conf /opt/python/etc/supervisord.conf"
  02_reload_supervisord:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf reload"
This errors on the 2nd command, with the following error message shown through the Elastic Beanstalk CLI:
Command failed on instance. Return code: 1 Output: error: <class 'FileNotFoundError'>, [Errno 2] No such file or directory: file: /opt/python/run/venv/local/lib/python3.4/site-packages/supervisor/xmlrpc.py line: 562.
container_command 02_reload_supervisord in .ebextensions/channels.config failed.
My guess would be that supervisor didn't install correctly, but because command 1 copies the file without an error, I'm led to think supervisor is indeed installed and the issue is with the container command itself. Has anyone implemented supervisor in an AWS environment and can see where I'm going wrong?
You should be careful about Python versions and exact installation paths.
Here is how I did it; maybe it can help.
packages:
  yum:
    python27-setuptools: []
container_commands:
  01-supervise:
    command: ".ebextensions/supervise.sh"
Here is supervise.sh:
#!/bin/bash
if [ "${SUPERVISE}" == "enable" ]; then
    export HOME="/root"
    export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
    easy_install supervisor
    cat <<'EOB' > /etc/init.d/supervisord
# Source function library
. /etc/rc.d/init.d/functions
# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
    . /etc/sysconfig/supervisord
fi
# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0
start() {
    echo -n $"Starting $prog: "
    daemon --pidfile=${pidfile} $supervisord $OPTIONS
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        touch ${lockfile}
        $supervisorctl $OPTIONS status
    fi
    return $RETVAL
}
stop() {
    echo -n $"Stopping $prog: "
    killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}
reload() {
    echo -n $"Reloading $prog: "
    LSB=1 killproc -p $pidfile $supervisord -HUP
    RETVAL=$?
    echo
    if [ $RETVAL -eq 7 ]; then
        failure $"$prog reload"
    else
        $supervisorctl $OPTIONS status
    fi
}
restart() {
    stop
    start
}
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p ${pidfile} $supervisord
        RETVAL=$?
        [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
        ;;
    restart)
        restart
        ;;
    condrestart|try-restart)
        if status -p ${pidfile} $supervisord >&/dev/null; then
            stop
            start
        fi
        ;;
    force-reload|reload)
        reload
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
        RETVAL=2
esac
exit $RETVAL
EOB
    chmod +x /etc/init.d/supervisord
    cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe@indeed.com>
# original work
# Erwan Queffelec <erwan.queffelec@gmail.com>
# adjusted to new LSB-compliant init script
# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars
# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid
# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord
# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl
# How long should we wait before forcefully killing the supervisord process?
#STOP_TIMEOUT=60
# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB
    mkdir -p /var/run/supervisord/
    chown webapp: /var/run/supervisord/
    cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:process-ipn-api-gpsfsoft]
command = -- command that you want to run --
directory = /var/app/current/
user = webapp
autorestart = true
startsecs = 0
numprocs = 10
process_name = -- process name that you want --
EOB
    # this is now a little tricky and not officially documented, so it might break, but it is the cleanest solution:
    # first, before the "flip" is done (i.e. the switch between ondeck and current), stop supervisord
    echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
    chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
    # then, right after the webserver is reloaded, we can start supervisord again
    echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
    chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
PS: You have to define SUPERVISE as enable in the Elastic Beanstalk environment properties to get this to run.
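For example, if you use the EB CLI, the flag can be set like this (the value must match the == "enable" comparison at the top of supervise.sh):

# Set the environment property that supervise.sh checks before doing anything
eb setenv SUPERVISE=enable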

Abort Capistrano deploy bundler:install

Following this tutorial from GoRails, I'm getting this error when I try to deploy on Ubuntu 16.04 on DigitalOcean.
$ cap production deploy --trace
Trace Here
** DEPLOY FAILED
** Refer to log/capistrano.log for details. Here are the last 20 lines:
DEBUG [9a2c15d9] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/system ]
DEBUG [9a2c15d9] Finished in 0.181 seconds with exit status 1 (failed).
INFO [86a233a2] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/releases/20160829222734/public/system as deployer@138.68.8.2…
DEBUG [86a233a2] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/…
INFO [86a233a2] Finished in 0.166 seconds with exit status 0 (successful).
DEBUG [07f5e5a2] Running [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [07f5e5a2] Command: [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [07f5e5a2] Finished in 0.166 seconds with exit status 1 (failed).
DEBUG [5e61eaf3] Running [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [5e61eaf3] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [5e61eaf3] Finished in 0.168 seconds with exit status 1 (failed).
INFO [52076052] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets as deployer@138.68.8.2…
DEBUG [52076052] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/…
INFO [52076052] Finished in 0.167 seconds with exit status 0 (successful).
DEBUG [2a6bf02b] Running if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'"…
DEBUG [2a6bf02b] Command: if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'…
DEBUG [2a6bf02b] Finished in 0.164 seconds with exit status 0 (successful).
INFO [f4b636e3] Running $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test --deployment --quiet as deployer@138.6…
DEBUG [f4b636e3] Command: cd /home/deployer/RMG_rodeobest/releases/20160829222734 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; $HOME/.rbenv/bin/rbenv exec bundle inst…
DEBUG [f4b636e3] bash: line 1: 3509 Killed $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test -…
My Capfile:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# If you are using rbenv add these lines:
require 'capistrano/rbenv'
set :rbenv_type, :user # or :system, depends on your rbenv setup
set :rbenv_ruby, '2.3.1'
require 'capistrano/bundler'
require 'capistrano/rails'
# require 'capistrano/passenger'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
I'm stuck; I can't figure out why cap is aborting.
Any idea?

Received unregistered task of type 'djcelery_email_send'

I cannot run django-celery in daemon mode; it says Received unregistered task of type 'djcelery_email_send', while it works with python manage.py celery worker. Please help.
I am running Celery 3.1, with django-celery and django-celery-email, on Ubuntu Server 14.04. My Django settings are in proj/settings/production.py.
Daemon mode for django-celery uses an init.d script.
/etc/init.d/celeryd is downloaded from https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd and it is as below:
#!/bin/sh -e
# ============================================
# celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts
### BEGIN INIT INFO
# Provides: celeryd
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#
#
# To implement separate init scripts, copy this script and give it a different
# name:
# I.e., if my new application, "little-worker" needs an init, I
# should just use:
#
# cp /etc/init.d/celeryd /etc/init.d/little-worker
#
# You can then configure this by manipulating /etc/default/little-worker.
#
VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
    echo "Error: This program can only be used by the root user."
    echo "       Unprivileged users must use the 'celery multi' utility, "
    echo "       or 'celery worker --detach'."
    exit 1
fi

origin_is_runlevel_dir () {
    set +e
    dirname $0 | grep -q "/etc/rc.\.d"
    echo $?
}

# Can be a runlevel symlink (e.g. S02celeryd)
if [ $(origin_is_runlevel_dir) -eq 0 ]; then
    SCRIPT_FILE=$(readlink "$0")
else
    SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"
DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"
CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}
# Make sure executable configuration script is owned by root
_config_sanity() {
    local path="$1"
    local owner=$(ls -ld "$path" | awk '{print $3}')
    local iwgrp=$(ls -ld "$path" | cut -b 6)
    local iwoth=$(ls -ld "$path" | cut -b 9)
    if [ "$(id -u $owner)" != "0" ]; then
        echo "Error: Config script '$path' must be owned by root!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change ownership of the script:"
        echo "    $ sudo chown root '$path'"
        exit 1
    fi
    if [ "$iwoth" != "-" ]; then  # S_IWOTH
        echo "Error: Config script '$path' cannot be writable by others!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
    if [ "$iwgrp" != "-" ]; then  # S_IWGRP
        echo "Error: Config script '$path' cannot be writable by group!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
}
if [ -f "$CELERY_DEFAULTS" ]; then
    _config_sanity "$CELERY_DEFAULTS"
    echo "Using config script: $CELERY_DEFAULTS"
    . "$CELERY_DEFAULTS"
fi

# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
    CELERY_APP_ARG="--app=$CELERY_APP"
fi
CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}
# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
    CELERYD_PID_FILE="$DEFAULT_PID_FILE"
    CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
    CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
    CELERY_CREATE_LOGDIR=1
fi
CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}
export CELERY_LOADER
if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi
CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`
# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi
check_dev_null() {
    if [ ! -c /dev/null ]; then
        echo "/dev/null is not a character device!"
        exit 75 # EX_TEMPFAIL
    fi
}

maybe_die() {
    if [ $? -ne 0 ]; then
        echo "Exiting: $* (errno $?)"
        exit 77 # EX_NOPERM
    fi
}
create_default_dir() {
    if [ ! -d "$1" ]; then
        echo "- Creating default directory: '$1'"
        mkdir -p "$1"
        maybe_die "Couldn't create directory $1"
        echo "- Changing permissions of '$1' to 02755"
        chmod 02755 "$1"
        maybe_die "Couldn't change permissions for $1"
        if [ -n "$CELERYD_USER" ]; then
            echo "- Changing owner of '$1' to '$CELERYD_USER'"
            chown "$CELERYD_USER" "$1"
            maybe_die "Couldn't change owner of $1"
        fi
        if [ -n "$CELERYD_GROUP" ]; then
            echo "- Changing group of '$1' to '$CELERYD_GROUP'"
            chgrp "$CELERYD_GROUP" "$1"
            maybe_die "Couldn't change group of $1"
        fi
    fi
}

check_paths() {
    if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
        create_default_dir "$CELERYD_LOG_DIR"
    fi
    if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
        create_default_dir "$CELERYD_PID_DIR"
    fi
}

create_paths() {
    create_default_dir "$CELERYD_LOG_DIR"
    create_default_dir "$CELERYD_PID_DIR"
}
export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"
_get_pidfiles () {
    # note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
    ${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}

_get_pids() {
    found_pids=0
    my_exitcode=0
    for pidfile in $(_get_pidfiles); do
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
            my_exitcode=1
        else
            found_pids=1
            echo "$pid"
        fi
        if [ $found_pids -eq 0 ]; then
            echo "${SCRIPT_NAME}: All nodes down"
            exit $my_exitcode
        fi
    done
}

_chuid () {
    su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}

start_workers () {
    if [ ! -z "$CELERYD_ULIMIT" ]; then
        ulimit $CELERYD_ULIMIT
    fi
    _chuid $* start $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE" \
        --logfile="$CELERYD_LOG_FILE" \
        --loglevel="$CELERYD_LOG_LEVEL" \
        $CELERY_APP_ARG \
        $CELERYD_OPTS
}

dryrun () {
    (C_FAKEFORK=1 start_workers --verbose)
}

stop_workers () {
    _chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}

restart_workers () {
    _chuid restart $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE" \
        --logfile="$CELERYD_LOG_FILE" \
        --loglevel="$CELERYD_LOG_LEVEL" \
        $CELERY_APP_ARG \
        $CELERYD_OPTS
}

kill_workers() {
    _chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}
restart_workers_graceful () {
    echo "WARNING: Use with caution in production"
    echo "The workers will attempt to restart, but they may not be able to."
    local worker_pids=
    worker_pids=`_get_pids`
    [ "$one_failed" ] && exit 1
    for worker_pid in $worker_pids; do
        local failed=
        kill -HUP $worker_pid 2> /dev/null || failed=true
        if [ "$failed" ]; then
            echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
            one_failed=true
        else
            echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
        fi
    done
    [ "$one_failed" ] && exit 1 || exit 0
}

check_status () {
    my_exitcode=0
    found_pids=0
    local one_failed=
    for pidfile in $(_get_pidfiles); do
        if [ ! -r $pidfile ]; then
            echo "${SCRIPT_NAME} down: no pidfiles found"
            one_failed=true
            break
        fi
        local node=`basename "$pidfile" .pid`
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
        else
            local failed=
            kill -0 $pid 2> /dev/null || failed=true
            if [ "$failed" ]; then
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
                one_failed=true
            else
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
            fi
        fi
    done
    [ "$one_failed" ] && exit 1 || exit 0
}
case "$1" in
start)
check_dev_null
check_paths
start_workers
;;
stop)
check_dev_null
check_paths
stop_workers
;;
reload|force-reload)
echo "Use restart"
;;
status)
check_status
;;
restart)
check_dev_null
check_paths
restart_workers
;;
graceful)
check_dev_null
restart_workers_graceful
;;
kill)
check_dev_null
kill_workers
;;
dryrun)
check_dev_null
dryrun
;;
try-restart)
check_dev_null
check_paths
restart_workers
;;
create-paths)
check_dev_null
create_paths
;;
check-paths)
check_dev_null
check_paths
;;
*)
echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
exit 64 # EX_USAGE
;;
esac
exit 0
/etc/default/celeryd is as follows:
CELERYD_NODES="proj"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples):
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
export DJANGO_SETTINGS_MODULE='proj.settings.production'
export PYTHONPATH='$PYTHONPATH:/home/ubuntu/proj'
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/ubuntu/virtualenv/proj/bin/celery"
# App instance to use
# comment out this line if you don't use an app
#CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/ubuntu/proj/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
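For reference, the init script above is then driven like any other service; these subcommands all exist in the case statement at the bottom of the script:

sudo /etc/init.d/celeryd create-paths   # create the log/pid directories
sudo /etc/init.d/celeryd start
sudo /etc/init.d/celeryd status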
Error log snippet of /var/log/celery/proj.log:
If you depend on pickle then you should set a setting to disable this
warning and to be sure that everything will continue working
when you upgrade to Celery 3.2::
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
You must only enable the serializers that you will actually use.
warnings.warn(CDeprecationWarning(W_PICKLE_DEPRECATED))
[2016-06-07 07:22:10,699: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2016-06-07 07:22:10,708: INFO/MainProcess] mingle: searching for neighbors
[2016-06-07 07:22:11,715: INFO/MainProcess] mingle: all alone
[2016-06-07 07:22:11,722: WARNING/MainProcess] proj@ip-172-31-20-158 ready.
[2016-06-07 07:22:44,464: ERROR/MainProcess] Received unregistered task of type 'djcelery_email_send'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see for more information.
The full contents of the message body was:
{'errbacks': None, 'callbacks': None, 'retries': 0, 'taskset': None, 'expires': None, 'timelimit': (None, None), 'args': ([{'cc': [], 'to': ['cjh@yahoo.com', '649858321@qq.com', 'xu_jordan@163.com', '123456@qq.com'], 'alternatives': [], 'bcc': [], 'attachments': [], 'reply_to': [], 'headers': {}, 'subject': 'A show: i is on the way', 'from_email': 'no-reply@angelscity.co', 'body': 'i'}], {}), 'eta': None, 'task': 'djcelery_email_send', 'chord': None, 'kwargs': {}, 'id': '9b108b79-4ac9-4039-bca0-486c0241930f', 'utc': True} (587b)
Traceback (most recent call last):
File "/home/ubuntu/virtualenv/proj/lib/python3.4/site-packages/celery/worker/consumer.py", line 456, in on_task_received
strategies[name](message, body,
KeyError: 'djcelery_email_send'
I finally moved to supervisord and it works.
Here are the details for anybody who may meet the same problem.
Install supervisor: sudo apt-get install supervisor. There are other ways to install it, such as easy_install, but make sure your easy_install is for Python 2.x: supervisord only supports Python 2.4+, not Python 3.
Add celeryd.conf to /etc/supervisor/conf.d as follows:
[program:celery]
;command=pythonpath manage.pypath celery worker --settings=proj.settings
command=/home/username/virtualenvs/projenv/bin/python /home/username/proj/manage.py celery worker --settings=proj.settings
environment=PYTHONPATH='/home/username/virtualenvs/projenv'
directory=/home/username/virtualenvs/projenv
user=username
numprocs=1
stdout_logfile=/var/log/celeryd.log
stderr_logfile=/var/log/celeryd.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Reread and update supervisor: sudo supervisorctl reread && sudo supervisorctl update
Restart celery: sudo supervisorctl restart celery
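To verify the worker afterwards (a hypothetical check, assuming the program name celery and the log path from the conf above):

sudo supervisorctl status celery        # should report RUNNING
sudo tail -n 50 /var/log/celeryd.log    # the startup banner should now list djcelery_email_send among the registered tasks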

supervisor, gunicorn and django only work when logged in

I'm using this guide to set up an intranet server. Everything goes OK: the server works and I can check that it is working on my network.
But when I log out, I get a 404 error.
The sock file is in the path indicated in gunicorn_start.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ls -l run/
total 0
srwxrwxrwx 1 javier javier 0 mar 10 17:31 cmi.sock
Actually I can see the workers when listing the processes.
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ ps aux | grep cmi
javier 17354 0.0 0.2 14652 8124 ? S 17:27 0:00 gunicorn: master [cmi]
javier 17365 0.0 0.3 18112 10236 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17366 0.0 0.3 18120 10240 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17367 0.0 0.5 36592 17496 ? S 17:27 0:00 gunicorn: worker [cmi]
javier 17787 0.0 0.0 4408 828 pts/0 S+ 17:55 0:00 grep --color=auto cmi
And supervisorctl responds that the process is running:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ sudo supervisorctl status cmi
[sudo] password for javier:
cmi RUNNING pid 17354, uptime 0:29:21
There is an error in the nginx logs:
(cmi2014)javier@sgc:~/workspace/cmi/cmi$ tail logs/nginx-error.log
2014/03/10 17:38:57 [error] 17299#0: *19 connect() to unix:/home/javier/workspace/cmi/cmi/run/cmi.sock failed (111: Connection refused) while connecting to upstream, client: 10.69.0.174, server: , request: "GET / HTTP/1.1", upstream: "http://unix:/home/javier/workspace/cmi/cmi/run/cmi.sock:/", host: "10.69.0.68:2014"
Again, the error appears only when I log out or close the session; everything works fine when I run or reload supervisor and stay connected.
By the way, nginx, supervisor and gunicorn all run under my uid.
Thanks in advance.
Edit: Supervisor conf
[program:cmi]
command = /home/javier/entornos/cmi2014/bin/cmi_start
user = javier
stdout_logfile = /home/javier/workspace/cmi/cmi/logs/cmi_supervisor.log
redirect_stderr = true
autostart=true
autorestart=true
Gunicorn start script
#!/bin/bash
NAME="cmi" # Name of the application
DJANGODIR=/home/javier/workspace/cmi/cmi # Django project directory
SOCKFILE=/home/javier/workspace/cmi/cmi/run/cmi.sock # we will communicate using this unix socket
USER=javier # the user to run as
GROUP=javier # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=cmi.settings # which settings file should Django use
DJANGO_WSGI_MODULE=cmi.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /home/javier/entornos/cmi2014/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
export CMI_SECRET_KEY='***'
export CMI_DATABASE_HOST='***'
export CMI_DATABASE_NAME='***'
export CMI_DATABASE_USER='***'
export CMI_DATABASE_PASS='***'
export CMI_DATABASE_PORT='3306'
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/javier/entornos/cmi2014/bin/gunicorn ${DJANGO_WSGI_MODULE}:application --name $NAME --workers $NUM_WORKERS --user=$USER --group=$GROUP --log-level=debug --bind=unix:$SOCKFILE