Abort Capistrano deploy bundler:install - ruby-on-rails-4

Following this tutorial from GoRails, I'm getting this error when I try to deploy to Ubuntu 16.04 on DigitalOcean.
$ cap production deploy --trace
Trace Here
** DEPLOY FAILED
** Refer to log/capistrano.log for details. Here are the last 20 lines:
DEBUG [9a2c15d9] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/system ]
DEBUG [9a2c15d9] Finished in 0.181 seconds with exit status 1 (failed).
INFO [86a233a2] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/releases/20160829222734/public/system as deployer#138.68.8.2…
DEBUG [86a233a2] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/…
INFO [86a233a2] Finished in 0.166 seconds with exit status 0 (successful).
DEBUG [07f5e5a2] Running [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer#138.68.8.255
DEBUG [07f5e5a2] Command: [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [07f5e5a2] Finished in 0.166 seconds with exit status 1 (failed).
DEBUG [5e61eaf3] Running [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer#138.68.8.255
DEBUG [5e61eaf3] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [5e61eaf3] Finished in 0.168 seconds with exit status 1 (failed).
INFO [52076052] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets as deployer#138.68.8.2…
DEBUG [52076052] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/…
INFO [52076052] Finished in 0.167 seconds with exit status 0 (successful).
DEBUG [2a6bf02b] Running if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'"…
DEBUG [2a6bf02b] Command: if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'…
DEBUG [2a6bf02b] Finished in 0.164 seconds with exit status 0 (successful).
INFO [f4b636e3] Running $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test --deployment --quiet as deployer#138.6…
DEBUG [f4b636e3] Command: cd /home/deployer/RMG_rodeobest/releases/20160829222734 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; $HOME/.rbenv/bin/rbenv exec bundle inst…
DEBUG [f4b636e3] bash: line 1: 3509 Killed $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test -…
My Capfile:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# If you are using rbenv add these lines:
require 'capistrano/rbenv'
set :rbenv_type, :user # or :system, depends on your rbenv setup
set :rbenv_ruby, '2.3.1'
require 'capistrano/bundler'
require 'capistrano/rails'
# require 'capistrano/passenger'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
I'm stuck and don't know why Capistrano is aborting.
Any ideas?
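The DEPLOY FAILED line doesn't say much by itself, but the last log line shows that bundle install was Killed, which usually means the kernel's OOM killer stopped it for lack of memory (the same symptom as in the last related question below). A minimal diagnostic sketch to run on the server, assuming a standard Ubuntu 16.04 droplet:
# Look for OOM-killer messages about the killed process
dmesg | grep -iE 'killed process|out of memory'
# Check how much RAM and swap the droplet actually has
free -m
If memory is the culprit, adding swap (as described in the last answer below) or resizing the droplet usually lets bundle install finish.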

Related

How to configure supervisor with Django Channels and the Daphne server

I have a problem with my supervisor configuration. My file is at /etc/supervisor/conf.d/realtimecolonybit.conf.
When I run supervisorctl reread, it shows "No config updates to processes", and when I run
supervisorctl status realtimecolonybit
it shows this error:
realtimecolonybit FATAL can't find command '/home/ubuntu/realtimecolonybit/bin/start.sh;'
And when I try supervisorctl start realtimecolonybit, it shows this error:
realtimecolonybit: ERROR (no such file)
My configuration in realtimecolonybit.conf is below:
[program:realtimecolonybit]
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
user = root
stdout_logfile = /home/ubuntu/realtimecolonybit/logs/realtimecolonybit.log;
redirect_strderr = true;
The configuration in my start.sh file is below:
#!/bin/bash
NAME="realtimecolonybit"
DJANGODIR=/home/ubuntu/realtimecolonybit/colonybit
SOCKFILE=/home/ubuntu/realtimecolonybit/run/gunicorn.sock
USER=root
GROUP=root
NUM_WORKERS=3
DJANGO_SETTINGS_MODULE=colonybit.settings
echo "Starting $NAME as `whoami`"
cd $DJANGODIR
source /home/ubuntu/realtimecolonybit/bin/activate
# workon realtimecolonybit
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPAHT=$DJANGODIR:$PYTHONPATH
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
exec daphne -b 0.0.0.0 -p 8001 colonybit.asgi:application
When I run it without supervisor, like this:
(realtimecolonybit)realtimecolonybit/#/ ./bin/start.sh
it runs OK and works well, but the server sometimes goes down.
I am trying to run Django 1.11 and django_channels with supervisor; my app is on AWS.
I solved my problem. The error was in the .conf file: I removed the trailing ; and dropped the .sh extension, changing start.sh to start.
Wrong command:
command = /home/ubuntu/realtimecolonybit/bin/start.sh;
Correct command:
command = /home/ubuntu/realtimecolonybit/bin/start

Running supervisord in AWS Environment

I'm working on adding Django Channels to my Elastic Beanstalk environment, but I'm running into trouble configuring supervisord. Specifically, in /.ebextensions I have a file channels.config with this code:
container_commands:
  01_copy_supervisord_conf:
    command: "cp .ebextensions/supervisord/supervisord.conf /opt/python/etc/supervisord.conf"
  02_reload_supervisord:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf reload"
This fails on the second command with the following error message from the Elastic Beanstalk CLI:
Command failed on instance. Return code: 1 Output: error: <class
'FileNotFoundError'>, [Errno 2] No such file or directory:
file: /opt/python/run/venv/local/lib/python3.4/site-
packages/supervisor/xmlrpc.py line: 562.
container_command 02_reload_supervisord in
.ebextensions/channels.config failed.
My guess would be that supervisor didn't install correctly, but since command 1 copies the files without an error, that leads me to think supervisor is indeed installed and that I have an issue with the container command. Has anyone implemented supervisor in an AWS environment who can see where I'm going wrong?
You should be careful about Python versions and exact installation paths.
Here is how I did it; maybe it can help:
packages:
  yum:
    python27-setuptools: []
container_commands:
  01-supervise:
    command: ".ebextensions/supervise.sh"
Here is supervise.sh:
#!/bin/bash
if [ "${SUPERVISE}" == "enable" ]; then
export HOME="/root"
export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
easy_install supervisor
cat <<'EOB' > /etc/init.d/supervisord
# Source function library
. /etc/rc.d/init.d/functions
# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
. /etc/sysconfig/supervisord
fi
# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0
start() {
echo -n $"Starting $prog: "
daemon --pidfile=${pidfile} $supervisord $OPTIONS
RETVAL=$?
echo
if [ $RETVAL -eq 0 ]; then
touch ${lockfile}
$supervisorctl $OPTIONS status
fi
return $RETVAL
}
stop() {
echo -n $"Stopping $prog: "
killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}
reload() {
echo -n $"Reloading $prog: "
LSB=1 killproc -p $pidfile $supervisord -HUP
RETVAL=$?
echo
if [ $RETVAL -eq 7 ]; then
failure $"$prog reload"
else
$supervisorctl $OPTIONS status
fi
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status -p ${pidfile} $supervisord
RETVAL=$?
[ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
;;
restart)
restart
;;
condrestart|try-restart)
if status -p ${pidfile} $supervisord >&/dev/null; then
stop
start
fi
;;
force-reload|reload)
reload
;;
*)
echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
RETVAL=2
esac
exit $RETVAL
EOB
chmod +x /etc/init.d/supervisord
cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe#indeed.com>
# original work
# Erwan Queffelec <erwan.queffelec#gmail.com>
# adjusted to new LSB-compliant init script
# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars
# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid
# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord
# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl
# How long should we wait before forcefully killing the supervisord process ?
#STOP_TIMEOUT=60
# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB
mkdir -p /var/run/supervisord/
chown webapp: /var/run/supervisord/
cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:process-ipn-api-gpsfsoft]
command = -- command that u want to run ---
directory = /var/app/current/
user = webapp
autorestart = true
startsecs = 0
numprocs = 10
process_name = -- process name that u want ---
EOB
# this is now a little tricky, not officially documented, so might break but it is the cleanest solution
# first before the "flip" is done (e.g. switch between ondeck vs current) lets stop supervisord
echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
# then right after the webserver is reloaded, we can start supervisord again
echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
PS: You have to define SUPERVISE as enable in the Elastic Beanstalk environment variables to get this to run.
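If that variable isn't set yet, one way to set it (assuming the EB CLI is configured for this environment) is:
eb setenv SUPERVISE=enable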

Cap deploy:check is stuck

I'm using Capistrano 3.2.1 with the following gems in my Gemfile:
gem 'capistrano'
gem 'capistrano-safe-deploy-to', '~> 1.1.1'
gem 'capistrano-rvm'
gem 'capistrano-unicorn-nginx', '~> 3.1.0'
gem 'capistrano-rails', '~> 1.1'
gem 'capistrano-bundler', '~> 1.1.2'
In the Capfile, I have the following:
require 'capistrano/setup'
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
require 'capistrano/rvm'
require 'capistrano/bundler'
require 'capistrano/rails'
require 'capistrano/unicorn_nginx'
require 'capistrano/safe_deploy_to'
# require 'capistrano/secrets_yml'
# require 'capistrano/rails/assets'
# require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
In deploy.rb I have the following:
# config valid only for Capistrano 3.1
lock '3.2.1'
set :application, 'MyAoo'
# set :repo_url, 'MyRepoURL'
# Default branch is :master
# ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp }.call
# set :branch, 'master'
# Default deploy_to directory is /var/www/my_app
# set :deploy_to, 'project'
# No need to clone entire repo each time
# set :deploy_via, :remote_cache
# Default value for :scm is :git
# set :scm, :git
# Default value for :format is :pretty
# set :format, :pretty
# Default value for :log_level is :debug
# set :log_level, :info
# Default value for :pty is false
set :pty, true
# TODO check this
# set :forward_agent, true
# Sudo permissions not required
set :use_sudo, false
# Default value for :linked_files is []
# set :linked_files, %w{ config/database.yml }
# Default value for linked_dirs is []
# set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
set :keep_releases, 3
# set :rvm_type, :user # Defaults to: :auto
# set :rvm_custom_path, '~/.myveryownrvm' # only needed if not detected
# Set ruby version on the server.
set :rvm_ruby_version, '2.1.2' # Defaults to: 'default'
set :rvm_roles, [:app, :web]
Now when I run $ cap production safe_deploy_to:ensure on my local machine, I get the following:
$ cap production safe_deploy_to:ensure
DEBUG[7b638604] Running /usr/bin/env [ -d ~/.rvm ] on ABC.DEF.GHI.JK
DEBUG[7b638604] Command: [ -d ~/.rvm ]
DEBUG[7b638604] Finished in 0.738 seconds with exit status 1 (failed).
DEBUG[45e6ecbb] Running /usr/bin/env [ -d /usr/local/rvm ] on 104.131.110.30
DEBUG[45e6ecbb] Command: [ -d /usr/local/rvm ]
DEBUG[45e6ecbb] Finished in 0.042 seconds with exit status 0 (successful).
DEBUG[d7085818] Running /usr/local/rvm/bin/rvm version on 104.131.110.30
DEBUG[d7085818] Command: /usr/local/rvm/bin/rvm version
DEBUG[d7085818] rvm 1.25.33 (stable) by Wayne E. Seguin <wayneeseguin#gmail.com>, Michal Papis <mpapis#gmail.com> [https://rvm.io/]
DEBUG[d7085818]
DEBUG[d7085818] Finished in 0.264 seconds with exit status 0 (successful).
rvm 1.25.33 (stable) by Wayne E. Seguin <wayneeseguin#gmail.com>, Michal Papis <mpapis#gmail.com> [https://rvm.io/]
DEBUG[7d96ffdc] Running /usr/local/rvm/bin/rvm current on 104.131.110.30
DEBUG[7d96ffdc] Command: /usr/local/rvm/bin/rvm current
DEBUG[7d96ffdc] ruby-2.1.2
DEBUG[7d96ffdc]
DEBUG[7d96ffdc] Finished in 0.257 seconds with exit status 0 (successful).
ruby-2.1.2
DEBUG[caa8eec2] Running /usr/local/rvm/bin/rvm 2.1.2 do ruby --version on 104.131.110.30
DEBUG[caa8eec2] Command: /usr/local/rvm/bin/rvm 2.1.2 do ruby --version
DEBUG[caa8eec2] ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-linux]
DEBUG[caa8eec2]
DEBUG[caa8eec2] Finished in 0.454 seconds with exit status 0 (successful).
ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-linux]
INFO[68eea9b0] Running /usr/bin/env sudo mkdir -pv /var/www/myapp on 104.131.110.30
DEBUG[68eea9b0] Command: /usr/bin/env sudo mkdir -pv /var/www/myapp
DEBUG[68eea9b0] [sudo] password for marvin:
And it remains stuck there, with no way to type the password or anything else into the prompt. I don't understand why it is asking for the password when I've given sudo privileges to the deployer user.
What am I missing?
I'm the creator of the capistrano-safe-deploy-to plugin. It seems the plugin is having issues, since the task gets stuck at this command: /usr/bin/env sudo mkdir -pv /var/www/myapp.
May I suggest re-running the capistrano command with these options removed (or commented out) from deploy.rb:
set :pty, true
set :use_sudo, false
Things should work just fine with the default values for the above 2 options, no need to override them.
If you still have issues, open an issue for capistrano-safe-deploy-to plugin and I'll try to help.

Password Prompt not showing for sudo command in Capistrano3

I am running cap production deploy:setup_config
I get the following output:
DEBUG[659d52b0] Running /usr/bin/env [ ! -d /home/deployer/.rbenv/versions/2.1.2 ] on xxx.xxx.xxx.xxx
DEBUG[659d52b0] Command: [ ! -d /home/deployer/.rbenv/versions/2.1.2 ]
DEBUG[659d52b0] Finished in 2.413 seconds with exit status 1 (failed).
DEBUG[2ea020b0] Running /usr/bin/env [ -f /etc/nginx/sites-enabled/default ] on xxx.xxx.xxx.xxx
DEBUG[2ea020b0] Command: [ -f /etc/nginx/sites-enabled/default ]
DEBUG[2ea020b0] Finished in 0.507 seconds with exit status 1 (failed).
No default Nginx Virtualhost to remove
INFO[e0a045fc] Running /usr/bin/env mkdir -p /home/deployer/apps/managewise_production/shared/config on xxx.xxx.xxx.xxx
DEBUG[e0a045fc] Command: ( RBENV_ROOT=/home/deployer/.rbenv RBENV_VERSION=2.1.2 /usr/bin/env mkdir -p /home/deployer/apps/managewise_production/shared/config )
INFO[e0a045fc] Finished in 0.405 seconds with exit status 0 (successful).
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/nginx.conf 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/nginx.conf 100.0%
INFO copying: #<StringIO:0x007fad45824cc8> to: /home/deployer/apps/managewise_production/shared/config/nginx.conf
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/database.example.yml 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/database.example.yml 100.0%
INFO copying: #<StringIO:0x007fad448839e0> to: /home/deployer/apps/managewise_production/shared/config/database.example.yml
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/log_rotation 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/log_rotation 100.0%
INFO copying: #<StringIO:0x007fad4413b2f0> to: /home/deployer/apps/managewise_production/shared/config/log_rotation
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/monit 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/monit 100.0%
INFO copying: #<StringIO:0x007fad46816898> to: /home/deployer/apps/managewise_production/shared/config/monit
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/unicorn.rb 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/unicorn.rb 100.0%
INFO copying: #<StringIO:0x007fad46ab7a20> to: /home/deployer/apps/managewise_production/shared/config/unicorn.rb
DEBUG Uploading /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh 0.0%
INFO Uploading /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh 100.0%
INFO copying: #<StringIO:0x007fad46a4f7e0> to: /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh
INFO[a1c87854] Running /usr/bin/env chmod +x /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh on xxx.xxx.xxx.xxx
DEBUG[a1c87854] Command: ( RBENV_ROOT=/home/deployer/.rbenv RBENV_VERSION=2.1.2 /usr/bin/env chmod +x /home/deployer/apps/managewise_production/shared/config/unicorn_init.sh )
INFO[a1c87854] Finished in 0.483 seconds with exit status 0 (successful).
INFO[30551413] Running /usr/bin/env whoami && echo ${PATH} on xxx.xxx.xxx.xxx
DEBUG[30551413] Command: whoami && echo ${PATH}
INFO[30551413] Finished in 0.552 seconds with exit status 0 (successful).
DEBUG[30551413] deployer
DEBUG[30551413] /home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/bin:/bin:/usr/bin:/home/deployer/bin
INFO[30551413] Finished in 0.552 seconds with exit status 0 (successful).
INFO[60b61747] Running /usr/bin/env sudo ln -nfs /home/deployer/apps/managewise_production/shared/config/nginx.conf /etc/nginx/sites-enabled/ on xxx.xxx.xxx.xxx
DEBUG[60b61747] Command: ( RBENV_ROOT=/home/deployer/.rbenv RBENV_VERSION=2.1.2 /usr/bin/env sudo ln -nfs /home/deployer/apps/managewise_production/shared/config/nginx.conf /etc/nginx/sites-enabled/ )
DEBUG[60b61747] [sudo] password for deployer:
Now when I enter the password, it is shown on screen, and it doesn't work either.
The task is the following, in lib/capistrano/tasks/setup_config.cap:
namespace :deploy do
  task :setup_config do
    on roles(:app) do
      # make the config dir
      execute :mkdir, "-p #{shared_path}/config"
      full_app_name = fetch(:full_app_name)
      # config files to be uploaded to shared/config, see the
      # definition of smart_template for details of operation.
      # Essentially looks for #{filename}.erb in deploy/#{full_app_name}/
      # and if it isn't there, falls back to deploy/#{shared}. Generally
      # everything should be in deploy/shared with params which differ
      # set in the stage files
      config_files = fetch(:config_files)
      config_files.each do |file|
        smart_template file
      end
      # which of the above files should be marked as executable
      executable_files = fetch(:executable_config_files)
      executable_files.each do |file|
        execute :chmod, "+x #{shared_path}/config/#{file}"
      end
      # symlink stuff which should be... symlinked
      symlinks = fetch(:symlinks)
      symlinks.each do |symlink|
        execute "whoami && echo ${PATH}"
        sudo "ln -nfs #{shared_path}/config/#{symlink[:source]} #{sub_strings(symlink[:link])}"
      end
    end
  end
end
In deploy.rb I have the following regarding ssh:
set :pty, true
set :ssh_options, { forward_agent: true, :keys => %w(/home/SMD/.ssh/id_rsa)}
set :use_sudo, false
set :default_shell, '/bin/bash --login'
I can't figure out what the problem is.
I'm also concerned about this failed command:
DEBUG[659d52b0] Command: [ ! -d /home/deployer/.rbenv/versions/2.1.2 ]
DEBUG[659d52b0] Finished in 2.413 seconds with exit status 1 (failed).
Thanks in advance.
Try adding something like this to your deploy.rb file (or the corresponding deploy environment file):
set :password, ask('Server password', nil)
server 'xxx.xxx.xx.xx', user: 'root', port: 22, password: fetch(:password), roles: %w{web app db}, primary: true

Capistrano 3 does not deploy and shows this error: SSHKit::Command::Failed: rake stdout: Nothing written

I am using Rails 4 and Capistrano 3. When I deploy with Capistrano, it fails during rake assets:precompile and shows the following error:
INFO [1c9c2531] Running ~/.rvm/bin/rvm default do bundle exec rake assets:precompile on 107.170.67.113
cap aborted!
SSHKit::Command::Failed: rake stdout: Nothing written
rake stderr: Nothing written
Tasks: TOP => deploy:assets:precompile
(See full trace by running task with --trace)
I'm a newbie at Rails and server setup, so I don't know where I'm going wrong.
Here is my deploy.rb:
SSHKit.config.command_map[:rake] ||= "bundle exec rake"
# config valid only for Capistrano 3.1
lock '3.0.1'
set :application, 'PCA'
set :repo_url, 'git@github.com:3lackRos3/pca.git'
# Default branch is :master
# ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp }.call
# Default deploy_to directory is /var/www/my_app
set :deploy_to, '/home/pca/app/pca'
set :deploy_user, 'pca'
set :rvm_ruby_string, ENV['GEM_HOME'].gsub(/.*\//,"")
set :rvm_type, :user
set :rvm_bin_path, '/usr/local/rvm/bin'
# set :default_environment, {
# 'RBENV_ROOT' => '/usr/local/rbenv',
# 'PATH' => "/usr/local/rbenv/shims:/usr/local/rbenv/bin:$PATH"
# }
# set :default_environment, {
# 'PATH' => "$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH"
# }
# set :rbenv_ruby, "2.1.1"
# set :rbenv_ruby_dir, -> { "#{fetch(:rbenv_path)}/versions/#{fetch(:rbenv_ruby)}" }
# set :rbenv_map_bins, %w{rake gem bundle ruby rails}
# Default value for :scm is :git
set :scm, :git
set :ssh_options, { forward_agent: true }
#set :ssh_options, proxy: Net::SSH::Proxy::Command.new('ssh pca@107.170.67.113 -W %h:%p')
# Default value for :format is :pretty
# set :format, :pretty
set :log_level, :info
# Default value for :log_level is :debug
# set :log_level, :debug
# Default value for :pty is false
set :pty, true
# Default value for :linked_files is []
# set :linked_files, %w{config/database.yml}
# Default value for linked_dirs is []
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
# SSHKit.config.command_map[:rake].sub!(/\(.*\)rake/, "\1bundle exec rake")
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
# set :keep_releases, 20
after "deploy", "deploy:cleanup"
namespace :deploy do
desc 'Restart application'
task :restart do
# on roles(:app), in: :sequence, wait: 5 do
# Your restart mechanism here, for example:
# execute :touch, release_path.join('tmp/restart.txt')
# end
end
after :publishing, :restart
after :restart, :clear_cache do
# on roles(:web), in: :groups, limit: 3, wait: 10 do
# Here we can do anything such as:
# within release_path do
# execute :rake, 'cache:clear'
# end
# end
end
after :finishing, 'deploy:cleanup'
end
Here is my Capfile:
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
#require 'rvm1/capistrano3'
#require 'capistrano/rvm'
# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/rails'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
I have run bundle exec cap production deploy:check and my output is:
INFO [ecf1a980] Running /usr/bin/env mkdir -p /tmp/PCA/
INFO [ecf1a980] Finished in 4.848 seconds with exit status 0 (successful).
INFO Uploading /tmp/PCA/git-ssh.sh 100.0%
INFO [8e7210f6] Running /usr/bin/env chmod +x /tmp/PCA/git-ssh.sh on serverip
INFO [8e7210f6] Finished in 0.639 seconds with exit status 0 (successful).
INFO [4c538c51] Running /usr/bin/env mkdir -pv /home/pca/app/pca/shared /home/pca/app/pca/releases
INFO [4c538c51] Finished in 4.480 seconds with exit status 0 (successful).
INFO [d6a7dce6] Running /usr/bin/env mkdir -pv /home/pca/app/pca/shared/bin /home/pca/app/pca/shared/log /home/pca/app/pca/shared/tmp/pids /home/pca/app/pca/shared/tmp/cache /home/pca/app/pca/shared/tmp/sockets /home/pca/app/pca/shared/vendor/bundle /home/pca/app/pca/shared/public/system
INFO [d6a7dce6] Finished in 4.507 seconds with exit status 0 (successful).
Can anybody help me?
Thanks!
I had the same issue, getting the error rake stderr: Nothing written when running cap production deploy.
It turned out to be due to a lack of memory; I have a server with 1 GB of memory, so the solution was to create swap space. For me, creating 2 GB of swap fixed it. Deployment now continues and finishes without any errors.
It seems the precompile process is being killed due to lack of memory.
Try creating swap space.
Check out this tutorial.
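For reference, a minimal sketch of creating a 2 GB swap file on Ubuntu (the size and the /swapfile path are just examples):
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# keep the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab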
Make sure to include the SSH key of the server where deployment is occurring in your Git host's trusted keys.