STDERR output from Hadoop: does this mean there is an issue? - python-2.7

I'm using mrjob with Hadoop on Python 2.7 and Ubuntu 14.04, and I got the following screen output:
no configs found; falling back on auto-configuration
no configs found; falling back on auto-configuration
creating tmp directory /tmp/word-document.hduser.20160122.065849.953886
writing wrapper script to /tmp/word-document.hduser.20160122.065849.953886/setup-wrapper.sh
PLEASE NOTE: Starting in mrjob v0.5.0, protocols will be strict by default. It's recommended you run your job with --strict-protocols or set up mrjob.conf as described at https://pythonhosted.org/mrjob/whats-new.html#ready-for-strict-protocols
writing to /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper_part-00000
> sh -ex setup-wrapper.sh /usr/bin/python word-document.py --step-num=0 --mapper /tmp/word-document.hduser.20160122.065849.953886/input_part-00000 > /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper_part-00000
writing to /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper_part-00001
> sh -ex setup-wrapper.sh /usr/bin/python word-document.py --step-num=0 --mapper /tmp/word-document.hduser.20160122.065849.953886/input_part-00001 > /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper_part-00001
STDERR: + __mrjob_PWD=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/mapper/0
STDERR: + exec
STDERR: + /usr/bin/python -c import fcntl; fcntl.flock(9, fcntl.LOCK_EX)
STDERR: + export PYTHONPATH=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/mapper/0/mrjob.tar.gz:/home/ignacio/shogun-install/lib/python2.7/dist-packages:/home/ignacio/shogun/examples/undocumented/python_modular:
STDERR: + exec
STDERR: + cd /tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/mapper/0
STDERR: + /usr/bin/python word-document.py --step-num=0 --mapper /tmp/word-document.hduser.20160122.065849.953886/input_part-00000
STDERR: + __mrjob_PWD=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/mapper/1
STDERR: + exec
STDERR: + /usr/bin/python -c import fcntl; fcntl.flock(9, fcntl.LOCK_EX)
STDERR: + export PYTHONPATH=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/mapper/1/mrjob.tar.gz:/home/ignacio/shogun-install/lib/python2.7/dist-packages:/home/ignacio/shogun/examples/undocumented/python_modular:
STDERR: + exec
STDERR: + cd /tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/mapper/1
STDERR: + /usr/bin/python word-document.py --step-num=0 --mapper /tmp/word-document.hduser.20160122.065849.953886/input_part-00001
Counters from step 1:
(no counters found)
writing to /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper-sorted
> sort /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper_part-00000 /tmp/word-document.hduser.20160122.065849.953886/step-0-mapper_part-00001
writing to /tmp/word-document.hduser.20160122.065849.953886/step-0-reducer_part-00000
> sh -ex setup-wrapper.sh /usr/bin/python word-document.py --step-num=0 --reducer /tmp/word-document.hduser.20160122.065849.953886/input_part-00000 > /tmp/word-document.hduser.20160122.065849.953886/step-0-reducer_part-00000
writing to /tmp/word-document.hduser.20160122.065849.953886/step-0-reducer_part-00001
> sh -ex setup-wrapper.sh /usr/bin/python word-document.py --step-num=0 --reducer /tmp/word-document.hduser.20160122.065849.953886/input_part-00001 > /tmp/word-document.hduser.20160122.065849.953886/step-0-reducer_part-00001
STDERR: + __mrjob_PWD=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/reducer/0
STDERR: + exec
STDERR: + /usr/bin/python -c import fcntl; fcntl.flock(9, fcntl.LOCK_EX)
STDERR: + export PYTHONPATH=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/reducer/0/mrjob.tar.gz:/home/ignacio/shogun-install/lib/python2.7/dist-packages:/home/ignacio/shogun/examples/undocumented/python_modular:
STDERR: + exec
STDERR: + cd /tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/reducer/0
STDERR: + /usr/bin/python word-document.py --step-num=0 --reducer /tmp/word-document.hduser.20160122.065849.953886/input_part-00000
STDERR: + __mrjob_PWD=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/reducer/1
STDERR: + exec
STDERR: + /usr/bin/python -c import fcntl; fcntl.flock(9, fcntl.LOCK_EX)
STDERR: + export PYTHONPATH=/tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/reducer/1/mrjob.tar.gz:/home/ignacio/shogun-install/lib/python2.7/dist-packages:/home/ignacio/shogun/examples/undocumented/python_modular:
STDERR: + exec
STDERR: + cd /tmp/word-document.hduser.20160122.065849.953886/job_local_dir/0/reducer/1
STDERR: + /usr/bin/python word-document.py --step-num=0 --reducer /tmp/word-document.hduser.20160122.065849.953886/input_part-00001
Counters from step 1:
(no counters found)
Moving /tmp/word-document.hduser.20160122.065849.953886/step-0-reducer_part-00000 -> /tmp/word-document.hduser.20160122.065849.953886/output/part-00000
Moving /tmp/word-document.hduser.20160122.065849.953886/step-0-reducer_part-00001 -> /tmp/word-document.hduser.20160122.065849.953886/output/part-00001
Streaming final output from /tmp/word-document.hduser.20160122.065849.953886/output
removing tmp directory /tmp/word-document.hduser.20160122.065849.953886
Could you tell me whether there is a problem here? I mean, the jobs finished, but all those STDERR: lines look noisy to me.
Thank you in advance.

Looks like your job isn't generating any output. Could you please post word-document.py and your input data?
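As for the STDERR: lines themselves: the runner invokes the wrapper with sh -ex (visible in the log above), and the -x flag makes the shell echo every command it runs to stderr with a + prefix, so those lines are normal trace output rather than errors. A minimal demonstration with any POSIX shell:

# -x traces each command to stderr with a "+" prefix; -e exits on the first failing command
sh -exc 'echo hello'
# stderr: + echo hello
# stdout: hello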

Related

awslogs-agent-setup.py not working on Ubuntu 17.10 (artful)

This works fine on Ubuntu 16.04, but not on 17.10:
+ curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 56093  100 56093    0     0  56093      0  0:00:01 --:--:--  0:00:01 98929
+ chmod +x ./awslogs-agent-setup.py
+ ./awslogs-agent-setup.py -n -c /etc/awslogs/awslogs.conf -r us-west-2
Step 1 of 5: Installing pip ... libyaml-dev does not exist in system DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ...
Traceback (most recent call last):
  File "./awslogs-agent-setup.py", line 1317, in <module>
    main()
  File "./awslogs-agent-setup.py", line 1313, in main
    setup.setup_artifacts()
  File "./awslogs-agent-setup.py", line 858, in setup_artifacts
    self.install_awslogs_cli()
  File "./awslogs-agent-setup.py", line 570, in install_awslogs_cli
    subprocess.call([AWSCLI_CMD, 'configure', 'set', 'plugins.cwlogs', 'cwlogs'], env=DEFAULT_ENV)
  File "/usr/lib/python2.7/subprocess.py", line 168, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 390, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1025, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
I noticed that earlier in the process, in the AWS boilerplate, it failed to install libyaml-dev, but I'm not sure whether that's the only problem.
I always find the answer right after I post it...
Here's my modified CF template command:
050_install_awslogs:
command: !Sub
"/bin/bash -x\n
exec >>/var/log/cf_050_install_awslogs.log 2>&1 \n
echo 050_install_awslogs...\n
set -xe\n
# Get the CloudWatch Logs agent\n
mkdir /opt/awslogs\n
cd /opt/awslogs\n
# Needed for python3 in 17.10\n
apt-get install -y libyaml-dev python-dev \n
pip3 install awscli-cwlogs\n
# avoid it complaining about not having /var/awslogs/bin/aws binary\n
if [ ! -d /var/awslogs/bin ] ; then\n
mkdir -p /var/awslogs/bin\n
ln -s /usr/local/bin/aws /var/awslogs/bin/aws\n
fi\n
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O\n
chmod +x ./awslogs-agent-setup.py\n
# Hack for python 3.6 & old awslogs-agent-setup.py\n
sed -i 's/3,6/3,7/' awslogs-agent-setup.py\n
./awslogs-agent-setup.py -n -c /etc/awslogs/awslogs.conf -r ${AWS::Region}\n
echo 050_install_awslogs end\n
"
I'm not entirely sure the directory creation is needed, but I expect this is a temporary situation that will get resolved soon, given that one still needs to fudge the Python 3.6 compatibility check.
It may be installable using Python 2.7 as well, but that felt like going backwards at this point, since my rationale for 17.10 was Python 3.6.
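(A side note on the sed hack above: it rewrites whatever Python version bound the setup script hard-codes, so it's worth checking what the substitution will actually touch before patching; the grep below is a sanity check, and the matched line will differ between versions of the script:)

# show which line(s) the compatibility patch rewrites, then apply it
grep -n '3,6' awslogs-agent-setup.py
sed -i 's/3,6/3,7/' awslogs-agent-setup.py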
Credit for the yaml package and dir creation idea to https://forums.aws.amazon.com/thread.jspa?threadID=265977 but I prefer to avoid easy_install.
I had a similar issue on Ubuntu 18.04.
The instructions from AWS for a standalone install worked in my case.
To download and run it standalone, use the following commands and follow the prompts:
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/AgentDependencies.tar.gz -O
tar xvf AgentDependencies.tar.gz -C /tmp/
sudo python ./awslogs-agent-setup.py --region us-east-1 --dependency-path /tmp/AgentDependencies
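Once the installer finishes, it's worth confirming the agent actually came up; assuming a default install, the setup script registers an awslogs service that logs to /var/log/awslogs.log:

sudo service awslogs status
sudo tail -f /var/log/awslogs.log   # watch for delivery errors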

Capistrano deploy aborts at bundler:install

Following this tutorial from GoRails, I'm getting this error when I try to deploy to Ubuntu 16.04 on DigitalOcean.
$ cap production deploy --trace
Trace Here
** DEPLOY FAILED
** Refer to log/capistrano.log for details. Here are the last 20 lines:
DEBUG [9a2c15d9] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/system ]
DEBUG [9a2c15d9] Finished in 0.181 seconds with exit status 1 (failed).
INFO [86a233a2] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/releases/20160829222734/public/system as deployer@138.68.8.2…
DEBUG [86a233a2] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/system /home/deployer/RMG_rodeobest/…
INFO [86a233a2] Finished in 0.166 seconds with exit status 0 (successful).
DEBUG [07f5e5a2] Running [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [07f5e5a2] Command: [ -L /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [07f5e5a2] Finished in 0.166 seconds with exit status 1 (failed).
DEBUG [5e61eaf3] Running [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ] as deployer@138.68.8.255
DEBUG [5e61eaf3] Command: [ -d /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets ]
DEBUG [5e61eaf3] Finished in 0.168 seconds with exit status 1 (failed).
INFO [52076052] Running /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/releases/20160829222734/public/assets as deployer@138.68.8.2…
DEBUG [52076052] Command: ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; /usr/bin/env ln -s /home/deployer/RMG_rodeobest/shared/public/assets /home/deployer/RMG_rodeobest/…
INFO [52076052] Finished in 0.167 seconds with exit status 0 (successful).
DEBUG [2a6bf02b] Running if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'"…
DEBUG [2a6bf02b] Command: if test ! -d /home/deployer/RMG_rodeobest/releases/20160829222734; then echo "Directory does not exist '/home/deployer/RMG_rodeobest/releases/20160829222734'…
DEBUG [2a6bf02b] Finished in 0.164 seconds with exit status 0 (successful).
INFO [f4b636e3] Running $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test --deployment --quiet as deployer@138.6…
DEBUG [f4b636e3] Command: cd /home/deployer/RMG_rodeobest/releases/20160829222734 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.3.1" ; $HOME/.rbenv/bin/rbenv exec bundle inst…
DEBUG [f4b636e3] bash: line 1: 3509 Killed $HOME/.rbenv/bin/rbenv exec bundle install --path /home/deployer/RMG_rodeobest/shared/bundle --without development test -…
My Capfile:
# Load DSL and set up stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
# Includes tasks from other gems included in your Gemfile
# If you are using rbenv add these lines:
require 'capistrano/rbenv'
set :rbenv_type, :user # or :system, depends on your rbenv setup
set :rbenv_ruby, '2.3.1'
require 'capistrano/bundler'
require 'capistrano/rails'
# require 'capistrano/passenger'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
I'm stuck and don't know why cap is aborting.
Any ideas?
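One hint from the log above: "bash: line 1: 3509 Killed" usually means the kernel's OOM killer terminated bundle install, which is common on small droplets. A diagnostic sketch, assuming SSH access to the server (a guess at the cause, not a confirmed fix):

# check whether the OOM killer fired around the time of the deploy
dmesg | grep -iE 'killed process|out of memory'
# a common workaround on low-memory machines is a temporary swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile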

Permission denied # rb_sysopen - log/application.log (Errno::EACCES)

Hi, I am using Docker to deploy my Rails app with the phusion/passenger image. Here is my Dockerfile:
FROM phusion/passenger-ruby22:0.9.19
# set correct environment variables
ENV HOME /root
ENV RAILS_ENV production
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# Expose Nginx HTTP service
EXPOSE 80
# Start Nginx / Passenger
RUN rm -f /etc/service/nginx/down
# Remove the default site
RUN rm /etc/nginx/sites-enabled/default
# Add the nginx site and config
ADD nginx.conf /etc/nginx/sites-enabled/nginx.conf
ADD rails-env.conf /etc/nginx/main.d/rails-env.conf
# Make sure these packages are already installed;
# otherwise install them anyway
RUN apt-get update && apt-get install -y build-essential \
nodejs \
libpq-dev
# bundle gem and cache them
WORKDIR /tmp
ADD Gemfile /tmp/
ADD Gemfile.lock /tmp/
RUN gem install bundler
RUN bundle install
# Add rails app
ADD . /home/app/webapp
WORKDIR /home/app/webapp
RUN touch log/delayed_job.log log/production.log log/
RUN chown -R app:app /home/app/webapp
RUN RAILS_ENV=production rake assets:precompile
# Clean up APT and bundler when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
I am getting permission issues for the tmp and log files.
web_1 | [ 2016-07-19 08:45:12.6653 31/7ff812726700 age/Cor/App/Implementation.cpp:304 ]: Could not spawn process for application /home/app/webapp: An error occurred while starting up the preloader.
web_1 | Error ID: 42930e85
web_1 | Error details saved to: /tmp/passenger-error-9DeJ86.html
web_1 | Message from application: Permission denied # rb_sysopen - log/logentries.log (Errno::EACCES)
web_1 | /usr/lib/ruby/2.2.0/logger.rb:628:in `initialize'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:628:in `open'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:628:in `open_logfile'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:584:in `initialize'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:318:in `new'
web_1 | /usr/lib/ruby/2.2.0/logger.rb:318:in `initialize'
web_1 | /var/lib/gems/2.2.0/gems/le-2.7.2/lib/le/host/http.rb:37:in `new'
web_1 | /var/lib/gems/2.2.0/gems/le-2.7.2/lib/le/host/http.rb:37:in `initialize'
I tried chmod -R 665/775/777 log/ and it still didn't fix the problem.
Thanks
Reorder those two lines: run RUN RAILS_ENV=production rake assets:precompile first, and RUN chown -R app:app /home/app/webapp after it (i.e. after your rake task), so that the files generated by the precompile step also end up owned by app. It should look like this:
RUN RAILS_ENV=production rake assets:precompile
RUN chown -R app:app /home/app/webapp

/etc/init.d/celeryd start fails on AWS

Hi, I've been reading a lot about this on these forums, but I have no idea what's going wrong right now. Everything looks OK, but it just doesn't work.
I set up my local configuration like this (/etc/default/celeryd):
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute or relative path to the 'celery' command:
#CELERY_BIN="/usr/local/bin/celery"
CELERY_BIN="/home/ubuntu/.virtualenvs/wlenv/bin/celery"
# Where to chdir at start.
CELERYD_CHDIR="/var/www/DIR_TO_MANAGE.PY_FOLDER"
# Python interpreter from environment.
ENV_PYTHON="/home/ubuntu/.virtualenvs/wlenv/bin/python"
#ENV_PYTHON="/usr/bin/python2.7"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="sec.settings"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit 300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/logs/celery/log/%n.log"
CELERYD_PID_FILE="/logs/celery/run/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="ubuntu"
CELERYD_GROUP="ubuntu"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
When I run /etc/init.d/celeryd start I get this:
celeryd-multi v3.0.9 (Chiastic Slide)
> Starting nodes...
> celery.ip-10-51-179-42: OK
> 300.ip-10-51-179-42: OK
But the workers are not running (/etc/init.d/celeryd status):
Error: No nodes replied within time constraint.
I read that you can run it like this (sh -x /etc/init.d/celeryd start) to find the error; most of the time it's a file permissions problem, but I don't see anything wrong:
+ DEFAULT_PID_FILE=/logs/celery/run/celeryd#%n.pid
+ DEFAULT_LOG_FILE=/logs/celery/log/celeryd#%n.log
+ DEFAULT_LOG_LEVEL=INFO
+ DEFAULT_NODES=celery
+ DEFAULT_CELERYD=-m celery.bin.celeryd_detach
+ CELERY_DEFAULTS=/etc/default/celeryd
+ test -f /etc/default/celeryd
+ . /etc/default/celeryd
+ CELERY_BIN=/home/ubuntu/.virtualenvs/wlenv/bin/celery
+ CELERYD_CHDIR=/var/www/DIR_TO_MANAGE.PY_FOLDER
+ ENV_PYTHON=/home/ubuntu/.virtualenvs/wlenv/bin/python
+ export DJANGO_SETTINGS_MODULE=sec.settings
+ CELERYD_MULTI=/var/www/DIR_TO_MANAGE.PY_FOLDER/manage.py celeryd_multi
+ CELERYD_OPTS=--time-limit 300 --concurrency=8
+ CELERY_CONFIG_MODULE=celeryconfig
+ CELERYD_LOG_FILE=/logs/celery/log/%n.log
+ CELERYD_PID_FILE=/logs/celery/run/%n.pid
+ CELERYD_USER=ubuntu
+ CELERYD_GROUP=ubuntu
+ CELERY_CREATE_DIRS=1
+ [ -f /etc/default/celeryd ]
+ . /etc/default/celeryd
+ CELERY_BIN=/home/ubuntu/.virtualenvs/wlenv/bin/celery
+ CELERYD_CHDIR=/var/www/DIR_TO_MANAGE.PY_FOLDER
+ ENV_PYTHON=/home/ubuntu/.virtualenvs/wlenv/bin/python
+ export DJANGO_SETTINGS_MODULE=sec.settings
+ CELERYD_MULTI=/var/www/DIR_TO_MANAGE.PY_FOLDER/manage.py celeryd_multi
+ CELERYD_OPTS=--time-limit 300 --concurrency=8
+ CELERY_CONFIG_MODULE=celeryconfig
+ CELERYD_LOG_FILE=/logs/celery/log/%n.log
+ CELERYD_PID_FILE=/logs/celery/run/%n.pid
+ CELERYD_USER=ubuntu
+ CELERYD_GROUP=ubuntu
+ CELERY_CREATE_DIRS=1
+ CELERYD_PID_FILE=/logs/celery/run/%n.pid
+ CELERYD_LOG_FILE=/logs/celery/log/%n.log
+ CELERYD_LOG_LEVEL=INFO
+ CELERYD_MULTI=/var/www/DIR_TO_MANAGE.PY_FOLDER/manage.py celeryd_multi
+ CELERYD=-m celery.bin.celeryd_detach
+ CELERYCTL=celeryctl
+ CELERYD_NODES=celery
+ export CELERY_LOADER
+ [ -n ]
+ dirname /logs/celery/log/%n.log
+ CELERYD_LOG_DIR=/logs/celery/log
+ dirname /logs/celery/run/%n.pid
+ CELERYD_PID_DIR=/logs/celery/run
+ [ ! -d /logs/celery/log ]
+ [ ! -d /logs/celery/run ]
+ [ -n ubuntu ]
+ DAEMON_OPTS= --uid=ubuntu
+ chown ubuntu /logs/celery/log /logs/celery/run
+ [ -n ubuntu ]
+ DAEMON_OPTS= --uid=ubuntu --gid=ubuntu
+ chgrp ubuntu /logs/celery/log /logs/celery/run
+ [ -n /var/www/DIR_TO_MANAGE.PY_FOLDER/contracts ]
+ DAEMON_OPTS= --uid=ubuntu --gid=ubuntu --workdir="/var/www/DIR_TO_MANAGE.PY_FOLDER/contracts"
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/sbin:/sbin
+ check_dev_null
+ [ ! -c /dev/null ]
+ check_paths
+ dirname /logs/celery/run/%n.pid
+ ensure_dir /logs/celery/run
+ [ -d /logs/celery/run ]
+ mkdir -p /logs/celery/run
+ chown ubuntu:ubuntu /logs/celery/run
+ chmod 02755 /logs/celery/run
+ dirname /logs/celery/log/%n.log
+ ensure_dir /logs/celery/log
+ [ -d /logs/celery/log ]
+ mkdir -p /logs/celery/log
+ chown ubuntu:ubuntu /logs/celery/log
+ chmod 02755 /logs/celery/log
+ start_workers
+ /var/www/DIR_TO_MANAGE.PY_FOLDER/manage.py celeryd_multi start celery --uid=ubuntu --gid=ubuntu --workdir="/var/www/DIR_TO_MANAGE.PY_FOLDER" --pidfile=/logs/celery/run/%n.pid --logfile=/logs/celery/log/%n.log --loglevel=INFO --cmd=-m celery.bin.celeryd_detach --time-limit 300 --concurrency=8
celeryd-multi v3.0.9 (Chiastic Slide)
> Starting nodes...
> celery.ip-10-51-179-42: OK
> 300.ip-10-51-179-42: OK
+ exit 0
Any ideas?
Which version of Celery are you using?
When you debugged, did you run it as "C_FAKEFORK=1 sh -x /etc/init.d/celeryd start" (i.e. with C_FAKEFORK=1 set)?
If you are using version 3.x+, you don't need to go through "manage.py celery" (django-celery); instead, use the "celery" command that ships with Celery itself, as sketched below.
Take a look at this part of the documentation.
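For reference, a sketch of the relevant /etc/default/celeryd lines under that assumption (generic celeryd init script, and a 3.x release where the celery umbrella command provides the multi subcommand; paths copied from the question). Note also that your startup output lists a node literally named "300", which suggests "--time-limit 300" is being split and parsed as a node name; writing "--time-limit=300" avoids that:

CELERY_BIN="/home/ubuntu/.virtualenvs/wlenv/bin/celery"
# use the celery binary's multi subcommand instead of manage.py celeryd_multi
CELERYD_MULTI="$CELERY_BIN multi"
CELERYD_CHDIR="/var/www/DIR_TO_MANAGE.PY_FOLDER"
CELERYD_OPTS="--time-limit=300 --concurrency=8"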
Thanks!

Command cron_01_set_leader output: bash: /usr/local/bin/bundle: No such file or directory

After installing and configuring the whenever-elasticbeanstalk gem, I'm seeing the following error in /var/log/cfn-init.log on my EC2 instance after running git aws.push from my local repo.
I am using AWS Elastic Beanstalk with Rails 4.
2014-10-21 08:08:37,602 [DEBUG] Running test for command cron_01_set_leader
2014-10-21 08:08:37,744 [DEBUG] Test command output:
2014-10-21 08:08:37,745 [DEBUG] Test for command cron_01_set_leader passed
2014-10-21 08:08:38,085 [ERROR] Command cron_01_set_leader (su -c "/usr/local/bin/bundle exec create_cron_leader --no-update" $EB_CONFIG_APP_USER) failed
2014-10-21 08:08:38,086 [DEBUG] Command cron_01_set_leader output: bash: /usr/local/bin/bundle: No such file or directory
Traceback (most recent call last):
I have added the whenever-elasticbeanstalk gem.
Below is the content of my cron.config file.
Any idea what I am doing wrong?
files:
  # Reload the cron jobs on deployment
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh":
    mode: "00700"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      . /opt/elasticbeanstalk/containerfiles/envvars
      cd $EB_CONFIG_APP_CURRENT
      su -c "/usr/local/bin/bundle exec setup_cron" $EB_CONFIG_APP_USER
  # Add Bundle to the PATH
  "/etc/profile.d/bundle.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      export PATH=$PATH:/usr/local/bin
    encoding: plain
container_commands:
  cron_01_set_leader:
    test: test ! -f /opt/elasticbeanstalk/containerfiles/.cron-setup-complete
    leader_only: true
    cwd: /var/app/ondeck
    command: su -c "/usr/local/bin/bundle exec create_cron_leader --no-update" $EB_CONFIG_APP_USER
  cron_02_write_cron_setup_complete_file:
    cwd: /opt/elasticbeanstalk/containerfiles
    command: touch .cron-setup-complete
Which solution stack are you using? Can you give the exact name, something like "64bit Amazon Linux 2014.03 v1.0.9 running Ruby 2.1 (Puma)"?
I think you will need to replace "/usr/local/bin/bundle" with the path to the bundle binary that the solution stack actually uses.
Can you just try using "bundle" instead of "/usr/local/bin/bundle"?
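If you want to find the real path first, a quick check from an SSH session on the instance (sourcing envvars mirrors the hook script above, and EB_CONFIG_APP_USER comes from that file):

# locate bundle as the app user, with the same environment the hooks use
. /opt/elasticbeanstalk/containerfiles/envvars
su -c "which bundle" $EB_CONFIG_APP_USER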