Elastic Beanstalk Config fails when trying to install GhostScript 9.10 - amazon-web-services

I'm trying to install GhostScript 9.10 on Elastic Beanstalk because currently only Ghostscript 8.70 is available via yum packages.
The installation works when done via SSH on the EC2 instance, but the configuration file always fails and I don't understand why.
Here is my .ebextensions configuration file:
commands:
  01_admin_rights:
    command: "sudo su"
  02_get_gs:
    command: "curl -O http://downloads.ghostscript.com/public/old-gs-releases/ghostscript-9.10.tar.gz"
  03_extract_gs:
    command: "tar -xzf ghostscript-9.10.tar.gz"
  04_cd_gs:
    command: "cd ghostscript-9.10"
  05_configure_gs:
    command: "bash configure"
  06_install_gs:
    command: "make install"
  07_so_gs:
    command: "make so"
  08_reboot:
    command: "reboot"
And here is the relevant part of the Elastic Beanstalk error log:
[2016-06-21T12:22:52.720Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 01_admin_rights] : Starting activity...
[2016-06-21T12:22:52.757Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 01_admin_rights] : Completed activity.
[2016-06-21T12:22:52.757Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 02_get_gs] : Starting activity...
[2016-06-21T12:22:53.524Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 02_get_gs] : Completed activity. Result:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 33.6M 100 33.6M 0 0 49.2M 0 --:--:-- --:--:-- --:--:-- 49.2M
[2016-06-21T12:22:53.524Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 03_extract_gs] : Starting activity...
[2016-06-21T12:22:55.066Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 03_extract_gs] : Completed activity.
[2016-06-21T12:22:55.066Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 04_cd_gs] : Starting activity...
[2016-06-21T12:22:55.069Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 04_cd_gs] : Completed activity.
[2016-06-21T12:22:55.070Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 05_configure_gs] : Starting activity...
[2016-06-21T12:22:55.073Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 05_configure_gs] : Activity execution failed, because: bash: configure: No such file or directory
(ElasticBeanstalk::ExternalInvocationError)
[2016-06-21T12:22:55.073Z] INFO [24703] - [Application update Come on #15#25/AppDeployStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_2__Staging/Command 05_configure_gs] : Activity failed.
I understand that command 05 is failing because the file doesn't exist. However, when I run the steps manually via SSH, the file exists and all the commands can be executed in this order.
What am I missing?
EDIT:
I played around with the configure argument and tried:
/bin/bash: ./configure
./configure
bash ./configure
All variants fail with the same "No such file or directory" error.
If I connect via SSH and run one of the configure commands, it works without any issues.
Does anybody know what's going on here?

I think this is because separate commands don't keep the environment or the current directory consistent between them. One of your commands is cd, but the changed directory doesn't persist to the next command. Try combining all the commands into one, like this:
command: |
    cd directory
    ./configure
Also, ebextensions run as root, so you may not need the sudo su in the beginning.
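For example, a single prebuild command that chains your steps in one shell might look like the sketch below (the /tmp working directory set via cwd is an assumption; adjust it to wherever you want to build):
commands:
  01_install_gs:
    cwd: /tmp
    command: |
      curl -O http://downloads.ghostscript.com/public/old-gs-releases/ghostscript-9.10.tar.gz
      tar -xzf ghostscript-9.10.tar.gz
      cd ghostscript-9.10
      ./configure
      make install
      make so
Because everything runs in one shell, the cd now applies to the configure and make steps; the sudo su and reboot commands are left out, since the commands already run as root.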

Related

AWS Elasticbeanstalk deployment suddenly failing

I have a Play application written in Scala that I deploy using Elastic Beanstalk. Up until now this has worked fine, but a few days ago new deployments started failing. The error message I get in eb-activity.log is:
[2020-11-25T20:54:29.150Z] INFO [3127] - [Application deployment givinga-1.8.1-20201125b#152/AddonsBefore] : Starting activity...
[2020-11-25T20:54:29.150Z] INFO [3127] - [Application deployment givinga-1.8.1-20201125b#152/AddonsBefore/ConfigCWLAgent] : Starting activity...
[2020-11-25T20:54:29.150Z] INFO [3127] - [Application deployment givinga-1.8.1-20201125b#152/AddonsBefore/ConfigCWLAgent/10-config.sh] : Starting activity...
[2020-11-25T20:54:58.963Z] INFO [3127] - [Application deployment givinga-1.8.1-20201125b#152/AddonsBefore/ConfigCWLAgent/10-config.sh] : Activity execution failed, because: (ElasticBeanstalk::ExternalInvocationError)
caused by: (Executor::NonZeroExitStatus)
[2020-11-25T20:54:58.964Z] INFO [3127] - [Application deployment givinga-1.8.1-20201125b#152/AddonsBefore/ConfigCWLAgent/10-config.sh] : Activity failed.
[2020-11-25T20:54:58.964Z] INFO [3127] - [Application deployment givinga-1.8.1-20201125b#152/AddonsBefore/ConfigCWLAgent] : Activity failed.
Deploying to other test environments works; here are the relevant log lines when it works:
[2020-11-25T23:19:51.549Z] INFO [3058] - [Application deployment givinga-1.8.1-20201126a#482/AddonsBefore/ConfigCWLAgent] : Starting activity...
[2020-11-25T23:19:51.549Z] INFO [3058] - [Application deployment givinga-1.8.1-20201126a#482/AddonsBefore/ConfigCWLAgent/10-config.sh] : Starting activity...
[2020-11-25T23:19:53.910Z] INFO [3058] - [Application deployment givinga-1.8.1-20201126a#482/AddonsBefore/ConfigCWLAgent/10-config.sh] : Completed activity. Result:
Starting awslogs: [ OK ]
Enabled log streaming.
[2020-11-25T23:19:53.910Z] INFO [3058] - [Application deployment givinga-1.8.1-20201126a#482/AddonsBefore/ConfigCWLAgent] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logstreaming/hooks/config.
So my question is, what is the log streaming doing here? What could cause it to fail? There doesn't seem to be a way for me to delete this addon, or even to configure it.
Which AWS region are you using for your EB environments?
If that deployment worked yesterday and you didn't make any changes, it is probably because us-east-1 had a failure today.
https://status.aws.amazon.com/
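For reference, the ConfigCWLAgent step belongs to Elastic Beanstalk's CloudWatch log-streaming addon (the /opt/elasticbeanstalk/addons/logstreaming hooks in your log), which is controlled by the environment's log-streaming settings rather than by anything in your application. A minimal sketch of how those settings are usually toggled from .ebextensions, assuming the standard aws:elasticbeanstalk:cloudwatch:logs namespace for your platform:
option_settings:
  aws:elasticbeanstalk:cloudwatch:logs:
    StreamLogs: false          # disable instance log streaming to CloudWatch
    DeleteOnTerminate: false
    RetentionInDays: 7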

Activate a Conda Environment During Ray Setup

I'm trying to start a local Ray cluster but the initialization and setup commands are raising errors and I'm not sure what they mean.
For each command, the following message is shown after it is executed (the full logs are shown further down):
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
They don't appear to stop commands from executing successfully, but I'm unable to activate a conda environment on each node using:
# List of shell commands to run to set up each nodes.
setup_commands:
    - conda activate pytorch-dev
Any help or explanation would be greatly appreciated.
My cluster configuration file (cluster_config_local.yaml) contains:
# An unique identifier for the head node and workers of this cluster.
cluster_name: default
## NOTE: Typically for local clusters, min_workers == initial_workers == max_workers.
# The minimum number of workers nodes to launch in addition to the head
# node. This number should be >= 0.
# Typically, min_workers == initial_workers == max_workers.
min_workers: 12
# The initial number of worker nodes to launch in addition to the head node.
# Typically, min_workers == initial_workers == max_workers.
initial_workers: 12
# The maximum number of workers nodes to launch in addition to the head node.
# This takes precedence over min_workers.
# Typically, min_workers == initial_workers == max_workers.
max_workers: 12
# Autoscaling parameters.
# Ignore this if min_workers == initial_workers == max_workers.
autoscaling_mode: default
target_utilization_fraction: 0.8
idle_timeout_minutes: 5
# This executes all commands on all nodes in the docker container,
# and opens all the necessary ports to support the Ray cluster.
# Empty string means disabled. Assumes Docker is installed.
docker:
image: "" # e.g., tensorflow/tensorflow:1.5.0-py3
container_name: "" # e.g. ray_docker
run_options: [] # Extra options to pass into "docker run"
# Local specific configuration.
provider:
    type: local
    head_ip: cs19090bs  # Lab 3, machine 311
    worker_ips: [
        cs19091bs, cs19093bs, cs19094bs, cs19095bs, cs19096bs,
        cs19103bs, cs19102bs, cs19101bs, cs19100bs, cs19099bs, cs19098bs, cs19097bs
    ]
# How Ray will authenticate with newly launched nodes.
auth:
    ssh_user: user
    ssh_private_key: ~/.ssh/id_rsa
# Leave this empty.
head_node: {}
# Leave this empty.
worker_nodes: {}
# Files or directories to copy to the head and worker nodes. The format is a
# dictionary from REMOTE_PATH: LOCAL_PATH, e.g.
file_mounts: {
# "/path1/on/remote/machine": "/path1/on/local/machine",
# "/path2/on/remote/machine": "/path2/on/local/machine",
}
# List of commands that will be run before `setup_commands`. If docker is
# enabled, these commands will run outside the container and before docker
# is setup.
initialization_commands: []
# List of shell commands to run to set up each nodes.
setup_commands:
    - conda activate pytorch-dev
# Custom commands that will be run on the head node after common setup.
head_setup_commands: []
# Custom commands that will be run on worker nodes after common setup.
worker_setup_commands: []
# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands:
    - ray stop
    - ulimit -c unlimited && ray start --head --redis-port=6379 --autoscaling-config=~/ray_bootstrap_config.yaml
# Command to start ray on worker nodes. You don't need to change this.
worker_start_ray_commands:
    - ray stop
    - ray start --redis-address=$RAY_HEAD_IP:6379
The full logs that are shown when I execute ray up cluster_config_local.yaml are:
2019-11-11 10:18:06,930 INFO node_provider.py:41 -- ClusterState: Loaded cluster state: ['cs19091bs', 'cs19093bs', 'cs19094bs', 'cs19095bs', 'cs19096bs', 'cs19090bs', 'cs19103bs', 'cs19102bs', 'cs19101bs', 'cs19100bs', 'cs19099bs', 'cs19098bs', 'cs19097bs']
This will create a new cluster [y/N]: y
2019-11-11 10:18:08,413 INFO commands.py:201 -- get_or_create_head_node: Launching new head node...
2019-11-11 10:18:08,414 INFO node_provider.py:85 -- ClusterState: Writing cluster state: ['cs19091bs', 'cs19093bs', 'cs19094bs', 'cs19095bs', 'cs19096bs', 'cs19090bs', 'cs19103bs', 'cs19102bs', 'cs19101bs', 'cs19100bs', 'cs19099bs', 'cs19098bs', 'cs19097bs']
2019-11-11 10:18:08,416 INFO commands.py:214 -- get_or_create_head_node: Updating files on head node...
2019-11-11 10:18:08,417 INFO updater.py:356 -- NodeUpdater: cs19090bs: Updating to 345f31e4c980153f1c40ae2c0be26b703d4bbfde
2019-11-11 10:18:08,419 INFO node_provider.py:85 -- ClusterState: Writing cluster state: ['cs19091bs', 'cs19093bs', 'cs19094bs', 'cs19095bs', 'cs19096bs', 'cs19090bs', 'cs19103bs', 'cs19102bs', 'cs19101bs', 'cs19100bs', 'cs19099bs', 'cs19098bs', 'cs19097bs']
2019-11-11 10:18:08,419 INFO updater.py:398 -- NodeUpdater: cs19090bs: Waiting for remote shell...
2019-11-11 10:18:08,420 INFO updater.py:210 -- NodeUpdater: cs19090bs: Waiting for IP...
2019-11-11 10:18:08,429 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Got IP [LogTimer=9ms]
2019-11-11 10:18:08,442 INFO updater.py:262 -- NodeUpdater: cs19090bs: Running uptime on 132.181.15.173...
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
10:18:10 up 4 days, 22:41, 1 user, load average: 1.14, 0.56, 0.38
2019-11-11 10:18:10,178 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Got remote shell [LogTimer=1759ms]
2019-11-11 10:18:10,181 INFO node_provider.py:85 -- ClusterState: Writing cluster state: ['cs19091bs', 'cs19093bs', 'cs19094bs', 'cs19095bs', 'cs19096bs', 'cs19090bs', 'cs19103bs', 'cs19102bs', 'cs19101bs', 'cs19100bs', 'cs19099bs', 'cs19098bs', 'cs19097bs']
2019-11-11 10:18:10,182 INFO updater.py:262 -- NodeUpdater: cs19090bs: Running mkdir -p ~ on 132.181.15.173...
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2019-11-11 10:18:11,640 INFO updater.py:460 -- NodeUpdater: cs19090bs: Syncing /tmp/ray-bootstrap-aomvoo_d to ~/ray_bootstrap_config.yaml...
sending incremental file list
ray-bootstrap-aomvoo_d
sent 120 bytes received 47 bytes 111.33 bytes/sec
total size is 1,063 speedup is 6.37
2019-11-11 10:18:12,147 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Synced /tmp/ray-bootstrap-aomvoo_d to ~/ray_bootstrap_config.yaml [LogTimer=1964ms]
2019-11-11 10:18:12,147 INFO updater.py:262 -- NodeUpdater: cs19090bs: Running mkdir -p ~ on 132.181.15.173...
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2019-11-11 10:18:13,610 INFO updater.py:460 -- NodeUpdater: cs19090bs: Syncing /home/cosc/student/atu31/.ssh/id_rsa to ~/ray_bootstrap_key.pem...
sending incremental file list
sent 60 bytes received 12 bytes 48.00 bytes/sec
total size is 3,243 speedup is 45.04
2019-11-11 10:18:14,131 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Synced /home/cosc/student/atu31/.ssh/id_rsa to ~/ray_bootstrap_key.pem [LogTimer=1984ms]
2019-11-11 10:18:14,133 INFO node_provider.py:85 -- ClusterState: Writing cluster state: ['cs19091bs', 'cs19093bs', 'cs19094bs', 'cs19095bs', 'cs19096bs', 'cs19090bs', 'cs19103bs', 'cs19102bs', 'cs19101bs', 'cs19100bs', 'cs19099bs', 'cs19098bs', 'cs19097bs']
2019-11-11 10:18:14,134 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Initialization commands completed [LogTimer=0ms]
2019-11-11 10:18:14,134 INFO updater.py:262 -- NodeUpdater: cs19090bs: Running conda activate pytorch-dev on 132.181.15.173...
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2019-11-11 10:18:15,740 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Setup commands completed [LogTimer=1605ms]
2019-11-11 10:18:15,740 INFO updater.py:262 -- NodeUpdater: cs19090bs: Running ray stop on 132.181.15.173...
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2019-11-11 10:18:17,809 INFO updater.py:262 -- NodeUpdater: cs19090bs: Running ulimit -c unlimited && ray start --head --redis-port=6379 --autoscaling-config=~/ray_bootstrap_config.yaml on 132.181.15.173...
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
2019-11-11 10:18:19,923 INFO scripts.py:303 -- Using IP address 132.181.15.173 for this node.
2019-11-11 10:18:19,924 INFO resource_spec.py:205 -- Starting Ray with 7.62 GiB memory available for workers and up to 3.81 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2019-11-11 10:18:20,169 INFO scripts.py:333 --
Started Ray on this node. You can add additional nodes to the cluster by calling
ray start --redis-address 132.181.15.173:6379
from the node you wish to add. You can connect a driver to the cluster from Python by running
import ray
ray.init(redis_address="132.181.15.173:6379")
If you have trouble connecting from a different machine, check that your firewall is configured properly. If you wish to terminate the processes that have been started, run
ray stop
2019-11-11 10:18:20,221 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Ray start commands completed [LogTimer=4480ms]
2019-11-11 10:18:20,222 INFO log_timer.py:21 -- NodeUpdater: cs19090bs: Applied config 345f31e4c980153f1c40ae2c0be26b703d4bbfde [LogTimer=11804ms]
2019-11-11 10:18:20,224 INFO node_provider.py:85 -- ClusterState: Writing cluster state: ['cs19091bs', 'cs19093bs', 'cs19094bs', 'cs19095bs', 'cs19096bs', 'cs19090bs', 'cs19103bs', 'cs19102bs', 'cs19101bs', 'cs19100bs', 'cs19099bs', 'cs19098bs', 'cs19097bs']
2019-11-11 10:18:20,226 INFO commands.py:281 -- get_or_create_head_node: Head node up-to-date, IP address is: 132.181.15.173
To monitor auto-scaling activity, you can run:
ray exec cluster/cluster_config_local.yaml 'tail -n 100 -f /tmp/ray/session_*/logs/monitor*'
To open a console on the cluster:
ray attach cluster_config_local.yaml
To get a remote shell to the cluster manually, run:
ssh -i ~/.ssh/id_rsa user@132.181.15.173
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
This error message is harmless (and should be muted in Ray). See "How to tell bash not to issue warnings 'cannot set terminal process group' and 'no job control in this shell' when it can't assert job control?".
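As for the conda environment itself: setup_commands run in a non-interactive shell, where conda activate is typically not available unless conda's shell hook has been sourced first. A minimal sketch of a setup command that sources the hook before activating (the ~/miniconda3 path is an assumption; point it at wherever conda is installed on the nodes):
setup_commands:
    # Load conda's shell functions, then activate the environment.
    # Assumes conda lives under ~/miniconda3 on every node.
    - source ~/miniconda3/etc/profile.d/conda.sh && conda activate pytorch-dev
Note that each setup command runs in its own shell, so the activation does not carry over to the later ray start commands; a common workaround is to append the activation line to ~/.bashrc on the nodes or to prefix the ray commands with it.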

Celery workers failing in aws elastic beanstalk [exited: celeryd-worker (exit status 1; not expected)]

I've been trying to follow this thorough explanation on how to deploy a Django app with a Celery worker to AWS Elastic Beanstalk:
How to run a celery worker with Django app scalable by AWS Elastic Beanstalk?
I had some problems installing pycurl but solved them with the comment in:
Pip Requirements.txt --global-option causing installation errors with other packages. "option not recognized"
Then I got:
[2019-01-26T06:43:04.865Z] INFO [12249] - [Application update app-190126_134200#28/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_raiseflags/Command 05_celery_tasks_run] : Activity execution failed, because: /usr/bin/env: bash
: No such file or directory
(ElasticBeanstalk::ExternalInvocationError)
But I also solved that: it turns out I had to convert the "celery_configuration.txt" file to Unix line endings (I'm using Windows, and Notepad++ had automatically converted it to Windows line endings).
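(For reference, the same conversion can be done from a shell with the dos2unix utility, assuming it is installed:)
dos2unix celery_configuration.txt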
With all these modifications I can successfully deploy the project. But the problem is that the periodic tasks are not running.
I get:
2019-01-26 09:12:57,337 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:12:58,583 INFO spawned: 'celeryd-worker' with pid 25691
2019-01-26 09:12:59,453 INFO spawned: 'celeryd-beat' with pid 25695
2019-01-26 09:12:59,666 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:00,790 INFO spawned: 'celeryd-worker' with pid 25705
2019-01-26 09:13:00,791 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:13:01,915 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:03,919 INFO spawned: 'celeryd-worker' with pid 25728
2019-01-26 09:13:03,920 INFO spawned: 'celeryd-beat' with pid 25729
2019-01-26 09:13:05,985 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:06,091 INFO exited: celeryd-beat (exit status 1; not expected)
2019-01-26 09:13:07,092 INFO gave up: celeryd-beat entered FATAL state, too many start retries too quickly
2019-01-26 09:13:09,096 INFO spawned: 'celeryd-worker' with pid 25737
2019-01-26 09:13:10,084 INFO exited: celeryd-worker (exit status 1; not expected)
2019-01-26 09:13:11,085 INFO gave up: celeryd-worker entered FATAL state, too many start retries too quickly
I also have this part of the logs:
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1/AppDeployPostHook/run_supervised_celeryd.sh] : Completed activity. Result:
[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A raiseflags --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A raiseflags --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid
directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="puigdemontAWS",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="holahola",RDS_HOSTNAME="aa1m59206y4fljn.cdreg3t50bbl.eu-west-1.rds.amazonaws.com"
No config updates to processes
celeryd-beat: ERROR (not running)
celeryd-beat: ERROR (abnormal termination)
celeryd-worker: ERROR (not running)
celeryd-worker: ERROR (abnormal termination)
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1/AppDeployPostHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/post.
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AppDeployStage1] : Completed activity. Result:
Application version switch - Command CMD-AppDeploy stage 1 completed
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter] : Starting activity...
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation] : Starting activity...
[2019-01-26T09:13:00.583Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation/10-config.sh] : Starting activity...
[2019-01-26T09:13:00.756Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation/10-config.sh] : Completed activity. Result:
Disabled forced hourly log rotation.
[2019-01-26T09:13:00.756Z] INFO [25247] - [Application update app-190126_161213#43/AddonsAfter/ConfigLogRotation] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/addons/logpublish/hooks/config.
I don't know if it has something to do with the error, but notice the line above containing PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s" -- shouldn't ENV_PATH be something else?:
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
It's my first time deploying an app with Celery, and I'm really lost, to be honest. I fought a lot to solve the first two errors (I'm a real amateur), and now that I get this one I don't even know where to start.
Also, I'm not sure if I'm using "celery_configuration.txt" the right way. The only thing I edited was the two places where it says "django_app", which I changed to "raiseflags" (the name of my Django project). Is this correct?
Does anyone know how to solve it? I can paste my files if needed, but they are just like the ones provided in the first link. I'm using Windows.
Thank you very much!
OK, the problem had nothing to do with the PATH line I was referring to. I just had to add 'django_celery_beat' and 'django_celery_results' to INSTALLED_APPS in my settings.py.
The connection error I later referred to when talking to Fran was because I needed to set BROKER_URL instead of CELERY_BROKER_URL, also in the settings.py file. I guess this had to do with me not specifying 'CELERY' as the namespace in app.autodiscover_tasks() in the celery.py file (although they do it in the linked question, I didn't because I was using a different version of Celery).
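A sketch of what those two settings.py changes look like (the broker URL below is only a placeholder for whatever broker is actually in use):
# settings.py
INSTALLED_APPS += [
    'django_celery_beat',
    'django_celery_results',
]
BROKER_URL = 'your-broker-url-here'  # placeholder; this Celery version reads BROKER_URL, not CELERY_BROKER_URL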
Thanks to Fran for everything, especially for pointing out that I should review the Celery error logs. I didn't know how to do it. If any other amateur is also struggling, know that you have to "eb ssh" into your instance and then "tail -n 40 /var/log/celery-worker.log" and "tail -n 40 /var/log/celery-beat.log" (where "40" is the number of lines you want to read). I know this sounds obvious to a lot of people but, stupid me, I had no clue.
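In other words, something like this (-n 40 is just how many lines you want to see):
eb ssh                                   # open a shell on the EB instance
tail -n 40 /var/log/celery-worker.log    # last lines of the worker log
tail -n 40 /var/log/celery-beat.log      # last lines of the beat log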
(By the way, I'm still struggling with a problem where the celery worker can't find the pycurl module, but that has nothing to do with this question.)
Referring to the line you pointed out, where this appears:
environment=PYTHONPATH="/opt/python/current/app/:",PATH="/opt/python/run/venv/bin/:%%(ENV_PATH)s",RDS_PORT="5432",RDS_DB_NAME="ebdb",RDS_USERNAME="foobar",PYCURL_SSL_LIBRARY="nss",DJANGO_SETTINGS_MODULE="raiseflags.settings",RDS_PASSWORD="foobar",RDS_HOSTNAME="something.something.eu-west-1.rds.amazonaws.com"
did you copy this line from somewhere? Because I don't see it in the link you posted.
In the linked answer it was environment=$celeryenv, where $celeryenv was defined as:
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}

RoR Gemfile issue on Elastic Beanstalk

I've been struggling with AWS Elastic Beanstalk problems.
The other day, I added some gems to my Gemfile and then ran eb deploy.
(Maybe the gem is whenever, or bcrypt? Sorry, not sure.)
The deployment didn't work correctly. The results are below.
ERROR: [Instance: i-452520da] Command failed on instance. Return code: 10 Output: /opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh: line 3: cd: HOME not set
Could not locate Gemfile or .bundle/ directory.
Hook /opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-452520da'. Aborting the operation.
ERROR: Failed to deploy application.
Here's eb-activity.log.
[2016-08-28T01:51:16.844Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployEnactHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/enact.
[2016-08-28T01:51:16.844Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook] : Starting activity...
[2016-08-28T01:51:16.844Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook/01_create_pids.sh] : Starting activity...
[2016-08-28T01:51:17.044Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook/01_create_pids.sh] : Completed activity.
[2016-08-28T01:51:17.044Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook/10_reload_cron.sh] : Starting activity...
[2016-08-28T01:51:17.242Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook/10_reload_cron.sh] : Activity execution failed, because: /opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh: line 3: cd: HOME not set
Could not locate Gemfile or .bundle/ directory (ElasticBeanstalk::ExternalInvocationError)
caused by: /opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh: line 3: cd: HOME not set
Could not locate Gemfile or .bundle/ directory (Executor::NonZeroExitStatus)
[2016-08-28T01:51:17.242Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook/10_reload_cron.sh] : Activity failed.
[2016-08-28T01:51:17.243Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1/AppDeployPostHook] : Activity failed.
[2016-08-28T01:51:17.243Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147/AppDeployStage1] : Activity failed.
[2016-08-28T01:51:17.243Z] INFO [20749] - [Application update app-ed0b6-160828_104745#147] : Completed activity. Result:
Application update - Command CMD-AppDeploy failed
[2016-08-28T01:52:38.630Z] INFO [2486] - [CMD-TailLogs] : Starting activity...
[2016-08-28T01:52:38.630Z] INFO [2486] - [CMD-TailLogs/AddonsBefore] : Starting activity...
How could I eliminate this problem? Thanks in advance.
I was able to work this problem out.
The culprit is "/opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh".
So I deleted this file using .ebextensions/some.sh.
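One way to script that deletion is a commands entry in an .ebextensions .config file, along these lines (the command name is arbitrary, and this sketch assumes the leftover hook is safe to remove on your platform):
commands:
  remove_stale_cron_hook:
    command: "rm -f /opt/elasticbeanstalk/hooks/appdeploy/post/10_reload_cron.sh"
    ignoreErrors: true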

Error Docker deployment in Amazon Elastic Beanstalk - Docker container quit unexpectedly

I am trying to deploy a simple Docker container through Elastic Beanstalk but I am getting a "Docker container quit unexpectedly" error. Not sure what is wrong here. Thanks in advance for the help.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "janedoe/image",
    "Update": "true"
  },
  "Ports": [{
    "ContainerPort": "10010"
  }],
  "Volumes": [{
    "HostDirectory": "/home/ec2-user/testdocker",
    "ContainerDirectory": "/home/ec2-user/testdocker"
  }],
  "Logging": "/home/ec2-user/testlogs"
}
Dockerfile:
FROM centos:centos6
MAINTAINER janedoe
RUN echo "test"
EXPOSE 10010
Log :
[2016-03-22T22:56:35.034Z] INFO [15895] - [Application update/AppDeployStage0/AppDeployPreHook/03build.sh] : Completed activity.
Result:
centos6: Pulling from library/centos
Digest: sha256:ec1bf627545d77d05270b3bbd32a9acca713189c58bc118f21abd17ff2629e3f
Status: Image is up to date for centos:centos6
Successfully pulled centos:centos6
Sending build context to Docker daemon 4.608 kB
Sending build context to Docker daemon 4.608 kB
Step 1 : FROM centos:centos6
---> ed452988fb6e
Step 2 : MAINTAINER janedoe
---> Running in 8bce7dfb7e59
---> 04de6fffed04
Removing intermediate container 8bce7dfb7e59
Step 3 : RUN echo "test"
---> Running in 36cef1d7c0e5
test
---> c5b3d119184c
Removing intermediate container 36cef1d7c0e5
Step 4 : EXPOSE 10010
---> Running in ea07cbcc1136
---> 45f9b3fe6503
Removing intermediate container ea07cbcc1136
Successfully built 45f9b3fe6503
Successfully built aws_beanstalk/staging-app
[2016-03-22T22:56:35.034Z] INFO [15895] - [Application update/AppDeployStage0/AppDeployPreHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/pre.
[2016-03-22T22:56:35.035Z] INFO [15895] - [Application update/AppDeployStage0/EbExtensionPostBuild] : Starting activity...
[2016-03-22T22:56:35.550Z] INFO [15895] - [Application update/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild] : Starting activity...
[2016-03-22T22:56:35.550Z] INFO [15895] - [Application update/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild] : Completed activity.
[2016-03-22T22:56:35.587Z] INFO [15895] - [Application update/AppDeployStage0/EbExtensionPostBuild] : Completed activity.
[2016-03-22T22:56:35.588Z] INFO [15895] - [Application update/AppDeployStage0/InfraCleanEbextension] : Starting activity...
[2016-03-22T22:56:36.107Z] INFO [15895] - [Application update/AppDeployStage0/InfraCleanEbextension] : Completed activity. Result:
Cleaned ebextensions subdirectories from .
[2016-03-22T22:56:36.107Z] INFO [15895] - [Application update/AppDeployStage0] : Completed activity. Result:
Application update - Command CMD-AppDeploy stage 0 completed
[2016-03-22T22:56:36.107Z] INFO [15895] - [Application update/AppDeployStage1] : Starting activity...
[2016-03-22T22:56:36.108Z] INFO [15895] - [Application update/AppDeployStage1/AppDeployEnactHook] : Starting activity...
[2016-03-22T22:56:36.108Z] INFO [15895] - [Application update/AppDeployStage1/AppDeployEnactHook/00run.sh] : Starting activity...
[2016-03-22T22:56:44.157Z] INFO [15895] - [Application update/AppDeployStage1/AppDeployEnactHook/00run.sh] : Activity execution failed, because: 268f1a5e43874771bc6039977e9eb048e704c0b94a5e100a2a9ffbf2d9d7f271
Docker container quit unexpectedly after launch: Docker container quit unexpectedly on Tue Mar 22 22:56:44 UTC 2016:. Check snapshot logs for details. (ElasticBeanstalk::ExternalInvocationError)
caused by: 268f1a5e43874771bc6039977e9eb048e704c0b94a5e100a2a9ffbf2d9d7f271
Docker container quit unexpectedly after launch: Docker container quit unexpectedly on Tue Mar 22 22:56:44 UTC 2016:. Check snapshot logs for details. (Executor::NonZeroExitStatus)
You should use a CMD instead of a RUN in your Dockerfile (see the sketch after the definitions below).
When executing commands in a Dockerfile, you must choose carefully between RUN, CMD and ENTRYPOINT (extracted from the Docker reference):
RUN:
The RUN instruction will execute any commands in a new layer on top of
the current image and commit the results. The resulting committed
image will be used for the next step in the Dockerfile.
CMD:
The main purpose of a CMD is to provide defaults for an executing
container. These defaults can include an executable, or they can omit
the executable, in which case you must specify an ENTRYPOINT
instruction as well.
ENTRYPOINT:
An ENTRYPOINT allows you to configure a container that will run as an
executable.
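As an illustration, here is a sketch of the Dockerfile above with a CMD added so the container has a foreground process to keep it running (the tail command is only a placeholder; the real service would go there):
FROM centos:centos6
MAINTAINER janedoe
EXPOSE 10010
# A container exits as soon as its main process finishes, which is why an
# image with only RUN steps "quits unexpectedly". Give it a long-running
# foreground CMD; replace this placeholder with your actual service.
CMD ["tail", "-f", "/dev/null"]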
You should have a thorough read of the Docker reference and of the Docker best practices.
Apart from that, if you intend to use the volumes you defined in your Dockerrun.aws.json, keep in mind what is stated in the AWS documentation:
Do not specify the Image key in the Dockerrun.aws.json file when using
a Dockerfile. Elastic Beanstalk will always build and use the image
described in the Dockerfile when one is present.
This means that the Image key in your Dockerrun.aws.json will be ignored, so take care.
It is easier to just see the command prompt output if you run it like:
eb create <replace_with_your_env_name/> -vvv