I am trying to install an application through .ebextensions in my elasticbeanstalk stack. I've followed the doc here for advanced environment customization. This is my config:
files:
  "/tmp/software.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      wget https://website.net/software/software-LATEST-1.x86_64.rpm
      software-LATEST-1.x86_64.rpm
      sed -i -e '$a\
      *.* ##127.0.0.1:1514;RSYSLOG_FileFormat' /etc/rsyslog.conf
      /sbin/service rsyslog restart
      /sbin/service software start

container_commands:
  01_run:
    command: "/tmp/software.sh"
When applying the config I receive an error that the command "service" is not found, even though I point to the location of the service command, /sbin/service. I've tried a lot of different things, but I always get this error. Running the script manually on the host works without any issue.
The image the stack is using is Amazon Linux release 2 (Karoo)
The specific error message is:

/tmp/alert_software.sh: line 8: service: command not found
/tmp/alert_software.sh: line 9: service: command not found
container_command 02_run in .ebextensions/alert-software.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI (returncode: 127)
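A likely cause (an assumption, based on the exit code 127): the shell Elastic Beanstalk uses for container_commands on Amazon Linux 2 runs with a trimmed-down PATH that may not include /sbin, so a bare `service` fails even though the same script works in an interactive root shell. The lookup failure can be reproduced by restricting PATH:

```shell
# Sketch: reproduce "command not found" (exit 127) by hiding /sbin from PATH,
# similar to what a deployment hook's environment may do.
env PATH=/usr/bin /bin/sh -c 'service rsyslog status'
echo "exit code: $?"   # 127 when the service binary cannot be found via PATH
```

Using absolute paths such as /sbin/service for every call inside the script, or switching to systemctl on Amazon Linux 2, sidesteps the PATH lookup entirely.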
My co-worker tried to install the software a different way and it worked. This is what worked:
install.config >>

01_install_software:
  command: rpm -qa | grep -qw software || yum -y -q install https://website.net/software/software-LATEST-1.x86_64.rpm
02_update_rsyslog:
  command: sed -i -e '$a*.* ##127.0.0.1:1514;RSYSLOG_FileFormat' -e '/*.* ##127.0.0.1:1514;RSYSLOG_FileFormat/d' /etc/rsyslog.conf
03_restart_rsyslog:
  command: service rsyslog restart

services.config >>

services:
  sysvinit:
    rsyslog:
      enabled: "true"
      ensureRunning: "true"
    software:
      enabled: "true"
      ensureRunning: "true"
Related
The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the filesystem root (/), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
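A plausible mechanism (my assumption about how the credentials path is resolved): dockerd is started with an empty environment, so $HOME is unset, and a path built as "$HOME/.aws/credentials" collapses to exactly "/.aws/credentials". Tilde expansion in a shell, by contrast, falls back to the passwd database, which is why the sudo shell still reports /root. The difference is easy to see:

```shell
# With an empty environment (env -i), HOME is unset, so the expanded
# credentials path collapses to /.aws/credentials:
env -i /bin/sh -c 'echo "[$HOME/.aws/credentials]"'   # prints [/.aws/credentials]
# Tilde expansion does not rely on $HOME alone; it falls back to the passwd
# entry, which is why `sudo sh -c 'cd ~/; echo $PWD'` still shows /root.
```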
Title has most of the question, but more context is below
Tried following the directions found here:
https://cloud.google.com/compute/docs/gpus/monitor-gpus
I modified the code a bit, but haven't been able to get it working. Here's the abbreviated cloud config I've been running that should show the relevant parts:
- path: /etc/scripts/gpumonitor.sh
  permissions: "0644"
  owner: root
  content: |
    #!/bin/bash
    echo "Starting script..."
    sudo mkdir -p /etc/google
    cd /etc/google
    sudo git clone https://github.com/GoogleCloudPlatform/compute-gpu-monitoring.git
    echo "Downloaded Script..."
    echo "Starting up monitoring service..."
    sudo systemctl daemon-reload
    sudo systemctl --no-reload --now enable /etc/google/compute-gpu-monitoring/linux/systemd/google_gpu_monitoring_agent.service
    echo "Finished Script..."
- path: /etc/systemd/system/install-monitoring-gpu.service
  permissions: "0644"
  owner: root
  content: |
    [Unit]
    Description=Install GPU Monitoring
    Requires=install-gpu.service
    After=install-gpu.service

    [Service]
    User=root
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/bin/bash /etc/scripts/gpumonitor.sh
    StandardOutput=journal+console
    StandardError=journal+console

runcmd:
- systemctl start install-monitoring-gpu.service
Edit:
It turned out to be best to build a Docker container with the monitoring script in it and run that container from my config script, passing the GPU into the container as shown in the following link:
https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus
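For reference, a sketch of what passing the GPU into the container can look like on Container-Optimized OS (the /var/lib/nvidia volume paths, device names, and image name are assumptions taken from the linked guide; adjust them to match how the driver was installed on your host):

```shell
# Hypothetical invocation; device and library paths depend on the
# driver installation on the COS host.
docker run --rm \
  --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \
  --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \
  --device /dev/nvidia0:/dev/nvidia0 \
  --device /dev/nvidia-uvm:/dev/nvidia-uvm \
  --device /dev/nvidiactl:/dev/nvidiactl \
  my-gpu-monitoring-image
```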
So I use AWS Elastic Beanstalk to serve my PHP application. I want to mount EFS to have permanent storage for the images uploaded via my application.
I have created a .ebextensions folder with one file called mount.config containing the code below:
packages:
  yum:
    nfs-utils: []
    jq: []

files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /mnt/efs
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME:/ /mnt/efs || true
      mkdir -p /mnt/efs/questions
      chown webapp:webapp /mnt/efs/questions

commands:
  01_mount:
    command: "/tmp/mount-efs.sh"

container_commands:
  01-symlink-uploads:
    command: ln -s /mnt/efs/questions /var/app/ondeck/images/
Everything is working fine until the last line where it fails to create a symlink.
What I have tried so far:
Running the command directly on the machine while changing ondeck -> current. This works fine.
Removing the EC2 instance and adding a new one. Still failing
In the logs I see
ln: failed to create symbolic link '/var/app/current/images/questions': No such file or directory
Any suggestion what could be the reason?
OK, I fixed it by replacing ondeck with staging
and by adding this entry under container_commands:

01-change-permission:
  command: chmod -R 777 /var/app/staging/images
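Putting the fix together, the container_commands section ends up like this (a sketch; on Amazon Linux 2 the app is staged in /var/app/staging during deployment, and a recursive 777 chmod is a broad grant you may want to tighten):

```yaml
container_commands:
  01-change-permission:
    command: chmod -R 777 /var/app/staging/images
  02-symlink-uploads:
    command: ln -s /mnt/efs/questions /var/app/staging/images/
```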
I have a Django web application deployed to AWS Elastic Beanstalk (Python 3.7 running on 64bit Amazon Linux 2/3.1.3). I am trying to run the following config file:
files:
  "/usr/local/bin/cron_tab.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      exec &>> /tmp/cron_tab_log.txt
      date > /tmp/date
      source /var/app/venv/staging-LQM1lest/bin/activate
      cd /var/app/current
      python manage.py crontab add
      exit 0

container_commands:
  cron_tab:
    command: "curl /usr/local/bin/cron_tab.sh | bash"
This file is placed in the .ebextensions folder. All other config files are working properly; however, this one is not. I have also tried to run the container_commands code manually over SSH, and it gives output such as below.
curl: (3) <url> malformed
I also checked the /tmp folder but there is no cron_tab_log.txt. I checked /usr/local/bin the cron_tab.sh is located there.
I just want this Django-crontab run after the deploy and it doesn't work. How can I handle this issue?
curl is for making web requests to URLs, not for executing local scripts. I think you need to change the last line in your config file to:
command: "sudo /usr/local/bin/cron_tab.sh"
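The `(3) <url> malformed` error is consistent with that: curl parses its argument as a URL, and a bare filesystem path has no scheme. A quick way to see the difference (the script path is the one from the question; any executable file would behave the same):

```shell
# curl rejects a plain path because it is not a URL (exit code 3):
curl /usr/local/bin/cron_tab.sh
echo "curl exit: $?"
# Running the file directly uses the execute bit set by mode "000755":
# /usr/local/bin/cron_tab.sh
```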
Hi, I want to use goofys on an AWS Elastic Beanstalk PHP 7.0 environment.
I created .ebextentions/00_install_goofy.config.
(I install golang from a binary because the golang version available via yum is old.)
packages:
  yum:
    fuse: []

commands:
  100_install_golang_01:
    command: wget https\://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
  100_install_golang_02:
    command: tar -C /usr/local -xzf go1.9.linux-amd64.tar.gz
  100_install_golang_03:
    command: export GOROOT=/usr/local/go
    test: [ -z "$GOROOT" ]
  100_install_golang_04:
    command: export GOPATH=/home/ec2-user/go
    test: [ -z "$GOPATH" ]
  100_install_golang_05:
    command: export PATH=$PATH\:$GOROOT/bin\:$GOPATH/bin
  100_install_golang_06:
    command: echo $GOPATH > gopath
But 100_install_golang_03 does not work well...
Test for Command 100_install_golang_03
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_1_yubion_website] : Completed activity.
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild] : Activity execution failed, because: [Errno 2] No such file or directory (ElasticBeanstalk::ExternalInvocationError)
I can't export the environment variables and PATH. Can I set PATH in .ebextensions?
Or is there a better way to install goofys on Elastic Beanstalk automatically?
I finally found that commands defined in .ebextensions run with NO ENVIRONMENT VARIABLES set.
They work in a sandbox-like environment, so the scope of an export is only the single command entry it appears in.
If you want to use PATH in your commands, you have to add the export to every command.
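The behavior is the same as running each command in its own child shell (the variable name here is just for illustration):

```shell
# Each `command:` entry gets a fresh shell, so an export in one entry
# never reaches the next entry:
/bin/sh -c 'export DEMO_GOROOT=/usr/local/go'
/bin/sh -c 'echo "DEMO_GOROOT=[${DEMO_GOROOT:-unset}]"'   # prints DEMO_GOROOT=[unset]
```

The practical workaround is to put the exports and the commands that need them into a single `command:` entry (for example joined with `&&`, or via a script written out in a `files:` section).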
Additionally, if you want to use the PATH after the deployment has finished, see the following link:
How can I add PATH on Elastic Beanstalk