EFS symlink fails while deploying

So I use AWS Elastic Beanstalk to serve my PHP application. I want to mount EFS to have permanent storage for the images uploaded via my application.
I created an .ebextensions folder with a single file called mount.config containing the code below:
packages:
  yum:
    nfs-utils: []
    jq: []
files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /mnt/efs
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME:/ /mnt/efs || true
      mkdir -p /mnt/efs/questions
      chown webapp:webapp /mnt/efs/questions
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
container_commands:
  01-symlink-uploads:
    command: ln -s /mnt/efs/questions /var/app/ondeck/images/
Everything works fine except the last command, which fails to create the symlink.
What I have tried so far:
- Running the command directly on the machine, changing ondeck -> current. This works fine.
- Terminating the EC2 instance and letting a new one come up. Still failing.
In the logs I see
ln: failed to create symbolic link '/var/app/current/images/questions': No such file or directory
Any suggestions as to what the reason could be?

OK, I fixed it by replacing ondeck with staging and adding this entry under container_commands:
  01-change-permission:
    command: chmod -R 777 /var/app/staging/images
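Putting it together, the two container_commands effectively run the following against the staged application directory (a sketch; the mkdir -p is my addition to guard against a missing images directory, which matches the "No such file or directory" error above):

mkdir -p /var/app/staging/images                    # ensure the parent directory exists (assumption)
ln -s /mnt/efs/questions /var/app/staging/images/   # link the EFS directory into the app
chmod -R 777 /var/app/staging/images                # the asker's permission fix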

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
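For reference, the systemd workaround mentioned in the note is typically a drop-in unit that hands credentials to the daemon as environment variables, which the awslogs driver's credential chain picks up (a sketch; the key values are placeholders):

# /etc/systemd/system/docker.service.d/aws-credentials.conf (drop-in file):
#   [Service]
#   Environment="AWS_ACCESS_KEY_ID=AKIA..."
#   Environment="AWS_SECRET_ACCESS_KEY=..."
sudo systemctl daemon-reload    # pick up the new drop-in
sudo systemctl restart docker   # restart the daemon with the variables set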
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be at the filesystem root (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
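A quick way to confirm this (a sketch assuming the daemon process is named dockerd) is to look for HOME in the daemon's environment:

DOCKERD_PID=$(pgrep -o dockerd)   # oldest matching dockerd process
# /proc/<pid>/environ is NUL-separated; translate NULs to newlines before grepping
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n' | grep '^HOME=' \
  || echo "HOME is not set for the daemon"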

Curl command doesn't work in config file on AWS

I have a Django web application deployed to AWS Elastic Beanstalk (Python 3.7 running on 64bit Amazon Linux 2/3.1.3). I am trying to run the following config file:
files:
  "/usr/local/bin/cron_tab.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      exec &>> /tmp/cron_tab_log.txt
      date > /tmp/date
      source /var/app/venv/staging-LQM1lest/bin/activate
      cd /var/app/current
      python manage.py crontab add
      exit 0
container_commands:
  cron_tab:
    command: "curl /usr/local/bin/cron_tab.sh | bash"
This file is placed in the .ebextensions folder. All the other config files work properly, but this one does not. I have also tried running the container_commands code manually over SSH, and it gives the output below:
curl: (3) <url> malformed
I also checked the /tmp folder, but there is no cron_tab_log.txt. I checked /usr/local/bin and cron_tab.sh is located there.
I just want django-crontab to run after the deploy, and it doesn't work. How can I handle this issue?
curl is for making web URL calls, not for executing a script. I think you need to change the last line in your config file to:
    command: "sudo /usr/local/bin/cron_tab.sh"

Copying Files From GitLab To EC2 (/www/html) Folder Using SSH

I am trying to copy files from my GitLab repository to the /var/www/html folder of my EC2 instance over SSH, using the server IP and the EC2 private key.
I am not able to copy the files into the target folder.
My .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image: alpine
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'rm -rf /var/www/html/*'
    - scp -r . ubuntu@$DEPLOY_SERVER:/var/www/html
How can I copy all my repository files to the target folder?
First check that the ssh call just before the scp actually works.
Then try:
scp -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html
That will give you an idea of why the scp fails while the ssh call, I presume, works.
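A common culprit (an assumption, not something visible in the question) is that /var/www/html is owned by root, so the ubuntu user cannot write to it. Checking and fixing that from the job would look roughly like:

# Verify ownership and permissions on the target directory first
ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'ls -ld /var/www/html'
# If root owns it, hand it to the deploy user before copying (assumes passwordless sudo)
ssh ubuntu@$DEPLOY_SERVER 'sudo chown -R ubuntu:ubuntu /var/www/html'
scp -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html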

Installing Anaconda on Amazon Elastic Beanstalk to use in Django application

I have a Django application deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I created a .config file in the .ebextensions folder, added the Anaconda path to my wsgi.py file as shown below, and deployed it successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I append the 04_conda_install_pythonocc command below to this .config file, I get a command failed error.
  04_conda_install_pythonocc:
    command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check: the /anaconda folder had been created, but when I ran conda --version I got -bash: conda: command not found.
Afterwards, I thought there might be a problem with the PATH, so I edited the .config file as follows and deployed it successfully.
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I added the conda_install_pythonocc command again to this edited version of the .config file, it failed again with command failed.
Run manually, all of the commands work; they just don't work from my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue on my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda and of -y so that conda does not ask for manual confirmation. Each ebextensions command runs in its own shell, so an export PATH in one command does not carry over to the next; that is why plain conda was not found. I only verified the installation procedure, not how to use it afterwards (e.g. in a Python application), so you will probably need to adjust it to your needs.
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log
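As a quick sanity check over SSH after deployment (a sketch; conda is not on PATH for login shells, hence the absolute path):

/anaconda/bin/conda --version               # confirms the installation itself
/anaconda/bin/conda list | grep pythonocc   # confirms the package is present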

Django migrations with Docker on AWS Elastic Beanstalk

I have a Django app running inside a single Docker container on AWS Elastic Beanstalk. I cannot get it to run migrations properly; it always sees the old Docker image and tries to run migrations from that (but it doesn't have the latest files).
I package an .ebextensions directory with my EB source bundle (a zip containing a Dockerrun.aws.json file and the .ebextensions dir), and it has a setup.config file that looks like this:
container_commands:
  01_migrate:
    command: "CONTAINER=`docker ps -a --no-trunc | grep aws_beanstalk | cut -d' ' -f1 | head -1` && docker exec $CONTAINER python3 manage.py migrate"
    leader_only: true
This is partially modeled after the comments on this SO question.
I have verified that it works if I simply re-deploy the app a second time, since by then the running image has the updated migrations file.
Does anyone know how to access the latest docker image or latest running container in an .ebextensions script?
Based on AWS Documentation on Customizing Software on Linux Servers, container_commands will be executed before your app is deployed.
You can use the container_commands key to execute commands for your container. The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. They also have access to environment variables such as your AWS security credentials. Additionally, you can use leader_only. One instance is chosen to be the leader in an Auto Scaling group. If the leader_only value is set to true, the command runs only on the instance that is marked as the leader.
Also take a look at my answer here; it runs commands at the different app deployment states and shows their output.
So the solution to your problem might be to create a post app deployment hook:
.ebextensions/00_post_migrate.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_post_migrate.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      if [ -f /tmp/leader_only ]
      then
        rm /tmp/leader_only
        docker exec `docker ps --no-trunc -q | head -n 1` python3 manage.py migrate
      fi
container_commands:
  01_migrate:
    command: "touch /tmp/leader_only"
    leader_only: true
I am using another approach: run a container based on the newly built image, pass in the environment variables from Elastic Beanstalk, and run the custom command in that container. When the command finishes, the container is stopped and removed, and the deployment proceeds.
So this is the script I have put inside .ebextensions/scripts/container_command.sh (make sure you replace everything that is within <>):
#!/bin/bash
COMMAND=$1
EB_CONFIG_DOCKER_IMAGE_STAGING=$(/opt/elasticbeanstalk/bin/get-config container -k <environment_name>_image)
EB_SUPPORT_FILES=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)

# build --env arguments for docker from env var settings
EB_CONFIG_DOCKER_ENV_ARGS=()
while read -r ENV_VAR; do
  EB_CONFIG_DOCKER_ENV_ARGS+=(--env "${ENV_VAR}")
done < <($EB_SUPPORT_FILES/generate_env)

# start a throwaway container from the staged image with the EB environment
docker run --name=shopblender_pre_deploy -d \
  "${EB_CONFIG_DOCKER_ENV_ARGS[@]}" \
  "${EB_CONFIG_DOCKER_IMAGE_STAGING}"
docker exec shopblender_pre_deploy ${COMMAND}

# clean up
docker stop shopblender_pre_deploy
docker rm shopblender_pre_deploy
Now you can use this script to execute any custom command in the container that will be deployed later.
Something like this .ebextensions/container_commands.config:
container_commands:
  01-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console doctrine:schema:update --force --no-interaction" &>> /var/log/database.log
    leader_only: true
  02-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console fos:elastica:reset --no-interaction" &>> /var/log/database.log
    leader_only: true
  03-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console doctrine:fixtures:load --no-interaction" &>> /var/log/database.log
    leader_only: true
This way you also do not need to worry about which container started most recently, which is a problem with the solution described above.