Elastic Beanstalk: How to find the S3 Bucket From the EB Instance

I'm interested in creating a backup of the current application as an application version. To do that, I'd like to track down the S3 bucket where the app versions are saved, so I can write a new file to that location.
Does anyone know how this is done?

To do this:
1. Get the instance ID.
2. Get the environment ID.
3. Get the application name.
4. Get the application versions.
The bucket name is in the SourceBundle of each application version returned by the last call.
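For example, step 4 can pull the bucket straight out with a --query filter instead of grep (a minimal sketch, assuming at least one application version already exists; $EB_APP_NAME and $EC2_REGION are resolved exactly as in the script below):
aws elasticbeanstalk describe-application-versions \
    --application-name "$EB_APP_NAME" --region "$EC2_REGION" \
    --query 'ApplicationVersions[0].SourceBundle.S3Bucket' --output text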
Based on the above, here is my entire backup script. It will figure out steps 1-4 above, then zip up the current app directory, send it to S3, and create a new app version based on it.
#!/bin/bash
# Minimal fail helper used by the metadata lookups below.
die() { echo "$1" >&2; exit 1; }
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
# Strip the trailing zone letter (e.g. us-east-1a -> us-east-1) to get the region.
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
# Environment id from the instance's elasticbeanstalk:environment-id tag, then app name, then bucket.
EB_ENV_ID="`aws ec2 describe-instances --instance-ids $EC2_INSTANCE_ID --region $EC2_REGION | grep -B 1 \"elasticbeanstalk:environment-id\" | grep -Po '\"Value\":\\s+\"[^\"]+\"' | cut -d':' -f 2 | grep -Po '[^\"]+'`"
EB_APP_NAME="`aws elasticbeanstalk describe-environments --environment-id $EB_ENV_ID --region $EC2_REGION | grep 'ApplicationName' | cut -d':' -f 2 | grep -Po '[^\\",]+'`"
EB_BUCKET_NAME="`aws elasticbeanstalk describe-application-versions --application-name $EB_APP_NAME --region $EC2_REGION | grep -m 1 'S3Bucket' | cut -d':' -f 2 | grep -Po '[^\", ]+'`"
DATE=$(date -d "today" +"%Y%m%d%H%M")
FILENAME=application.$DATE.zip
# Zip the running app as the webapp user, then stage the bundle in $HOME.
cd /var/app/current
sudo -u webapp zip -r -J /tmp/$FILENAME *
sudo mv /tmp/$FILENAME ~/.
sudo chmod 755 ~/$FILENAME
cd ~
# Upload the bundle, then register it as a new application version.
aws s3 cp $FILENAME s3://$EB_BUCKET_NAME/
rm -f ~/$FILENAME
aws elasticbeanstalk create-application-version --application-name $EB_APP_NAME --version-label "$DATE-$FILENAME" --description "Auto-Save $DATE" --source-bundle S3Bucket="$EB_BUCKET_NAME",S3Key="$FILENAME" --no-auto-create-application --region $EC2_REGION
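To run this unattended, a nightly cron entry is enough (a sketch; /usr/local/bin/eb-backup.sh is a hypothetical name for the script above, and the version label stays unique because it embeds the timestamp):
0 3 * * * root /usr/local/bin/eb-backup.sh >> /var/log/eb-backup.log 2>&1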

tl;dr: most probably you want
jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)'
Detailed version:
Add to any of your .ebextensions:
packages:
  yum:
    jq: []
(assuming you are using Amazon Linux or CentOS)
and run
# sudo because the metadata cache may be root-only readable; the original
# `|| sudo !!` history-expansion trick does not work inside a script.
export BUCKET="$(sudo jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)')"
echo bucketname: $BUCKET
then the full s3 path would be accessible at
bash -c 'source /etc/elasticbeanstalk/.aws-eb-stack.properties; echo s3://${BUCKET} -r ${region:-eu-west-3}'
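For illustration (hypothetical values): if the metadata cache holds a URL such as https://elasticbeanstalk-eu-west-3-123456789012.s3.eu-west-3.amazonaws.com/..., the \K/lookahead pattern keeps only what sits between // and .s3, i.e. the bucket name elasticbeanstalk-eu-west-3-123456789012.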

Related

I'm getting "command not found" error after running these commands in GitLab. I used these commands 2 days ago and they worked fine

I was using these commands for my deploy-job the other day and they worked fine. This is a new pipeline for a new project, and now the commands fail: I'm getting an error after every command saying "command not found". Here's my gitlab-ci file for reference:
variables:
  DOCKER_REGISTRY: 775362094965.dkr.ecr.us-west-2.amazonaws.com
  AWS_DEFAULT_REGION: us-west-2
  APP_NAME: flask-app
  DOCKER_HOST: tcp://docker:2375
stages:
  - build
  - deploy
build-job:
  stage: build
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
deploy-job:
  stage: deploy
  script:
    - echo `aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2` > input.json
    - echo $(cat input.json | jq '.taskDefinition.containerDefinitions[].image="'$REPOSITORY_URI':'$IMAGE_TAG'"') > input.json
    - echo $(cat input.json | jq '.taskDefinition') > input.json
    - echo $(cat input.json | jq 'del(.taskDefinitionArn)' | jq 'del(.revision)' | jq 'del(.status)' | jq 'del(.requiresAttributes)' | jq 'del(.compatibilities)' | jq 'del(.registeredAt)' | jq 'del(.registeredBy)') > input.json
    - aws ecs register-task-definition --cli-input-json file://input.json --region us-west-2
    - revision=$(aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2 | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//' | cut -d "," -f 1)
    - aws ecs update-service --cluster $CI_AWS_ECS_CLUSTER --service $CI_AWS_ECS_SERVICE --task-definition $CI_AWS_ECS_TASK_DEFINITION:$revision --region us-west-2
My build-job works fine; I'm only getting "command not found" in my deploy-job.
You need to specify an image outside of the build job (as a pipeline-wide default) or inside the deploy job. Right now you're only specifying an image inside build-job, so deploy-job runs on the runner's default image, which doesn't have the aws CLI installed.
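A minimal sketch of the fix, reusing the image the build job already uses:
deploy-job:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    - aws --version        # proves the CLI resolves in this job
    # ...rest of the deploy script unchanged...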

Add instances to a target group (AWS CLI, shell script)

I wrote a script to add instances to an AWS target group:
#!/bin/bash
export AWS_PROFILE=***
export AWS_DEFAULT_REGION=eu-central-1
for INST_NAME in $(aws ec2 describe-instances --query 'Reservations[].Instances[].Tags[?Key==`Name`].Value' --output text | sort); do
  echo "check ${INST_NAME} in Target Group"
  TARGET_GROUP=$(aws elbv2 describe-target-groups|jq -r '.[]|.[].TargetGroupArn'| grep ${INST_NAME})
  RUNNING_INSTANCES=$(aws ec2 describe-instances| jq '.Reservations[].Instances[] | select(.Tags[].Value=="${INST_NAME}")'| jq -r .InstanceId| sort | uniq | wc -l)
  COUNT=$(aws elbv2 describe-target-health --target-group-arn ${TARGET_GROUP}|jq -r '.TargetHealthDescriptions[].Target.Id'| wc -l)
  if [[ ${RUNNING_INSTANCES} = ${COUNT} ]]; then
    echo "all OK"
  else
    echo "add ${RUNNING_INSTANCES} to ${TARGET_GROUP}"
    for INSTANCE_ID in $(aws ec2 describe-instances --filter Name=tag-key,Values=Name --query "Reservations[*].Instances[*].{Instance:InstanceId,Name:Tags[?Key=='Name']|[0].Value}"|jq ".[][]|select(.Name==\"${TAGS}\")"|jq -r .Instance); do
      ASG=$(aws autoscaling describe-auto-scaling-instances|jq '.AutoScalingInstances'|jq ".[]|select(.InstanceId==\"${INSTANCE_ID}\")"|jq -r .InstanceId)
      echo "Updating ${TARGET_GROUP} to add instances from ${ASG}"
      aws elbv2 register-targets --target-group-arn ${TARGET_GROUP} --targets "Id="${ASG}
    done
  fi
done
but it doesn't add anything. I need to select all instances by tag, compare that count with the number registered in the target group, and, if they differ, add all instances with the tag to the target group.
Update: it's working now:
for INSTANCE_NAME in $(aws ec2 describe-instances --query 'Reservations[].Instances[].Tags[?Key==`Name`].Value' --output text | sort | uniq ); do
  echo "check ${INSTANCE_NAME} in Target Group"
  TARGET_GROUP=$(aws elbv2 describe-target-groups|jq -r '.[]|.[].TargetGroupArn'| grep ${INSTANCE_NAME})
  if ! [ -z "${TARGET_GROUP}" ]; then
    for i in $(echo ${TARGET_GROUP}); do
      RUNNING_INSTANCES=$(aws ec2 describe-instances| jq ".Reservations[].Instances[] | select(.Tags[].Value==\"${INSTANCE_NAME}\")"| jq -r .InstanceId| sort | uniq | wc -l)
      COUNT=$(aws elbv2 describe-target-health --target-group-arn ${i}|jq -r '.TargetHealthDescriptions[].Target.Id'| wc -l)
      if ! [[ ${RUNNING_INSTANCES} = ${COUNT} ]]; then
        for INSTANCE_ID in $(aws ec2 describe-instances --filter Name=tag-key,Values=Name --query "Reservations[*].Instances[*].{Instance:InstanceId,Name:Tags[?Key=='Name']|[0].Value}"|jq ".[][]|select(.Name==\"${INSTANCE_NAME}\")"|jq -r .Instance); do
          ASG=$(aws autoscaling describe-auto-scaling-instances|jq '.AutoScalingInstances'|jq ".[]|select(.InstanceId==\"${INSTANCE_ID}\")"|jq -r .InstanceId)
          aws elbv2 register-targets --target-group-arn ${i} --targets "Id="${ASG}
          echo "Updating ${TARGET_GROUP} to add instances from ${ASG}"
        done
      fi
    done
  fi
done
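Note that register-targets accepts several targets in one call, so the inner loop could batch the registrations instead of calling the API once per instance (a sketch; the instance IDs are placeholders):
aws elbv2 register-targets --target-group-arn ${i} --targets Id=i-0123456789abcdef0 Id=i-0fedcba9876543210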

Copy docker image from one AWS ECR repo to another

We want to copy a Docker image from a non-prod ECR account to a prod one. Is it possible without pulling, retagging, and pushing it again?
No, you have to run these commands:
docker login OLD_REPO
docker pull OLD_REPO/IMAGE:TAG
docker tag OLD_REPO/IMAGE:TAG NEW_REPO/IMAGE:TAG
docker login NEW_REPO
docker push NEW_REPO/IMAGE:TAG
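With ECR specifically, both logins go through get-login-password rather than a bare docker login (a sketch; the account IDs, region, and profile names are placeholders):
aws ecr get-login-password --region us-east-1 --profile nonprod | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-1.amazonaws.com
aws ecr get-login-password --region us-east-1 --profile prod | docker login --username AWS --password-stdin 222222222222.dkr.ecr.us-east-1.amazonaws.com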
I have written this program in Python to migrate all the images (or a specific image) from a repository to another region or to another account in a different region:
https://gist.github.com/fabidick22/6a1962697357360f0d73e01950ae962b
Answer: No, you must pull, tag, and push.
I wrote a bash script for this today. You can specify the number of tagged images that will be copied.
https://gist.github.com/virtualbeck/a635ef6701991f2087384eab7edbb18b
A slight improvement (and maybe a couple of bug fixes) on this answer: https://stackoverflow.com/a/69905254/65706
#!/bin/bash
set -e
################################# UPDATE THESE #################################
LAST_N_TAGS=10
SRC_AWS_REGION="us-east-1"
TGT_AWS_REGION="eu-central-1"
SRC_AWS_PROFILE="your_source_aws_profile"
TGT_AWS_PROFILE="your_target_aws_profile"
SRC_BASE_PATH="386151140899.dkr.ecr.$SRC_AWS_REGION.amazonaws.com"
TGT_BASE_PATH="036149202915.dkr.ecr.$TGT_AWS_REGION.amazonaws.com"
#################################################################################
URI=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryUri' --output text --region $SRC_AWS_REGION))
NAME=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryName' --output text --region $SRC_AWS_REGION))
echo "Start repo copy: `date`"
# source account login
aws --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $SRC_BASE_PATH
# destination account login
aws --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $TGT_BASE_PATH
for i in ${!URI[@]}; do
  echo "====> Grabbing latest $LAST_N_TAGS from ${NAME[$i]} repo"
  # create ecr repo if one does not exist in destination account
  aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION describe-repositories --repository-names ${NAME[$i]} || aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION create-repository --repository-name ${NAME[$i]}
  for tag in $(aws ecr describe-images --repository-name ${NAME[$i]} \
      --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION \
      --query 'sort_by(imageDetails,& imagePushedAt)[*]' \
      --filter tagStatus=TAGGED --output text \
      | grep IMAGETAGS | awk '{print $2}' | tail -$LAST_N_TAGS); do
    # if [[ ${NAME[$i]} == "repo-name/frontend-nba" ]]; then
    #   continue
    # fi
    # # 386517340899.dkr.ecr.us-east-1.amazonaws.com/spectralha-api/white-ref-detector
    # if [[ ${NAME[$i]} == "386351741199.dkr.ecr.us-east-1.amazonaws.com/repo-name/white-ref-detector" ]]; then
    #   continue
    # fi
    echo "START ::: pulling image ${URI[$i]}:$tag"
    AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker pull ${URI[$i]}:$tag
    AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker tag ${URI[$i]}:$tag $TGT_BASE_PATH/${NAME[$i]}:$tag
    echo "STOP ::: pulling image ${URI[$i]}:$tag"
    echo "START ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
    # status=$(AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag)
    # echo $status
    AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag
    echo "STOP ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
    sleep 2
    echo ""
  done
  # docker image prune -a -f #clean-up ALL the images on the system
done
echo "Finish repo copy: `date`"
echo "Don't forget to purge your local docker images!"
# Uncomment to delete all
#docker rmi $(for i in ${!NAME[@]}; do docker images | grep ${NAME[$i]} | tr -s ' ' | cut -d ' ' -f 3 | uniq; done) -f

Amazon's AWS ElasticBeanstalk Let's Encrypt CertBot

How can I run an automated, scalable version of Let's Encrypt Certbot on Elastic Beanstalk? I hadn't found an answer on the web, so I put together this .ebextension. I hope you find it useful.
Don't use it as is; check and modify it according to your needs.
Requirements:
Elastic Beanstalk environment
EC2 Elastic Beanstalk service role with access to the Route53 resource
TCP load balancer in front of the EC2 instances
Rolling deployment policy (one instance at a time)
NFS mount
It's a single .ebextension:
The jq package is needed:
packages:
  yum:
    jq: []
Creating the JSONs:
files:
  "/usr/local/bin/certbot/add_json.sh":
    mode: "000550"
    owner: root
    group: root
    content: |
      #!/bin/bash
      #Create TXT record JSON
      cat > $1_TXT.json << END
      {
        "Comment": "TXT Verification for CertBOT",
        "Changes": [
          {
            "Action": "$1",
            "ResourceRecordSet": {
              "Name": "_acme-challenge.$CERTBOT_DOMAIN",
              "Type": "TXT",
              "TTL": 300,
              "ResourceRecords": [
                {
                  "Value": "\"$CERTBOT_VALIDATION\""
                }
              ]
            }
          }
        ]
      }
      END
Removing hook to Route53:
  "/usr/local/bin/certbot/remove_txt_hook.sh":
    mode: "000550"
    owner: root
    group: root
    content: |
      #!/bin/bash
      PWD=`pwd`
      APEX_DOMAIN=$(expr match "$CERTBOT_DOMAIN" '.*\.\(.*\..*\)')
      ZONE_ID=`aws route53 list-hosted-zones --output text | awk '$4 ~ /^ *'$APEX_DOMAIN'/''{print $3}' | sed 's:.*/::'`
      aws route53 change-resource-record-sets \
        --hosted-zone-id $ZONE_ID --change-batch file://$PWD/DELETE_TXT.json
      rm CREATE_TXT.json
      rm DELETE_TXT.json
Adding hook to Route53:
  "/usr/local/bin/certbot/add_txt_hook.sh":
    mode: "000550"
    owner: root
    group: root
    content: |
      #!/bin/bash
      PWD=`pwd`
      APEX_DOMAIN=$(expr match "$CERTBOT_DOMAIN" '.*\.\(.*\..*\)')
      ZONE_ID=`aws route53 list-hosted-zones --output text | awk '$4 ~ /^ *'$APEX_DOMAIN'/''{print $3}' | sed 's:.*/::'`
      ./add_json.sh CREATE
      ./add_json.sh DELETE
      aws route53 change-resource-record-sets \
        --hosted-zone-id $ZONE_ID --change-batch file://$PWD/CREATE_TXT.json
      sleep 30
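The sleep 30 is a crude wait for the TXT record to propagate before certbot validates. A more robust variant (a sketch, not in the original) captures the change id returned by change-resource-record-sets and polls until Route53 reports INSYNC:
CHANGE_ID=$(aws route53 change-resource-record-sets --hosted-zone-id $ZONE_ID --change-batch file://$PWD/CREATE_TXT.json --query 'ChangeInfo.Id' --output text)
until [ "$(aws route53 get-change --id $CHANGE_ID --query 'ChangeInfo.Status' --output text)" = "INSYNC" ]; do sleep 5; done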
Deploying:
  "/usr/local/bin/certbot/start_process.sh":
    mode: "000550"
    owner: root
    group: root
    content: |
      #!/bin/bash
      MY_DOMAIN=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MY_DOMAIN')
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      PWD=`pwd`
      if [ "$MY_DOMAIN" = example.com ]; then
        if [ ! -d /etc/letsencrypt ]; then
          mkdir -p /etc/letsencrypt
        fi
        if ! grep -qs ' /etc/letsencrypt ' /proc/mounts; then
          mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME:/.certificates/.letsencrypt /etc/letsencrypt || exit
        fi
        if [ ! -f certbot-auto ]; then
          yum -y install mod24_ssl augeas-libs libffi-devel python27-tools system-rpm-config
          wget https://dl.eff.org/certbot-auto
          chmod 550 certbot-auto
        fi
        if [ ! -f /etc/letsencrypt/live/$MY_DOMAIN/fullchain.pem ]; then
          ./certbot-auto certonly --debug -n --no-bootstrap --email <your e-mail> --agree-tos --manual-public-ip-logging-ok --manual --preferred-challenges=dns --manual-auth-hook $PWD/add_txt_hook.sh --manual-cleanup-hook $PWD/remove_txt_hook.sh -d $MY_DOMAIN
        fi
        echo "00 15 * * SUN root cd /usr/local/bin/certbot && ./renewal.sh >> certbot.log 2>&1" > /etc/cron.d/cron_certbot
      fi
Renewing:
  "/usr/local/bin/certbot/renewal.sh":
    mode: "000550"
    owner: root
    group: root
    content: |
      #!/bin/bash
      MY_DOMAIN=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MY_DOMAIN')
      PWD=`pwd`
      ENV_ID=`{"Ref": "AWSEBEnvironmentId" }`
      METADATA=/opt/aws/bin/ec2-metadata
      INSTANCE_ID=`$METADATA -i | awk '{print $2}'`
      REGION=`$METADATA -z | awk '{print substr($2, 0, length($2)-1)}'`
      TODAY=`date +%Y-%m-%d`
      STATUS=`aws elasticbeanstalk describe-environments --environment-ids $ENV_ID --region $REGION | awk '/"Status"/ {print substr($2, 1, length($2)-1)}' | sed 's/\"//g'`
      while [ "$STATUS" != "Ready" ]; do
        STATUS=`aws elasticbeanstalk describe-environments --environment-ids $ENV_ID --region $REGION | awk '/"Status"/ {print substr($2, 1, length($2)-1)}' | sed 's/\"//g'`
        sleep 10
      done
      if ! /usr/local/bin/certbot/one_instance.sh; then
        i="0"
        while [ "$i" -lt 180 ] && [ ! -f /etc/letsencrypt/renewed_$TODAY ]; do
          i=$[$i+1]
          sleep 2
        done
        if [ ! -f /etc/letsencrypt/renewed_$TODAY ]; then
          exit
        else
          /etc/init.d/httpd graceful; exit
        fi
      fi
      ./certbot-auto renew --debug --no-bootstrap --renew-hook "/etc/init.d/httpd graceful; touch /etc/letsencrypt/renewed_$TODAY; find /etc/letsencrypt/ -type f -name 'renewed_*' -mtime +0 -exec rm {} \;"
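In plain terms: only the instance elected by one_instance.sh actually runs certbot-auto renew and drops a renewed_<date> marker on the shared NFS mount; every other instance waits up to 360 seconds (180 iterations x 2 s) for that marker and then just reloads Apache to pick up the renewed certificate.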
Only on one instance:
  "/usr/local/bin/certbot/one_instance.sh":
    mode: "000550"
    owner: root
    group: root
    content: |
      #!/bin/bash
      METADATA=/opt/aws/bin/ec2-metadata
      INSTANCE_ID=`$METADATA -i | awk '{print $2}'`
      REGION=`$METADATA -z | awk '{print substr($2, 0, length($2)-1)}'`
      ASG=`aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" \
        --region $REGION --output text | awk '/aws:autoscaling:groupName/ {print $5}'`
      SOLO_I=`aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names $ASG \
        --region $REGION --output text | awk '/InService/ {print $4}' | sort | head -1`
      [ "$SOLO_I" = "$INSTANCE_ID" ]
Executing:
commands:
  01_start_certbot_deploy:
    command: "/usr/local/bin/certbot/start_process.sh &>> certbot.log"
    cwd: "/usr/local/bin/certbot"
  02_delete_bak_files:
    command: "rm -f *.bak"
    cwd: "/usr/local/bin/certbot"
Let me know if you have some remarks or improvement suggestions.
Thanks!

AWS CLI commands on Jenkins

I'm trying to write a script that, with the help of Jenkins, will look at the updated files in git, download them, and encrypt them using AWS KMS. I have a working script that does all of that, and the file is downloaded to the Jenkins workspace on the local server. But my problem is encrypting this file in the Jenkins workspace. Basically, when I encrypt files on my local computer, I use the command:
aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://file.json --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
and all is OK, but if I try to do it with Jenkins I get an error that the aws command is not found.
Does somebody know how to solve this problem and make the AWS CLI run under Jenkins?
Here is my working code, which breaks down on the last line:
bom_sniffer() {
  head -c3 "$1" | LC_ALL=C grep -qP '\xef\xbb\xbf';
  if [ $? -eq 0 ]
  then
    echo "BOM SNIFFER DETECTED BOM CHARACTER IN FILE \"$1\""
    exit 1
  fi
}
check_rc() {
  # exit if passed in value is not = 0
  # $1 = return code
  # $2 = command / label
  if [ $1 -ne 0 ]
  then
    echo "$2 command failed"
    exit 1
  fi
}
# finding files that differ from this commit and master
echo 'git fetch'
check_rc $? 'echo git fetch'
git fetch
check_rc $? 'git fetch'
echo 'git diff --name-only origin/master'
check_rc $? 'echo git diff'
diff_files=`git diff --name-only $GIT_PREVIOUS_COMMIT $GIT_COMMIT | xargs`
check_rc $? 'git diff'
for x in ${diff_files}
do
  echo "${x}"
  cat ${x}
  bom_sniffer "${x}"
  check_rc $? "BOM character detected in ${x},"
  aws configure kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
done
After discussion, this is how the issue was resolved:
First, corrected the command by removing configure from it.
Then installed the AWS CLI for the jenkins user:
pip install awscli --user
Finally, used the absolute path of aws in the script. For example, if it's in ~/.local/bin/, use
~/.local/bin/aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
in your script. Or add the location of aws to PATH.
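For the PATH variant, a minimal sketch (assuming pip placed the binary in ~/.local/bin, as above) is to export it at the top of the Jenkins script, after which the loop body can call aws unqualified:
export PATH="$HOME/.local/bin:$PATH"
aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json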