CodeBuild YML formatting and syntax - amazon-web-services

I've got a CodeBuild project up and running on an Ubuntu server to clone RDS database clusters from snapshots. Most of it works as expected, but when I include the following line in the buildspec.yml the job falls over and complains about the command.
I'm guessing it doesn't like the formatting, but I'm a bit stumped about where to go with it:
- while [ $(aws rds describe-db-clusters --db-cluster-identifier mysql-dev-20201009|grep -c '"Status": "available"') -eq 0 ]; do echo "sleep 60s"; sleep 60; done
Here's the full buildspec file:
Version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip install --upgrade pip
      - pip3 install awscli --upgrade --user
      - export SOURCEDBENV=mysql-dev
      - export DATE=`date +%Y%m%d`
      - export TARGETDBENV=$SOURCEDBENV-$DATE
      - echo $TARGETDBENV
      - export PREARNSNAP=$(aws rds describe-db-cluster-snapshots --db-cluster-identifier $SOURCEDBENV --query="reverse(sort_by(DBClusterSnapshots, &SnapshotCreateTime))[0]|DBClusterSnapshotArn" )
      - export ARNSNAP=`echo $PREARNSNAP | tr -d '"'`
      - echo $ARNSNAP
      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier $ARNSNAP --db-cluster-identifier $TARGETDBENV --engine aurora-mysql
      - aws rds create-db-instance --db-instance-identifier $TARGETDBENV --db-instance-class db.t3.medium --db-subnet-group-name db_subnet_grp_2019 --engine aurora-mysql --db-cluster-identifier $TARGETDBENV
      - while [ $(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBNAME | grep -c available) -eq 0 ]; do echo "sleep 60s"; sleep 60; done
      - echo "Temp db ready"
      - export ENDPOINT=$(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER| grep "\"Endpoint\"" | grep -v "\-ro\-" | awk -F '\"' '{print $4}')
      - echo $ENDPOINT
  build:
    commands:
      - echo Build started on `date`
      - echo proceed db connection to $ENDPOINT
      - echo proceed db migrate update, DDL proceed here
      - echo proceed application test, CRUD test run here
  post_build:
    commands:
      - echo Build completed on `date`
      - echo $DBNAME

Perhaps you can take advantage of the wait command in the AWS CLI?
Rather than using the while loop, simply wait for the DB instance to become available:
aws rds wait db-instance-available --filters Name=db-cluster-id,Values=$TARGETDBENV
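Dropped into the pre_build commands of the buildspec above, that could look something like the following (a sketch reusing the question's own variables; note the waiter polls roughly every 30 seconds and eventually times out, so a very long restore may still need a retry around it):

      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier $ARNSNAP --db-cluster-identifier $TARGETDBENV --engine aurora-mysql
      - aws rds create-db-instance --db-instance-identifier $TARGETDBENV --db-instance-class db.t3.medium --db-subnet-group-name db_subnet_grp_2019 --engine aurora-mysql --db-cluster-identifier $TARGETDBENV
      # wait for the restored instance (and hence the cluster endpoint) to become available
      - aws rds wait db-instance-available --filters Name=db-cluster-id,Values=$TARGETDBENV
      - echo "Temp db ready"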

Related

AWS EC2 Image Builder issue with authorized_keys

I'm trying to create a custom RedHat 8 image using EC2 Image Builder. In one of the recipes added to the pipeline, I create the ansible user and use S3 to download the authorized_keys file and a custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys file doesn't. CloudWatch says the recipe executes without errors and the files are downloaded, but when I launch an EC2 instance from this AMI, the authorized_keys file is not at that path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
    "/etc/ssh/ssh_host_rsa_key"
    "/etc/ssh/ssh_host_rsa_key.pub"
    "/etc/ssh/ssh_host_ecdsa_key"
    "/etc/ssh/ssh_host_ecdsa_key.pub"
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/ssh/ssh_host_ed25519_key.pub"
    "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
    echo "Skipping cleanup of ssh files"
else
    echo "Cleaning up ssh files"
    cleanup "${SSH_FILES[@]}"
    USERS=$(ls /home/)
    for user in $USERS; do
        echo Deleting /home/"$user"/.ssh/authorized_keys;
        sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
    done
    for user in $USERS; do
        if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
            echo Failed to delete /home/"$user"/.ssh/authorized_keys;
            exit 1
        fi;
    done;
fi;
You can skip individual sections of the clean up script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
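For example, to keep the downloaded authorized_keys file in the AMI, you could add a step to the recipe that drops the marker file the cleanup script looks for (a sketch, assuming the component runs with the default {{workingDirectory}}):

      - name: SkipSshCleanup
        action: ExecuteBash
        inputs:
          commands:
            # create the marker file checked by the post-build cleanup script
            - touch {{workingDirectory}}/skip_cleanup_ssh_files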

I'm getting a "command not found" error after running these commands in GitLab. I used these commands two days ago and they worked fine

I was using these commands for my deploy-job the other day and they worked fine. This is a new pipeline for a new project, and now these commands aren't working: the pipeline reports "command not found" after every command. Here's my gitlab-ci file for reference:
variables:
  DOCKER_REGISTRY: 775362094965.dkr.ecr.us-west-2.amazonaws.com
  AWS_DEFAULT_REGION: us-west-2
  APP_NAME: flask-app
  DOCKER_HOST: tcp://docker:2375

stages:
  - build
  - deploy

build-job:
  stage: build
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest

deploy-job:
  stage: deploy
  script:
    - echo `aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2` > input.json
    - echo $(cat input.json | jq '.taskDefinition.containerDefinitions[].image="'$REPOSITORY_URI':'$IMAGE_TAG'"') > input.json
    - echo $(cat input.json | jq '.taskDefinition') > input.json
    - echo $(cat input.json | jq 'del(.taskDefinitionArn)' | jq 'del(.revision)' | jq 'del(.status)' | jq 'del(.requiresAttributes)' | jq 'del(.compatibilities)' | jq 'del(.registeredAt)' | jq 'del(.registeredBy)') > input.json
    - aws ecs register-task-definition --cli-input-json file://input.json --region us-west-2
    - revision=$(aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2 | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//' | cut -d "," -f 1)
    - aws ecs update-service --cluster $CI_AWS_ECS_CLUSTER --service $CI_AWS_ECS_SERVICE --task-definition $CI_AWS_ECS_TASK_DEFINITION:$revision --region us-west-2
My build-job works fine; I'm only getting "command not found" in my deploy-job.
You need to specify an image either globally (outside of any job) or in the deploy job itself. Right now you're only specifying an image inside your build-job, so the deploy job runs on the runner's default image, which most likely doesn't have the AWS CLI or jq installed.
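For example, giving the deploy job the same AWS CLI image might look like this (a sketch; exactly which image you need depends on the tools your script calls, and jq is installed here because the deploy commands pipe task definitions through it):

deploy-job:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  before_script:
    - yum install -y jq
  script:
    # ...same deploy commands as above, starting with:
    - echo `aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2` > input.json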

CodeBuild shows SUCCEEDED; however, it should be saying FAILED when using Packer

I am running into an issue which I know has to do with my buildspec.yml file:
phases:
  install:
    runtime-versions:
      python: 3.8
  pre_build:
    commands:
      - echo "Installing Packer"
      - curl -o packer.zip https://releases.hashicorp.com/packer/1.7.2/packer_1.7.2_linux_amd64.zip && unzip packer.zip
      - echo "Validating Packer template"
      - ./packer validate pipeline/build/${FUNCTION}-build.json
  build:
    commands:
      - ./packer build -color=false pipeline/build/${FUNCTION}-build.json | tee build.log
  post_build:
    commands:
      # Get the ARN of our Lambda notifier
      - SLACK_ARN=$(aws cloudformation list-exports | jq -r '.["Exports"][] | select(.Name == "notify_slack_arn") | .Value')
      # Send a Slack notification
      - |
        if [ "${CODEBUILD_BUILD_SUCCEEDING}" -eq 1 ]
        then
          aws lambda invoke --cli-binary-format raw-in-base64-out --function-name ${SLACK_ARN} --payload "{ \"LambdaInvokeEvent\": { \"message\": \"Daily AMI Build for ${FUNCTION} SUCCESSFUL!\", \"slack_url\": \"${SLACK_URL}\" } }" slack_output.log
        else
          aws lambda invoke --cli-binary-format raw-in-base64-out --function-name ${SLACK_ARN} --payload "{ \"LambdaInvokeEvent\": { \"message\": \"Daily AMI Build for ${FUNCTION} FAILED!\", \"slack_url\": \"${SLACK_URL}\" } }" slack_output.log
        fi
      - echo "Build completed on $(date)"
artifacts:
  files:
    - "**/*"
  discard-paths: no
So when this builds in CodeBuild, here is what happens: even though Packer fails, CodeBuild says SUCCEEDED, which is driving me nuts! It is supposed to fail and send the failure notification. Does this have to do with my buildspec.yml file or with the bash script that is running? Thanks!
I think that happens because of your | tee build.log. Packer fails, but tee succeeds, and since tee is the last command in the pipeline, its exit status (0) is what CodeBuild sees, so the build stage succeeds.
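One way to make the Packer exit code win (a sketch, assuming the build image's default shell is bash) is to enable pipefail for that command, so the pipeline fails whenever packer fails even though tee succeeds:

  build:
    commands:
      - set -o pipefail && ./packer build -color=false pipeline/build/${FUNCTION}-build.json | tee build.log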

Copy docker image from one AWS ECR repo to another

We want to copy a Docker image from a non-prod ECR account to a prod ECR account. Is it possible without pulling, retagging, and pushing it again?
No, you have to run these commands:
docker login OLD_REPO
docker pull OLD_REPO/IMAGE:TAG
docker tag OLD_REPO/IMAGE:TAG NEW_REPO/IMAGE:TAG
docker login NEW_REPO
docker push NEW_REPO/IMAGE:TAG
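For ECR registries specifically, the docker login step is normally fed an auth token from the CLI rather than an interactive username/password, e.g. (a sketch, assuming credentials and regions are configured for each account; OLD_REPO and NEW_REPO are the registry hostnames as above):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin OLD_REPO
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin NEW_REPO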
I have written this program in Python to migrate all the images (or a specific image) from a repository to another region, or to another account in a different region:
https://gist.github.com/fabidick22/6a1962697357360f0d73e01950ae962b
Answer: No, you must pull, tag, and push.
I wrote a bash script for this today. You can specify the number of tagged images that will be copied.
https://gist.github.com/virtualbeck/a635ef6701991f2087384eab7edbb18b
A slight improvement (and maybe a couple of bug fixes) on this answer: https://stackoverflow.com/a/69905254/65706
set -e
################################# UPDATE THESE #################################
LAST_N_TAGS=10
SRC_AWS_REGION="us-east-1"
TGT_AWS_REGION="eu-central-1"
SRC_AWS_PROFILE="your_source_aws_profile"
TGT_AWS_PROFILE="your_target_aws_profile"
SRC_BASE_PATH="386151140899.dkr.ecr.$SRC_AWS_REGION.amazonaws.com"
TGT_BASE_PATH="036149202915.dkr.ecr.$TGT_AWS_REGION.amazonaws.com"
#################################################################################
URI=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryUri' --output text --region $SRC_AWS_REGION))
NAME=($(aws ecr describe-repositories --profile $SRC_AWS_PROFILE --query 'repositories[].repositoryName' --output text --region $SRC_AWS_REGION))
echo "Start repo copy: `date`"
# source account login
aws --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $SRC_BASE_PATH
# destination account login
aws --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION ecr get-login-password | docker login --username AWS --password-stdin $TGT_BASE_PATH
for i in ${!URI[@]}; do
    echo "====> Grabbing latest $LAST_N_TAGS from ${NAME[$i]} repo"
    # create ecr repo if one does not exist in destination account
    aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION describe-repositories --repository-names ${NAME[$i]} || aws ecr --profile $TGT_AWS_PROFILE --region $TGT_AWS_REGION create-repository --repository-name ${NAME[$i]}
    for tag in $(aws ecr describe-images --repository-name ${NAME[$i]} \
            --profile $SRC_AWS_PROFILE --region $SRC_AWS_REGION \
            --query 'sort_by(imageDetails,& imagePushedAt)[*]' \
            --filter tagStatus=TAGGED --output text \
            | grep IMAGETAGS | awk '{print $2}' | tail -$LAST_N_TAGS); do
        # if [[ ${NAME[$i]} == "repo-name/frontend-nba" ]]; then
        #     continue
        # fi
        # # 386517340899.dkr.ecr.us-east-1.amazonaws.com/spectralha-api/white-ref-detector
        # if [[ ${NAME[$i]} == "386351741199.dkr.ecr.us-east-1.amazonaws.com/repo-name/white-ref-detector" ]]; then
        #     continue
        # fi
        echo "START ::: pulling image ${URI[$i]}:$tag"
        AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker pull ${URI[$i]}:$tag
        AWS_REGION=$SRC_AWS_REGION AWS_PROFILE=$SRC_AWS_PROFILE docker tag ${URI[$i]}:$tag $TGT_BASE_PATH/${NAME[$i]}:$tag
        echo "STOP ::: pulling image ${URI[$i]}:$tag"
        echo "START ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
        # status=$(AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag)
        # echo $status
        AWS_REGION=$TGT_AWS_REGION AWS_PROFILE=$TGT_AWS_PROFILE docker push $TGT_BASE_PATH/${NAME[$i]}:$tag
        echo "STOP ::: pushing image $TGT_BASE_PATH/${NAME[$i]}:$tag"
        sleep 2
        echo ""
    done
    # docker image prune -a -f #clean-up ALL the images on the system
done
echo "Finish repo copy: `date`"
echo "Don't forget to purge you local docker images!"
#Uncomment to delete all
#docker rmi $(for i in ${!NAME[#]}; do docker images | grep ${NAME[$i]} | tr -s ' ' | cut -d ' ' -f 3 | uniq; done) -f

Elastic-Beanstalk: How to find the S3 Bucket From the EB Instance

I'm interested in creating a backup of the current application as an application version. To do that, I'd like to track down the S3 bucket where the application versions are saved, so I can save a new file to that location.
Does anyone know how this is done?
To do this:
1. Get the instance ID.
2. Get the environment ID.
3. Get the application name.
4. Get the application versions.
The bucket name is in the application versions response.
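For example, a condensed sketch of steps 1-4 using --query instead of grep (assumes a default region is configured, the instance profile can call the describe APIs, and plain IMDS metadata access works as in the script below):

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ENV_ID=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID \
  --query "Reservations[].Instances[].Tags[?Key=='elasticbeanstalk:environment-id'].Value" --output text)
APP_NAME=$(aws elasticbeanstalk describe-environments --environment-ids $ENV_ID \
  --query "Environments[0].ApplicationName" --output text)
EB_BUCKET_NAME=$(aws elasticbeanstalk describe-application-versions --application-name "$APP_NAME" \
  --query "ApplicationVersions[0].SourceBundle.S3Bucket" --output text)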
Based on the above, here is my entire backup script. It will figure out steps 1-4 above, then zip up the current app directory, send it to S3, and create a new app version based on it.
#!/bin/bash
# minimal helper used below: print an error and exit if a metadata lookup fails
die() { echo "$*" >&2; exit 1; }
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
EB_ENV_ID="`aws ec2 describe-instances --instance-ids $EC2_INSTANCE_ID --region $EC2_REGION | grep -B 1 \"elasticbeanstalk:environment-id\" | grep -Po '\"Value\":\\s+\"[^\"]+\"' | cut -d':' -f 2 | grep -Po '[^\"]+'`"
EB_APP_NAME="`aws elasticbeanstalk describe-environments --environment-id $EB_ENV_ID --region $EC2_REGION | grep 'ApplicationName' | cut -d':' -f 2 | grep -Po '[^\\",]+'`"
EB_BUCKET_NAME="`aws elasticbeanstalk describe-application-versions --application-name $EB_APP_NAME --region $EC2_REGION | grep -m 1 'S3Bucket' | cut -d':' -f 2 | grep -Po '[^\", ]+'`"
DATE=$(date -d "today" +"%Y%m%d%H%M")
FILENAME=application.$DATE.zip
cd /var/app/current
sudo -u webapp zip -r -J /tmp/$FILENAME *
sudo mv /tmp/$FILENAME ~/.
sudo chmod 755 ~/$FILENAME
cd ~
aws s3 cp $FILENAME s3://$EB_BUCKET_NAME/
rm -f ~/$FILENAME
aws elasticbeanstalk create-application-version --application-name $EB_APP_NAME --version-label "$DATE-$FILENAME" --description "Auto-Save $DATE" --source-bundle S3Bucket="$EB_BUCKET_NAME",S3Key="$FILENAME" --no-auto-create-application --region $EC2_REGION
tl;dr: most probably you want
jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)'
Detailed version:
Add to any of your .ebextensions
packages:
  yum:
    jq: []
(assuming you are using Amazon Linux or Centos)
and run
export BUCKET="$(jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)' || sudo !!)"
echo bucketname: $BUCKET
then the full S3 path would be accessible with
bash -c 'source /etc/elasticbeanstalk/.aws-eb-stack.properties; echo s3://${BUCKET} -r ${region:-eu-east-3}'
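Once the bucket name is known, saving a new bundle there is just a copy, e.g. (a sketch reusing the $BUCKET variable exported above and a hypothetical application.zip bundle):

aws s3 cp application.zip s3://$BUCKET/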