I am running into an issue which I know has to do with my buildspec.yml file:
phases:
  install:
    runtime-versions:
      python: 3.8
  pre_build:
    commands:
      - echo "Installing Packer"
      - curl -o packer.zip https://releases.hashicorp.com/packer/1.7.2/packer_1.7.2_linux_amd64.zip && unzip packer.zip
      - echo "Validating Packer template"
      - ./packer validate pipeline/build/${FUNCTION}-build.json
  build:
    commands:
      - ./packer build -color=false pipeline/build/${FUNCTION}-build.json | tee build.log
  post_build:
    commands:
      # Get the ARN of our Lambda notifier
      - SLACK_ARN=$(aws cloudformation list-exports | jq -r '.["Exports"][] | select(.Name == "notify_slack_arn") | .Value')
      # Send a Slack notification
      - |
        if [ "${CODEBUILD_BUILD_SUCCEEDING}" -eq 1 ]
        then
          aws lambda invoke --cli-binary-format raw-in-base64-out --function-name ${SLACK_ARN} --payload "{ \"LambdaInvokeEvent\": { \"message\": \"Daily AMI Build for ${FUNCTION} SUCCESSFUL!\", \"slack_url\": \"${SLACK_URL}\" } }" slack_output.log
        else
          aws lambda invoke --cli-binary-format raw-in-base64-out --function-name ${SLACK_ARN} --payload "{ \"LambdaInvokeEvent\": { \"message\": \"Daily AMI Build for ${FUNCTION} FAILED!\", \"slack_url\": \"${SLACK_URL}\" } }" slack_output.log
        fi
      - echo "Build completed on $(date)"
artifacts:
  files:
    - "**/*"
  discard-paths: no
So from my code that builds in CodeBuild, this is what happens: even though the packer build fails, CodeBuild says SUCCEEDED, which is driving me nuts! It is supposed to fail and send the failing notification. Does this have to do with my buildspec.yml file or with the bash script that is running? Thanks!
I think that happens because of your | tee build.log: packer fails, but tee succeeds, and since the exit status of a pipeline is that of its last command (tee), the build stage is reported as successful.
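One way to surface packer's exit status is to enable pipefail on the same command line, so the pipe fails whenever packer fails (a minimal sketch, assuming the image's default shell supports set -o pipefail; alternatively you can set shell: bash in the buildspec's env block):

build:
  commands:
    # With pipefail, the pipeline's exit status is the first non-zero status,
    # so a packer failure is no longer hidden by tee succeeding.
    - set -o pipefail && ./packer build -color=false pipeline/build/${FUNCTION}-build.json | tee build.log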
I was using these commands for my deploy-job the other day and they worked fine. This is a new pipeline for a new project, and now these commands aren't working. I'm getting errors in my pipeline after every command saying "command not found". Here's my gitlab-ci file for reference:
variables:
  DOCKER_REGISTRY: 775362094965.dkr.ecr.us-west-2.amazonaws.com
  AWS_DEFAULT_REGION: us-west-2
  APP_NAME: flask-app
  DOCKER_HOST: tcp://docker:2375
stages:
  - build
  - deploy
build-job:
  stage: build
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
deploy-job:
  stage: deploy
  script:
    - echo `aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2` > input.json
    - echo $(cat input.json | jq '.taskDefinition.containerDefinitions[].image="'$REPOSITORY_URI':'$IMAGE_TAG'"') > input.json
    - echo $(cat input.json | jq '.taskDefinition') > input.json
    - echo $(cat input.json | jq 'del(.taskDefinitionArn)' | jq 'del(.revision)' | jq 'del(.status)' | jq 'del(.requiresAttributes)' | jq 'del(.compatibilities)' | jq 'del(.registeredAt)' | jq 'del(.registeredBy)') > input.json
    - aws ecs register-task-definition --cli-input-json file://input.json --region us-west-2
    - revision=$(aws ecs describe-task-definition --task-definition $CI_AWS_ECS_TASK_DEFINITION --region us-west-2 | egrep "revision" | tr "/" " " | awk '{print $2}' | sed 's/"$//' | cut -d "," -f 1)
    - aws ecs update-service --cluster $CI_AWS_ECS_CLUSTER --service $CI_AWS_ECS_SERVICE --task-definition $CI_AWS_ECS_TASK_DEFINITION:$revision --region us-west-2
My build-job works fine; I'm just getting "command not found" in my deploy-job.
You need to specify an image outside of the build job (as a default for all jobs) or in the deploy job itself. Right now, you're only specifying an image inside your build-job, so deploy-job falls back to the runner's default image, which most likely doesn't have the AWS CLI installed; hence the "command not found" errors.
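For example, a minimal sketch of the deploy job with its own image (the image choice is only a suggestion; note that your deploy script also needs jq, which you would have to install or bake into whatever image you pick):

deploy-job:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  before_script:
    - yum install -y jq   # assumption: the image is Amazon Linux based and jq is available via yum
  script:
    # ... your existing deploy commands, unchanged ...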
I've got a CodeBuild project up and running on an Ubuntu server to clone RDS database servers from snapshots. A lot of it is working as expected, but when I try to include the following line in the buildspec.yml, the job falls over and it doesn't like the command.
I'm guessing the job doesn't like the formatting, but I'm a bit stumped about where to go with it:
- while [ $(aws rds describe-db-clusters --db-cluster-identifier mysql-dev-20201009|grep -c '"Status": "available"') -eq 0 ]; do echo "sleep 60s"; sleep 60; done
Here's the full buildspec file:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip install --upgrade pip
      - pip3 install awscli --upgrade --user
      - export SOURCEDBENV=mysql-dev
      - export DATE=`date +%Y%m%d`
      - export TARGETDBENV=$SOURCEDBENV-$DATE
      - echo $TARGETDBENV
      - export PREARNSNAP=$(aws rds describe-db-cluster-snapshots --db-cluster-identifier $SOURCEDBENV --query="reverse(sort_by(DBClusterSnapshots, &SnapshotCreateTime))[0]|DBClusterSnapshotArn" )
      - export ARNSNAP=`echo $PREARNSNAP | tr -d '"'`
      - echo $ARNSNAP
      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier $ARNSNAP --db-cluster-identifier $TARGETDBENV --engine aurora-mysql
      - aws rds create-db-instance --db-instance-identifier $TARGETDBENV --db-instance-class db.t3.medium --db-subnet-group-name db_subnet_grp_2019 --engine aurora-mysql --db-cluster-identifier $TARGETDBENV
      - while [ $(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBNAME | grep -c available) -eq 0 ]; do echo "sleep 60s"; sleep 60; done
      - echo "Temp db ready"
      - export ENDPOINT=$(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER| grep "\"Endpoint\"" | grep -v "\-ro\-" | awk -F '\"' '{print $4}')
      - echo $ENDPOINT
  build:
    commands:
      - echo Build started on `date`
      - echo proceed db connection to $ENDPOINT
      - echo proceed db migrate update, DDL proceed here
      - echo proceed application test, CRUD test run here
  post_build:
    commands:
      - echo Build completed on `date`
      - echo $DBNAME
Perhaps you can take advantage of the wait commands in the AWS CLI? Rather than using the while loop, simply wait for the DB instance to become available:
aws rds wait db-instance-available --filters Name=db-cluster-id,Values=$TARGETDBENV
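In the buildspec above, that would replace the hand-rolled polling loop, for example (a sketch; TARGETDBENV is the cluster identifier created earlier in pre_build):

      - aws rds create-db-instance --db-instance-identifier $TARGETDBENV --db-instance-class db.t3.medium --db-subnet-group-name db_subnet_grp_2019 --engine aurora-mysql --db-cluster-identifier $TARGETDBENV
      # Block until the new instance reports "available" instead of polling with grep/sleep
      - aws rds wait db-instance-available --filters Name=db-cluster-id,Values=$TARGETDBENV
      - echo "Temp db ready"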
I want to control Amplify deployments from GitHub Actions because Amplify auto-build:
- doesn't provide a GitHub Environment,
- doesn't watch the CI for failures and will deploy anyway,
- requires me to duplicate the CI setup and re-run it in Amplify, and
- doesn't support running a cypress job out-of-the-box.
Turn off auto-build (in the App settings / General / Branches).
Add the following script and job
scripts/amplify-deploy.sh
echo "Deploy app $1 branch $2"
JOB_ID=$(aws amplify start-job --app-id $1 --branch-name $2 --job-type RELEASE | jq -r '.jobSummary.jobId')
echo "Release started"
echo "Job ID is $JOB_ID"
while [[ "$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')" =~ ^(PENDING|RUNNING)$ ]]; do sleep 1; done
JOB_STATUS="$(aws amplify get-job --app-id $1 --branch-name $2 --job-id $JOB_ID | jq -r '.job.summary.status')"
echo "Job finished"
echo "Job status is $JOB_STATUS"
deploy:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  steps:
    - uses: actions/checkout@v2
    - name: Deploy
      run: ./scripts/amplify-deploy.sh xxxxxxxxxxxxx master
You could improve the script to fail if the release fails, add needed steps (e.g. lint, test), add a GitHub Environment, etc.
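For example, to make the step fail when the release fails, you could append something like this to the script (a sketch; SUCCEED is the status string the Amplify get-job call reports for a successful job, but double-check it against your own CLI output):

# Fail the workflow step if the Amplify job did not succeed
if [[ "$JOB_STATUS" != "SUCCEED" ]]; then
  echo "Amplify release failed with status $JOB_STATUS" >&2
  exit 1
fi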
There's also amplify-cli-action but it didn't work for me.
Disable automatic builds:
Go to App settings > General in the AWS Amplify console and disable automatic builds there.
Go to App settings > Build settings and create a webhook, which is a curl command that will trigger a build.
Example: curl -X POST -d {} URL -H "Content-Type: application/json"
Save the URL in GitHub as a secret.
Add the curl script to the GitHub Actions YAML like this:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: deploy
      run: |
        URL="${{ secrets.WEBHOOK_URL }}"
        curl -X POST -d {} "$URL" -H "Content-Type: application/json"
Similar to the answer above, but I used tags instead. Create an action like ci.yml, turn off auto-build on the staging and prod environments in Amplify, and create the webhook triggers.
name: CI-Staging
on:
  release:
    types: [prereleased]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.STAGING_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
name: CI-production
on:
  release:
    types: [released]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.PRODUCTION_DEPLOY_WEBHOOK }}"
          curl -X POST -d {} "$URL" -H "Content-Type: application/json"
I'm following this doc https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html to set up a pipeline to deploy to the ECS cluster.
This doc uses a custom task definition JSON file and reuses the same file for the deployment after updating the image name.
Am I required to copy the complete task definition JSON and put it in my repository? My task definition has lots of environment variables in it, and I do not want to expose them by committing it to the repository.
Or will the task definition template update the default task definition and create a new revision (rather than overwrite it)?
The deployment step is
tags:
  revision-*:
    - step:
        deployment: production
        name: Deploy to ECS
        script:
          # Replace the docker image name in the task definition with the newly pushed image.
          - export IMAGE_NAME=${ECR_USERNAME}/${BITBUCKET_REPO_SLUG}:latest
          - envsubst < task-definition-template.json > task-definition.json
          # Update the task definition.
          - pipe: atlassian/aws-ecs-deploy:1.0.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              CLUSTER_NAME: $AWS_ECS_CLUSTER_NAME
              SERVICE_NAME: $AWS_ECS_SERVICE_NAME
              TASK_DEFINITION: 'task-definition.json'
It is expecting me to have a task-definition-template.json file in my repository. How can I use the predefined task definitions instead of keeping a JSON file in the repo? Also, where can I find more documentation about the atlassian/aws-ecs-deploy pipe?
You can put a shell script into your repository for deployment, and execute this script in the Bitbucket pipeline.
e.g. put this shell script in cicd/update-task.sh
update-task.sh:
#!/bin/bash
set -e
ECR_IMAGE_TAG=1234555555.dkr.ecr.eu-west-1.amazonaws.com/my-image:abcdefa
if [ "$TASK_FAMILY" = "" ]; then
echo "Missing variable TASK_FAMILY" >&2
exit 1
fi
if [ "$AWS_DEFAULT_REGION" = "" ]; then
echo "Missing variable AWS_DEFAULT_REGION" >&2
exit 1
fi
if [ "$ECR_IMAGE_TAG" = "" ]; then
echo "Missing variable ECR_IMAGE_TAG" >&2
exit 1
fi
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "$TASK_FAMILY")
NEW_TASK_DEFINITION=$(echo "$TASK_DEFINITION" | jq --arg IMAGE "$ECR_IMAGE_TAG" '.taskDefinition | .containerDefinitions[0].image = $IMAGE | del(.taskDefinitionArn) | del(.revision) | del(.status) | del(.requiresAttributes) | del(.compatibilities)')
NEW_TASK_INFO=$(aws ecs register-task-definition --region "$AWS_DEFAULT_REGION" --cli-input-json "$NEW_TASK_DEFINITION")
NEW_REVISION=$(echo "$NEW_TASK_INFO" | jq '.taskDefinition.revision')
# return new task revision
echo "${TASK_FAMILY}:${NEW_REVISION}"
You can use the AWS CLI to run this command and retrieve the existing task definition JSON:
https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-task-definition.html
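For example (my-task-family is a placeholder for your task definition family name):

aws ecs describe-task-definition --task-definition my-task-family --region eu-west-1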
I can trigger my AWS pipeline from Jenkins, but I don't want to create a buildspec.yaml; instead, I want to use the pipeline script that already works for Jenkins.
In order to use CodeBuild, you need to provide the CodeBuild project with a buildspec.yaml file along with your source code, or incorporate the commands into the project itself.
However, I think you are interested in having the creation of the buildspec.yaml file done within the Jenkins pipeline.
Below is a snippet of a stage within a Jenkinsfile; it creates a buildspec file for building Docker images and then sends the contents of the workspace to a CodeBuild project. This uses the CodeBuild plugin for Jenkins.
stage('Build - Non Prod'){
    String nonProductionBuildSpec = """
version: 0.1
phases:
  pre_build:
    commands:
      - \$(aws ecr get-login --registry-ids <number> --region us-east-1)
  build:
    commands:
      - docker build -t ces-sample-docker .
      - docker tag $NAME:$TAG <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
  post_build:
    commands:
      - docker push <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
""".replace("\t"," ")
    writeFile file: 'buildspec.yml', text: nonProductionBuildSpec
    //Send checked out files to AWS
    awsCodeBuild projectName: "<codebuild-projectname>", region: "us-east-1", sourceControlType: "jenkins"
}
I hope this gives you an idea of what's possible.
Good luck!
Patrick
You will need to write a buildspec for the commands that you want AWS CodeBuild to run. If you use the CodeBuild plugin for Jenkins, you can add that to your Jenkins pipeline and use CodeBuild as a Jenkins build slave to execute the commands in your buildspec.
See more details here: https://docs.aws.amazon.com/codebuild/latest/userguide/jenkins-plugin.html
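A minimal declarative stage using the plugin might look like this (a sketch; the project name and region are placeholders, and the previous answer shows how to generate the buildspec on the fly):

stage('CodeBuild') {
    steps {
        awsCodeBuild projectName: '<codebuild-projectname>', region: 'us-east-1', sourceControlType: 'jenkins'
    }
}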
#hynespm - excellent example mate.
Here is another one based on yours, but with stripIndent() and withAWS to switch roles:
#!/usr/bin/env groovy
def cbResult = null
pipeline {
.
.
.
script {
    echo ("app_version TestwithAWS value : " + "${app_version}")
    String buildspec = """\
        version: 0.2
        env:
          parameter-store:
            TOKEN: /some/token
        phases:
          pre_build:
            commands:
              - echo "List files...."
              - ls -l
              - echo "TOKEN is ':' \${TOKEN}"
          build:
            commands:
              - echo "build':' Do something here..."
              - echo "\${CODEBUILD_SRC_DIR}"
              - ls -l "\${CODEBUILD_SRC_DIR}"
          post_build:
            commands:
              - pwd
              - echo "postbuild':' Done..."
        """.stripIndent()
    withAWS(region: 'ap-southeast-2', role: 'CodeBuildWithJenkinsRole', roleAccount: '123456789123', externalId: '123456-2c1a-4367-aa09-7654321') {
        sh 'aws ssm get-parameter --name "/some/token"'
        try {
            cbResult = awsCodeBuild projectName: 'project-lambda',
                sourceControlType: 'project',
                credentialsType: 'keys',
                awsAccessKey: env.AWS_ACCESS_KEY_ID,
                awsSecretKey: env.AWS_SECRET_ACCESS_KEY,
                awsSessionToken: env.AWS_SESSION_TOKEN,
                region: 'ap-southeast-2',
                envVariables: '[ { GITHUB_OWNER, special }, { GITHUB_REPO, project-lambda } ]',
                artifactTypeOverride: 'S3',
                artifactLocationOverride: 'special-artifacts',
                overrideArtifactName: 'True',
                buildSpecFile: buildspec
        } catch (Exception cbEx) {
            cbResult = cbEx.getCodeBuildResult()
        }
    }
} //script
.
.
.
}