How to deploy an AWS Amplify app from GitHub Actions?

I want to control Amplify deployments from GitHub Actions because Amplify auto-build:
- doesn't provide a GitHub Environment,
- doesn't watch the CI for failures and will deploy anyway, or
- requires me to duplicate the CI setup and re-run it in Amplify, and
- doesn't support running a Cypress job out-of-the-box.

Turn off auto-build (in the App settings / General / Branches).
Add the following script and job:
scripts/amplify-deploy.sh
#!/usr/bin/env bash
echo "Deploy app $1 branch $2"
JOB_ID=$(aws amplify start-job --app-id "$1" --branch-name "$2" --job-type RELEASE | jq -r '.jobSummary.jobId')
echo "Release started"
echo "Job ID is $JOB_ID"
# Poll until the Amplify job leaves the PENDING/RUNNING states
while [[ "$(aws amplify get-job --app-id "$1" --branch-name "$2" --job-id "$JOB_ID" | jq -r '.job.summary.status')" =~ ^(PENDING|RUNNING)$ ]]; do sleep 1; done
JOB_STATUS="$(aws amplify get-job --app-id "$1" --branch-name "$2" --job-id "$JOB_ID" | jq -r '.job.summary.status')"
echo "Job finished"
echo "Job status is $JOB_STATUS"
deploy:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
    AWS_DEFAULT_OUTPUT: json
  steps:
    - uses: actions/checkout@v2
    - name: Deploy
      run: ./scripts/amplify-deploy.sh xxxxxxxxxxxxx master
You could improve the script to fail if the release fails, add needed steps (e.g. lint, test), add a GitHub Environment, etc.
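For example, a minimal sketch of that failure check, appended to the end of amplify-deploy.sh (it assumes Amplify reports a successful job as SUCCEED, and exits non-zero otherwise so the GitHub Actions step is marked failed):
# Fail the workflow step if the Amplify release did not succeed
if [[ "$JOB_STATUS" != "SUCCEED" ]]; then
  echo "Amplify job $JOB_ID failed with status $JOB_STATUS"
  exit 1
fi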
There's also amplify-cli-action but it didn't work for me.

Disable automatic builds:
Go to App settings > General in the AWS Amplify console and disable automatic builds there.
Go to App settings > Build settings and create a webhook, which gives you a curl command that will trigger a build (a CLI alternative is sketched at the end of this answer).
Example: curl -X POST -d '{}' "URL" -H "Content-Type: application/json"
Save the webhook URL in GitHub as a secret.
Add the curl command to the GitHub Actions workflow YAML like this:
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: deploy
      run: |
        URL="${{ secrets.WEBHOOK_URL }}"
        curl -X POST -d '{}' "$URL" -H "Content-Type: application/json"
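If you'd rather create the webhook from the CLI than the console, something like this should work (a sketch; the app id and branch name are placeholders, and the printed URL is what you would store as the WEBHOOK_URL secret):
# Create an Amplify incoming webhook for a branch and print its URL
aws amplify create-webhook --app-id xxxxxxxxxxxxx --branch-name master \
  | jq -r '.webhook.webhookUrl'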

Similar to the second answer here, but I used tags instead.
Create workflows like the ones below (e.g. ci.yml), turn off auto-build on the staging & prod environments in Amplify, and create the webhook triggers.
name: CI-Staging
on:
  release:
    types: [prereleased]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.STAGING_DEPLOY_WEBHOOK }}"
          curl -X POST -d '{}' "$URL" -H "Content-Type: application/json"
name: CI-production
on:
  release:
    types: [released]
permissions: read-all # This is required to read the secrets
jobs:
  deploy-production:
    runs-on: ubuntu-latest
    permissions: read-all # This is required to read the secrets
    steps:
      - name: deploy
        run: |
          URL="${{ secrets.PRODUCTION_DEPLOY_WEBHOOK }}"
          curl -X POST -d '{}' "$URL" -H "Content-Type: application/json"
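With this setup, publishing a pre-release triggers the staging deploy and publishing a full release triggers production, e.g. with the GitHub CLI (tag names here are just illustrations):
# Pre-release -> CI-Staging workflow
gh release create v1.2.0-rc.1 --prerelease --generate-notes
# Full release -> CI-production workflow
gh release create v1.2.0 --generate-notes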

Related

Permission denied on entrypoint when trying to update Elastic Beanstalk via GitHub Actions

I feel this might be an IAM question, but I don't really know where to begin. I have a Docker-based EBS environment that works great when I update it manually. However, when I update it with GitHub Actions, the container fails with the following message.
unable to start container process: exec: "./docker/entrypoint.sh": permission denied: unknown.
My CD pipeline authenticates, pushes a new Docker image to the registry, and then updates the Dockerrun.aws.json by editing the image name. The workflow runs OK: the image is pushed, and the Dockerrun.aws.json is correct... and yet the environment fails to launch.
name: Release
on:
  push:
    tags:
      - 'v*'
jobs:
  deploy-to-aws-ebs:
    runs-on: ubuntu-latest
    environment: staging
    permissions:
      id-token: write
      contents: read
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - name: Check out the repository
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/ServiceRoleForEBSDeploy
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Get tag name
        run: echo "tag=`echo ${{ github.ref }} | sed -e 's/\./-/g' | cut -c11-`-`echo ${{ github.sha }} | cut -c1-8`" >> $GITHUB_ENV
      - name: Build, tag, and push docker image to Amazon ECR
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: docker_repository
          IMAGE_TAG: ${{ env.tag }}
        run: |
          docker build -t $REGISTRY/$REPOSITORY:$IMAGE_TAG .
          docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG
          echo "IMAGE_NAME=$REGISTRY/$REPOSITORY:$IMAGE_TAG" >> $GITHUB_ENV
      - name: Create deployment package
        run: |
          sed -e "s|<IMAGE_NAME>|${{ env.IMAGE_NAME }}|g" \
            docker/Dockerrun.aws.template.json > Dockerrun.aws.json
          cat Dockerrun.aws.json
      - name: Deploy to AWS Elastic Beanstalk
        env:
          AWS_EBS_APP_NAME: app_name
          AWS_EBS_ENV_NAME: env_name
        run: |
          aws s3 cp Dockerrun.aws.json s3://${{ secrets.AWS_S3_BUCKET_NAME }}/versions/${{ env.tag }}-Dockerrun.aws.json
          aws elasticbeanstalk create-application-version \
            --application-name $AWS_EBS_APP_NAME \
            --source-bundle S3Bucket=${{ secrets.AWS_S3_BUCKET_NAME }},S3Key=versions/${{ env.tag }}-Dockerrun.aws.json \
            --version-label ${{ env.tag }}
          aws elasticbeanstalk update-environment \
            --application-name $AWS_EBS_APP_NAME \
            --environment-name $AWS_EBS_ENV_NAME \
            --version-label ${{ env.tag }}
Meanwhile, the Dockerfile is your basic Django stuff.
FROM python:3.10-slim-buster
ARG APP_HOME=/code \
    USERNAME=user101
WORKDIR ${APP_HOME}
RUN addgroup --system ${USERNAME} \
    && adduser --system --ingroup ${USERNAME} ${USERNAME}
RUN apt-get update --yes --quiet && apt-get install --no-install-recommends --yes --quiet \
    # dependencies for building Python packages
    build-essential \
    # psycopg2 dependencies
    libpq-dev \
    # dev utils
    git \
    # cleanup
    && rm -rf /var/lib/apt/lists/*
COPY --chown=${USERNAME}:${USERNAME} . ${APP_HOME}
RUN pip install --upgrade pip
RUN pip install poetry
RUN poetry install --no-interaction --no-ansi
EXPOSE 80
USER ${USERNAME}
ENTRYPOINT ["./docker/entrypoint.sh"]
CMD ["gunicorn", "config.wsgi:application", "--bind", ":80"]
My guess is that EBS is trying to build the environment with the GitHub Actions service user? Does that make sense? Should it be using the user defined in the Dockerfile?
This has nothing to do with IAM permissions.
You just need to make your script executable and commit the change (Git tracks the executable bit):
$ chmod +x ./docker/entrypoint.sh
You can also run it inside the Dockerfile before the ENTRYPOINT command:
RUN chmod +x ./docker/entrypoint.sh
ENTRYPOINT ["./docker/entrypoint.sh"]
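If the executable bit keeps getting lost (for example when the file was added from Windows), you can also set it directly in Git's index; a small sketch:
# Mark the entrypoint as executable in the Git index and commit the change
git update-index --chmod=+x docker/entrypoint.sh
git commit -m "Make entrypoint.sh executable"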

Deploy a Django application on Azure VM using Github Actions

I have built a Django application and dockerized it with Nginx, and I also created a GitHub workflow to build the Docker image and push it to ghcr.io.
Now I want to deploy the Docker image (from ghcr.io) to an Azure virtual machine (Ubuntu), but I can't figure out how to connect the Azure VM to the GitHub workflow and execute commands on it.
name: CI and CD
on: [push]
env:
  DOMAIN_NAME: ${{ secrets.DOMAIN_NAME }}
jobs:
  build:
    name: Build Docker Images
    runs-on: ubuntu-latest
    steps:
      - name: Checkout master
        uses: actions/checkout@v1
      - name: Add environment variables to .env
        run: |
          echo DJANGO_SECRET_KEY=${{ secrets.DJANGO_SECRET_KEY }} >> .env
          echo DJANGO_ALLOWED_HOSTS=${{ secrets.DJANGO_ALLOWED_HOSTS }} >> .env
          echo DATABASE=postgres >> .env
          echo DB_NAME=${{ secrets.DB_NAME }} >> .env
          echo DB_USER=${{ secrets.DB_USER }} >> .env
          echo DB_PASS='${{ secrets.DB_PASS }}' >> .env
          echo DB_HOST=${{ secrets.DB_HOST }} >> .env
          echo DB_PORT=${{ secrets.DB_PORT }} >> .env
          echo VIRTUAL_HOST=$DOMAIN_NAME >> .env
          echo VIRTUAL_PORT=8000 >> .env
          echo LETSENCRYPT_HOST=$DOMAIN_NAME >> .env
          echo EMAIL_HOST_USER=${{ secrets.EMAIL_HOST_USER }} >> .env
          echo EMAIL_HOST_PASSWORD=${{ secrets.EMAIL_HOST_PASSWORD }} >> .env
          echo DEFAULT_EMAIL=${{ secrets.DEFAULT_EMAIL }} >> .env
          echo NGINX_PROXY_CONTAINER=nginx-proxy >> .env
      - name: Set environment variables
        run: |
          echo WEB_IMAGE=ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/web >> $GITHUB_ENV
          echo NGINX_IMAGE=ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx >> $GITHUB_ENV
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.NAMESPACE }}
          password: ${{ secrets.PERSONAL_ACCESS_TOKEN }}
      - name: Pull images
        run: |
          docker pull $WEB_IMAGE || true
          docker pull $NGINX_IMAGE || true
      - name: Build images
        run: docker-compose build
      - name: Push images
        run: |
          docker push $WEB_IMAGE
          docker push $NGINX_IMAGE
You can set up a GitHub Actions self-hosted runner on the Azure VM and use it to deploy the Django application.
Firstly, install the self-hosted runner on the Azure VM by SSHing into the VM and running the commands below:
# Create a folder for the runner and download it
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.278.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.278.0/actions-runner-linux-x64-2.278.0.tar.gz
# Extract the installer
tar xzf ./actions-runner-linux-x64-2.278.0.tar.gz
Now configure the VM to communicate with your GitHub account using the command below:
./config.sh --url https://github.com/{{Yourorganization}} --token <YOURTOKENFROMGITHUB>
You will be prompted through the registration process of your GitHub Actions self-hosted runner.
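Optionally, you can install the runner as a service so it keeps running after you log out, using the scripts shipped with the runner:
# Install and start the runner as a systemd service (run from the actions-runner folder)
sudo ./svc.sh install
sudo ./svc.sh start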
Then install the dependencies your Django application needs on the VM.
Now you can run your GitHub Actions workflow on the self-hosted runner.
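For example, a deploy job targeting the self-hosted runner might look roughly like this (a sketch based on the question's setup; it assumes a docker-compose.yml is available in the runner's work directory or checked out first):
deploy:
  name: Deploy on Azure VM
  runs-on: self-hosted   # picked up by the runner registered on the Azure VM
  needs: build
  steps:
    - name: Checkout
      uses: actions/checkout@v1
    - name: Pull latest images and restart containers
      run: |
        docker pull ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/web
        docker pull ghcr.io/$(echo $GITHUB_REPOSITORY | tr '[:upper:]' '[:lower:]')/nginx
        docker-compose up -d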
Reference: Using the GitHub self-hosted runner and Azure Virtual Machines to login with a System Assigned Managed Identity | Cloud With Chris

How to use serverless framework in github actions using github actions OIDC feature

I have followed this question How can I connect GitHub actions with AWS deployments without using a secret key?.
However, I am trying to go one step further by deploying a Lambda function using Serverless.
Here is what I have tried so far:
name: For Production
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{secrets.JWKS_URI}} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Configure AWS
        run: |
          sleep 5 # Need to have a delay to acquire this
          export AWS_ROLE_ARN=arn:aws:iam::xxxxxxx:role/my-role
          export AWS_WEB_IDENTITY_TOKEN_FILE=/tmp/awscreds
          export AWS_DEFAULT_REGION=ap-southeast-1
          echo AWS_WEB_IDENTITY_TOKEN_FILE=$AWS_WEB_IDENTITY_TOKEN_FILE >> $GITHUB_ENV
          echo AWS_ROLE_ARN=$AWS_ROLE_ARN >> $GITHUB_ENV
          echo AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION >> $GITHUB_ENV
          curl -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=githubactions" \
            | jq -r '.value' > $AWS_WEB_IDENTITY_TOKEN_FILE
          sls deploy --stage prod --verbose
        working-directory: './backend-operations'
      # - name: Deploy to AWS
      #   run: serverless deploy --stage prod --verbose
      #   working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{secrets.CODECOV_SECRET_TOKEN}}
I solved it using the aws-actions/configure-aws-credentials GitHub Action, as it sets a temporary access key ID and secret key in the environment.
Hence there is no need to create AWS programmatic keys from here on.
Note: the latest update of GitHub OIDC has changed its domain name to https://token.actions.githubusercontent.com
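For completeness, the IAM role the workflow assumes needs a trust policy for GitHub's OIDC provider. A rough sketch of setting it with the AWS CLI (account ID, role name, and repository are placeholders):
# Trust policy allowing GitHub's OIDC provider to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com" },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
      "StringLike": { "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:*" }
    }
  }]
}
EOF
aws iam update-assume-role-policy --role-name my-role --policy-document file://trust-policy.json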
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Production-Deployment
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
          cache-dependency-path: ./backend-operations/package-lock.json
      - name: Create env file
        run: |
          touch ./backend-operations/.env
          echo JWKS_URI=${{secrets.JWKS_URI}} >> ./backend-operations/.env
          echo AUDIENCE=${{ secrets.AUDIENCE }} >> ./backend-operations/.env
          echo TOKEN_ISSUER=${{ secrets.TOKEN_ISSUER }} >> ./backend-operations/.env
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-region: ap-southeast-1
          role-to-assume: ${{secrets.ROLE_ARN}}
      - run: npm ci
        working-directory: ./backend-operations
      - run: npm run build --if-present
        working-directory: ./backend-operations
      - run: npm test
        working-directory: ./backend-operations
      - name: Install Serverless Framework
        run: npm install -g serverless
      - name: Serverless Authentication
        run: sls config credentials --provider aws --key ${{ env.AWS_ACCESS_KEY_ID }} --secret ${{ env.AWS_SECRET_ACCESS_KEY }}
      - name: Deploy to AWS
        run: serverless deploy --stage prod --verbose
        working-directory: './backend-operations'
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          token: ${{secrets.CODECOV_SECRET_TOKEN}}

aws-iam-authenticator install via ansible

Looking for how to properly translate a bash command (originally inside a Dockerfile) into an Ansible task/role that will download the latest aws-iam-authenticator binary and install it into /usr/local/bin on an Ubuntu (x64) OS.
Currently I have:
curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest | grep "browser_download.url.*linux_amd64" | cut -d : -f 2,3 | tr -d '"' | wget -O /usr/local/bin/aws-iam-authenticator -qi - && chmod 555 /usr/local/bin/aws-iam-authenticator
Basically you need to write a playbook and separate that command into several tasks.
Example example.yml file:
- hosts: localhost
  tasks:
    - shell: |
        curl -s https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
      register: json
    - set_fact:
        url: "{{ (json.stdout | from_json).assets[2].browser_download_url }}"
    - get_url:
        url: "{{ url }}"
        dest: /usr/local/bin/aws-iam-authenticator-ansible
        mode: 0555
You can execute it by running:
ansible-playbook --become example.yml
I hope this is what you're looking for ;-)
After finding other posts that gave strong hints, information, and unresolved issues (Ansible - Download latest release binary from Github repo & https://github.com/ansible/ansible/issues/27299#issuecomment-331068246), I was able to come up with the following Ansible tasks, which work for me.
- name: Get latest url for linux-amd64 release for aws-iam-authenticator
  uri:
    url: https://api.github.com/repos/kubernetes-sigs/aws-iam-authenticator/releases/latest
    return_content: true
    body_format: json
  register: json_response

- name: Download and install aws-iam-authenticator
  get_url:
    url: "{{ json_response.json | to_json | from_json | json_query(\"assets[?ends_with(name,'linux_amd64')].browser_download_url | [0]\") }}"
    mode: 0555
    dest: /usr/local/bin/aws-iam-authenticator
Note
If you're running the AWS CLI version 1.16.156 or later, then you don't need to install the authenticator. Instead, you can use the aws eks get-token command. For more information, see Create kubeconfig manually.
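For reference, a kubeconfig user entry that uses aws eks get-token instead of the authenticator binary might look roughly like this (a sketch; the cluster name is a placeholder):
users:
  - name: my-eks-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "my-eks-cluster"]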

How to run my jenkins pipeline code in AWS CodeBuild?

I can trigger my AWS pipeline from Jenkins, but I don't want to create a buildspec.yaml; instead I want to use the pipeline script that already works for Jenkins.
In order to use CodeBuild, you need to provide the CodeBuild project with a buildspec.yaml file along with your source code, or incorporate the commands into the actual project.
However, I think you are interested in having the creation of the buildspec.yaml file done within the Jenkins pipeline.
Below is a snippet of a stage within a Jenkinsfile; it creates a buildspec file for building Docker images and then sends the contents of the workspace to a CodeBuild project. This uses the CodeBuild plugin for Jenkins.
stage('Build - Non Prod'){
    String nonProductionBuildSpec = """
version: 0.1
phases:
  pre_build:
    commands:
      - \$(aws ecr get-login --registry-ids <number> --region us-east-1)
  build:
    commands:
      - docker build -t ces-sample-docker .
      - docker tag $NAME:$TAG <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
  post_build:
    commands:
      - docker push <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
""".replace("\t"," ")
    writeFile file: 'buildspec.yml', text: nonProductionBuildSpec
    //Send checked out files to AWS
    awsCodeBuild projectName: "<codebuild-projectname>", region: "us-east-1", sourceControlType: "jenkins"
}
I hope this gives you an idea of whats possible.
Good luck!
Patrick
You will need to write a buildspec for the commands that you want AWS CodeBuild to run. If you use the CodeBuild plugin for Jenkins, you can add that to your Jenkins pipeline and use CodeBuild as a Jenkins build slave to execute the commands in your buildspec.
See more details here: https://docs.aws.amazon.com/codebuild/latest/userguide/jenkins-plugin.html
@hynespm - excellent example mate.
Here is another one based off yours but with stripIndent() and "withAWS" to switch roles:
#!/usr/bin/env groovy
def cbResult = null
pipeline {
    .
    .
    .
    script {
        echo ("app_version TestwithAWS value : " + "${app_version}")
        String buildspec = """\
            version: 0.2
            env:
              parameter-store:
                TOKEN: /some/token
            phases:
              pre_build:
                commands:
                  - echo "List files...."
                  - ls -l
                  - echo "TOKEN is ':' \${TOKEN}"
              build:
                commands:
                  - echo "build':' Do something here..."
                  - echo "\${CODEBUILD_SRC_DIR}"
                  - ls -l "\${CODEBUILD_SRC_DIR}"
              post_build:
                commands:
                  - pwd
                  - echo "postbuild':' Done..."
            """.stripIndent()
        withAWS(region: 'ap-southeast-2', role: 'CodeBuildWithJenkinsRole', roleAccount: '123456789123', externalId: '123456-2c1a-4367-aa09-7654321') {
            sh 'aws ssm get-parameter --name "/some/token"'
            try {
                cbResult = awsCodeBuild projectName: 'project-lambda',
                    sourceControlType: 'project',
                    credentialsType: 'keys',
                    awsAccessKey: env.AWS_ACCESS_KEY_ID,
                    awsSecretKey: env.AWS_SECRET_ACCESS_KEY,
                    awsSessionToken: env.AWS_SESSION_TOKEN,
                    region: 'ap-southeast-2',
                    envVariables: '[ { GITHUB_OWNER, special }, { GITHUB_REPO, project-lambda } ]',
                    artifactTypeOverride: 'S3',
                    artifactLocationOverride: 'special-artifacts',
                    overrideArtifactName: 'True',
                    buildSpecFile: buildspec
            } catch (Exception cbEx) {
                cbResult = cbEx.getCodeBuildResult()
            }
        }
    } //script
    .
    .
    .
}