Deploy with Docker Compose on AWS ECS with GitHub Actions - amazon-web-services

I recently started experimenting with ECS, Docker Compose, and contexts, and it's really interesting. I have managed to deploy and host a compose file from my terminal using docker compose up and an ecs context, but I would also like to automate this through something like GitHub Actions.
I'm struggling to see how one would set that up, and I have yet to find a guide for it.
Are there any good resources for researching this further? What would be an alternative or maybe even better way of doing CI/CD on AWS through GitHub?

I was also searching for this, but I haven't found anything that confirms it is possible using any of the AWS GitHub Actions. However, you can specify multiple containers as part of the same ECS task definition.

A bit open-ended for a Stack Overflow question, but this blog post walks you through an example of how to use AWS-native CI/CD tools to deploy to ECS via the Docker Compose integration.

I moved away from using Docker Compose and just wrote the CloudFormation templates manually. Docker Compose still has some limitations that require quirky workarounds.
But for anyone wondering how I approached this before moving away from it (including GHA caching):
```yaml
name: Deploy with Docker Compose
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Setup Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push <YOUR_IMAGE>
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          cache-from: type=gha,scope=<YOUR_IMAGE>
          cache-to: type=gha,scope=<YOUR_IMAGE>
          tags: |
            ${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY }}:<YOUR_IMAGE>-${{ github.sha }}
            ${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY }}:<YOUR_IMAGE>-latest
      - name: Install Docker Compose
        run: curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
      - name: Docker context
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          GITHUB_SHA: ${{ github.sha }}
        run: |
          docker context create ecs ecs-context --from-env
          docker --context ecs-context compose up
```
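For context, the compose file that `docker --context ecs-context compose up` deploys can carry ECS-specific hints through `x-aws-*` extension fields supported by the Compose ECS integration. A minimal sketch, where the VPC id, image reference, and resource limits are placeholders you would replace with your own values:

```yaml
# Optional: pin the deployment to an existing VPC; omit to let the CLI pick the default one.
x-aws-vpc: "vpc-0123456789abcdef0"   # placeholder VPC id
services:
  web:
    # Placeholder image reference; matches the tags pushed by the workflow above.
    image: <ECR_REGISTRY>/<ECR_REPOSITORY>:<YOUR_IMAGE>-latest
    ports:
      - "80:80"
    deploy:
      resources:
        limits:
          # Mapped to Fargate task CPU/memory sizes by the ECS integration.
          cpus: "0.25"
          memory: 512M
```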

Related

Unable to load AWS credentials in github actions

I'm running a GitHub Actions pipeline to deploy a React project to an S3 bucket in AWS and receive the following error when running the action:
```
Run aws-actions/configure-aws-credentials@v1
Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers
```
Here's my .github/workflows/main.yaml
```yaml
name: S3 Pipeline
on:
  push:
    branches:
      - DEV
permissions:
  id-token: write
  contents: read
jobs:
  Deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: 'npm'
      - name: install
        run: npm ci
      - name: Run build
        run: npm run build
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./build s3://${{ secrets.BUCKET }}
```
I've tried adding the following, with no success. I've also played around with the aws-actions version:
```yaml
permissions:
  id-token: write
  contents: read
```
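Since the workflow already grants `id-token: write`, one common alternative is to drop the static access keys entirely and let the action assume an IAM role via GitHub's OIDC provider. A sketch, assuming you have created a role trusting GitHub's OIDC identity provider (the role ARN below is a placeholder, and `role-to-assume` with OIDC requires a newer version of the action than v1):

```yaml
- name: Configure AWS Credentials
  # role-to-assume via OIDC needs a newer action version than v1
  uses: aws-actions/configure-aws-credentials@v2
  with:
    # Placeholder ARN: an IAM role whose trust policy allows
    # token.actions.githubusercontent.com for this repository.
    role-to-assume: arn:aws:iam::123456789012:role/my-github-actions-role
    aws-region: ${{ secrets.REGION }}
```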

AWS sync to deploy only new or updated files to s3

I've written a GitHub Actions script that takes files from a migrations folder and uploads them to S3. The problem with this pipeline is that all the other files in the directory also get updated. How can I go about only updating new or changed files?
Here's the current script as it stands.
```yaml
name: function-name
on:
  push:
    branches:
      - dev
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@master
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install Dependencies
        run: npm install
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - name: Deploy file to s3
        run: aws s3 sync ./migration/ s3://s3_bucket
```
You could try the GitHub Action jakejarvis/s3-sync-action, which uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.
It is based on aws s3 sync, which should enable an incremental upload instead of copying/modifying every file.
Add the migration folder as "source_dir":
```yaml
steps:
  ...
  - uses: jakejarvis/s3-sync-action@master
    with:
      args: --acl public-read --follow-symlinks --delete
    env:
      AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: 'us-west-1'   # optional: defaults to us-east-1
      SOURCE_DIR: 'migration'   # optional: defaults to entire repository
```
However, taseenb comments:
This does not work as intended (like an incremental backup). The s3 sync CLI command will copy all files every time it is run inside a GitHub Action. I believe this happens because the repository is cloned inside a Docker image to execute the operation (which is what jakejarvis/s3-sync-action does), so the file timestamps are fresh on every run. I don't think there is a perfect solution using s3 sync. But if you are sure that your files always change size, you can use --size-only in the args. It will ignore files with the same size, so it is probably not safe in most cases.
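For the narrow case taseenb describes, the same flag can be added to the plain CLI step from the question, without the third-party action. A sketch, with the caveat repeated from the comment above:

```yaml
- name: Deploy files to s3 (skip same-size files)
  # --size-only compares only file size, not timestamps or content,
  # so edits that do not change a file's size will be silently skipped.
  run: aws s3 sync ./migration/ s3://s3_bucket --size-only
```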

Multiple containers in ECS task definition deployment with GitHub Actions

I have deployed my Docker images to an ECS cluster from GitHub Actions. The issue is that I created a single task definition with multiple containers in it. Now, through GitHub Actions, I want to edit the task-def.json with multiple containers and deploy it all at once.
I know it is ideal to have a single task definition per container when using GitHub Actions, but is there any way I can pass multiple images to the task definition and still deploy it in two steps?
These are the steps I have been using:
```yaml
- name: Fill in the new image ID in the Amazon ECS task definition
  id: task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: my-container
    image: image_name
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.task-def.outputs.task-definition }}
    service: my-container-service
    cluster: my-cluster
    wait-for-service-stability: true
```
Any help with sending multiple images at once to the task definition file is appreciated. Thanks in advance!
You can chain a second amazon-ecs-render-task-definition step, feeding it the output of the first:
```yaml
- name: Modify Amazon ECS task definition with second container
  id: render-app-container
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.task-def.outputs.task-definition }}
    container-name: app2
    image: amazon/amazon-ecs-sample-2:latest
```
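The last render step's output can then be fed to the deploy step from the question, so both container updates land in a single deployment. A sketch reusing the step id, service, and cluster names from the snippets above:

```yaml
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    # Note: this references the SECOND render step's output
    # (render-app-container), which already includes the first image change.
    task-definition: ${{ steps.render-app-container.outputs.task-definition }}
    service: my-container-service
    cluster: my-cluster
    wait-for-service-stability: true
```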

How to configure / use AWS CLI in GitHub Actions?

I'd like to run commands like aws amplify start-job in GitHub Actions. I understand the AWS CLI is pre-installed but I'm not sure how to configure it.
In particular, I'm not sure how the env vars are named for all configuration options as some docs only mention AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY but nothing for the region and output settings.
I recommend using this AWS action for setting all the AWS region and credentials environment variables in the GitHub Actions environment. It doesn't set the output env vars so you still need to do that, but it has nice features around making sure that credential env vars are masked in the output as secrets, supports assuming a role, and provides your account ID if you need it in other actions.
https://github.com/marketplace/actions/configure-aws-credentials-action-for-github-actions
I could provide the following secrets and env vars and then use the commands:
```yaml
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
  AWS_DEFAULT_OUTPUT: json
```
E.g.
```yaml
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Deploy
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: eu-west-1
        AWS_DEFAULT_OUTPUT: json
      run: aws amplify start-job --app-id xxx --branch-name master --job-type RELEASE
```
In my experience, the out-of-the-box AWS CLI on the Actions runner works just fine.
But there are times when you'd prefer to use a credentials file (like the Terraform AWS provider does), and this is an example of that.
This base64-decodes the encoded file so it can be used in the following steps:
```yaml
- name: Write into file
  id: write_file
  uses: timheuer/base64-to-file@v1.0.3
  with:
    fileName: 'myTemporaryFile.txt'
    encodedString: ${{ secrets.AWS_CREDENTIALS_FILE_BASE64 }}
```
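A later step can then point the AWS CLI at that file via the standard `AWS_SHARED_CREDENTIALS_FILE` environment variable. A sketch, assuming the action exposes the written path as a `filePath` output (check the action's README for the exact output name in the version you use):

```yaml
- name: Use credentials file with the AWS CLI
  env:
    # filePath is an assumed output name of timheuer/base64-to-file;
    # AWS_SHARED_CREDENTIALS_FILE is the standard AWS CLI override.
    AWS_SHARED_CREDENTIALS_FILE: ${{ steps.write_file.outputs.filePath }}
  run: aws sts get-caller-identity
```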

Using GitHub Actions for CI/CD on an AWS EC2 machine?

I am new to GitHub Actions workflows and was wondering whether it is possible to set up my EC2 machine directly for CI and CD after every push.
I have seen that it is possible with ECS, but I wanted a more straightforward solution; we are trying this out on our dev environment and don't want to overshoot our budget.
Is it possible, and if yes, how can I achieve it?
If you build your code in GitHub Actions and just want to copy the package over to an existing EC2 instance, you can use the SCP Files action:
https://github.com/marketplace/actions/scp-files
```yaml
- name: copy file via ssh key
  uses: appleboy/scp-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    port: ${{ secrets.PORT }}
    key: ${{ secrets.KEY }}
    source: "tests/a.txt,tests/b.txt"
    target: "test"
```
If you have any other AWS resource which interacts with EC2 (or any other AWS service) and you want to use AWS CLI, you can use AWS Credentials Action
https://github.com/aws-actions/configure-aws-credentials
```yaml
- name: Configure AWS credentials from Test account
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.TEST_AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.TEST_AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Copy files to the test website with the AWS CLI
  run: |
    aws s3 sync . s3://my-s3-test-website-bucket
```
Here is a nice article. Its goal is to build a CI/CD stack with GitHub Actions + AWS EC2, CodeDeploy, and S3.