Multiple containers in an ECS task definition deployment with GitHub Actions

I have deployed my Docker images to an ECS cluster from GitHub Actions. The issue is that I created a single task definition with multiple containers inside it. Now, through GitHub Actions, I want to edit the task-def.json with multiple containers and deploy it all at once.
I know it is ideal to have a single task definition per container, but is there any way I can pass multiple images to the task definition and still deploy in just two steps?
These are the steps I have been using:
- name: Fill in the new image ID in the Amazon ECS task definition
  id: task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: my-container
    image: image_name
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.task-def.outputs.task-definition }}
    service: my-container-service
    cluster: my-cluster
    wait-for-service-stability: true
Any help with sending multiple images at once to the task definition file is appreciated. Thanks in advance!

You can chain the render action: run it once per additional container, passing in the task definition produced by the previous render step.

- name: Modify Amazon ECS task definition with second container
  id: render-app-container
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.task-def.outputs.task-definition }}
    container-name: app2
    image: amazon/amazon-ecs-sample-2:latest
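The deploy step then consumes the output of the last render step. A sketch, reusing the service and cluster names from the question:

- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.render-app-container.outputs.task-definition }}
    service: my-container-service
    cluster: my-cluster
    wait-for-service-stability: true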

Related

How can I configure my AWS credentials in a shared credentials file for GitHub Actions?

I am trying to deploy a CI/CD pipeline for ECR in AWS.
It will push/pull the image from ECR.
We are trying to migrate the Azure pipeline to a GitHub Actions pipeline.
When I try to implement the pipeline, I am facing the below error:
[05:25:00] CredentialsProviderError: Profile Pinz could not be found or parsed in shared credentials file.
at resolveProfileData (/home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/resolveProfileData.js:26:11)
at /home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/fromIni.js:8:56
at async loadFromProfile (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/aws/GetCredentialsFromProfile.js:23:25)
at async BuildDeployContext (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/DeployContext.js:95:70)
at async Publish (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/Publish.js:14:21)
Error: Process completed with exit code 1.
Here is my workflow YAML file:
on:
  push:
    branches: [ main ]
name: Node Project `my-app` CI on ECR
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node 14.17.X
        uses: actions/setup-node@v2
        with:
          node-version: 14.17.X
      - name: 'Yarn'
        uses: borales/actions-yarn@v2.3.0
        with:
          cmd: install --frozen-lockfile --non-interactive
      - name: Update SAM version
        uses: aws-actions/setup-sam@v1
      - run: |
          wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
          unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
          sudo ./sam-installation/install --update
          sam --version
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: test-pinz-api
          IMAGE_TAG: latest
        run: |
          gulp publish --profile-name development
Using gulp, we publish the environment using the below config file:
{
  "apiDomainName": "domain",
  "assetsDomainName": "domain",
  "awsProfile": "Pinz",
  "bastionBucket": "bucketname",
  "corsDomains": ["domain"],
  "dbBackupSources": ["db source", "db source"],
  "dbClusterIdentifier": "cluster identifier",
  "designDomainName": "domain",
  "lambdaEcr": "ecr",
  "snsApplication": "sns",
  "snsServerKeySecretName": "name",
  "stackName": "name",
  "templateBucket": "bucketname",
  "userJwtPublicKey": "token",
  "websiteUrl": "domain",
  "wwwDomainName": "domain",
  "wwwEcr": "ecr repo"
}
I couldn't find the shared credentials file where the AWS credentials are saved.
I have no idea where the below profile is configured:
"awsProfile": "Pinz"
I went through all the project files but couldn't find the shared credentials. I searched many documents and got close to an answer, but never the exact one. The page below says the file lives at ~/.aws/credentials, but how does the above JSON file get the credentials from there?
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html
Honestly, this ECR pipeline deployment is my first, and I didn't get a proper knowledge transfer about the process either. I think I am almost done, but gulp shows this error.
Can anyone please guide me to where this shared credentials file will be? If not, how can I configure the AWS credentials to authenticate with AWS?
Your gulp config file has the profile set to Pinz; remove this line completely:
{
  ...
  "awsProfile": "Pinz",
  ...
}
The configure-aws-credentials action will automatically pick up your access key ID and secret access key and export them as environment variables that the AWS SDK can use.
The rest of the pipeline should then pick up the configured credentials automatically.
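If you want to confirm that the credentials resolve without any profile, a quick check step after the configure-aws-credentials step works; a minimal sketch:

- name: Verify AWS identity
  run: aws sts get-caller-identity

If this prints your account and caller ARN, the environment-variable credentials are in place and no shared credentials file is needed.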

Deploy with Docker Compose on AWS ECS with GitHub Actions

I recently started experimenting with ECS, Docker Compose, and contexts, and it's really interesting. I have managed to deploy and host a Compose file through my terminal using docker compose up and an ECS context, but I would also like to automate this through something like GitHub Actions.
I'm struggling to see how one would set that up, and I have yet to find a guide for it.
Are there any good resources for researching this further? What would be an alternative, or maybe even better, way of doing CI/CD on AWS through GitHub?
I was also searching for this, but I haven't found anything that confirms it is possible using any of the AWS GitHub Actions. However, you can specify multiple containers as part of the same ECS task definition.
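For instance, a task definition with two containers just lists both under containerDefinitions. A trimmed, illustrative sketch (names and images are placeholders, not from the original posts):

{
  "family": "my-task",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "image_name",
      "essential": true
    },
    {
      "name": "app2",
      "image": "amazon/amazon-ecs-sample-2:latest",
      "essential": false
    }
  ]
}

Each render step in the first question's workflow rewrites the image of one entry in this array.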
A bit open-ended for a Stack Overflow question, but this blog post walks you through an example of how to use AWS-native CI/CD tools to deploy to ECS via the Docker Compose integration.
I moved away from using Docker Compose and just wrote the CloudFormation templates manually. Docker Compose still has some limitations that require quirky workarounds.
But for anyone wondering how I approached this before moving away from it (including GHA caching):
name: Deploy with Docker Compose
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Setup Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push <YOUR_IMAGE>
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          cache-from: type=gha,scope=<YOUR_IMAGE>
          cache-to: type=gha,scope=<YOUR_IMAGE>
          tags: |
            ${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY }}:<YOUR_IMAGE>-${{ github.sha }}
            ${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY }}:<YOUR_IMAGE>-latest
      - name: Install Docker Compose
        run: curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
      - name: Docker context
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          GITHUB_SHA: ${{ github.sha }}
        run: |
          docker context create ecs ecs-context --from-env
          docker --context ecs-context compose up
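One follow-up worth knowing: the same ECS context can tear the stack down again (Compose on ECS provisions a CloudFormation stack under the hood, and this deletes it):

docker --context ecs-context compose down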

AWS Elastic Beanstalk with AMI2 and docker-compose.yml

I have been trying to make Elastic Beanstalk work with the Docker AMI2 image and a docker-compose.yml file.
The documentation says it should work out of the box with a docker-compose.yml file.
I use ECR as the Docker registry and have updated the Elastic Beanstalk role to be able to pull images from ECR.
https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
Create a docker-compose.yml file to deploy a Docker image from a hosted repository to Elastic Beanstalk. No other files are required if all your deployments are sourced from images in public repositories. (If your deployment must source an image from a private repository, you need to include additional configuration files for authentication. For more information, see Using images from a private repository.) For more information about the docker-compose.yml file, see Compose file reference on the Docker website.
However, I keep getting the following message when spinning up the environment:
Instance deployment: You must specify a Docker image in either 'Dockerfile' or 'Dockerrun.aws.json' in your source bundle. The deployment failed.
According to the documentation, Dockerrun.aws.json should only be required for the old AMI. Has anyone come across a similar issue?
Found the solution. The documentation states docker-compose.yml is the only file needed, but it still needs to be zipped before being uploaded to the Elastic Beanstalk environment.
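In other words, something like this before uploading the bundle (a minimal sketch; the bundle name is arbitrary):

zip deploy.zip docker-compose.yml

Elastic Beanstalk then finds docker-compose.yml at the root of the uploaded zip and stops asking for a Dockerfile or Dockerrun.aws.json.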
I have had some similar issues when migrating to a docker-compose file from the Dockerrun.aws.json file.
What I did was basically create a new app on Elastic Beanstalk and specify the Amazon Linux 2 platform under Docker as the deployment option.
Below I will attach my docker-compose.yaml file and also the GitHub workflow with the part that deploys to Elastic Beanstalk, in case it helps someone else.
docker-compose.yaml (you will need to either remove the image or insert your own image tag URL):
version: '3'
services:
  node-app:
    image: <IMG-TAG here e.g. from ECR repository>
    ports:
      - 80:80
github.yaml
deploy-staging:
  runs-on: ubuntu-latest
  needs: [build]
  steps:
    - uses: actions/checkout@v2
    - name: Generate deployment package
      run: |
        zip -r deploy.zip *
    - name: Deploy to EB
      uses: einaregilsson/beanstalk-deploy@v9
      with:
        aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        application_name: test
        environment_name: test
        version_label: ${{ github.sha }}
        region: eu-north-1
        deployment_package: deploy.zip
        use_existing_version_if_available: true

Using GitHub Actions for CI/CD on an AWS EC2 machine?

I am new to GitHub Actions workflows and was wondering whether I can set up my EC2 machine directly for CI and CD after every push.
I have seen that it is possible with ECS, but I wanted a more straightforward solution; we are trying this out on our dev environment and don't want to overshoot our budget.
Is it possible? If yes, how can I achieve it?
If you build your code in GitHub Actions and just want to copy the package over to an existing EC2 instance, you can use the SCP Files action plugin:
https://github.com/marketplace/actions/scp-files
- name: copy file via ssh key
  uses: appleboy/scp-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    port: ${{ secrets.PORT }}
    key: ${{ secrets.KEY }}
    source: "tests/a.txt,tests/b.txt"
    target: "test"
If you have any other AWS resource that interacts with EC2 (or any other AWS service) and you want to use the AWS CLI, you can use the AWS Credentials action:
https://github.com/aws-actions/configure-aws-credentials
- name: Configure AWS credentials from Test account
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.TEST_AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.TEST_AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Copy files to the test website with the AWS CLI
  run: |
    aws s3 sync . s3://my-s3-test-website-bucket
Here is a nice article whose goal is to build a CI/CD stack with GitHub Actions + AWS EC2, CodeDeploy, and S3.

Deploy code directly to AWS EC2 instance using Github Actions

As the title says, I am trying to deploy my Laravel-Angular application directly from GitHub to an AWS EC2 instance using GitHub Actions.
My application contains 3 Angular 8+ projects that need to be built before deployment, whereas Laravel does not need a build step.
The available solutions suggest using AWS Elastic Beanstalk to deploy code, but it is not clear how to attach Elastic Beanstalk to an existing instance.
Is there a way to deploy code to AWS EC2 without using Elastic Beanstalk?
Here is my GitHub Actions build.yml:
name: Build Develop Branch
on:
  push:
    branches: [ develop ]
  pull_request:
    branches: [ develop ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Code Checkout
        uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: App 1 npm install
        run: npm install
        working-directory: angular-app-1
      - name: App 1 Build
        run: npm run build:staging
        working-directory: angular-app-1
      - name: App 2 npm install
        run: npm install
        working-directory: angular-app-2
      - name: App 2 Build
        run: node node_modules/@angular/cli/bin/ng build --configuration=staging
        working-directory: angular-app-2
      - name: App 3 npm install
        run: npm install
        working-directory: angular-app-3
      - name: App 3 Build
        run: node node_modules/@angular/cli/bin/ng build --configuration=staging
        working-directory: angular-app-3
Is there a way to deploy code to AWS EC2 without using Elastic Beanstalk?
I found a simple way to deploy to an EC2 instance (or to any server that accepts rsync commands over SSH) using GitHub Actions.
I have a simple file in the repo's .github/workflows folder, which GitHub Actions runs to deploy to my EC2 instance whenever a push is made to my GitHub repo.
No muss, no fuss, no special incantations or Byzantine AWS configuration details.
File .github/workflows/pushtoec2.yml:
name: Push-to-EC2
on: push
jobs:
  deploy:
    name: Push to EC2 Instance
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v1
      - name: Deploy to my EC2 instance
        uses: easingthemes/ssh-deploy@v2.1.5
        env:
          SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_KEY }}
          SOURCE: "./"
          REMOTE_HOST: "ec2-34-213-48-149.us-west-2.compute.amazonaws.com"
          REMOTE_USER: "ec2-user"
          TARGET: "/home/ec2-user/SampleExpressApp"
Details of the ssh-deploy GitHub Action used above.
Real final edit
A year later, I finally got around to making the tutorial: https://github.com/Andrew-Chen-Wang/cookiecutter-django-ec2-github.
I found a Medium tutorial that also deserves some attention if anyone wants to use CodePipeline (there are a couple of differences: I store my files on GitHub while the Medium tutorial uses S3, and I create a custom VPC that the other author doesn't).
Earlier final edit
AWS has finally made a neat tutorial for CodeDeploy with a GitHub repository: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-github-prerequisites.html. Take a look there and enjoy :)
Like the ECS tutorial, we're using Parameter Store to store our secrets. The way AWS previously wanted us to grab secrets was via a bash script:
For example:
password=$(aws ssm get-parameters --region us-east-1 --names MySecureSQLPassword --with-decryption --query Parameters[0].Value)
password=`echo $password | sed -e 's/^"//' -e 's/"$//'`
mysqladmin -u root password $password
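A side note, not from the original script: the sed cleanup can be skipped by asking the CLI for plain-text output with the standard --output flag:

password=$(aws ssm get-parameters --region us-east-1 --names MySecureSQLPassword --with-decryption --query Parameters[0].Value --output text)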
New edit (24 December 2020): I think I've nailed it. Below I pointed to Donate Anything for AWS ECS. I've moved to a self-deploying setup. If you take a look at bin/scripts, I'm taking advantage of supervisord and gunicorn (for Python web development). But in the context of EC2, you can simply point your AppSpec.yml to those scripts! Hope that helps everyone!
Before I start:
This is not a full answer and not a complete walkthrough, but a lot of hints and some code that will help you set up certain AWS pieces like the ALB, plus the files your repo needs for this to work. This answer is more like several clues jumbled together from my sprint trying to make ECS work last night.
I also don't have enough points to comment or chat, so here's the best I can offer.
Quick links (you should probably just skip these two points, though):
Check this out: https://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html
This won't be a full answer either, as I'm trying to first finish an ECS deploy from GH before moving on to EC2 from GH. Anyhow...
One last edit: this will sound like a marketing ploy, but a correct implementation with GitHub Actions and workflow_dispatch is located in Donate Anything's GitHub repository. You'll find the same ECS work in there. Do note that I changed my GitHub Action to use Docker Hub, since it was free (and, to me, cheaper than AWS ECR if you're going to use ECS, because ECR is expensive).
Edit: The ECS deployment works now. Will start working on the EC2 deployment soon.
Edit 2: I added the Donate Anything repo. Additionally, I'm not sure direct EC2 deployment, at least for me, is viable, since the install scripts would be kind of weird. However, I still haven't found the time to get to EC2. Again, if anyone is willing to share their time, please do so and contribute!
I do want to warn everyone that SECURITY GROUPS are very important. That clogged me for a long time, so make sure you get them right. In the ECS tutorial, I teach you how I do it.
Full non-full answer:
I'm working on this issue right now in this repo, and another for ECS here, using GitHub Actions. I haven't gotten far on the EC2 one, but the basic rundown for testing is this:
CRUCIAL
You need to try and deploy from the AWS CLI first. This is because AWS Actions does not have a dedicated action for deploying to EC2 yet.
Write down each of these statements. We're going to need them later for the GitHub action.
Some hints when testing this AWS setup:
Before using CodeDeploy, you need an EC2 instance, an Application Load Balancer (you'll find it under Elastic Load Balancer), and a target group (which you create DURING the ALB setup). Go to target groups, right click on the group, and register your instance.
To deploy from CodeDeploy, create a new application. Create a new deployment group. I think, for your setup, you should do the in-place deployment type rather than the Blue/Green deployment type.
Finally, testing on the CLI, you should run the code you see here: https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-deploy-application.html#tutorials-wordpress-deploy-application-create-deployment-cli
Do note, you may want to start from here, using S3 to store your latest code (you can delete it afterwards anyway, and I believe DELETE requests don't incur charges): https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-upload-application.html I personally don't know if the GitHub OAuth integration works. I tried it once before (very amateurishly, with no clue what I was doing) and nothing happened, so I'd just stick with that tutorial.
What your test rundown will look like:
For my ECS repo, I went a full 10 hours straight trying to configure everything properly, step by step, like the GitHub action. You should do the same. Imagine you're the code: figure out where you need to start from.
Aha! I should probably figure out CodeDeploy first. Let's write an appspec.yaml file first! The AppSpec file is how CodeDeploy wires up the hooks for everything. Unfortunately, I'm currently going through that problem here, because the EC2 and ECS syntaxes for AppSpec files are different. Luckily, EC2 doesn't have any special areas; just get your files and hooks right. An example from my test:
version: 0.0
os: linux
files:
  - source: /
    destination: /code
hooks:
  BeforeInstall:
    - location: aws_scripts/install_dependencies
      timeout: 300
      runas: root
  ApplicationStop:
    - location: aws_scripts/start_server
      runas: root
The GitHub action:
What you'll need at minimum:
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # TODO Change your AWS region here!
          aws-region: us-east-2
The checking out of code is necessary to... well... get the code.
For the configuration of AWS credentials, you'll want to add AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your GitHub secrets with proper IAM credentials. For this, I believe the only IAM permissions needed are the CodeDeploy ones.
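As an illustration, not from the original answer: a deploy-only policy for the CI user could look roughly like the following (the AWS managed policy AWSCodeDeployDeployerAccess covers similar ground):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codedeploy:CreateDeployment",
        "codedeploy:GetDeployment",
        "codedeploy:GetDeploymentConfig",
        "codedeploy:GetApplicationRevision",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Resource": "*"
    }
  ]
}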
Deploying the code:
This is what the test code you should have tried before reaching this step is for. Now that your workflow is set up, let's paste the code from the CLI into your action.
- name: Deploying with CodeDeploy
  id: a-task
  env:
    an-environment-variable: anything you want
  run: |
    echo "Your CLI code should be placed here"
Sorry if this was confusing, not what you're looking for, or you wanted a complete tutorial. I, too, haven't actually gotten this to work, it's been a while since I last tried, and the last time I tried I didn't even know what an EC2 instance was... I just used a standalone EC2 instance and rsync to transfer my files. Hopefully what I've written gives you enough clues to guide you to a solution.
If you got it to work, please share it on here: https://github.com/Andrew-Chen-Wang/cookiecutter-django-ec2-gh-action so that no one else has to suffer the pain of AWS deployment...
First, you need to go through this tutorial on AWS to set up your EC2 server, as well as configure the Application and Deployment Group in CodeDeploy: Tutorial: Use CodeDeploy to deploy an application from GitHub
Then, you can use the following workflow in GitHub Actions to deploy your code on push. You essentially use the AWS CLI to create a new deployment. Store the AWS credentials for the CLI in GitHub Secrets.
Here is an example for deploying a Node app:
name: Deploy to AWS
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    name: Deploy AWS
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]
        app-name: ['your-codedeploy-application']
        deployment-group: ['your-codedeploy-deploy-group']
        repo: ['username/repository-name']
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install dependencies
        run: npm install
      - name: Build app
        run: npm run build
      - name: Install AWS CLI
        run: |
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install --update
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }}
          aws-region: us-east-1
      - name: Deploy to AWS
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.app-name }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --deployment-group-name ${{ matrix.deployment-group }} \
            --description "GitHub Deployment for the ${{ matrix.app-name }}-${{ github.sha }}" \
            --github-location repository=${{ matrix.repo }},commitId=${{ github.sha }}
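One optional refinement, not in the original answer: create-deployment returns immediately, so the job can go green even if the deployment later fails. You can make the workflow wait on the result by capturing the deployment ID and using the CLI's built-in waiter; a sketch reusing the matrix values above:

      - name: Deploy to AWS and wait
        run: |
          DEPLOYMENT_ID=$(aws deploy create-deployment \
            --application-name ${{ matrix.app-name }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --deployment-group-name ${{ matrix.deployment-group }} \
            --github-location repository=${{ matrix.repo }},commitId=${{ github.sha }} \
            --query deploymentId --output text)
          aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"

The waiter exits non-zero if the deployment fails or times out, which fails the workflow step.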