How to configure / use AWS CLI in GitHub Actions?

I'd like to run commands like aws amplify start-job in GitHub Actions. I understand the AWS CLI is pre-installed but I'm not sure how to configure it.
In particular, I'm not sure how the env vars are named for all configuration options, since some docs only mention AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY but nothing for the region and output settings.

I recommend using this AWS action to set the region and credential environment variables in the GitHub Actions environment. It doesn't set the output env var, so you still need to do that yourself, but it has nice features: it makes sure the credential env vars are masked in the output as secrets, it supports assuming a role, and it provides your account ID if you need it in other actions.
https://github.com/marketplace/actions/configure-aws-credentials-action-for-github-actions
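For example, a minimal sketch of a step using that action might look like this (the secret names and region are placeholders; the following run step then uses the credentials the action exports):
- name: Configure AWS credentials
  # exports AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and the region for later steps
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Start Amplify job
  run: aws amplify start-job --app-id xxx --branch-name master --job-type RELEASE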

I could provide the following secrets and env vars and then use the commands:
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
  AWS_DEFAULT_OUTPUT: json
E.g.
deploy:
  runs-on: ubuntu-latest
  steps:
    - name: Deploy
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_DEFAULT_REGION: eu-west-1
        AWS_DEFAULT_OUTPUT: json
      run: aws amplify start-job --app-id xxx --branch-name master --job-type RELEASE

In my experience, the out-of-the-box AWS CLI that comes with the action runner works just fine.
But there are times when you'd prefer to use a credentials file (for example, with the Terraform AWS provider), and this is an example of that.
This step base64-decodes the encoded file so it can be used by the following steps.
- name: Write into file
  id: write_file
  uses: timheuer/base64-to-file@v1.0.3
  with:
    fileName: 'myTemporaryFile.txt'
    encodedString: ${{ secrets.AWS_CREDENTIALS_FILE_BASE64 }}
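A sketch of how a later step could point the AWS CLI at that decoded file, assuming the secret holds a standard shared credentials file and that the action exposes the written path as the filePath output (as in its documentation):
- name: Use the decoded credentials file
  env:
    # point the AWS CLI/SDK at the temporary credentials file written above
    AWS_SHARED_CREDENTIALS_FILE: ${{ steps.write_file.outputs.filePath }}
    AWS_DEFAULT_REGION: us-east-1
  run: aws sts get-caller-identity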

Related

How can I configure my AWS credentials in a shared credentials file for GitHub Actions?

I am trying to deploy a CI/CD pipeline for ECR in AWS.
It will push/pull the image from ECR.
We are migrating an Azure pipeline to a GitHub Actions pipeline.
When I try to implement the pipeline I am facing the error below:
[05:25:00] CredentialsProviderError: Profile Pinz could not be found or parsed in shared credentials file.
at resolveProfileData (/home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/resolveProfileData.js:26:11)
at /home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/fromIni.js:8:56
at async loadFromProfile (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/aws/GetCredentialsFromProfile.js:23:25)
at async BuildDeployContext (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/DeployContext.js:95:70)
at async Publish (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/Publish.js:14:21)
Error: Process completed with exit code 1.
Here is my workflow YAML file,
on:
  push:
    branches: [ main ]
name: Node Project `my-app` CI on ECR
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node 14.17.X
        uses: actions/setup-node@v2
        with:
          node-version: 14.17.X
      - name: 'Yarn'
        uses: borales/actions-yarn@v2.3.0
        with:
          cmd: install --frozen-lockfile --non-interactive
      - name: Update SAM version
        uses: aws-actions/setup-sam@v1
      - run: |
          wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
          unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
          sudo ./sam-installation/install --update
          sam --version
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: test-pinz-api
          IMAGE_TAG: latest
        run: |
          gulp publish --profile-name development
Using gulp, we publish the environment using the config file below:
{
  "apiDomainName": "domain",
  "assetsDomainName": "domain",
  "awsProfile": "Pinz",
  "bastionBucket": "bucketname",
  "corsDomains": ["domain"],
  "dbBackupSources": ["db source", "db source"],
  "dbClusterIdentifier": "cluster identifier",
  "designDomainName": "domain",
  "lambdaEcr": "ecr",
  "snsApplication": "sns",
  "snsServerKeySecretName": "name",
  "stackName": "name",
  "templateBucket": "bucketname",
  "userJwtPublicKey": "token",
  "websiteUrl": "domain",
  "wwwDomainName": "domain",
  "wwwEcr": "ecr repo"
}
I couldn't find the shared credentials file where the AWS credentials are saved, and I have no idea where the profile below is configured:
"awsProfile": "Pinz"
I looked through all the project files but couldn't find the shared credentials.
I went through many documents and got close to an answer, but not the exact one. The page below says the file is ~/.aws/credentials, but how does the JSON file above get the credentials from there?
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html
Honestly, this ECR pipeline deployment is my first, and I didn't get a proper knowledge transfer about the process either.
I think I am almost done, but gulp shows this error.
Can anyone please guide me to where this shared credentials file will be? If not, how can I configure the AWS credentials to authenticate with AWS?
Your gulp config file has the profile set to Pinz; remove this line completely:
{
  ...
  "awsProfile": "Pinz",
  ...
}
The configure-aws-credentials action automatically picks up your access key ID and secret access key and exports them as environment variables that the AWS SDK can use.
The rest of the pipeline should then pick up the configured credentials automatically.
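Alternatively, if you want to keep "awsProfile": "Pinz", you would have to create the shared credentials file yourself, since it does not exist on a fresh runner. A rough sketch (secret names are the same placeholders as above) is an extra step before gulp publish that writes ~/.aws/credentials with a [Pinz] section:
- name: Write shared credentials file with a Pinz profile
  run: |
    # create ~/.aws/credentials so the SDK can resolve the Pinz profile
    mkdir -p ~/.aws
    cat > ~/.aws/credentials << EOF
    [Pinz]
    aws_access_key_id=${{ secrets.AWS_ACCESS_KEY_ID }}
    aws_secret_access_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}
    EOF
Removing the profile as described above is still the simpler option.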

AWS sync to deploy only new or updated files to s3

I've written a GitHub Actions workflow that takes files from a migrations folder and uploads them to S3. The problem with this pipeline is that all the other files in the directory also get updated. How can I go about only updating new or changed files?
Here's the current script as it stands.
name: function-name
on:
  push:
    branches:
      - dev
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@master
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install Dependencies
        run: npm install
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - name: Deploy file to s3
        run: aws s3 sync ./migration/ s3://s3_bucket
You could try the GitHub Action jakejarvis/s3-sync-action, which uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.
It is based on aws s3 sync, which should enable an incremental backup instead of copying/modifying every file.
Add the migration folder as SOURCE_DIR:
steps:
  ...
  - uses: jakejarvis/s3-sync-action@master
    with:
      args: --acl public-read --follow-symlinks --delete
    env:
      AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: 'us-west-1' # optional: defaults to us-east-1
      SOURCE_DIR: 'migration' # optional: defaults to entire repository
However, taseenb comments:
This does not work as intended (like an incremental backup).
S3 sync cli command will copy all files every time when run inside a GitHub Action.
I believe this happens when we clone the repository inside a Docker image to execute the operation (this is what jakejarvis/s3-sync-action does).
I don't think there is a perfect solution using S3 sync.
But if you are sure that your files always change size you can use --size-only in the args.
It will ignore files with the same size, so probably not safe in most cases.
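For completeness, passing that flag through the same action would look roughly like this; only the args line changes from the example above:
- uses: jakejarvis/s3-sync-action@master
  with:
    # --size-only skips files whose size is unchanged; unsafe if content can change without the size changing
    args: --acl public-read --follow-symlinks --delete --size-only
  env:
    AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    SOURCE_DIR: 'migration'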

How to configure GitHub Actions to assume a role to access my CodeBuild project

I am using GitHub Actions with CodeBuild. When I run GitHub Actions, I get an error saying the CodeBuild project name cannot be found. The issue is that my CodeBuild project is in my assumed role (sandbox_role), but GitHub Actions is looking for the project in the root account that I configured as an environment variable in GitHub secrets. How can I configure my GitHub Actions workflow to first connect to the root account and from there assume sandbox_role to get my CodeBuild project? Below is my code sample. I am using Terragrunt/Terraform code to provision my environment.
name: 'GitHub Actions For CodeBuild'
on:
  pull_request:
    branches:
      - staging
jobs:
  CodeBuild:
    name: 'Build'
    runs-on: ubuntu-latest
    steps:
      - name: 'checkout'
        uses: actions/checkout@v2
      - name: configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Run CodeBuild
        uses: aws-actions/aws-codebuild-run-build@v1
        with:
          project-name: CodeBuild
          buildspec-override: staging/buildspec.yml
          env-vars-for-codebuild: |
            TF_INPUT,
            AWS_ACCESS_KEY_ID,
            AWS_SECRET_ACCESS_KEY,
            AWS_REGION,
        env:
          TF_INPUT: false
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: us-east-1
I'm not entirely sure this works, but whenever I use roles I also pass the role ARN to tell AWS which role is being used and which permissions it should have.
The role ARN can be added in the configuration and credentials files:
Configuration:
region = us-east-1
output = json
role_arn = arn:aws:iam::account_id:role/role-name
source_profile=default
Credentials:
[default]
aws_access_key_id="your_key_id"
aws_secret_access_key="your_access_key"
aws_session_token="your_session_token"
source_profile=default
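If you'd rather not manage a local credentials file, the aws-actions/configure-aws-credentials action can also assume the role inside the workflow itself via its role-to-assume input. A rough sketch (the account ID and role ARN are placeholders):
- name: Configure AWS credentials and assume sandbox_role
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
    # assume the role that owns the CodeBuild project before calling it
    role-to-assume: arn:aws:iam::123456789012:role/sandbox_role
    role-duration-seconds: 1200
- name: Run CodeBuild
  uses: aws-actions/aws-codebuild-run-build@v1
  with:
    project-name: CodeBuild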

Using GitHub Actions for CI/CD on an AWS EC2 machine?

I am new to GitHub Actions workflows and was wondering whether it is possible to set up my EC2 machine directly for CI and CD after every push.
I have seen that it is possible with ECS, but I wanted a straightforward solution; as we are trying this out on our dev environment, we don't want to overshoot our budget.
Is it possible, and if yes, how can I achieve it?
If you build your code in GitHub Actions and just want to copy the package to an existing EC2 instance, you can use the SCP Files action:
https://github.com/marketplace/actions/scp-files
- name: copy file via ssh key
  uses: appleboy/scp-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    port: ${{ secrets.PORT }}
    key: ${{ secrets.KEY }}
    source: "tests/a.txt,tests/b.txt"
    target: "test"
If you have any other AWS resource which interacts with EC2 (or any other AWS service) and you want to use the AWS CLI, you can use the AWS Credentials action:
https://github.com/aws-actions/configure-aws-credentials
- name: Configure AWS credentials from Test account
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.TEST_AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.TEST_AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Copy files to the test website with the AWS CLI
  run: |
    aws s3 sync . s3://my-s3-test-website-bucket
There is also a nice article whose goal is to build a CI/CD stack with GitHub Actions + AWS EC2, CodeDeploy and S3.
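If you end up going the CodeDeploy route, the deployment can be triggered from a workflow step with the plain AWS CLI once credentials are configured. A rough sketch (application, deployment group, and bucket names are placeholders):
- name: Trigger CodeDeploy deployment
  run: |
    # upload the build artifact bundle to S3, then create a deployment from it
    aws deploy push --application-name my-app --s3-location s3://my-deploy-bucket/my-app.zip --source .
    aws deploy create-deployment --application-name my-app --deployment-group-name dev-group --s3-location bucket=my-deploy-bucket,key=my-app.zip,bundleType=zip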

How to run AWS CLI command tasks in Ansible Tower

AWS CLI command tasks in Ansible playbooks work fine from the command line if AWS credentials are specified as environment variables per the boto requirements; more info can be found here: Environment Variables.
But they fail to run in Tower because it exports a different set of env vars:
AWS_ACCESS_KEY
AWS_SECRET_KEY
In order to make them work in Tower, just add the below to the task definition:
environment:
  AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
  AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"
e.g. this task:
- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1
will transform to:
- name: Describe instances
  command: aws ec2 describe-instances --region us-east-1
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"
NOTE: This only injects the env vars for the specific task, not the whole playbook!
So you have to amend every AWS CLI task this way.
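If most tasks in a play call the AWS CLI, you can instead set environment: at the play level so every task inherits it; a minimal sketch:
- hosts: localhost
  # every task in this play inherits these env vars
  environment:
    AWS_ACCESS_KEY_ID: "{{ lookup('env','AWS_ACCESS_KEY') }}"
    AWS_SECRET_ACCESS_KEY: "{{ lookup('env','AWS_SECRET_KEY') }}"
  tasks:
    - name: Describe instances
      command: aws ec2 describe-instances --region us-east-1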
Put your environment variables in a file:
export AWS_ACCESS_KEY=
export AWS_SECRET_KEY=
Save the file as ~/.vars on the remote host and then, in your playbook, source it before the CLI call (the shell module is needed here, since command won't run the file through a shell):
- name: Describe instances
  shell: . ~/.vars && aws ec2 describe-instances --region us-east-2
For security you could delete the file after the run and copy it again in the next play.
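A rough sketch of that copy/use/delete sequence as tasks (the local source path is a hypothetical placeholder):
- name: Copy the vars file to the remote host
  copy:
    src: files/aws_vars   # hypothetical local file containing the export lines above
    dest: ~/.vars
    mode: '0600'
- name: Describe instances
  shell: . ~/.vars && aws ec2 describe-instances --region us-east-2
- name: Remove the vars file again
  file:
    path: ~/.vars
    state: absent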
While this may not be applicable to Tower (we use the open-source version), set up your .aws and/or .boto files.