Unable to load AWS credentials in github actions - amazon-web-services

I'm running a GitHub Actions pipeline to deploy a React project to an S3 bucket in AWS and receive the following error when running the action:
Run aws-actions/configure-aws-credentials@v1
Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers
Here's my .github/workflows/main.yaml
name: S3 Pipeline
on:
  push:
    branches:
      - DEV
permissions:
  id-token: write
  contents: read
jobs:
  Deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: 'npm'
      - name: install
        run: npm ci
      - name: Run build
        run: npm run build
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./build s3://${{ secrets.BUCKET }}
I've tried adding the permissions block below, with no success. I've also played around with the aws-actions version.
permissions:
  id-token: write
  contents: read
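For context, the id-token: write permission only comes into play if the action is asked to assume a role through GitHub's OIDC provider instead of using long-lived keys; with static access keys it changes nothing. A rough sketch of that OIDC variant, assuming a recent release of configure-aws-credentials and a placeholder role ARN:
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    # role-to-assume is a documented input of the action; the ARN below is a
    # placeholder for a role that trusts GitHub's OIDC provider.
    role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
    aws-region: ${{ secrets.REGION }}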

Related

How can I configure my AWS credentials in a shared credentials file for a GitHub Action

I am trying to deploy a CI/CD pipeline for ECR in AWS.
It will push/pull the image to/from ECR.
We are trying to migrate an Azure pipeline to a GitHub Actions pipeline.
When I try to run the pipeline I am facing the error below:
[05:25:00] CredentialsProviderError: Profile Pinz could not be found or parsed in shared credentials file.
at resolveProfileData (/home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/resolveProfileData.js:26:11)
at /home/runner/work/test-api/test-api/node_modules/@aws-sdk/credential-provider-ini/dist-cjs/fromIni.js:8:56
at async loadFromProfile (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/aws/GetCredentialsFromProfile.js:23:25)
at async BuildDeployContext (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/DeployContext.js:95:70)
at async Publish (/home/runner/work/test-api/test-api/node_modules/@pinzgolf/pinz-build/dist/publish/Publish.js:14:21)
Error: Process completed with exit code 1.
Here is my workflow YAML file,
on:
  push:
    branches: [ main ]
name: Node Project `my-app` CI on ECR
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node 14.17.X
        uses: actions/setup-node@v2
        with:
          node-version: 14.17.X
      - name: 'Yarn'
        uses: borales/actions-yarn@v2.3.0
        with:
          cmd: install --frozen-lockfile --non-interactive
      - name: Update SAM version
        uses: aws-actions/setup-sam@v1
      - run: |
          wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-x86_64.zip
          unzip aws-sam-cli-linux-x86_64.zip -d sam-installation
          sudo ./sam-installation/install --update
          sam --version
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push the image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: test-pinz-api
          IMAGE_TAG: latest
        run: |
          gulp publish --profile-name development
Using gulp, we publish the environment with the config file below:
{
  "apiDomainName": "domain",
  "assetsDomainName": "domain",
  "awsProfile": "Pinz",
  "bastionBucket": "bucketname",
  "corsDomains": ["domain"],
  "dbBackupSources": ["db source", "db source"],
  "dbClusterIdentifier": "cluster identfier",
  "designDomainName": "domain",
  "lambdaEcr": "ecr",
  "snsApplication": "sns",
  "snsServerKeySecretName": "name",
  "stackName": "name",
  "templateBucket": "bucketname",
  "userJwtPublicKey": "token",
  "websiteUrl": "domain",
  "wwwDomainName": "domain",
  "wwwEcr": "ecr repo"
}
I couldn't find the shared credentials file where the AWS credentials are saved.
I have no idea where the profile below is configured:
"awsProfile": "Pinz"
I went through all the project files but couldn't find the shared credentials.
I researched this in many documents and got some close answers, but not the exact one. The page below says ~/.aws/credentials, but how does the JSON file above get the credentials from there?
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html
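For reference, the shared credentials file that page describes is a small INI-style file at ~/.aws/credentials; the fromIni provider in the stack trace above looks for a section named after awsProfile, so a Pinz profile would look roughly like this (values are placeholders):
[Pinz]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx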
Honestly, this is my first ECR pipeline deployment, and I didn't get a proper knowledge transfer about the process either.
I think I'm almost done, but gulp shows this error.
Can anyone guide me to where this shared credentials file would be? If not, how can I configure the AWS credentials to authenticate with AWS?
Your gulp config has the profile set to Pinz; remove this line completely:
{
  ...
  "awsProfile": "Pinz",
  ...
}
The action will automatically pick up your access key ID and secret access key, and export them as environment variables that the AWS SDK can use.
The rest of the pipeline should then pick up the configured credentials automatically.
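As a rough illustration (assuming the same secrets as above): once the configure step has run, every later step in the job, including the gulp publish step, inherits AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION from the job environment, which is what the SDK's default provider chain looks for when no profile is set.
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-2
- name: Verify credentials are visible to later steps
  run: aws sts get-caller-identity   # resolves from the exported environment variables, no profile needed
- name: Publish
  run: gulp publish --profile-name development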

AWS sync to deploy only new or updated files to s3

I've written a GitHub Actions script that takes files from a folder named migrations and uploads them to S3. The problem with this pipeline is that all the other files in the directory also get updated. How can I go about uploading only new or updated files?
Here's the current script as it stands.
name: function-name
on:
  push:
    branches:
      - dev
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@master
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install Dependencies
        run: npm install
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - name: Deploy file to s3
        run: aws s3 sync ./migration/ s3://s3_bucket
You could try the GitHub Action jakejarvis/s3-sync-action, which uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.
It is based on aws s3 sync, which should enable an incremental backup instead of copying/modifying every file.
Add the migration folder as source_dir:
steps:
  ...
  - uses: jakejarvis/s3-sync-action@master
    with:
      args: --acl public-read --follow-symlinks --delete
    env:
      AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: 'us-west-1'   # optional: defaults to us-east-1
      SOURCE_DIR: 'migration'   # optional: defaults to entire repository
However, taseenb comments:
This does not work as intended (like an incremental backup).
The s3 sync CLI command will copy all files every time it runs inside a GitHub Action.
I believe this happens because we clone the repository inside a Docker image to execute the operation (this is what jakejarvis/s3-sync-action does).
I don't think there is a perfect solution using s3 sync.
But if you are sure that your files always change size, you can use --size-only in the args.
It will ignore files with the same size, so it's probably not safe in most cases.
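For illustration, the --size-only variant would just extend the args of the same step (a sketch, not a guarantee of incremental behaviour; the other inputs stay as above):
- uses: jakejarvis/s3-sync-action@master
  with:
    # --size-only skips files whose size is unchanged; unsafe if a file's
    # content can change without its size changing.
    args: --size-only --acl public-read --follow-symlinks --delete
  env:
    AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    SOURCE_DIR: 'migration'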

Deploy with Docker Compose on AWS ECS with GitHub Actions

I recently started experimenting with ECS, Docker Compose, and contexts, and it's really interesting. I have managed to deploy and host a compose file through my terminal using docker compose up and an ecs context, but I would also like to automate this through something like GitHub Actions.
I'm struggling to see how one would set that up, and I have yet to find a guide for it.
Are there any good resources for researching this further? What would be an alternative, or maybe even better, way of doing CI/CD on AWS through GitHub?
I was also searching for this, but I haven't found anything that confirms it is possible using any of the AWS GitHub Actions. However, you can specify multiple containers as part of the same ECS task definition, as in the sketch below.
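As a rough illustration of that point (resource, container, and image names are placeholders, and this is a trimmed CloudFormation-style snippet rather than a complete template), one task definition can declare several containers:
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc
    Cpu: '256'
    Memory: '512'
    ContainerDefinitions:
      - Name: api               # first container
        Image: <account>.dkr.ecr.<region>.amazonaws.com/api:latest
        PortMappings:
          - ContainerPort: 80
      - Name: worker            # second container in the same task
        Image: <account>.dkr.ecr.<region>.amazonaws.com/worker:latest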
A bit open-ended for a Stack Overflow question, but this blog post walks you through an example of how to use AWS-native CI/CD tools to deploy to ECS via the Docker Compose integration.
I moved away from using Docker Compose and just wrote the CloudFormation templates manually. Docker Compose still has some limitations that require quirky workarounds.
But for anyone wondering how I approached this before moving away from it (including GHA caching):
name: Deploy with Docker Compose
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Setup Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Build and push <YOUR_IMAGE>
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          cache-from: type=gha,scope=<YOUR_IMAGE>
          cache-to: type=gha,scope=<YOUR_IMAGE>
          tags: |
            ${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY }}:<YOUR_IMAGE>-${{ github.sha }}
            ${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY }}:<YOUR_IMAGE>-latest
      - name: Install Docker Compose
        run: curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
      - name: Docker context
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          GITHUB_SHA: ${{ github.sha }}
        run: |
          docker context create ecs ecs-context --from-env
          docker --context ecs-context compose up

Timeout error after 10 minutes when deploying from github repo to aws ec2 using github action workflow

I am trying to auto-deploy a React app from a GitHub repo to AWS EC2 using a GitHub Actions workflow.
I chose Node.js when setting up the action and wrote the YAML as follows:
name: Node.js CI
on:
  push:
    branches: [ staging ]
  pull_request:
    branches: [ staging ]
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Deployment
        timeout: 40
        uses: appleboy/ssh-action@master
        with:
          node-version: 10.x
          cache: 'npm'
          host: ${{ secrets.SECRET_LINK }}
          key: ${{ secrets.SECRET_KEY }}
          username: ${{ secrets.SECRET_NAME }}
          script: |
            cd /var/www/html/
            git checkout staging
            git pull
            npm install
            npm run build
When deploying, I get this error after 10 minutes:
err: Run Command Timeout!
I set timeout-minutes to 30 but it always fails.
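If the 10-minute limit is coming from the SSH action itself rather than the job, one hedged sketch is to raise the action's own command timeout (command_timeout is an input documented by appleboy/ssh-action and reportedly defaults to 10 minutes; verify against its README, since the timeout input there only covers the SSH connection):
- name: Deployment
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.SECRET_LINK }}
    key: ${{ secrets.SECRET_KEY }}
    username: ${{ secrets.SECRET_NAME }}
    command_timeout: 30m   # assumed input; the default command timeout is 10m
    script: |
      cd /var/www/html/
      git checkout staging
      git pull
      npm install
      npm run build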

Deploy to AWS S3 sync with github

I am trying to deploy a static site to AWS S3 and CloudFront with a GitHub Action. My GitHub Actions code is:
name: deploy-container
on:
  push:
    branches:
      - master
    paths:
      - 'packages/container/**'
defaults:
  run:
    working-directory: packages/container
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - uses: chrislennon/action-aws-cli@v1.1
      - run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
But when I try to build, I get these errors.
GitHub will redeploy your application only if you have changed a file inside your application directory (because of the paths filter).
I suppose you have changed only your YAML file and tried to rerun the job on GitHub.
But as the error message says, using the ACTIONS_ALLOW_UNSECURE_COMMANDS flag is an insecure approach.
It is better to use the official AWS for GitHub Actions instead of ACTIONS_ALLOW_UNSECURE_COMMANDS:
name: deploy-container
on:
  push:
    branches:
      - master
    paths:
      - 'packages/container/**'
defaults:
  run:
    working-directory: packages/container
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-1
      - name: Copy files to the s3 website content bucket
        run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest
Alternatively, if you want to keep chrislennon/action-aws-cli, you can explicitly allow the unsecure commands:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - uses: chrislennon/action-aws-cli@v1.1
        env:
          ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'
You may want to restore the modification times of the files so that only modified files are synced, for example using git-restore-mtime. Alternatively, use something like dandelion, though I haven't tried it. A rough sketch of the mtime approach is below.
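The sketch assumes the git-restore-mtime tool from the MestreLion/git-tools project can be installed on the runner (the apt package name is an assumption). It resets each tracked file's mtime to its last commit time, so aws s3 sync only re-uploads files that actually changed; note this only helps for files tracked in git, since freshly built output such as dist always gets new timestamps:
- uses: actions/checkout@v2
  with:
    fetch-depth: 0                      # mtime restoration needs the full commit history
- name: Restore file modification times
  run: |
    sudo apt-get update && sudo apt-get install -y git-restore-mtime   # package name assumed
    git restore-mtime                   # reset mtimes to each file's last commit time
- run: aws s3 sync ./packages/container s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest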