Deploy to AWS S3 sync with GitHub Actions

I am trying to deploy a static site to AWS S3 and CloudFront with a GitHub Action. My GitHub Action workflow is:
name: deploy-container
on:
  push:
    branches:
      - master
    paths:
      - 'packages/container/**'
defaults:
  run:
    working-directory: packages/container
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - uses: chrislennon/action-aws-cli@v1.1
      - run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
But when I try to build, I get these errors:

GitHub will redeploy your application only if a file inside your application directory has changed, because of the paths filter. I suppose you changed only your YAML file and re-ran the job on GitHub.
As for the error message: relying on the ACTIONS_ALLOW_UNSECURE_COMMANDS flag is insecure. It is better to use the official AWS for GitHub Actions (aws-actions/configure-aws-credentials) instead of enabling ACTIONS_ALLOW_UNSECURE_COMMANDS:
name: deploy-container
on:
  push:
    branches:
      - master
    paths:
      - 'packages/container/**'
defaults:
  run:
    working-directory: packages/container
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-1
      - name: Copy files to the s3 website content bucket
        run: aws s3 sync dist s3://${{ secrets.AWS_S3_BUCKET_NAME }}/container/latest

Alternatively, if you want to keep using chrislennon/action-aws-cli, you can set the ACTIONS_ALLOW_UNSECURE_COMMANDS flag explicitly, at your own risk:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run build
      - uses: chrislennon/action-aws-cli@v1.1
        env:
          ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'

You may also want to restore the modification times of the files so that only modified files are synced, for example using git-restore-mtime. Alternatively, use something like dandelion, though I haven't tried it. A sketch of the mtime approach follows.
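A minimal sketch of that approach, assuming the git-restore-mtime tool (from MestreLion/git-tools, also packaged on Debian/Ubuntu) can be installed on the runner. aws s3 sync compares file size and modification time, so resetting each file's mtime to its last commit time stops unchanged files from being re-uploaded:

      - uses: actions/checkout@v2
        with:
          fetch-depth: 0   # full history is required to recover per-file timestamps
      # assumption: the Ubuntu package is named git-restore-mtime
      - run: sudo apt-get update && sudo apt-get install -y git-restore-mtime
      # reset each file's mtime to its last commit time before syncing
      - run: git restore-mtime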

Related

Unable to load AWS credentials in GitHub Actions

I'm running a GitHub Actions pipeline to deploy a React project to an S3 bucket in AWS and receive the following error when running the action:
Run aws-actions/configure-aws-credentials@v1
Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers
Here's my .github/workflows/main.yaml
name: S3 Pipeline
on:
  push:
    branches:
      - DEV
permissions:
  id-token: write
  contents: read
jobs:
  Deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          cache: 'npm'
      - name: install
        run: npm ci
      - name: Run build
        run: npm run build
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.REGION }}
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./build s3://${{ secrets.BUCKET }}
I've tried adding the following, with no success. I've also played around with the aws-actions version:
permissions:
  id-token: write
  contents: read
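One hedged direction to check: the workflow already grants id-token: write, which is what GitHub's OIDC federation needs, so the action can assume an IAM role instead of loading static keys. A sketch, assuming a newer version of the action and a hypothetical role ARN configured to trust GitHub's OIDC provider:

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          # hypothetical role; it must trust token.actions.githubusercontent.com
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy
          aws-region: ${{ secrets.REGION }}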

AWS sync to deploy only new or updated files to S3

I've written a GitHub Actions script that takes files from a migrations folder and uploads them to S3. The problem with this pipeline is that all the other files in the directory also get updated. How can I upload only new or updated files?
Here's the current script as it stands.
name: function-name
on:
  push:
    branches:
      - dev
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@master
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install Dependencies
        run: npm install
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1
      - name: Deploy file to s3
        run: aws s3 sync ./migration/ s3://s3_bucket
You could try the GitHub Action jakejarvis/s3-sync-action, which uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.
It is based on aws s3 sync, which should behave like an incremental backup instead of copying/modifying every file.
Add the migration folder as SOURCE_DIR:
steps:
  ...
  - uses: jakejarvis/s3-sync-action@master
    with:
      args: --acl public-read --follow-symlinks --delete
    env:
      AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: 'us-west-1'   # optional: defaults to us-east-1
      SOURCE_DIR: 'migration'   # optional: defaults to entire repository
However, taseenb comments:
This does not work as intended (as an incremental backup).
The s3 sync CLI command will copy all files every time it runs inside a GitHub Action.
I believe this happens because the repository is cloned inside a Docker image to execute the operation (this is what jakejarvis/s3-sync-action does), which gives every file a fresh modification time.
I don't think there is a perfect solution using s3 sync.
But if you are sure that your files always change size, you can use --size-only in the args.
It will ignore files with the same size, so it is probably not safe in most cases.
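If you want to try that trade-off anyway, a minimal sketch of the --size-only variant, under the comment's own assumption that changed files always change size:

steps:
  ...
  - uses: jakejarvis/s3-sync-action@master
    with:
      # skip files whose size is unchanged; unsafe when edits keep the size identical
      args: --size-only --acl public-read --follow-symlinks --delete
    env:
      AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      SOURCE_DIR: 'migration'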

Timeout error after 10 minutes when deploying from a GitHub repo to AWS EC2 using a GitHub Actions workflow

I am trying to auto-deploy a React app from a GitHub repo to AWS EC2 using a GitHub Actions workflow.
I chose Node.js in the actions setup and wrote the YAML as follows:
name: Node.js CI
on:
  push:
    branches: [ staging ]
  pull_request:
    branches: [ staging ]
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Deployment
        timeout: 40
        uses: appleboy/ssh-action@master
        with:
          node-version: 10.x
          cache: 'npm'
          host: ${{ secrets.SECRET_LINK }}
          key: ${{ secrets.SECRET_KEY }}
          username: ${{ secrets.SECRET_NAME }}
          script: |
            cd /var/www/html/
            git checkout staging
            git pull
            npm install
            npm run build
When deploying, I face this error 10 minutes after the start:
err: Run Command Timeout!
I set timeout-minutes to 30, but it always fails.
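One hedged guess, since the failure lands at exactly the 10-minute mark: appleboy/ssh-action has its own command_timeout input, which defaults to 10 minutes and is independent of the job-level timeout-minutes. A sketch raising it (also dropping node-version and cache, which belong to actions/setup-node, not the SSH action):

      - name: Deployment
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SECRET_LINK }}
          key: ${{ secrets.SECRET_KEY }}
          username: ${{ secrets.SECRET_NAME }}
          # the action's own limit on the remote script, default 10m
          command_timeout: 30m
          script: |
            cd /var/www/html/
            git checkout staging
            git pull
            npm install
            npm run build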

Send an argument to a YAML anchor for a step in bitbucket-pipelines.yml

I would like to pass arguments when I call an anchor in Bitbucket Pipelines.
Here is the file I am using; I have to use after-script because I need to push to a specific S3 bucket:
definitions:
  steps:
    - step: &node-build
        name: Build React app
        image: node:lts-alpine
        script:
          - npm install --no-optional
          - npm run build
        artifacts:
          - build/**
    - step: &aws-ecr-s3
        name: AWS S3 deployment
        image: amazon/aws-cli
        script:
          - aws configure set aws_access_key_id "${AWS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET}"
pipelines:
  branches:
    master:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          after-script:
            - aws s3 cp ./build s3://my-app-site-dev --recursive
    staging:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          after-script:
            - aws s3 cp ./build s3://my-app-site-uat --recursive
I am trying to do something like the following, to avoid that after-script part:
definitions:
  steps:
    - step: &node-build
        name: Build React app
        image: node:lts-alpine
        script:
          - npm install --no-optional
          - npm run build
        artifacts:
          - build/**
    - step: &aws-ecr-s3 $FIRST-ARGUMENT
        name: AWS S3 deployment
        image: amazon/aws-cli
        script:
          - aws configure set aws_access_key_id "${AWS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET}"
          - aws s3 cp ./build s3://${FIRST-ARGUMENT} --recursive
pipelines:
  branches:
    master:
      - step: *node-build
      - step: *aws-ecr-s3 my-app-site-dev
    staging:
      - step: *node-build
      - step: *aws-ecr-s3 my-app-site-uat
To the best of my knowledge, you can only override particular values of YAML anchors; attempts to 'pass arguments' won't work.
Instead, Bitbucket Pipelines provides Deployments, an ad-hoc way to assign different values to your variables depending on the environment. You'll need to create two deployments (say, dev and uat) and use them when referring to a step:
pipelines:
  branches:
    master:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          deployment: uat
    staging:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          deployment: dev
More on Bitbucket Deployments:
https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/#Deployment-variables
https://support.atlassian.com/bitbucket-cloud/docs/set-up-and-monitor-deployments/
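Putting it together, a sketch under one added assumption: a deployment variable named S3_BUCKET (my name, not from the thread) is defined for each environment, so the anchored step can read it instead of hard-coding a bucket:

definitions:
  steps:
    - step: &aws-ecr-s3
        name: AWS S3 deployment
        image: amazon/aws-cli
        script:
          - aws configure set aws_access_key_id "${AWS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET}"
          # S3_BUCKET is a hypothetical deployment variable, set per environment
          - aws s3 cp ./build s3://${S3_BUCKET} --recursive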

How to run a script within a pipe?

I am using the pipe atlassian/aws-s3-deploy:0.4.0 in my Bitbucket pipeline to deploy to AWS S3. This works well, but I need to set Cache-Control only for index.html.
How do I run code within the pipe so that the AWS CLI tool is still available? It should not be a separate step, as the deployment should be a single process.
My current script looks like this:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: Build
        caches:
          - node
        script:
          - npm install
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        trigger: manual
        script:
          - pipe: atlassian/aws-s3-deploy:0.4.0
            variables:
              AWS_DEFAULT_REGION: 'eu-central-1'
              S3_BUCKET: '***'
              LOCAL_PATH: 'dist'
          - aws s3 cp dist/index.html s3://***/index.html --cache-control no-cache,no-store
Credentials are provided via project secret variables.
Thank you!!
You could just install the AWS CLI in the same step:
- step:
    name: Deploy
    trigger: manual
    # use a python docker image so pip is available
    image: python:3.7
    script:
      - pipe: atlassian/aws-s3-deploy:0.4.0
        variables:
          AWS_DEFAULT_REGION: 'eu-central-1'
          S3_BUCKET: '***'
          LOCAL_PATH: 'dist'
      # install the aws cli
      - pip3 install awscli --upgrade --user
      # re-upload index.html last so its no-cache headers replace the pipe's defaults
      - aws s3 cp dist/index.html s3://***/index.html --cache-control no-cache,no-store