How to run a script within a pipe? - amazon-web-services

I am using the pipe atlassian/aws-s3-deploy:0.4.0 in my Bitbucket pipeline to deploy to AWS S3. This works well, but I need to set Cache-Control for index.html only.
How do I run code within the pipe step, so that the AWS CLI is still available? It should not be a separate step, as the deployment should happen as a single process.
My current script looks like this:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: Build
        caches:
          - node
        script:
          - npm install
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        trigger: manual
        script:
          - pipe: atlassian/aws-s3-deploy:0.4.0
            variables:
              AWS_DEFAULT_REGION: 'eu-central-1'
              S3_BUCKET: '***'
              LOCAL_PATH: 'dist'
          - aws s3 cp dist/index.html s3://***/index.html --cache-control no-cache,no-store
Credentials are provided via project secret variables.
Thank you!!

You could just install the AWS CLI in the same step:
- step:
    name: Deploy
    trigger: manual
    # use a Python image so pip is available
    image: python:3.7
    script:
      - pipe: atlassian/aws-s3-deploy:0.4.0
        variables:
          AWS_DEFAULT_REGION: 'eu-central-1'
          S3_BUCKET: '***'
          LOCAL_PATH: 'dist'
      # install the AWS CLI; --user installs to ~/.local/bin, so add it to PATH
      - pip3 install awscli --upgrade --user
      - export PATH=~/.local/bin:$PATH
      - aws s3 cp dist/index.html s3://***/index.html --cache-control no-cache,no-store
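If you want to confirm the header was applied, a quick check with the s3api command works (a sketch; my-bucket stands in for the real bucket name):

# prints the CacheControl metadata stored on the object (hypothetical bucket name)
aws s3api head-object --bucket my-bucket --key index.html --query CacheControl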

Related

Send argument to yml anchor for a step in bitbucket-pipelines.yml

I would like to send arguments when I call an anchor with Bitbucket Pipelines.
Here is the file I am using; I have to use after-script because I need to push to a different S3 bucket depending on the branch.
definitions:
  steps:
    - step: &node-build
        name: Build React app
        image: node:lts-alpine
        script:
          - npm install --no-optional
          - npm run build
        artifacts:
          - build/**
    - step: &aws-ecr-s3
        name: AWS S3 deployment
        image: amazon/aws-cli
        script:
          - aws configure set aws_access_key_id "${AWS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET}"

pipelines:
  branches:
    master:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          after-script:
            - aws s3 cp ./build s3://my-app-site-dev --recursive
    staging:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          after-script:
            - aws s3 cp ./build s3://my-app-site-uat --recursive
I am trying to do something like the following, to avoid the after-script part:
definitions:
  steps:
    - step: &node-build
        name: Build React app
        image: node:lts-alpine
        script:
          - npm install --no-optional
          - npm run build
        artifacts:
          - build/**
    - step: &aws-ecr-s3 $FIRST-ARGUMENT
        name: AWS S3 deployment
        image: amazon/aws-cli
        script:
          - aws configure set aws_access_key_id "${AWS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET}"
          - aws s3 cp ./build s3://${FIRST-ARGUMENT} --recursive

pipelines:
  branches:
    master:
      - step: *node-build
      - step: *aws-ecr-s3 my-app-site-dev
    staging:
      - step: *node-build
      - step: *aws-ecr-s3 my-app-site-uat
To the best of my knowledge, you can only override particular values of YAML anchors. Attempts to 'pass arguments' won't work.
Instead, Bitbucket Pipelines provides Deployments - an ad-hoc way to assign different values to your variables depending on the environment. You'll need to create two deployment environments (say, dev and uat) and use them when referring to the step:
pipelines:
  branches:
    master:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          deployment: uat
    staging:
      - step: *node-build
      - step:
          <<: *aws-ecr-s3
          deployment: dev
More on Bitbucket Deployments:
https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/#Deployment-variables
https://support.atlassian.com/bitbucket-cloud/docs/set-up-and-monitor-deployments/
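For example (a sketch, assuming a deployment variable S3_BUCKET is defined for each environment in the repository's deployment settings), the anchored step can then read the bucket name from the environment instead of taking an argument:

definitions:
  steps:
    - step: &aws-ecr-s3
        name: AWS S3 deployment
        image: amazon/aws-cli
        script:
          - aws configure set aws_access_key_id "${AWS_KEY}"
          - aws configure set aws_secret_access_key "${AWS_SECRET}"
          # S3_BUCKET resolves per environment via the step's deployment setting
          - aws s3 cp ./build "s3://${S3_BUCKET}" --recursive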

Error while creating lambda function via CLI using gitlab CI (Error parsing parameter '--zip-file': Unable to load paramfile)

I am deploying AWS handler scripts as zip files to an S3 bucket. Now I want to use this deployed zip file in a Lambda function. I am doing all of this via GitLab CI.
I have the following CI configuration:
image: ubuntu:18.04

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account'
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip
    - aws lambda create-function --function-name "${LAMBDA_NAME}-2" --runtime python3.7 --role arn:aws:iam::xxxxxxxxxxxx:role/abc_scripts --handler ${HANDLER_SCRIPT_NAME}.${HANDLER_FUNCTION} --memory-size 1024 --zip-file "fileb://$BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip"

my_job:
  variables:
    LAMBDA_NAME: my_lambda
    HANDLER_SCRIPT_NAME: my_aws_handler
    HANDLER_FUNCTION: my_handler_function
  <<: *before_script_definition
  script:
    # - move scripts around and install requirements and zip the file for lambda deployment
  <<: *after_script_definition
For the CI, I have added the environment variable $BUCKET_TRIAL, which is of the form s3://my-folder.
When the CI runs, it throws the following error at the end:
Error parsing parameter '--zip-file': Unable to load paramfile fileb://s3://my-folder/my_lambda-deployment.zip: [Errno 2] No such file or directory: 's3://my-folder/my_lambda-deployment.zip'
I also tried changing --zip-file in the last line of the after_script to:
- aws lambda create-function --function-name "${LAMBDA_NAME}-2" --runtime python3.7 --role arn:aws:iam::xxxxxxxxxxxx:role/abc_scripts --handler ${HANDLER_SCRIPT_NAME}.${HANDLER_FUNCTION} --memory-size 1024 --zip-file "fileb:///my-folder/${LAMBDA_NAME}-deployment.zip"
But it still throws the same error.
Am I missing something here?
Based on the discussion in chat.
The solution was to use
--zip-file fileb:///root/forlambda/archive.zip
instead of
--zip-file "fileb://$BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip"
The reason is that --zip-file requires a local path to a zip deployment package, rather than a remote location in s3.
From docs:
--zip-file (blob): The path to the zip file of the code you are uploading. Example: fileb://code.zip
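If the goal is to create the function straight from the object already uploaded to S3, the CLI supports that via --code instead of --zip-file (a sketch; note that S3Bucket takes the bare bucket name, not an s3:// URI, so my-folder below assumes the bucket is literally named that):

# create the function from the zip that aws s3 cp already uploaded
aws lambda create-function \
  --function-name "${LAMBDA_NAME}-2" \
  --runtime python3.7 \
  --role arn:aws:iam::xxxxxxxxxxxx:role/abc_scripts \
  --handler ${HANDLER_SCRIPT_NAME}.${HANDLER_FUNCTION} \
  --memory-size 1024 \
  --code S3Bucket=my-folder,S3Key=${LAMBDA_NAME}-deployment.zip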

Gitlab CI failing cannot find aws command

I am trying to set up a pipeline that builds my react application and deploys it to my AWS S3 bucket. It is building fine, but fails on the deploy.
My .gitlab-ci.yml is :
image: node:latest

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build

deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
#jellycsc is spot on.
Otherwise, if you want to keep using the node image, you can try something like Thomas Lackemann details (here), which is to use a shell script to install Python, the AWS CLI, and zip, and then use those tools to do the deployment. You'll need AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
The error is telling you that the AWS CLI is not installed in the CI environment. You probably need to use GitLab's AWS Docker image. Please read the Cloud deployment documentation.
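A minimal fix along those lines (a sketch; it moves the deploy job onto a Python image so pip can install the CLI, keeping the bucket placeholder from the question):

deploy:
  stage: deploy
  image: python:3.9  # node:latest ships no aws binary; any image with pip works
  script:
    - pip install awscli
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME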

AWS EC2 Bitbucket Pipeline is not executing the latest code deployed

I've followed all the steps for setting up a Bitbucket pipeline for continuous deployment to AWS EC2. I've used the CodeDeploy application tool together with all the configuration that needs to be done in AWS. I'm using EC2 with Ubuntu, and I'm trying to deploy a MEAN app.
As per Bitbucket, I've added variables under "Repository variables", including:
S3_BUCKET
DEPLOYMENT_GROUP_NAME
DEPLOYMENT_CONFIG
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
and I've also added the three required files:
codedeploy_deploy.py - that I've got from this link: https://bitbucket.org/awslabs/aws-codedeploy-bitbucket-pipelines-python/src/73b7c31b0a72a038ea0a9b46e457392c45ce76da/codedeploy_deploy.py?at=master&fileviewer=file-view-default
appspec.yml -
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/aok
permissions:
  - object: /home/ubuntu/aok
    owner: ubuntu
    group: ubuntu
hooks:
  AfterInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
bitbucket-pipelines.yml -
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y python-dev
          - curl -O https://bootstrap.pypa.io/get-pip.py
          - python get-pip.py
          - pip install awscli
          - python codedeploy_deploy.py
          - aws deploy push --application-name $APPLICATION_NAME --s3-location s3://$S3_BUCKET/aok.zip --ignore-hidden-files
          - aws deploy create-deployment --application-name $APPLICATION_NAME --s3-location bucket=$S3_BUCKET,key=aok.zip,bundleType=zip --deployment-group-name $DEPLOYMENT_GROUP_NAME
On the Pipelines tab in Bitbucket, pushing the code shows a Successful message, and when I download the latest version from S3, the changes I pushed are there. The problem is that the website is not showing the new changes; it still serves the initial version I cloned before implementing the pipeline.
The codedeploy_deploy.py script is not supported anymore. The recommended way is to migrate from the CodeDeploy add-on to the aws-code-deploy Bitbucket Pipe. There is a deployment guide from Atlassian that will help you get started with the pipe: https://confluence.atlassian.com/bitbucket/deploy-to-aws-with-codedeploy-976773337.html
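As a rough sketch of what the migrated step could look like (variable names follow the pipe's README as I recall it; check the README of the version you pin, since the exact variables may differ):

- step:
    name: Deploy with CodeDeploy
    script:
      # upload the bundle to S3, then trigger the deployment
      - pipe: atlassian/aws-code-deploy:0.2.10
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          APPLICATION_NAME: $APPLICATION_NAME
          S3_BUCKET: $S3_BUCKET
          COMMAND: 'upload'
          ZIP_FILE: 'aok.zip'
      - pipe: atlassian/aws-code-deploy:0.2.10
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          APPLICATION_NAME: $APPLICATION_NAME
          DEPLOYMENT_GROUP: $DEPLOYMENT_GROUP_NAME
          S3_BUCKET: $S3_BUCKET
          COMMAND: 'deploy'
          WAIT: 'true'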

GitLab CI: generating JavaScript bundles failed during build - GitLab CI + AWS S3, CloudFront, Gatsby.js

After configuring AWS and GitLab, the page deploys successfully, but I'm getting the following error after each commit. What could be the cause of this problem? Below are my gitlab-ci.yml and the error.
image: docker:latest

stages:
  - build
  - deploy

build:
  stage: build
  image: node:8.11.3
  script:
    - export API_URL="https://xxxxxxxxxxxxxxxxxx.cloudfront.net/"
    - npm install
    - npm run build
    - echo "BUILD SUCCESSFULLY"
  artifacts:
    paths:
      - public/
    expire_in: 20 mins
  environment:
    name: production
  only:
    - master

deploy:
  stage: deploy
  image: python:3.5
  dependencies:
    - build
  script:
    - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
    - export AWS_SECRET_ACCESS_KEY=kC0DF4gdVhB2Oahxxxxxxxxxxxxxxxxxxxxxxxxxx1
    - export S3_BUCKET_NAME=s3://name-s3s3s3
    #- export DISTRIBUTION_ID=$DISTRIBUTION_ID
    - pip install awscli --upgrade --user
    - export PATH=~/.local/bin:$PATH
    - aws s3 sync --acl public-read --delete public $S3_BUCKET_NAME
    #- aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths '/*'
    - echo "DEPLOYED SUCCESSFULLY"
  environment:
    name: production
  only:
    - master