I'm configuring CircleCI and trying to sync GitHub to AWS EC2. When I committed and pushed to the GitHub repo, CircleCI showed an error.
Here's my circle.yml config:
test:
  override:
    - exit 0
deployment:
  staging:
    branch: develop-citest
    region: ap-northeast-1
    codedeploy:
      w***r-ma:
        application_root: /home/adbase/
        revision_location:
          revision_type: S3
          s3_location:
            bucket: w****r-vm-dev
            key_pattern: w****r-{BRANCH}-{SHORT_COMMIT}
        deployment_group: staging-instance-group
        deployment_config: CodeDeployDefault.AllAtOnce
What should I do to fix this problem?
I have created a CircleCI pipeline to provision an S3 bucket on AWS.
I would like to configure this pipeline to provision S3 in multiple environments, like DEV, SIT, UAT, etc.
I am not sure how to configure this pipeline to run for multiple environments (using a different tfvars file for each env), so I would use s3_dev.tfvars to provision the DEV env, s3_sit.tfvars to provision SIT, and so on.
Is there a way to pass a parameter at runtime (containing the env I would like to provision) that I could check in config.yml to select the tfvars file for that env and run the pipeline with it?
Please advise. (One possible approach is sketched after the config below.)
version: '2.1'
orbs:
  terraform: circleci/terraform@2.1
jobs:
  single-job-lifecycle:
    executor: terraform/default
    steps:
      - checkout
      - run:
          command: >-
            GIT_SSH_COMMAND='ssh -vv -i ~/.ssh/id_rsa'
            git clone https://<url>/Tfvars.git
          name: GIT Clone TFvars repository
      - terraform/init:
          path: .
      - terraform/validate:
          path: .
      - run:
          name: "terraform plan"
          command: terraform plan -var-file "./Tfvars/tfvars/dev/s3.tfvars"
      - run:
          name: "terraform apply"
          command: terraform apply -auto-approve -var-file "./Tfvars/tfvars/dev/s3.tfvars"
    working_directory: ~/src
workflows:
  single-job-lifecycle:
    jobs:
      - single-job-lifecycle
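One possible approach is CircleCI pipeline parameters. A minimal sketch, assuming the pipeline is triggered through the CircleCI v2 API and reusing the directory layout above (the ENV parameter name and the provision-s3 job name are illustrative, not from the original config):
version: '2.1'
parameters:
  ENV:
    type: string
    default: "dev"        # environment to provision; overridden per trigger
orbs:
  terraform: circleci/terraform@2.1
jobs:
  provision-s3:
    executor: terraform/default
    steps:
      - checkout
      - run:
          name: Clone TFvars repository
          command: git clone https://<url>/Tfvars.git
      - terraform/init:
          path: .
      - run:
          name: terraform plan
          command: terraform plan -var-file "./Tfvars/tfvars/<< pipeline.parameters.ENV >>/s3.tfvars"
      - run:
          name: terraform apply
          command: terraform apply -auto-approve -var-file "./Tfvars/tfvars/<< pipeline.parameters.ENV >>/s3.tfvars"
workflows:
  provision:
    jobs:
      - provision-s3
A run for, say, SIT could then be triggered with:
curl -X POST "https://circleci.com/api/v2/project/gh/<org>/<repo>/pipeline" \
  -H "Circle-Token: $CIRCLE_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"branch": "main", "parameters": {"ENV": "sit"}}'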
I am trying to run Terraform code through a CircleCI IaC pipeline to provision an S3 bucket in AWS.
I have the Terraform code to provision the S3 bucket (s3.tf) in a repo named terraform.
I have the runtime variables in an s3.tfvars file in a repo named tfvars.
So I would like my IaC pipeline to do these steps:
Clone the terraform repo
Clone the tfvars repo
Run terraform init
Run terraform plan
Run terraform apply
I have a config.yaml that looks like this below. I am not sure how to clone two repos (terraform and tfvars) in a CircleCI pipeline. Any pointers on how to do this?
version: '2.1'
parameters:
  ENV:
    type: string
    default: ""
orbs:
  terraform: circleci/terraform@2.1
workflows:
  deploy_infrastructure:
    jobs:
      - terraform/init:
          path: .
      - terraform/validate:
          path: .
          checkout: true
          context: terraform
      - terraform/plan:
          path: .
          checkout: true
          context: terraform
          persist-workspace: true
          requires:
            - terraform/validate
          workspace: << pipeline.parameters.ENV >>
      - terraform/apply:
          attach-workspace: true
          context: terraform
          filters:
            branches:
              only: 'circleci-project-setup'
          requires:
            - terraform/plan
This solved the issue:
version: '2.1'
orbs:
  terraform: circleci/terraform@2.1
jobs:
  single-job-lifecycle:
    executor: terraform/default
    steps:
      - checkout
      - run:
          command: >-
            GIT_SSH_COMMAND='ssh -vv -i ~/.ssh/id_rsa'
            git clone https://<url>/Tfvars.git
          name: GIT Clone TFvars repository
      - terraform/init:
          path: .
      - terraform/validate:
          path: .
      - run:
          name: "terraform plan"
          command: terraform plan -var-file "./Tfvars/tfvars/dev/s3.tfvars"
      - run:
          name: "terraform apply"
          command: terraform apply -auto-approve -var-file "./Tfvars/tfvars/dev/s3.tfvars"
    working_directory: ~/src
workflows:
  single-job-lifecycle:
    jobs:
      - single-job-lifecycle
I am trying to set up a pipeline that builds my React application and deploys it to my AWS S3 bucket. It builds fine, but fails on the deploy step.
My .gitlab-ci.yml is:
image: node:latest

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build

deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
#jellycsc is spot on.
Otherwise, if you want to just use the node image, you can try something like Thomas Lackemann details (here): use a shell script to install Python, the AWS CLI, and zip, and use those tools to do the deployment. You'll need your AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
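For the node-image route, a minimal sketch, assuming a Debian-based node image and that the pip-installed AWS CLI v1 is acceptable (variable names follow the config above):
deploy:
  stage: deploy
  before_script:
    # the node image does not ship the AWS CLI, so install it first
    - apt-get update && apt-get install -y python3-pip
    - pip3 install awscli
  script:
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME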
The error is telling you that the AWS CLI is not installed in the CI environment. You probably need to use GitLab's AWS Docker image. Please read the Cloud deployment documentation.
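For that route, a minimal sketch of the deploy job, assuming the image path from GitLab's Cloud deployment docs is still current:
deploy:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest   # ships with the AWS CLI preinstalled
  script:
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME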
I've followed all the steps to implement a Bitbucket pipeline for continuous deployment to AWS EC2. I've used the CodeDeploy application tool together with all the configuration that needs to be done in AWS. I'm using EC2 with Ubuntu, and I'm trying to deploy a MEAN app.
As per Bitbucket's documentation, I've added variables under "Repository variables", including:
S3_BUCKET
DEPLOYMENT_GROUP_NAME
DEPLOYMENT_CONFIG
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
and I've also added the three required files:
1. codedeploy_deploy.py - which I got from this link: https://bitbucket.org/awslabs/aws-codedeploy-bitbucket-pipelines-python/src/73b7c31b0a72a038ea0a9b46e457392c45ce76da/codedeploy_deploy.py?at=master&fileviewer=file-view-default
2. appspec.yml -
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/aok
permissions:
  - object: /home/ubuntu/aok
    owner: ubuntu
    group: ubuntu
hooks:
  AfterInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
3. bitbucket-pipelines.yml -
image: node:10.15.1
pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y python-dev
          - curl -O https://bootstrap.pypa.io/get-pip.py
          - python get-pip.py
          - pip install awscli
          - python codedeploy_deploy.py
          - aws deploy push --application-name $APPLICATION_NAME --s3-location s3://$S3_BUCKET/aok.zip --ignore-hidden-files
          - aws deploy create-deployment --application-name $APPLICATION_NAME --s3-location bucket=$S3_BUCKET,key=aok.zip,bundleType=zip --deployment-group-name $DEPLOYMENT_GROUP_NAME
On the Pipelines tab in Bitbucket, when I push code it shows a Successful message, and when I download the latest revision from S3 the changes I pushed are there. The problem is that the website does not show the new changes; it is still the initial version I cloned before implementing the pipeline.
The codedeploy_deploy.py script is no longer supported. The recommended way is to migrate from the CodeDeploy add-on to the aws-code-deploy Bitbucket Pipe. There is a deployment guide from Atlassian that will help you get started with the pipe: https://confluence.atlassian.com/bitbucket/deploy-to-aws-with-codedeploy-976773337.html
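A minimal sketch of what the pipe-based pipeline could look like, assuming the pipe's documented upload/deploy commands; the version tag is illustrative (pin whatever release is current), and the variable names reuse the repository variables above:
pipelines:
  default:
    - step:
        script:
          # upload the bundle to S3 as a CodeDeploy revision
          - pipe: atlassian/aws-code-deploy:1.1.0
            variables:
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              COMMAND: 'upload'
              APPLICATION_NAME: $APPLICATION_NAME
              ZIP_FILE: 'aok.zip'
              S3_BUCKET: $S3_BUCKET
          # create the deployment from that revision
          - pipe: atlassian/aws-code-deploy:1.1.0
            variables:
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              COMMAND: 'deploy'
              APPLICATION_NAME: $APPLICATION_NAME
              DEPLOYMENT_GROUP: $DEPLOYMENT_GROUP_NAME
              S3_BUCKET: $S3_BUCKET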
When I try pushing an image using the Drone plugin for Amazon ECR, I get the following message:
"no basic auth credentials"
My .drone.yml pipeline:
pipeline:
  publish-to-ecr:
    image: plugins/ecr
    repo: foo
    registry: xxx.dkr.ecr.us-west-1.amazonaws.com
    dockerfile: ./Dockerfile
    tags:
      - latest
    access_key: xxx
    secret_key: xxx
    region: xxx
I am using the same credentials to push from my local environment and it works.
The problem was that the role I had configured on the machine was not also configured on the repository side.
Go to the ECR repository and, under Permissions, grant the role the following actions: PutImage, CompleteLayerUpload, InitiateLayerUpload.
After that it worked.
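A minimal sketch of such an ECR repository policy; the account ID and role name are placeholders, and the extra upload-related actions are ones commonly needed alongside the three above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPushFromCIRole",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/drone-ci-role" },
      "Action": [
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}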