I am using Terraform inside CodeBuild along with CodePipeline (CI/CD) to deploy my resources. The resources (all the .tf files) are present as a zip file.
The CI/CD setup itself (CodeBuild + CodePipeline) is deployed with CDK.
Now I am confused about how and where to implement the Terraform S3 backend, because I am using two CodeBuild stages: a Plan stage for terraform plan -> manual approval (intermediate) -> a Deploy stage for terraform apply.
Conceptually I am not able to understand where the S3 backend should be implemented.
Plan stage buildspec:
pre_build:
  commands:
    - terraform init
build:
  commands:
    - echo '{"fruit":{"name":"apple","color":"green","price":1.20}}' | jq '.'
    - terraform plan -no-color -input=false
Deploy stage buildspec:
pre_build:
  commands:
    - terraform init
build:
  commands:
    - echo '{"fruit":{"name":"apple","color":"green","price":1.20}}' | jq '.'
    - terraform apply -auto-approve -no-color -input=false
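For reference, the backend is typically not tied to either stage: it is declared once in the Terraform code packaged in the zip, and both the Plan and the Deploy stage pick it up when they run terraform init. A minimal sketch, assuming a pre-created state bucket and DynamoDB lock table (all names below are placeholders):

# backend.tf (hypothetical file and resource names)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # assumed to exist already
    key            = "my-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "my-terraform-locks"          # optional, for state locking
    encrypt        = true
  }
}

If you prefer to keep the bucket and key out of the code, terraform init also accepts -backend-config flags, which you could add to both buildspecs. A common refinement is to have the Plan stage run terraform plan -out=tfplan and pass tfplan to the Deploy stage as a pipeline artifact, so that terraform apply tfplan applies exactly the plan that was approved.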
Related
I know that within CodePipeline I can add a manual approval stage (which I can use to publish to an SNS topic / get an approval from someone). Is it possible to codify this within the buildspec.yml alone? As it stands, my buildspec.yml looks like this:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 16
    commands:
      - ASSUME_ROLE_ARN="arn:aws:iam::accountnum:role/ServerlessAssumeRole"
      - TEMP_ROLE=$(aws sts assume-role --role-arn $ASSUME_ROLE_ARN --role-session-name codebuild)
      - export TEMP_ROLE
      - export AWS_ACCESS_KEY_ID=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.AccessKeyId')
      - export AWS_SECRET_ACCESS_KEY=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.SecretAccessKey')
      - export AWS_SESSION_TOKEN=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.SessionToken')
      - npm install -g serverless
      - npm install
      - npx tsc
  build:
    commands:
      - sls deploy --stage dev --region eu-west-2 --verbose
  post_build:
    commands:
      - echo build complete
Is there a step I can add within the "pre_build" phase which pauses the build, creates a manual approval step, and publishes to SNS before continuing the build?
thanks
No, individual CodeBuild actions cannot have a manual approval process inside of them. The manual approval process is a different type of pipeline action. You would need to place the manual approval action in the pipeline before the CodeBuild action.
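For illustration only (the resource names and the SNS topic below are hypothetical), if the pipeline were defined in Terraform, the approval would be its own stage in the aws_codepipeline resource, placed before the CodeBuild action; a sketch of just that stage:

# Fragment of an aws_codepipeline resource; role_arn, artifact_store and the
# other stages are omitted, and the SNS topic is a hypothetical resource.
stage {
  name = "Approve"

  action {
    name     = "ManualApproval"
    category = "Approval"
    owner    = "AWS"
    provider = "Manual"
    version  = "1"

    configuration = {
      NotificationArn = aws_sns_topic.approvals.arn   # notifies approvers via SNS
      CustomData      = "Review before deploying"
    }
  }
}

The same Approval action type can be added from the console, CloudFormation, or CDK; the point is that it lives in the pipeline definition, not in buildspec.yml.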
When I push changes to an AWS CodeCommit repo, I want to build a JAR file with the mvn install command for that Java code and upload it to an AWS Lambda function. The location of that JAR file should be inside src/main/target. Can anyone suggest a buildspec.yaml file?
Assuming that you're using AWS SAM (Serverless Application Model), this is as simple as calling a single command in the post_build section of your buildspec.yaml. Example:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
  pre_build:
    commands:
      - mvn clean
  build:
    commands:
      - mvn install
  post_build:
    commands:
      - sam deploy --stack-name lambda-java --no-confirm-changeset
artifacts:
  files:
    - target/lambda-java.jar
  discard-paths: no
Please note though that you'll also have to set up a mechanism that kicks off the build process when you push any changes to your repository. The easiest way to do this is with AWS CodePipeline, as it integrates nicely with CodeCommit. Simply create a new pipeline, choose your existing CodeCommit repository where the Java-based Lambda is stored, and select CodeBuild as the build provider (skip the deploy stage).
Also note that your CodeBuild service role will need the appropriate permissions to deploy the Lambda function. As SAM is leveraged, this includes permissions to upload to S3 and to update the corresponding CloudFormation stack (see the --stack-name parameter above).
From here on, whenever you push any changes to your repo, CodePipeline will trigger a build in CodeBuild, which will then deploy a new version of your Lambda via the sam deploy command in your buildspec.yaml.
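For context, the sam deploy call above assumes a SAM template (template.yaml) alongside the buildspec; a minimal sketch for a Java function might look like the following, where the handler class, function name, memory size, and timeout are placeholders:

# template.yaml (sketch; handler class and function name are placeholders)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  JavaFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.Handler::handleRequest
      Runtime: java8
      CodeUri: target/lambda-java.jar   # the JAR produced by mvn install
      MemorySize: 512
      Timeout: 30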
I have created a CircleCI pipeline to provision an S3 bucket on AWS.
I would like to configure this pipeline to provision S3 on multiple environments like DEV, SIT, UAT etc.
I am not sure how to configure this pipeline to run for multiple environments (using different tfvars for each environment).
So I would be using s3_dev.tfvars to provision the dev environment, s3_sit.tfvars to provision sit, and so on.
Is there a way to pass a parameter at runtime (containing the environment I would like to provision) which I could check in config.yml and accordingly select the tfvars file for that environment and run the pipeline with it?
Please advise.
version: '2.1'
orbs:
  terraform: 'circleci/terraform@2.1'
jobs:
  single-job-lifecycle:
    executor: terraform/default
    steps:
      - checkout
      - run:
          command: >-
            GIT_SSH_COMMAND='ssh -vv -i ~/.ssh/id_rsa'
            git clone https://<url>/Tfvars.git
          name: GIT Clone TFvars repository
      - terraform/init:
          path: .
      - terraform/validate:
          path: .
      - run:
          name: "terraform plan"
          command: terraform plan -var-file "./Tfvars/tfvars/dev/s3.tfvars"
      - run:
          name: "terraform apply"
          command: terraform apply -auto-approve -var-file "./Tfvars/tfvars/dev/s3.tfvars"
    working_directory: ~/src
workflows:
  single-job-lifecycle:
    jobs:
      - single-job-lifecycle
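One approach (a sketch, assuming CircleCI 2.1 pipeline parameters; the parameter name and tfvars paths are placeholders) is to declare an env parameter and interpolate it into the -var-file path:

version: '2.1'
parameters:
  env:
    type: string
    default: "dev"
orbs:
  terraform: 'circleci/terraform@2.1'
jobs:
  single-job-lifecycle:
    executor: terraform/default
    steps:
      - checkout
      - terraform/init:
          path: .
      - run:
          name: "terraform plan"
          command: terraform plan -var-file "./Tfvars/tfvars/<< pipeline.parameters.env >>/s3.tfvars"
      - run:
          name: "terraform apply"
          command: terraform apply -auto-approve -var-file "./Tfvars/tfvars/<< pipeline.parameters.env >>/s3.tfvars"
workflows:
  single-job-lifecycle:
    jobs:
      - single-job-lifecycle

When triggering the pipeline through the CircleCI API v2, a request body such as {"parameters": {"env": "sit"}} selects the environment; ordinary pushes to the repo use the default value.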
So far, in my buildspec.yml file I can create a Docker image and store it in the ECR repository (I am using CodePipeline). My question is: how do I deploy it to my ECS instance through the buildspec.yml using AWS CLI commands?
I am sharing my buildspec.yaml file, have a look:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Setting timestamp for container tag
      - echo `date +%s` > timestamp
      - echo Logging into Amazon ECR...
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION)
  build:
    commands:
      - echo Building and tagging container
      - docker build -t $REPOSITORY_NAME .
      - docker tag $REPOSITORY_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
  post_build:
    commands:
      - echo Pushing docker image
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
      - echo Preparing CloudFormation Artifacts
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_KEY task-definition.template
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_PARAMS_KEY cf-config.json
artifacts:
  files:
    - task-definition.template
    - cf-config.json
You can extend this with more commands for the ECS side; I have written a task-definition template which goes through CloudFormation.
You can also write simple AWS CLI commands to create the cluster and pull images; check the AWS documentation: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
Sharing my own Git repo, check it out for more info: https://github.com/harsh4870/ECS-CICD-pipeline
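For the ECS update itself, a couple of AWS CLI calls in post_build can roll the service onto the new image (the cluster, service, and task-definition names are placeholders, and this assumes the task-definition JSON already references the new image tag):

# Register a new task definition revision, then force the service to redeploy it
- aws ecs register-task-definition --cli-input-json file://task-definition.json
- aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task-family --force-new-deployment

Alternatively, keep the CloudFormation route shown above and let a CloudFormation deploy action in CodePipeline consume task-definition.template and cf-config.json.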
Running a Terraform deploy in CodeBuild with the following buildspec.yml.
It seems Terraform isn't picking up the IAM permissions provided by the CodeBuild role.
We're using Terraform's remote state (the state file is stored in S3); when Terraform attempts to contact the S3 bucket containing the state file, it dies asking for the Terraform provider to be configured:
Downloading modules (if any)...
Get: file:///tmp/src486521661/src/common/byu-aws-accounts-tf
Get: file:///tmp/src486521661/src/common/base-aws-account-
...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Here's the buildspec.yml:
version: 0.1
phases:
  install:
    commands:
      - cd common && git clone https://eric.w.nord@gitlab.com/aws-account-tools/acs.git
      - export TerraformVersion=0.9.3 && cd /tmp && curl -o terraform.zip https://releases.hashicorp.com/terraform/${TerraformVersion}/terraform_${TerraformVersion}_linux_amd64.zip && unzip terraform.zip && mv terraform /usr/bin
  build:
    commands:
      - cd accounts/00/dev-stack-oit-byu && terraform init && terraform plan && echo terraform apply
EDIT: THE BUG HAS BEEN FIXED, so please delete the lines below if you added them to your buildspec file.
Before terraform init, add these lines:
export AWS_ACCESS_KEY_ID=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.AccessKeyId'`
export AWS_SECRET_ACCESS_KEY=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.SecretAccessKey'`
export AWS_SESSION_TOKEN=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.Token'`
It is more readable.
In your buildspec.yml try:
env:
  variables:
    AWS_METADATA_ENDPOINT: "http://169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
You need this because TF will look for the metadata endpoint in an env var that is not set in the container.
I hate to post this, but it will allow Terraform to access the CodeBuild IAM STS access keys and execute Terraform commands from within CodeBuild via a buildspec.yml.
It's pretty handy for automated deploys of AWS infrastructure, as you can drop a CodeBuild project into all your AWS accounts and fire them with a CodePipeline.
Please note the version: 0.2. It passes env vars between commands, whereas version 0.1 ran each command in a clean shell.
Please update if you find something better:
version: 0.2
env:
  variables:
    AWS_DEFAULT_REGION: "us-west-2"
phases:
  install:
    commands:
      - apt-get -y update
      - apt-get -y install jq
  pre_build:
    commands:
      # load acs submodule (since CodeBuild doesn't pull the .git folder from the repo)
      - cd common
      - git clone https://gituser@gitlab.com/aws-account-tools/acs.git
      - cd ../
      # install terraform
      - other/install-tf-linux64.sh
      - terraform --version
      # set env variables for the terraform provider
      - curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq 'to_entries | [ .[] | select(.key | (contains("Expiration") or contains("RoleArn")) | not) ] | map(if .key == "AccessKeyId" then . + {"key":"AWS_ACCESS_KEY_ID"} else . end) | map(if .key == "SecretAccessKey" then . + {"key":"AWS_SECRET_ACCESS_KEY"} else . end) | map(if .key == "Token" then . + {"key":"AWS_SESSION_TOKEN"} else . end) | map("export \(.key)=\(.value)") | .[]' -r > /tmp/cred.txt # work around https://github.com/hashicorp/terraform/issues/8746
      - chmod +x /tmp/cred.txt
      - . /tmp/cred.txt
  build:
    commands:
      - ls
      - cd your/repo's/folder/with/main.tf
      - terraform init
      - terraform plan
      - terraform apply
The Terraform AWS provider offers the following methods of authentication:
Static credentials
In this case you can add the access and secret keys directly into the tf config file as follows:
provider "aws" {
region = "us-west-2"
access_key = "anaccesskey"
secret_key = "asecretkey"
}
Environment variables
You export the access and secret keys as environment variables using the export command:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared Credentials file
If Terraform fails to detect credentials inline or in the environment, it will check the shared credentials file at $HOME/.aws/credentials, in which case you don't need to put the credentials in your Terraform config.
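A sketch of pointing the provider at a specific profile from that file (the profile name is a placeholder):

provider "aws" {
  region  = "us-west-2"
  profile = "myprofile"   # hypothetical profile defined in $HOME/.aws/credentials
}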
EC2 Role
If you're running Terraform from an EC2 instance with an IAM instance profile (IAM role), Terraform will simply ask the metadata API endpoint for credentials. In that case, you don't have to mention the access and secret keys in any config. This is the preferred way.
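In that case the provider block only needs a region, for example:

provider "aws" {
  region = "us-west-2"   # no keys: credentials come from the instance role
}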
https://www.terraform.io/docs/providers/aws/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials