I'm running a terraform deploy in CodeBuild with the buildspec.yml below.
It seems terraform isn't picking up the IAM permissions provided by the CodeBuild service role.
We're using terraform's remote state (the state file is stored in S3). When terraform tries to contact the S3 bucket containing the state file, it fails asking for the AWS provider credentials to be configured:
Downloading modules (if any)...
Get: file:///tmp/src486521661/src/common/byu-aws-accounts-tf
Get: file:///tmp/src486521661/src/common/base-aws-account-
...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Here's the buildspec.yml:
version: 0.1
phases:
  install:
    commands:
      - cd common && git clone https://eric.w.nord@gitlab.com/aws-account-tools/acs.git
      - export TerraformVersion=0.9.3 && cd /tmp && curl -o terraform.zip https://releases.hashicorp.com/terraform/${TerraformVersion}/terraform_${TerraformVersion}_linux_amd64.zip && unzip terraform.zip && mv terraform /usr/bin
  build:
    commands:
      - cd accounts/00/dev-stack-oit-byu && terraform init && terraform plan && echo terraform apply
EDIT: the bug has been fixed, so please delete the lines below if you added them to your buildspec file.
Before terraform init, add these lines:
export AWS_ACCESS_KEY_ID=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.AccessKeyId'`
export AWS_SECRET_ACCESS_KEY=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.SecretAccessKey'`
export AWS_SESSION_TOKEN=`curl --silent 169.254.170.2:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq -r '.Token'`
It is more readable.
In your buildspec.yml, try:
env:
  variables:
    AWS_METADATA_ENDPOINT: "http://169.254.169.254:80$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
You need this because Terraform will look for the metadata endpoint in an env var that is not set in the container.
I hate to post this, but it lets terraform pick up the CodeBuild IAM STS access keys and run terraform commands from within CodeBuild via a buildspec.yml.
It's pretty handy for automated deploys of AWS infrastructure, since you can drop a CodeBuild project into all your AWS accounts and trigger them with a CodePipeline.
Please note the version: 0.2. It passes env vars between commands, whereas version 0.1 gave each command a clean shell.
Please update this if you find something better:
version: 0.2
env:
  variables:
    AWS_DEFAULT_REGION: "us-west-2"
phases:
  install:
    commands:
      - apt-get -y update
      - apt-get -y install jq
  pre_build:
    commands:
      # load acs submodule (since codebuild doesn't pull the .git folder from the repo)
      - cd common
      - git clone https://gituser@gitlab.com/aws-account-tools/acs.git
      - cd ../
      # install terraform
      - other/install-tf-linux64.sh
      - terraform --version
      # set env variables for terraform provider
      - curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI | jq 'to_entries | [ .[] | select(.key | (contains("Expiration") or contains("RoleArn")) | not) ] | map(if .key == "AccessKeyId" then . + {"key":"AWS_ACCESS_KEY_ID"} else . end) | map(if .key == "SecretAccessKey" then . + {"key":"AWS_SECRET_ACCESS_KEY"} else . end) | map(if .key == "Token" then . + {"key":"AWS_SESSION_TOKEN"} else . end) | map("export \(.key)=\(.value)") | .[]' -r > /tmp/cred.txt # work around https://github.com/hashicorp/terraform/issues/8746
      - chmod +x /tmp/cred.txt
      - . /tmp/cred.txt
  build:
    commands:
      - ls
      - cd your/repo's/folder/with/main.tf
      - terraform init
      - terraform plan
      - terraform apply
The Terraform AWS provider offers the following methods of authentication:
Static credentials
In this case you can add the access and secret keys directly into the tf config file, as follows:
provider "aws" {
region = "us-west-2"
access_key = "anaccesskey"
secret_key = "asecretkey"
}
Environment variables
You export the access and secret keys as environment variables using the export command:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared credentials file
If Terraform fails to detect credentials inline or in the environment, it will check $HOME/.aws/credentials, in which case you don't need to put any credentials in your Terraform config.
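For reference, that file is a plain INI file; a minimal sketch with the same placeholder values looks like this:
[default]
aws_access_key_id = anaccesskey
aws_secret_access_key = asecretkey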
EC2 Role
If you're running Terraform from an EC2 instance with an IAM instance profile (IAM role), Terraform will just ask the metadata API endpoint for credentials, in which case you don't have to put the access and secret keys in any config. This is the preferred way.
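In that case the provider block carries no credentials at all, for example just the region:
provider "aws" {
  region = "us-west-2"
}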
https://www.terraform.io/docs/providers/aws/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
Related
I am using terraform inside CodeBuild along with CodePipeline (CI/CD) to deploy my resources. The resources (all the tf files) are present as a zip file.
This CI/CD setup (CodeBuild + CodePipeline) is itself deployed by CDK.
Now I am confused about how and where to implement the terraform s3 backend, because I am using two CodeBuild stages: a plan stage for terraform plan -> manual approval (intermediate) -> a deploy stage for terraform apply.
Conceptually I can't work out where the s3 backend should be implemented (see the backend sketch after the buildspecs below).
Plan stage CodeBuild buildspec:
pre_build:
  commands:
    - terraform init
build:
  commands:
    - echo '{"fruit":{"name":"apple","color":"green","price":1.20}}' | jq '.'
    - terraform plan -no-color -input=false
Deploy stage buildspec:
pre_build:
  commands:
    - terraform init
build:
  commands:
    - echo '{"fruit":{"name":"apple","color":"green","price":1.20}}' | jq '.'
    - terraform apply -auto-approve -no-color -input=false
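For what it's worth, a backend block like the sketch below (bucket, key, and table names are placeholders) normally lives in the Terraform configuration itself rather than in either buildspec, so both stages' terraform init resolve the same remote state:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # placeholder
    key            = "my-stack/terraform.tfstate"  # placeholder
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"             # placeholder, enables state locking
  }
}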
I am using a Bitbucket pipeline to publish artifacts to AWS CodeArtifact. Everything runs perfectly, but the token is only valid for 12 hours, so I have to update the password every time. Could anyone guide me on how to automate this process?
EDIT: I was finally able to solve it myself.
pipelines:
  default:
    - step:
        name: test
        image: atlassian/pipelines-awscli
        script:
          - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
          - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
          - export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
          - aws codeartifact get-authorization-token --domain XXXXX --domain-owner XXXXXx --query authorizationToken --output text > pass.txt
          - value=$(<pass.txt)
          - echo $value
          - printenv > set_env.sh
          - echo "export value=$value" >> set_env.sh
        artifacts:
          - set_env.sh
    - step:
        name: maven
        image: maven:3.8.1
        caches:
          - maven
        script: # Modify the commands below to build your repository.
          - source set_env.sh
          - echo $value
          - sed -i 's/passwd12/'"$value"'/g' ./settings.xml
          - cat settings.xml
          - mvn clean deploy -s settings.xml -P snapshot
I didn't realize BitBucket had global account-wide Workspace Variables. Some were already defined for our other repos. I added some to hold the values for AccessKeyId and SecretAccessKey for our npm registry at CodeArtifact.
Prior to npm install, I create a named AWS profile in the pipelines.yml file:
- aws configure --profile codeartifactuser set aws_access_key_id $AWS_ACCESS_KEY_ID_NPM
- aws configure --profile codeartifactuser set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_NPM
Then use that to make the call to authenticate and get a new token:
aws codeartifact login --tool npm --repository <repository> --domain <domain> --namespace @<namespace> --profile codeartifactuser
Now our npm install, etc... works as expected.
I used a named profile in case other parts of the build script expect different credentials. Just seems cleaner.
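Put together in bitbucket-pipelines.yml, the step might look roughly like this sketch (the repository, domain, and namespace values are the same placeholders as above, and the image is assumed to have both the AWS CLI and npm available):
- step:
    name: Build using the CodeArtifact npm registry
    script:
      # create the named profile from the Workspace Variables (region comes from AWS_DEFAULT_REGION or can be set the same way)
      - aws configure --profile codeartifactuser set aws_access_key_id $AWS_ACCESS_KEY_ID_NPM
      - aws configure --profile codeartifactuser set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_NPM
      # fetch a fresh token and point npm at the CodeArtifact repository
      - aws codeartifact login --tool npm --repository <repository> --domain <domain> --namespace @<namespace> --profile codeartifactuser
      - npm install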
I am running CI/CD in a CodeBuild project and have configured a role for the CodeBuild project that allows it to deploy resources (e.g. Lambda) to the AWS account.
But when I run the deploy command from a Docker container inside the CodeBuild project, I get this error:
AWS provider credentials not found. Learn how to set up AWS provider credentials in our docs here: <http://slss.io/aws-creds-setup>.
I have found suggestions to use env vars or an AWS credentials profile, but my script runs in a CodeBuild project with IAM authentication. How can I pass those credentials to the Docker container?
I would not give CodeBuild direct access to modify resources. You can easily separate that out with a dedicated deployment role, as long as you make sure the CodeBuild role has the necessary permissions to assume it (a policy sketch follows the Dockerfile below). Below is the approach recommended by AWS.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 8
    commands:
      - ASSUME_ROLE_ARN="arn:aws:iam::$account_id:role/Secretassumerole"
      - TEMP_ROLE=`aws sts assume-role --role-arn $ASSUME_ROLE_ARN --role-session-name test`
      - export TEMP_ROLE
      - echo $TEMP_ROLE
      - export AWS_ACCESS_KEY_ID=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.AccessKeyId')
      - export AWS_SECRET_ACCESS_KEY=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.SecretAccessKey')
      - export AWS_SESSION_TOKEN=$(echo "${TEMP_ROLE}" | jq -r '.Credentials.SessionToken')
      - echo $AWS_ACCESS_KEY_ID
      - echo $AWS_SECRET_ACCESS_KEY
      - echo $AWS_SESSION_TOKEN
  pre_build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN .
Inside the Dockerfile:
FROM amazonlinux:latest
RUN yum -y install aws-cli
ARG AWS_DEFAULT_REGION
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_SESSION_TOKEN
RUN echo $AWS_DEFAULT_REGION
RUN echo $AWS_ACCESS_KEY_ID
RUN echo $AWS_SECRET_ACCESS_KEY
RUN echo $AWS_SESSION_TOKEN
RUN aws sts get-caller-identity
RUN aws secretsmanager get-secret-value --secret-id tutorials/AWSExampleSecret
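Regarding the "necessary permissions to assume the role" mentioned above, the CodeBuild service role needs an sts:AssumeRole statement roughly like the sketch below (reusing the role ARN placeholder from the buildspec), and the assumed role's trust policy must list the CodeBuild role as a principal:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<account_id>:role/Secretassumerole"
    }
  ]
}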
How do I pass temporary credentials for AssumeRole into the Docker runtime with AWS CodeBuild?
OR
If you still want to use the CodeBuild IAM permissions, you can call the metadata service from the buildspec.yml of your CodeBuild project, which gives you the credentials of the CodeBuild IAM service role; these are then passed to the docker build command in a similar manner as above. Alternatively, you can store them in a credentials file and share that with the Docker environment, where you can run commands by specifying the profile (a sketch of that follows the JSON output below).
version: 0.2
phases:
  install:
    commands:
      - TOKEN=$(curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
      - echo $TOKEN
      - export AWS_ACCESS_KEY_ID=$(echo "${TOKEN}" | jq -r '.AccessKeyId')
      - export AWS_SECRET_ACCESS_KEY=$(echo "${TOKEN}" | jq -r '.SecretAccessKey')
      - export AWS_SESSION_TOKEN=$(echo "${TOKEN}" | jq -r '.SessionToken')
  pre_build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN .
This will give you the credentials:
{
  "RoleArn": "AQICAHi8hGr15WsKx4aqJ3PRJImmR37T8bWHAVZQA8s9Lug",
  "AccessKeyId": "ASIA2WXKNDTKPASDADRT",
  "SecretAccessKey": "***",
  "Token": "IQoJb3JpZ2luX2VjENH//////////wEaCXVzLWVhc3QtMSJ",
  "Expiration": "2021-03-05T10:02:01Z"
}
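For the credentials-file variant mentioned above, a rough sketch of the extra buildspec commands might look like this (the profile name, temp path, and image name are illustrative; this shares the file with a container started via docker run rather than baking it into the image):
  - mkdir -p /tmp/aws
  - printf "[codebuild]\naws_access_key_id=%s\naws_secret_access_key=%s\naws_session_token=%s\n" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$AWS_SESSION_TOKEN" > /tmp/aws/credentials
  - docker run -v /tmp/aws:/root/.aws <your-image> aws sts get-caller-identity --profile codebuild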
A variation on the second of samtoddler's answers:
docker build --build-arg AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI --build-arg AWS_REGION=$AWS_REGION .
and in the Dockerfile:
ARG AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
ARG AWS_REGION
Basically CodeBuild/Docker are clever enough to do the commands Sam has in the install section automatically :)
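As a quick sanity check, a build step like the following (assuming the AWS CLI is installed in the image, as in the Dockerfile above) should resolve the CodeBuild role through that relative URI:
# the AWS CLI/SDK picks up AWS_CONTAINER_CREDENTIALS_RELATIVE_URI automatically
RUN aws sts get-caller-identity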
I am trying to set up Bitbucket Pipelines to deploy to ECS as described here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
Those instructions cover pushing to Docker Hub, but I want to push the image to Amazon's image repository (ECR). I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket variables, and I can run these commands locally with no problems (with the keys defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say:
"The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files." So I am not sure why it isn't working. My bitbucket-pipelines.yml configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli
options:
  docker: true
oidc: true
aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access
pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
Try this:
bitbucket-pipeline.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}
options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me@me.me>
COPY requirements.txt requirements.txt
ENV DEBIAN_FRONTEND noninteractive
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN apt-get update && apt-get -y install stuff
ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of awscli.
You need to use a Docker image with a more recent awscli.
For my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket.
Small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
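Pieced together, the pipeline might look roughly like this sketch (the image name is the one mentioned above; the repository URI reuses the variables from the question):
image: linkmobility/maven-awscli
options:
  docker: true
pipelines:
  default:
    - step:
        script:
          - export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - eval $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
          - docker push $IMAGE_NAME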
I am trying to get travis-ci to run a custom deploy script that uses awscli to push a deployment up to my staging server.
In my .travis.yml file I have this:
before_deploy:
- 'curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"'
- 'unzip awscli-bundle.zip'
- './awscli-bundle/install -b ~/bin/aws'
- 'export PATH=~/bin:$PATH'
- 'aws configure'
And I have set up the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
with their correct values in the travis-ci web interface.
However, when aws configure runs, it stops and waits for user input. How can I tell it to use the environment variables I have defined?
Darbio's solution works fine, but it doesn't take into consideration that you may end up pushing your AWS credentials to your repository.
That is a bad thing, especially if docker is trying to pull a private image from one of your ECR repositories. It would mean you probably had to store your AWS production credentials in the .travis.yml file, and that is far from ideal.
Fortunately, Travis gives you the ability to encrypt environment variables, notification settings, and deploy API keys.
gem install travis
Run travis login first; it will ask for your GitHub credentials. Once you're logged in, go to your project root folder (where your .travis.yml file is) and encrypt your access key ID and secret access key:
travis encrypt AWS_ACCESS_KEY_ID="HERE_PUT_YOUR_ACCESS_KEY_ID" --add
travis encrypt AWS_SECRET_ACCESS_KEY="HERE_PUT_YOUR_SECRET_ACCESS_KEY" --add
Thanks to the --add option you'll end up with two new (encrypted) environment variables in your configuration file. Now just open your .travis.yml file and you should see something like this:
env:
  global:
    - secure: encrypted_stuff
    - secure: encrypted_stuff
Now you can make travis run a shell script that creates the ~/.aws/credentials file for you.
ecr_credentials.sh
#!/usr/bin/env bash
mkdir -p ~/.aws
cat > ~/.aws/credentials << EOL
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOL
Then you just need to run the ecr_credentials.sh script from your .travis.yml file:
before_install:
- ./ecr_credentials.sh
Done! :-D
Source: Encryption keys on Travis CI
You can set these in a couple of ways.
Firstly, by creating a file at ~/.aws/config (or ~/.aws/credentials).
For example:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Secondly, you can add environment variables for each of your settings.
For example, create the following environment variables:
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Thirdly, you can pass region in as a command line argument. For example:
aws eb deploy --region us-west-2
You won't need to run aws configure in these cases, as the CLI is already configured.
There is further AWS documentation on this page.
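If you'd rather keep an explicit configure step in .travis.yml, the non-interactive form of the command also works (the values here come from the Travis environment variables mentioned above):
- aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
- aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
- aws configure set region "$AWS_DEFAULT_REGION"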
Following the advice from @Darbio, I came up with this solution:
- stage: deploy
  name: "Deploy to AWS EKS"
  language: minimal
  before_install:
    # Install kubectl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - sudo mv ./kubectl /usr/local/bin/kubectl
    # Install AWS CLI
    - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
    # export environment variables for AWS CLI (using Travis environment variables)
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    # Setup kubectl config to use the desired AWS EKS cluster
    - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  deploy:
    - provider: script
      # bash script containing the kubectl commands to setup the cluster
      script: bash k8s-config/deployment.sh
      on:
        branch: master
It is also possible to avoid installing the AWS CLI altogether. You then need to configure kubectl yourself:
kubectl config set-cluster <cluster-name> --server=<server-url> --certificate-authority=<path-to-ca-cert>
kubectl config set-credentials <user-name> --client-certificate=<path-to-client-cert> --client-key=<path-to-client-key>
kubectl config set-context myContext --cluster=<cluster-name> --namespace=<namespace> --user=<user-name>
kubectl config use-context myContext
You can find most of the needed values in ~/.kube/config in your home directory, after you have run the aws eks update-kubeconfig command on your local machine.
The exceptions are the client certificate and key; I couldn't figure out where to get those, and therefore needed to install the AWS CLI in the pipeline as well.