I have a Bitbucket pipeline that needs multiple AWS credentials for different duties.
At the top, I use a custom ECR image. To pull it, I created an AWS user with only ECR read-only permissions; the access-key and secret-key parameters are the keys of that user.
Inside this ECR image, I embedded another AWS user's credentials to do the rest of the work (image push etc.). But somehow the credentials I used for pulling the base image are also being used in the steps, so the image push is denied.
I tried export AWS_PROFILE=deployment, but it doesn't help.
Are the credentials used for the base image pull applied pipeline-wide?
And how can I overcome this situation?
Thank you.
image:
  name: <ECR Image>
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

pipelines:
  - step:
      name: "Image Build & Push"
      services:
        - docker
      script:
        - export AWS_PROFILE=deployment
        - export ENVIRONMENT=beta
        - echo "Environment is ${ENVIRONMENT}"
        - export DOCKER_IMAGE_BUILDER="${BITBUCKET_REPO_SLUG}:builder"
        - make clean
        - make build BUILD_VER=${BITBUCKET_TAG}.${BITBUCKET_BUILD_NUMBER} \
          APP_NAME=${BITBUCKET_REPO_SLUG} \
          DOCKER_IMAGE_BUILDER=${DOCKER_IMAGE_BUILDER}
        - make test
        - docker tag ....
        - docker push .....
Here is what I would do instead of baking credentials inside the images:
Use a single credential for pulling, tagging, and pushing the image; why not use the same user for the push as well?
If that is something you don't want to do:
Create an IAM role with permission to tag and push the images, and assume that role from the credentials already being exported. There is no need to bake credentials into the images (see the sketch below).
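A minimal sketch of that assume-role approach inside the step's script, assuming the AWS CLI is available in the build image; the role ARN, account ID, region, and image name are placeholders:

script:
  # Exchange the pull-only credentials for temporary push credentials
  # by assuming a role that has ECR push permissions (role ARN is a placeholder).
  - export ROLE_ARN=arn:aws:iam::123456789012:role/ecr-push-role
  - CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name bitbucket-pipeline --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
  - export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
  - export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
  - export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
  # Docker now logs in and pushes as the assumed role, not the pull-only user.
  - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest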
I found the following example in the documentation:
script:
  # build the image
  - docker build -t my-docker-image .
  # use the pipe to push to AWS ECR
  - pipe: atlassian/aws-ecr-push-image:1.2.2
    variables:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
      AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
      IMAGE_NAME: my-docker-image
      TAGS: '${BITBUCKET_TAG} latest'
OpenID Connect is also a nice feature: https://support.atlassian.com/bitbucket-cloud/docs/deploy-on-aws-using-bitbucket-pipelines-openid-connect/
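A rough sketch of what the OIDC route can look like in a step, assuming an IAM role (the ARN below is a placeholder) whose trust policy accepts Bitbucket's OIDC provider, and a recent AWS CLI in the build image:

- step:
    name: "Image Build & Push"
    oidc: true   # exposes $BITBUCKET_STEP_OIDC_TOKEN to the step
    script:
      # Role ARN, account ID, and region are placeholders.
      - export AWS_ROLE_ARN=arn:aws:iam::123456789012:role/bitbucket-ecr-push
      - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
      - echo $BITBUCKET_STEP_OIDC_TOKEN > $AWS_WEB_IDENTITY_TOKEN_FILE
      # The AWS CLI/SDK assumes the role from the two variables above,
      # so no long-lived access keys are needed in the step.
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com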
After a pile of troubleshooting, I managed to get my gitlab CICD pipeline to connect to GCP without requiring my service account to use a JSON key. However, I'm unable to do anything with Terraform in my pipeline using a remote statefile because of the following error:
Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403: Insufficient Permission, insufficientPermissions
My gitlab-ci.yml file is defined as follows:
stages:
  - auth
  - validate

gcp-auth:
  stage: auth
  image: google/cloud-sdk:slim
  script:
    - echo ${CI_JOB_JWT_V2} > .ci_job_jwt_file
    - gcloud iam workload-identity-pools create-cred-config ${GCP_WORKLOAD_IDENTITY_PROVIDER}
      --service-account="${GCP_SERVICE_ACCOUNT}"
      --output-file=.gcp_temp_cred.json
      --credential-source-file=.ci_job_jwt_file
    - gcloud auth login --cred-file=`pwd`/.gcp_temp_cred.json
    - gcloud auth list

tf-stuff:
  stage: validate
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
    - export TF_LOG=DEBUG
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
  script:
    - terraform validate
My gcp-auth job is running successfully from what I can see:
Authenticated with external account credentials for: [[MASKED]].
I've also gone as far as adding a gsutil cp command inside the gcp-auth job to make sure I can access the desired bucket as expected, which I can. I can successfully edit the contents of the bucket where my Terraform state file is stored.
I'm fairly new to GitLab CI/CD pipelines. Is there something I need to do to tie the gcp-auth job to the tf-stuff job? It's as if that job does not know the pipeline was previously authenticated using the service account.
Thanks!
As mentioned by other posters, GitLab jobs run independently and don't share environment variables or the filesystem. So to preserve the login state between jobs you have to persist that state somehow.
I wrote a blog with a working example: https://ael-computas.medium.com/gcp-workload-identity-federation-on-gitlab-passing-authentication-between-jobs-ffaa2d51be2c
I have done it the way GitHub Actions does it, by storing the (temporary) credentials as artifacts. By setting the correct environment variables you should be able to "keep" the logged-in state (GCP will implicitly refresh your token) without having to create a base image containing everything. All jobs must run the gcp_auth_before method, or extend the auth job, for this to work, and the _auth/ artifacts must be preserved between jobs.
In the sample below you can see that the login state is preserved across two jobs, with the actual sign-in only happening in the first one. I have used this together with Terraform images for further steps and it works like a charm so far.
This is very early, so some hardening might be required for production.
Hope this example gives you some ideas on how to solve this!
.gcp_auth_before: &gcp_auth_before
  - export GOOGLE_APPLICATION_CREDENTIALS=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export GOOGLE_GHA_CREDS_PATH=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export GOOGLE_CLOUD_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export CLOUDSDK_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export CLOUDSDK_CORE_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export GCP_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export GCLOUD_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)

.gcp-auth:
  artifacts:
    paths:
      - _auth/
  before_script:
    *gcp_auth_before

stages:
  - auth
  - debug

auth:
  stage: auth
  image: "google/cloud-sdk:slim"
  variables:
    SERVICE_ACCOUNT_EMAIL: "... service account email ..."
    WORKLOAD_IDENTITY_PROVIDER: "projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL/providers/PROVIDER"
    GOOGLE_CLOUD_PROJECT: "... project id ...."
  artifacts:
    paths:
      - _auth/
  script:
    - |
      mkdir -p _auth
      echo "$CI_JOB_JWT_V2" > $CI_PROJECT_DIR/_auth/.ci_job_jwt_file
      echo "$GOOGLE_CLOUD_PROJECT" > $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT
      gcloud iam workload-identity-pools create-cred-config \
        $WORKLOAD_IDENTITY_PROVIDER \
        --service-account=$SERVICE_ACCOUNT_EMAIL \
        --service-account-token-lifetime-seconds=600 \
        --output-file=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json \
        --credential-source-file=$CI_PROJECT_DIR/_auth/.ci_job_jwt_file
      gcloud config set project $GOOGLE_CLOUD_PROJECT
    - "export GOOGLE_APPLICATION_CREDENTIALS=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json"
    - "gcloud auth login --cred-file=$GOOGLE_APPLICATION_CREDENTIALS"
    - gcloud auth list # DEBUG!!

debug:
  extends: .gcp-auth
  stage: debug
  image: "google/cloud-sdk:slim"
  script:
    - env
    - gcloud auth list
    - gcloud storage ls
Your two GitLab jobs run on separate pods with the Kubernetes runner.
The tf-stuff job doesn't see the authentication done in the gcp-auth job.
To solve this issue, you can put the authentication logic in a separate shell script and reuse that script in the two GitLab jobs. Example:
Authentication shell script gcp_authentication.sh:

echo ${CI_JOB_JWT_V2} > .ci_job_jwt_file
gcloud iam workload-identity-pools create-cred-config ${GCP_WORKLOAD_IDENTITY_PROVIDER} \
  --service-account="${GCP_SERVICE_ACCOUNT}" \
  --output-file=.gcp_temp_cred.json \
  --credential-source-file=.ci_job_jwt_file
gcloud auth login --cred-file=`pwd`/.gcp_temp_cred.json
gcloud auth list
# Check if you need to set the GOOGLE_APPLICATION_CREDENTIALS env var to `pwd`/.gcp_temp_cred.json
For the tf-stuff job, you can create a custom Docker image containing gcloud and Terraform, because the hashicorp/terraform image doesn't contain the gcloud CLI natively.
Your Docker image can be added to the GitLab registry.
Your GitLab YAML file:
stages:
  - auth
  - validate

gcp-auth:
  stage: auth
  image: google/cloud-sdk:slim
  script:
    - . ./gcp_authentication.sh

tf-stuff:
  stage: validate
  image:
    name: yourgitlabregistry/your-custom-image:1.0.0
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
    - . ./gcp_authentication.sh
    - export TF_LOG=DEBUG
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
  script:
    - terraform validate
Some explanations:
The same shell script, gcp_authentication.sh, is used in the two GitLab jobs.
A custom Docker image containing Terraform and the gcloud CLI is used for the job handling the Terraform part. This image can be added to the GitLab registry (see the sketch after this list).
In the authentication shell script, check if you need to set the GOOGLE_APPLICATION_CREDENTIALS env var to `pwd`/.gcp_temp_cred.json.
You have to give your service account the permission needed to use GitLab with Workload Identity:
roles/iam.workloadIdentityUser
You can check this example project and the documentation.
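A minimal sketch of such a custom image, based on google/cloud-sdk:slim with a Terraform binary added (the Terraform version is a placeholder to adapt):

# Hypothetical Dockerfile for a custom Terraform + gcloud image
FROM google/cloud-sdk:slim

ARG TERRAFORM_VERSION=1.3.0
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl unzip \
    && curl -fsSLo /tmp/terraform.zip \
       "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip" \
    && unzip /tmp/terraform.zip -d /usr/local/bin \
    && rm /tmp/terraform.zip \
    && rm -rf /var/lib/apt/lists/*

Build and push it to your GitLab registry, then reference it as yourgitlabregistry/your-custom-image:1.0.0 in the tf-stuff job.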
I am running a docker compose network on AWS CodeBuild and I need to pass AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to the docker containers as they need to interact with AWS SSM. What is the best way to get these credentials from CodeBuild and pass them to the docker containers?
Initially, I thought of mounting the credentials directory from CodeBuild as a volume by adding this to each service in the docker-compose.yml file:
volumes:
  - '${HOME}/.aws/credentials:/root/.aws/credentials'
but that did not work, as it seems the ${HOME}/.aws/ folder in the CodeBuild environment did not have any credentials in it.
Using Docker secrets, you can create your secrets:
docker secret create credentials.cnf credentials.cnf
Define your keys in the credentials.cnf file and include it in your compose file as below:
services:
  example:
    image:
    environment:
    secrets:
      - AWS_KEY
      - AWS_SECRET

secrets:
  AWS_KEY:
    file: credentials.cnf
  AWS_SECRET:
    file: credentials.cnf
You can view your secrets with docker secret ls.
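For reference, a secret declared this way should show up as a read-only file under /run/secrets/ inside the container; a small illustrative check (the service and secret names follow the compose file above):

# Run inside the 'example' container (illustrative only)
ls /run/secrets/
cat /run/secrets/AWS_KEY        # contains whatever credentials.cnf holds
cat /run/secrets/AWS_SECRET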
In the environment section of the CodeBuild project you have the option to set an environment variable from a value stored in Parameter Store.
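A rough sketch of that route plus the compose side, assuming hypothetical Parameter Store names and a single service called app (all placeholders):

# buildspec.yml (parameter names are hypothetical)
version: 0.2
env:
  parameter-store:
    AWS_ACCESS_KEY_ID: /myapp/ci/access-key-id
    AWS_SECRET_ACCESS_KEY: /myapp/ci/secret-access-key
phases:
  build:
    commands:
      - docker-compose up --build -d

# docker-compose.yml: pass the variables from the CodeBuild environment into the container
services:
  app:
    build: .
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY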
I have a line in my Dockerfile like this:
FROM 6*********.dkr.ecr.ap-southeast-1.amazonaws.com/*************:ff03401
This ECR repository is owned by another user.
As recommended in this question, I am trying to log in by using these commands in the build section of my buildspec.yml, and then immediately pull that Docker image:
- aws configure set aws_access_key_id $ECR_ACCESS_KEY
- aws configure set aws_secret_access_key $ECR_SECRET_KEY
- eval aws ecr get-login --no-include-email --region ap-southeast-1 --registry-ids 6***********
- docker pull 6***********.dkr.ecr.ap-southeast-1.amazonaws.com/****************:ff03401
When I look at the CodeBuild logs, I see that eval aws ecr get-login... outputs a docker login ... command which, if I run it on my local machine, logs me in successfully and lets me do the docker pull 6******....
In Codebuild, however, docker pull says:
Error response from daemon: Get https://6**********.dkr.ecr.ap-southeast-1.amazonaws.com/v2/******************/manifests/ff03401: no basic auth credentials
I have also tried adding --profile ecrproduction to the first three commands, without success.
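One hedged illustration of the difference being described: eval aws ecr get-login ... only runs the aws command and prints the docker login line, so the printed command still has to be executed, for example via command substitution (account ID and repository kept masked as above):

- aws configure set aws_access_key_id $ECR_ACCESS_KEY
- aws configure set aws_secret_access_key $ECR_SECRET_KEY
# execute the docker login command that get-login prints, rather than only printing it
- $(aws ecr get-login --no-include-email --region ap-southeast-1 --registry-ids 6***********)
- docker pull 6***********.dkr.ecr.ap-southeast-1.amazonaws.com/****************:ff03401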
I'm using Circleci to deploy my project on my AWS S3 bucket.
After many attempts I was finally able to make my config.yml work, and according to the CircleCI interface everything is running successfully.
The problem is that when I access my bucket there's nothing there.
I already tried this:
- run:
    command: "aws s3 sync myAppPath s3://myBucketName"
Could anyone help? I have no errors and everything completes successfully, but there are no files in my bucket.
Thanks in advance
You have to add credentials.
Add environment variables in the project: https://circleci.com/docs/2.0/env-vars/
Then configure .circleci/config.yml:
# deploy to aws s3
deploy:
  docker:
    - image: cibuilds/aws:1.15.73
      environment:
        aws_access_key_id: $AWS_ACCESS_KEY_ID
        aws_secret_access_key: $AWS_SECRET_ACCESS_KEY
  steps:
    - attach_workspace:
        at: ./workspace
    - run:
        name: Deploy to S3 if tests pass and branch is develop
        command: aws s3 sync workspace/public s3://your.bucket/ --delete
Also, just so you know, to debug the AWS CLI you can use the CircleCI CLI: once you connect to the CircleCI job by SSH, in your terminal try:
aws s3 sync workspace/public s3://your.bucket/ --debug
I have been using my Docker Hub account in CircleCI until now, and now for some reason I'm trying to use an image from my ECR repository as the build image in CircleCI (2.0).
But I see that ECR doesn't support public images, so I can't reference my image the way I did for the Docker Hub image,
version: 2
jobs:
  build:
    working-directory: ~/tmp
    docker:
      - image: <dockerhub-name>/<image>
that is, as:
version: 2
jobs:
  build:
    working-directory: ~/tmp
    docker:
      - image: aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
It will throw the error:
no basic auth credentials
In a straightforward operation, it needs to be authenticated via the command:
aws ecr get-login --region <region-name>
and then running:
docker login -u AWS -p <password> -e none https://aws-id.dkr.ecr.eu-central-1.amazonaws.com
I tried putting these commands in the Pre-dependency commands section of the CircleCI plan settings and it didn't work.
Ideas?
What "Pre-dependency commands"? That sounds like you're referring to configuration structure from CircleCI 1.0, which you don't seem to be using.
Because of the way AWS requires you to authenticate with ECR, I wouldn't use an image from there with the docker executor. Either use some random image and then use setup_remote_docker, or use the machine executor.
This doc shows the former, and this one covers the latter.
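A rough sketch of the machine-executor route, assuming the AWS CLI is preinstalled on the machine image and the AWS credentials are set as project environment variables (account ID, region, and image name are placeholders taken from the question):

version: 2
jobs:
  build:
    machine: true
    working_directory: ~/tmp
    steps:
      - checkout
      - run:
          name: Log in to ECR and pull the private image
          command: |
            # get-login prints a docker login command; execute its output
            eval $(aws ecr get-login --region eu-central-1 --no-include-email)
            docker pull aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
      - run:
          name: Use the pulled image for the build
          command: docker run --rm aws-id.dkr.ecr.eu-central-1.amazonaws.com/image sh -c "echo running inside the ECR image"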