I am running a docker compose network on AWS CodeBuild and I need to pass AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to the docker containers as they need to interact with AWS SSM. What is the best way to get these credentials from CodeBuild and pass them to the docker containers?
Initially, I thought of mounting the credentials directory from CodeBuild as a volume by adding this to each service in the docker-compose.yml file:
volumes:
  - '${HOME}/.aws/credentials:/root/.aws/credentials'
but that did not work, as the ${HOME}/.aws/ folder in the CodeBuild environment does not contain any credentials.
Using Docker secrets, you can create your secret:
docker secret create credentials.cnf credentials.cnf
Define your keys in the credentials.cnf file and reference it in your compose file as below:
services:
  example:
    image: <your-image>
    secrets:
      - credentials.cnf

secrets:
  credentials.cnf:
    file: ./credentials.cnf
You can view your secrets with docker secret ls.
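Inside the container the secret shows up as a file under /run/secrets/. A minimal sketch of pointing the AWS SDK/CLI at it, assuming credentials.cnf is in the standard ini credentials format:
services:
  example:
    image: <your-image>
    environment:
      # the AWS SDK/CLI reads credentials from this file instead of ~/.aws/credentials
      - AWS_SHARED_CREDENTIALS_FILE=/run/secrets/credentials.cnf
    secrets:
      - credentials.cnf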
In the Environment section of the CodeBuild project you have the option to set environment variables from values stored in Parameter Store.
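Equivalently you can declare them in the buildspec; a minimal sketch, where the parameter names /myapp/aws_access_key_id and /myapp/aws_secret_access_key are hypothetical:
version: 0.2
env:
  parameter-store:
    # CodeBuild injects the SSM parameter values as these environment variables
    AWS_ACCESS_KEY_ID: /myapp/aws_access_key_id
    AWS_SECRET_ACCESS_KEY: /myapp/aws_secret_access_key
phases:
  build:
    commands:
      # docker compose can then substitute ${AWS_ACCESS_KEY_ID} / ${AWS_SECRET_ACCESS_KEY}
      - docker compose up -d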
Related
I am using Single sign-on (SSO) authentication with AWS.
In the terminal, I sign into my SSO account, successfully:
aws sso login --profile dev
Navigating to the directory of the docker-compose.yml file, and using Docker in an Amazon ECS context, the command docker compose up -d fails with:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have deleted the old (non-SSO) access keys and profiles in:
~/.aws/config
~/.aws/credentials
So now all that is present in the above files is my SSO profile.
Before SSO (using IAM users), docker compose up -d worked as expected, so I believe the problem is that Docker is having difficulty connecting to AWS via SSO on the CLI.
Any help here is much appreciated.
Docs on Docker ECS integration: https://docs.docker.com/cloud/ecs-integration/
The docker-compose.yml file looks like this:
version: "3.4"
x-aws-vpc: "vpc-xxxxx"
x-aws-cluster: "test"
x-aws-loadbalancer: "test-nlb"
services:
  test:
    build:
      context: ./
      dockerfile: Dockerfile
      target: development
    image: xxx.dkr.ecr.eu-west-1.amazonaws.com/xxx:10
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - ENABLE_SWAGGER=${ENABLE_SWAGGER:-true}
      - LOGGING_LEVEL=${LOGGING_LEVEL:-INFO}
    ports:
      - "9090:9090"
I've installed the Amazon ECR credential helper (from GitHub) on our EC2 instance and got it working for my account. What I want to do is use it during my GitLab CI/CD pipeline, where my gitlab-runner is actually running inside a Docker container and spawns new containers for the build, test and deploy phases. This is what our test phase looks like now:
image: docker:stable

run_tests:
  stage: test
  tags:
    - test
  before_script:
    - echo "Starting tests for CI_COMMIT_SHA=$CI_COMMIT_SHA"
    - docker run --rm mikesir87/aws-cli aws ecr get-login-password | docker login --username AWS --password-stdin $IMAGE_URL
  script:
    - docker run --rm $IMAGE_URL:$CI_COMMIT_SHA npm test
This works fine, but what I'd like to see if I could get working is the following:
image: docker:stable

run_tests:
  image: $IMAGE_URL:$CI_COMMIT_SHA
  stage: test
  tags:
    - test
  script:
    - npm test
When I try the second option I get a "no basic auth credentials" error. So I'm wondering if there is a way to get the credential helper to apply to the job container without having to install the credential helper on the image itself.
Configure your runner to use the credential helper with the DOCKER_AUTH_CONFIG environment variable. A convenient way to do this is to bake it all into your runner image.
So, your gitlab-runner image should include the docker-credential-ecr-login binary (or you should mount it in from the host).
FROM gitlab/gitlab-runner:v14.3.2
COPY bin/docker-credential-ecr-login /usr/local/bin/docker-credential-ecr-login
Then, when you call gitlab-runner register, pass in the DOCKER_AUTH_CONFIG environment variable using the --env flag as follows:
AUTH_ENV="DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"

gitlab-runner register \
  --non-interactive \
  ...
  --env "${AUTH_ENV}" \
  --env "AWS_SDK_LOAD_CONFIG=true" \
  ...
You can also set this equivalently in the config.toml, instance CI/CD variables, or anywhere CI/CD variables are set (group, project, yaml, trigger, etc).
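For example, the equivalent in config.toml would look roughly like this (the runner name and executor shown are placeholders):
[[runners]]
  name = "my-runner"
  executor = "docker"
  # same values as the --env flags above
  environment = [
    "DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }",
    "AWS_SDK_LOAD_CONFIG=true"
  ]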
As long as your EC2 instance (or ECS task role if running the gitlab-runner as an ECS task) has permission to pull the image, your jobs will be able to pull down images from ECR declared in image: sections.
However, this will NOT necessarily let you automatically pull images using docker-in-docker (e.g. invoking docker pull within the script: section of a job). This can be configured (as it seems you already have it working), but may require additional setup, depending on your runner and IAM configuration.
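For that docker-in-docker case, one hedged sketch is to give the Docker CLI inside the job the same credsStore configuration, assuming the docker-credential-ecr-login binary is available on the PATH of the job image:
before_script:
  - mkdir -p ~/.docker
  # tell the docker CLI in the job to resolve registry credentials via the ECR helper
  - echo '{ "credsStore": "ecr-login" }' > ~/.docker/config.json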
I have an app using:
SAM
AWS S3
AWS Lambda based on Docker
AWS SAM pipeline
GitHub Actions
In the Dockerfile I have:
RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
Resulting in the error message:
Step 6/8 : RUN aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
---> Running in 786873b916db
fatal error: Unable to locate credentials
Error: InferenceFunction failed to build: The command '/bin/sh -c aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz' returned a non-zero code: 1
I need to find a way to store the credentials in a secure manner. Is it possible with GitHub secrets or something?
Thanks
My solution may be a bit longer, but I feel it solves your problem:
It does not expose any secrets
It does not require any manual work
It is easy to change your AWS keys later if required.
Steps:
You can add the AWS keys as secrets in GitHub Actions (since you already mentioned GitHub Actions).
In your GitHub CI/CD flow, before you build the Dockerfile, you can create an AWS credentials file:
- name: Configure AWS credentials
  env:
    ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
    SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: |
    echo "
    [default]
    aws_access_key_id = $ACCESS_KEY
    aws_secret_access_key = $SECRET_ACCESS_KEY
    " > credentials
In your Dockerfile, add instructions to COPY this credentials file and move it to where the AWS CLI expects it:
COPY credentials credentials
RUN mkdir -p ~/.aws
RUN mv credentials ~/.aws/credentials
Changing your credentials later only requires updating the GitHub Actions secrets.
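As a design note, if you build the image with docker build directly (BuildKit enabled), a hedged alternative sketch is a secret mount, which keeps the file out of the image layers entirely; the secret id aws and the tag myimage are made-up names:
# syntax=docker/dockerfile:1
# the credentials file is mounted only for this RUN step and never stored in a layer
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://mylambda/distilBERT distilBERT.tar.gz
and build it with:
DOCKER_BUILDKIT=1 docker build --secret id=aws,src=credentials -t myimage .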
Docker by default does not have access to the .aws folder on the host machine. You could either pass the AWS credentials as environment variables to the Docker image:
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=...
Keep in mind, hardcoding AWS credentials in a Dockerfile is bad practice. To avoid this, you can pass the environment variables at runtime using docker run -e MYVAR1 or docker run --env MYVAR2=foo arguments. Another solution would be to use an .env file for the environment variables.
A more involved solution would be to mount the host machine's ~/.aws folder as a volume in the container.
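A minimal sketch of both runtime options, assuming a hypothetical image name my-image:
# pass credentials as environment variables at runtime (values come from the host environment)
docker run --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY my-image

# or mount the host's AWS config/credentials read-only instead of passing keys at all
docker run --rm -v ~/.aws:/root/.aws:ro my-image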
I'm trying to deploy a docker container with multiple services to ECS. I've been following this article which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however in the basic example from the article when I run
docker compose up
in order to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My Docker client is logged in to ECR using:
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my aws CLI has AmazonECS_FullAccess as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources"
I read in mikemaccana's answer to "pull access denied repository does not exist or may require docker login" that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from Docker Hub (i.e. give AWS your Docker Hub username and password), but I can't get the 'auth' syntax to work in my YAML file. This is my YAML file that runs tomcat and mariadb locally:
version: "2"
services:
database:
build:
context: ./tba-database
image: tba-database
# set default mysql root password, change as needed
environment:
MYSQL_ROOT_PASSWORD: password
# Expose port 3306 to host. Not for the application but
# handy to inspect the database from the host machine.
ports:
- "3306:3306"
restart: always
webserver:
build:
context: ./tba-webserver
image: tba-webserver
# mount point for application in tomcat
volumes:
- ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
links:
- database:tba-database
# open ports for tomcat and remote debugging
ports:
- "8080:8080"
- "8000:8000"
restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening is that when you run docker compose up we ignore the build phase and only use the image field. The containers being deployed on ECS/Fargate then try to pull the image tba-database, and that pull is what the deployment is complaining about, because the image doesn't exist in any registry. You need an extra step to push your image to either GitHub's registry or ECR before you can bring it to life using docker compose up in the ecs context.
You also probably need to change the compose version ("2" is very old).
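A rough sketch of that extra step, assuming a hypothetical account ID 123456789012, region eu-west-1, and an existing ECR repository named tba-database (the image: fields in the compose file would then point at the ECR URIs):
# authenticate the local Docker client against ECR
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# tag and push the locally built image so ECS/Fargate can pull it
docker tag tba-database:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/tba-database:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/tba-database:latest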
I've a bitbucket pipeline that must have multiple aws credentials for different duties.
In the first lines, I use a custom ECR image. To pull it, I created an AWS user with only ECR read-only permissions. The access-key and secret-key parameters are the keys of that user.
And in this ECR image, I embedded another AWS user's credentials to do the rest of the work (image push etc.). But somehow, the credentials I used for pulling the base image are also being applied in the steps. Because of this, the image push is being denied.
I tried to use export AWS_PROFILE=deployment but it doesn't help.
Are the credentials used for the base image pull being applied pipeline-wide?
And how can I overcome this situation?
Thank you.
image:
  name: <ECR Image>
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY

pipelines:
  - step:
      name: "Image Build & Push"
      services:
        - docker
      script:
        - export AWS_PROFILE=deployment
        - export ENVIRONMENT=beta
        - echo "Environment is ${ENVIRONMENT}"
        - export DOCKER_IMAGE_BUILDER="${BITBUCKET_REPO_SLUG}:builder"
        - make clean
        - make build BUILD_VER=${BITBUCKET_TAG}.${BITBUCKET_BUILD_NUMBER} \
            APP_NAME=${BITBUCKET_REPO_SLUG} \
            DOCKER_IMAGE_BUILDER=${DOCKER_IMAGE_BUILDER}
        - make test
        - docker tag ....
        - docker push .....
What I would do here instead of baking credentials inside the images:
Use one credential for pulling, tagging and pushing the image; why not use the same user that pulls the base image to push it as well?
If that is something you don't want to do:
Create an IAM role with permission to tag and push the images, and assume this role from the credentials already being exported; there is no need to bake credentials into the images.
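A hedged sketch of that role assumption inside the step's script, where the account ID, region and role name are placeholders:
# assume a role that can push to ECR, using the credentials already exported in the step
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/ecr-push-role \
  --role-session-name bitbucket-push \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com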
I found the following example in the documentation
script:
  # build the image
  - docker build -t my-docker-image .
  # use the pipe to push to AWS ECR
  - pipe: atlassian/aws-ecr-push-image:1.2.2
    variables:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
      AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
      IMAGE_NAME: my-docker-image
      TAGS: '${BITBUCKET_TAG} latest'
OpenID Connect is also a nice feature: https://support.atlassian.com/bitbucket-cloud/docs/deploy-on-aws-using-bitbucket-pipelines-openid-connect/
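A hedged sketch of that approach, assuming an IAM role whose trust policy accepts the Bitbucket OIDC provider (the ARN, region and step name below are hypothetical); with oidc: true the step exposes BITBUCKET_STEP_OIDC_TOKEN, and the AWS CLI/SDK can pick it up through the web-identity environment variables:
- step:
    name: "Push with OIDC (no long-lived keys)"
    oidc: true
    services:
      - docker
    script:
      # hand the step's OIDC token to the AWS CLI/SDK via web-identity variables
      - export AWS_REGION=eu-west-1
      - export AWS_ROLE_ARN=arn:aws:iam::123456789012:role/bitbucket-deploy
      - echo "${BITBUCKET_STEP_OIDC_TOKEN}" > web-identity-token
      - export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token
      - aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com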