The CodeBuild portion of my pipeline keeps failing with the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image. CannotPullContainerError: Error response from daemon: pull access denied for 123456789.dkr.ecr.us-east-1.amazonaws.com/diag_test, repository does not exist or may require 'docker login': denied: User: CodeBuild
I did some initial research and saw that the IAM role it was using might not have enough permissions, so I attached the AmazonEC2ContainerRegistryFullAccess policy to the role and tried again - same result.
I verified the URI is correct.
What am I missing?
buildspec.yaml below:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 12345678.dkr.ecr.us-east-1.amazonaws.com
      - REPOSITORY_URI=12345678.dkr.ecr.us-east-1.amazonaws.com/diag_test
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"diag_test","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
Thanks in advance for the assist! :)
If you pull an ECR image from within the CodeBuild build itself, you need to log in first, for example:
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ACCOUNT_NUMBER.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Your buildspec already does that with docker login. But this error is raised before the buildspec runs: CodeBuild cannot pull the custom image configured as the project's build environment. If you use a custom ECR image for CodeBuild, you need to grant CodeBuild pull access, either through an ECR repository policy or through the project's service role, depending on the image pull credentials setting.
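For the custom-image case, a minimal sketch of an ECR repository policy that lets the CodeBuild service pull the image, applied with aws ecr set-repository-policy (the Sid is arbitrary and the diag_test repository name is taken from the question), could look like this:
aws ecr set-repository-policy --repository-name diag_test --policy-text '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCodeBuildPull",
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}'
If the project's image pull credentials are set to the service role instead of the CodeBuild defaults, the AmazonEC2ContainerRegistryFullAccess policy you already attached to that role should cover these actions.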
I am using a Bitbucket pipeline to publish artifacts to AWS CodeArtifact. Everything runs fine, but the authorization token is only valid for 12 hours, so I have to update the password every time. Could anyone guide me on how to automate this?
EDIT: I was finally able to solve it myself.
pipelines:
  default:
    - step:
        name: test
        image: atlassian/pipelines-awscli
        script:
          - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
          - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
          - export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
          - aws codeartifact get-authorization-token --domain XXXXX --domain-owner XXXXXx --query authorizationToken --output text > pass.txt
          - value=$(<pass.txt)
          - echo $value
          - echo "export value=$value" > set_env.sh
        artifacts:
          - set_env.sh
    - step:
        name: maven
        image: maven:3.8.1
        caches:
          - maven
        script: # Modify the commands below to build your repository.
          - source set_env.sh
          - echo $value
          - sed -i 's/passwd12/'"$value"'/g' ./settings.xml
          - cat settings.xml
          - mvn clean deploy -s settings.xml -P snapshot
I didn't realize Bitbucket had global, account-wide workspace variables. Some were already defined for our other repos, so I added variables to hold the AccessKeyId and SecretAccessKey values for our npm registry in CodeArtifact.
Prior to npm install, I create a named AWS profile in the pipelines.yml file:
- aws configure --profile codeartifactuser set aws_access_key_id $AWS_ACCESS_KEY_ID_NPM
- aws configure --profile codeartifactuser set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_NPM
Then use that to make the call to authenticate and get a new token:
aws codeartifact login --tool npm --repository <repository> --domain <domain> --namespace <namespace> --profile codeartifactuser
Now our npm install, etc... works as expected.
I used a named profile in case other parts of the build script expect different credentials. Just seems cleaner.
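Putting that together, a rough sketch of the relevant bitbucket-pipelines.yml step might look like the following; the step name, the node:16 image, the region, and the <repository>/<domain> placeholders are assumptions to adapt to your setup:
- step:
    name: npm-install
    image: node:16
    script:
      # Install the AWS CLI (assumes a Debian-based image)
      - apt-get update && apt-get install -y awscli
      # Create the named profile from the workspace variables
      - aws configure --profile codeartifactuser set aws_access_key_id $AWS_ACCESS_KEY_ID_NPM
      - aws configure --profile codeartifactuser set aws_secret_access_key $AWS_SECRET_ACCESS_KEY_NPM
      - aws configure --profile codeartifactuser set region us-east-1
      # Fetch a fresh CodeArtifact token and point npm at the registry
      - aws codeartifact login --tool npm --repository <repository> --domain <domain> --profile codeartifactuser
      - npm ci
Because the token is fetched at the start of every run, the 12-hour expiry stops being an issue.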
So far, in my buildspec.yml file, I can create a Docker image and store it in the ECR repository (I am using CodePipeline). My question is: how do I deploy it to my ECS instance through the buildspec.yml using AWS CLI commands?
I am sharing my buildspec.yaml file; have a look:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Setting timestamp for container tag
      - echo `date +%s` > timestamp
      - echo Logging into Amazon ECR...
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION)
  build:
    commands:
      - echo Building and tagging container
      - docker build -t $REPOSITORY_NAME .
      - docker tag $REPOSITORY_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
  post_build:
    commands:
      - echo Pushing docker image
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
      - echo Preparing CloudFormation Artifacts
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_KEY task-definition.template
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_PARAMS_KEY cf-config.json
artifacts:
  files:
    - task-definition.template
    - cf-config.json
You can extend this with more commands for the ECS side; I have written a template that is deployed through CloudFormation.
You can also write simple AWS CLI commands to create the cluster and pull the images; check the AWS documentation: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
I'm also sharing my own Git repo, check it out for more info: https://github.com/harsh4870/ECS-CICD-pipeline
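For example, a minimal sketch of an extra post_build command that redeploys an ECS service after the push (assuming an existing cluster named my-cluster and a service named my-service whose task definition references a mutable tag such as :latest) could be:
- aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
If the task definition pins an immutable tag instead, first register a new revision with aws ecs register-task-definition pointing at the new image URI, then run aws ecs update-service --task-definition <family:revision>.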
I set up a Docker registry (ECR) on AWS. From my GitLab repository I'd like to set up CI to automatically create images and push them to the registry.
I followed a tutorial to set everything up, but when running the example I receive the error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My yml file looks like this
image: docker:latest
variables:
  REPOSITORY_URL: <aws-url>/<registry>/outsite-slackbot
services:
  - docker:dind
before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli
stages:
  - build
build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)
There is no problem with the Dockerfile; the issue is that you can't connect to the Docker daemon. So check these steps:
Are you logged in as root? (sudo su or sudo -i)
Start the Docker service (service docker start)
Then follow the tutorial :)
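As a quick sanity check, something like the following verifies that the daemon is reachable (the service command assumes a SysV-style init; on systemd hosts use systemctl start docker):
sudo -i                 # become root, or prefix the commands with sudo
service docker start    # start the daemon if it is not already running
docker info             # only succeeds if the client can reach the daemon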
I'm new to AWS. I want to set up a private Docker repository on an AWS ECS container instance. I created a repository named name. The example push commands shown by AWS are working.
aws ecr get-login --region us-west-2
docker build -t name .
docker tag name:latest ############.dkr.ecr.us-west-2.amazonaws.com/name:latest
docker push ############.dkr.ecr.us-west-2.amazonaws.com/name:latest
But with these commands I built and pushed an image named name, and I want to build an image named foo. So I altered the commands to:
docker build -t foo .
docker tag foo ###########.dkr.ecr.us-west-2.amazonaws.com/name/foo
docker push ###########.dkr.ecr.us-west-2.amazonaws.com/name/foo
This should work, but it doesn't. After a period of retries I get the error:
The push refers to a repository [###########.dkr.ecr.us-west-2.amazonaws.com/name/foo]
8cc63cf4528f: Retrying in 1 second
...
name unknown: The repository with name 'name/foo' does not exist in the registry with id '############'
Does AWS really require a dedicated repository for every image i want to push?
The EC2 Container Registry requires an image Repository to be setup for each image "name" or "namespace/name" you want to publish to the registry.
You can publish any :tags you want in each Repository though (The default limit is 100 tags).
I haven't seen anywhere in the AWS documentation that specifically states the repository-to-image-name mapping, but it's implied by Creating a Repository - Section 6d in the ECR User Guide.
The Docker image spec includes its definition of a Repository:
Repository
A collection of tags grouped under a common prefix (the name component before :). For example, in an image tagged with the name my-app:3.1.4, my-app is the Repository component of the name. A repository name is made up of slash-separated name components, optionally prefixed by a DNS hostname. The hostname must comply with standard DNS rules, but may not contain _ characters. If a hostname is present, it may optionally be followed by a port number in the format :8080. Name components may contain lowercase characters, digits, and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
You need to create a repository for each image name, but the image name can be of the form "mycompanyname/helloworld". So you create mycompanyname/app1, mycompanyname/app2, etc
aws ecr create-repository --repository-name mycompanyname/helloworld
aws ecr create-repository --repository-name mycompanyname/app1
aws ecr create-repository --repository-name mycompanyname/app2
docker tag helloworld:latest xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/helloworld:latest
docker push xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/helloworld:latest
docker tag app1:latest xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/app1:latest
docker push xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/app1:latest
I tried the following steps and confirmed working for me:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com
aws ecr create-repository --repository-name test
docker build -t test .
docker tag test:latest xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/test:latest
docker push xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/test:latest
In addition to the above answer: I came across this today because the login command changed with aws-cli v2, so posting it as an answer might help others.
The aws-cli v1 login command no longer works.
V1
$(aws ecr get-login --no-include-email)
To push image to ECR using aws-cli v2 you need
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
Then you are okay to build and push
docker build -t myrepo .
docker tag myrepo:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/myrepo
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myrepo
Typically, one image per repository is a clean approach; that is why AWS increased the limits for images per repository and repositories per region from 1,000 to 10,000.
For this I automated a script that reads your public images from a CSV file and pulls them. It then tries to create the repository in ECR and pushes the image to the registry.
Prepare CSV file ecr-images.csv
docker.io/amazon/aws-for-fluent-bit,2.13.0
docker.io/couchdb,3.1
docker.io/bitnami/elasticsearch,7.13.1-debian-10-r0
k8s.gcr.io/kube-state-metrics/kube-state-metrics,v2.0.0
k8s.gcr.io/metrics-server-amd64,v0.3.6
--------------------KEEP THIS LINE AT END-------------------------
Automated script ecr.sh that copies the images to ECR
#!/bin/bash
set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

assert_value() {
  if [ -z "$1" ]; then
    echo "No args: $2"
    exit 1
  fi
}

repository_uri=$1
assert_value "$repository_uri" "repository_uri"

create_repo() {
  ## try to create the repository; failures (e.g. it already exists) are ignored via '|| true'
  aws ecr create-repository --repository-name "$1" --output text || true
}

## Copy Docker images to ECR
COUNTER=0
while IFS=, read -r dockerImage tag; do
  outputImage=$(echo "$dockerImage" | sed -E 's/(\w+?\.)+\w+?\///')
  outputImageUri="$repository_uri/$outputImage"
  # shellcheck disable=SC2219
  let COUNTER=COUNTER+1
  echo "--------------------------------------------------------------------------"
  echo "$COUNTER => $dockerImage:$tag pushing to $outputImageUri:$tag"
  echo "--------------------------------------------------------------------------"
  docker pull "$dockerImage:$tag"
  docker tag "$dockerImage:$tag" "$outputImageUri:$tag"
  create_repo "$outputImage"
  docker push "$outputImageUri:$tag"
done <"$SCRIPT_DIR/ecr-images.csv"
Run
repository_uri=<ecr_account_id>.dkr.ecr.<ecr_region>.amazonaws.com
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin $repository_uri
./ecr.sh $repository_uri
Build an image in an Azure DevOps pipeline and push it to AWS ECR (for EKS):
parameters:
  - name: succeed
    displayName: Succeed or fail
    type: boolean
    default: false
trigger:
  - main
  - releases/*
pool:
  vmImage: "windows-latest"
stages:
  - stage: init
    jobs:
      - job: init
        continueOnError: false
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: 'docker'
              repository: 'ecr-name'
              command: 'build'
              Dockerfile: '**/Dockerfile'
              tags: 'latest'
          - task: ECRPushImage@1
            inputs:
              awsCredentials: 'aws credentilas'
              regionName: 'us-east-1'
              imageSource: 'any-name'
              sourceImageName: 'ecr-name'
              sourceImageTag: 'latest'
              repositoryName: 'ecr-name'
              pushTag: 'latest'
Create a repo per application:
aws ecr create-repository --repository-name worker --region us-east-1
aws ecr create-repository --repository-name gateway --region us-east-1
Login to registry
The user name AWS is fixed for all ECR registry logins.
aws ecr get-login-password \
--region us-east-1 \
| docker login \
--username AWS \
--password-stdin <aws_12_digit_account_number>.dkr.ecr.us-east-1.amazonaws.com
Push image
docker build -f Dockerfile -t <123456789012>.dkr.ecr.us-east-1.amazonaws.com/worker:v1.0.0 .
docker push <123456789012>.dkr.ecr.us-east-1.amazonaws.com/worker:v1.0.0
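To confirm the push landed, one option (assuming the same account and region) is to list the images in the repository:
aws ecr describe-images --repository-name worker --region us-east-1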