Push Docker image to Amazon ECR repository - amazon-web-services

I'm new to AWS. I want to set up a private Docker repository for an AWS ECS container instance. I created a repository named name. The example push commands shown by AWS are working.
aws ecr get-login --region us-west-2
docker build -t name .
docker tag name:latest ############.dkr.ecr.us-west-2.amazonaws.com/name:latest
docker push ############.dkr.ecr.us-west-2.amazonaws.com/name:latest
But with these commands I built and pushed an image named name, whereas I want to build an image named foo. So I altered the commands to:
docker build -t foo .
docker tag foo ###########.dkr.ecr.us-west-2.amazonaws.com/name/foo
docker push ###########.dkr.ecr.us-west-2.amazonaws.com/name/foo
This should work, but it doesn't. After a period of retries I get this error:
The push refers to a repository [###########.dkr.ecr.us-west-2.amazonaws.com/name/foo]
8cc63cf4528f: Retrying in 1 second
...
name unknown: The repository with name 'name/foo' does not exist in the registry with id '############'
Does AWS really require a dedicated repository for every image I want to push?

The EC2 Container Registry requires an image Repository to be set up for each image "name" or "namespace/name" you want to publish to the registry.
You can publish any :tags you want in each Repository though (the default limit is 100 tags).
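For example, the same repository can hold multiple tags of the image (account ID is a placeholder, as in the question):
docker tag name:latest ############.dkr.ecr.us-west-2.amazonaws.com/name:v1
docker push ############.dkr.ecr.us-west-2.amazonaws.com/name:v1
docker tag name:latest ############.dkr.ecr.us-west-2.amazonaws.com/name:stable
docker push ############.dkr.ecr.us-west-2.amazonaws.com/name:stable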
I haven't seen anywhere in the AWS documentation that specifically states the repository -> image name mapping, but it's implied by Creating a Repository - Section 6d in the ECR User Guide.
The Docker Image spec includes its definition of a Repository:
Repository
A collection of tags grouped under a common prefix (the name component before :). For example, in an image tagged with the name my-app:3.1.4, my-app is the Repository component of the name. A repository name is made up of slash-separated name components, optionally prefixed by a DNS hostname. The hostname must comply with standard DNS rules, but may not contain _ characters. If a hostname is present, it may optionally be followed by a port number in the format :8080. Name components may contain lowercase characters, digits, and separators. A separator is defined as a period, one or two underscores, or one or more dashes. A name component may not start or end with a separator.
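Applied to your case: 'name/foo' is a valid repository name per the spec, but ECR still needs that repository to exist before you can push to it. A minimal sketch using the names and region from your question:
aws ecr create-repository --repository-name name/foo --region us-west-2
docker tag foo ############.dkr.ecr.us-west-2.amazonaws.com/name/foo
docker push ############.dkr.ecr.us-west-2.amazonaws.com/name/foo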

You need to create a repository for each image name, but the image name can be of the form "mycompanyname/helloworld". So you create mycompanyname/app1, mycompanyname/app2, etc.:
aws ecr create-repository --repository-name mycompanyname/helloworld
aws ecr create-repository --repository-name mycompanyname/app1
aws ecr create-repository --repository-name mycompanyname/app2
docker tag helloworld:latest xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/helloworld:latest
docker push xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/helloworld:latest
docker tag app1:latest xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/app1:latest
docker push xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/mycompanyname/app1:latest
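If you have many applications, a small loop avoids repeating the create call (names here are the same placeholders as above):
for app in helloworld app1 app2; do
  aws ecr create-repository --repository-name "mycompanyname/$app"
done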

I tried the following steps and confirmed they work for me:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com
aws ecr create-repository --repository-name test
docker build -t test .
docker tag test:latest xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/test:latest
docker push xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/test:latest
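To verify the push landed, you can list what's in the repository (this assumes the same test repository and us-west-2 region as above):
aws ecr list-images --repository-name test --region us-west-2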

In addition to the above answer: I came across this today because the login command changed with aws-cli v2, and the aws-cli v1 login command no longer works. Posting it here in case it helps others.
V1
$(aws ecr get-login --no-include-email)
To push an image to ECR using aws-cli v2 you need:
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
Then you are okay to build and push
docker build -t myrepo .
docker tag myrepo:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/myrepo
docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/myrepo
Typically, one image per repository is a clean approach, which is why AWS increased the limits for images per repository and repositories per region from 1,000 to 10,000.
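If you are unsure which login syntax applies to you, check the installed CLI version first; this is just a quick sanity check:
aws --version
# aws-cli/2.x.y ... -> use: aws ecr get-login-password | docker login ...
# aws-cli/1.x.y ... -> $(aws ecr get-login --no-include-email) still works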

For this I automated a script that reads your public images from a CSV file and pulls them. It then tries to create each repository in ECR and pushes the image to the registry.
Prepare the CSV file ecr-images.csv:
docker.io/amazon/aws-for-fluent-bit,2.13.0
docker.io/couchdb,3.1
docker.io/bitnami/elasticsearch,7.13.1-debian-10-r0
k8s.gcr.io/kube-state-metrics/kube-state-metrics,v2.0.0
k8s.gcr.io/metrics-server-amd64,v0.3.6
--------------------KEEP THIS LINE AT END-------------------------
The automated script ecr.sh that copies the images to ECR (the sentinel line above ensures the loop also reads the last real entry if the file lacks a trailing newline):
#!/bin/bash
set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

assert_value() {
  if [ -z "$1" ]; then
    echo "No args: $2"
    exit 1
  fi
}

repository_uri=$1
assert_value "$repository_uri" "repository_uri"

create_repo() {
  ## Try to create the repository; failure is ignored via <|| true>
  aws ecr create-repository --repository-name "$1" --output text || true
}

## Copy Docker images to ECR
COUNTER=0
while IFS=, read -r dockerImage tag; do
  # Strip the registry hostname, keeping only the "namespace/name" part
  outputImage=$(echo "$dockerImage" | sed -E 's/(\w+?\.)+\w+?\///')
  outputImageUri="$repository_uri/$outputImage"
  # shellcheck disable=SC2219
  let COUNTER=COUNTER+1
  echo "--------------------------------------------------------------------------"
  echo "$COUNTER => $dockerImage:$tag pushing to $outputImageUri:$tag"
  echo "--------------------------------------------------------------------------"
  docker pull "$dockerImage:$tag"
  docker tag "$dockerImage:$tag" "$outputImageUri:$tag"
  create_repo "$outputImage"
  docker push "$outputImageUri:$tag"
done <"$SCRIPT_DIR/ecr-images.csv"
Run
repository_uri=<ecr_account_id>.dkr.ecr.<ecr_region>.amazonaws.com
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin $repository_uri
./ecr.sh $repository_uri

# Build an image from an Azure DevOps pipeline to AWS EKS
parameters:
- name: succeed
  displayName: Succeed or fail
  type: boolean
  default: false

trigger:
- main
- releases/*

pool:
  vmImage: "windows-latest"

stages:
- stage: init
  jobs:
  - job: init
    continueOnError: false
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'docker'
        repository: 'ecr-name'
        command: 'build'
        Dockerfile: '**/Dockerfile'
        tags: 'latest'
    - task: ECRPushImage@1
      inputs:
        awsCredentials: 'aws credentials'
        regionName: 'us-east-1'
        imageSource: 'any-name'
        sourceImageName: 'ecr-name'
        sourceImageTag: 'latest'
        repositoryName: 'ecr-name'
        pushTag: 'latest'

Create a repo per application:
aws ecr create-repository --repository-name worker --region us-east-1
aws ecr create-repository --repository-name gateway --region us-east-1
Log in to the registry
The AWS user name is fixed for all registry logins:
aws ecr get-login-password \
--region us-east-1 \
| docker login \
--username AWS \
--password-stdin <aws_12_digit_account_number>.dkr.ecr.us-east-1.amazonaws.com
Push image
docker build -f Dockerfile -t <123456789012>.dkr.ecr.us-east-1.amazonaws.com/worker:v1.0.0 .
docker push <123456789012>.dkr.ecr.us-east-1.amazonaws.com/worker:v1.0.0

Related

building a docker image in AWS with CLI

I was using gcloud before and could build a Docker image on a GCP machine as follows:
gcloud builds submit ./my-docker-dir/ -t eu.gcr.io/<path>/<component>:<tag> --timeout 30m --machine-type e2-highcpu-32
Is there a similar AWS equivalent?
No, there's no such alternative. With AWS you use docker to build & push to ECR.
An example workflow would be:
Create a Dockerfile and build it with:
docker build -t hello-world .
Authenticate your Docker client to the Amazon ECR registry to which you intend to push your image.
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
Create an ECR repository
aws ecr create-repository \
--repository-name hello-world \
--image-scanning-configuration scanOnPush=true \
--region region
Tag the docker image you've built.
docker tag hello-world:latest aws_account_id.dkr.ecr.region.amazonaws.com/hello-world:latest
Push it to your ECR repo.
docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-world:latest

Automate Docker Run command on Sagemaker's Notebook Instance

I have a Docker image in AWS ECR. To start my Docker container, I open my SageMaker notebook instance, go to the terminal, and run docker run ....
Now I want to automate this process (running my Docker image on the SageMaker notebook instance) instead of typing the docker run commands.
Can I create a cron job on Sagemaker? or Is there any other approach?
Thanks
For this you can run an inline Bash script in your SageMaker notebook as follows. It will build your Docker image, create the ECR repo if it does not exist, and push the image.
%%sh

# Name of algo -> ECR
algorithm_name=your-algo-name

cd container # your directory with the Dockerfile and other SM components
chmod +x randomForest-Petrol/train # train file for the container
chmod +x randomForest-Petrol/serve # serve file for the container

account=$(aws sts get-caller-identity --query Account --output text)

# Region, defaults to us-west-2
region=$(aws configure get region)
region=${region:-us-west-2}

fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"

# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
  aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi

# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${fullname}

# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
I am contributing this on behalf of my employer, AWS. My contribution is licensed under the MIT license. See here for a more detailed explanation
https://aws-preview.aka.amazon.com/tools/stackoverflow-samples-license/
A SageMaker notebook instance lifecycle configuration script can be used to run a script when you create a notebook instance or when you start one. In this script you can access other AWS resources from your notebook at create time or start time, for example access your ECR images and automate starting a Docker container using a shell script. The sketch below shows how you can use cron to schedule certain actions; it can be modified per your use case.
More lifecycle config samples are available in this GitHub page.
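As a rough sketch (image URI, schedule, and script contents are hypothetical, not from the question), an on-start lifecycle script could log in to ECR, pull the image, and register a cron entry that runs the container:
#!/bin/bash
# Hypothetical on-start lifecycle configuration script; all names are placeholders
REGION=us-east-1
ACCOUNT=123456789012
IMAGE=$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/my-image:latest
# Authenticate the instance's Docker client against ECR (CLI v2 syntax)
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
docker pull $IMAGE
# Run the container every day at 01:00; adjust the schedule to your use case
(crontab -l 2>/dev/null; echo "0 1 * * * docker run --rm $IMAGE") | crontab -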

Trying to BUILD & PUSH 'tfrecord-processing' Docker image AWS - User denied

I am following this tutorial: https://aws.amazon.com/blogs/machine-learning/training-and-deploying-models-using-tensorflow-2-with-the-object-detection-api-on-amazon-sagemaker/ and I am trying to build and push the tfrecord-processing Docker image by executing the following command:
!sh ./docker/build_and_push.sh $image_name
Everything seems to go fine until the very end:
Step 6/7 : COPY code /opt/program
---> 68bc931b454c
Step 7/7 : ENTRYPOINT ["python3", "/opt/program/prepare_data.py"]
---> Running in 68fa1cac7cae
Removing intermediate container 68fa1cac7cae
---> 769c873f471c
Successfully built 769c873f471c
Successfully tagged tfrecord-processing:latest
Pushing image to ECR 382599840224.dkr.ecr.us-east-2.amazonaws.com/tfrecord-processing:latest
The push refers to repository [382599840224.dkr.ecr.us-east-2.amazonaws.com/tfrecord-processing]
f2a18981: Preparing
0de55568: Preparing
2361f986: Preparing
4b3288d4: Preparing
e55f84c6: Preparing
b0f92c14: Preparing
cf4cd527: Preparing
c1f74e01: Preparing
9e4b0fc9: Preparing
e3b79e0a: Preparing
e43735a0: Preparing
3918ca41: Preparing
768f66a4: Preparing
d332a58a: Preparing
f11cbf29: Preparing
a4b22186: Preparing
afb09dc3: Preparing
b5a53aac: Preparing
c8e5063e: Preparing
9e4b0fc9: Waiting
denied: User: arn:aws:sts::382599840224:assumed-role/AmazonSageMaker-ExecutionRole-20210306T151543/SageMaker is not authorized to perform: ecr:InitiateLayerUpload on resource: arn:aws:ecr:us-east-2:382599840224:repository/tfrecord-processing
Here is the code for build_and_push.sh
#!/usr/bin/env bash

# This script shows how to build the Docker image and push it to ECR to be ready for use
# by SageMaker.

# The argument to this script is the image name. This will be used as the image on the local
# machine and combined with the account and region to form the repository name for ECR.
image=$1

if [[ "$image" == "" ]]
then
  echo "Usage: $0 <image-name>"
  exit 1
fi

# Get the account number associated with the current IAM credentials
account=$(aws sts get-caller-identity --query Account --output text)

if [[ $? -ne 0 ]]
then
  exit 255
fi

# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)

fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest"

# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1
if [[ $? -ne 0 ]]
then
  aws ecr create-repository --repository-name "${image}" > /dev/null
fi

# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)

# Build the docker image locally with the image name and then push it to ECR
# with the full name.
cd docker/
echo "Building image with name ${image}"
docker build --no-cache -t ${image} -f Dockerfile .
docker tag ${image} ${fullname}

echo "Pushing image to ECR ${fullname}"
docker push ${fullname}

# Write the image name to let the calling process extract it without manual intervention:
echo "${fullname}" > ecr_image_fullname.txt
I guess I need to set some roles for my user, but I am not sure which ones or where. Please help.
I am wondering if the problem you are seeing is due to:
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
This is supposed to emit the docker login command and execute it directly (as the comment says).
You may want to try it outside of the script and see if it generates any error or constructive message.
A reason why this may not work is that this CLI command (aws ecr get-login) is only available in CLI v1. If you are using CLI v2, you need to use the aws ecr get-login-password command instead. See here for the full syntax.
[UPDATE] I reached out to the team that wrote the blog/repo and they fixed the command to reflect the AWS CLI v2 syntax. Apparently what happened is that the SM Notebook was updated to include the new CLI after the blog was published and that command needed an update. The repo should have the "fix" now.
Per https://stackoverflow.com/a/50684081/11262633, add Elastic Container Registry to the policy AmazonSageMaker-ExecutionPolicy in IAM.
I had to manually edit the JSON - the Visual Editor did not save my change.
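For reference, a minimal sketch of such a statement, attached inline to the execution role named in the error message (the policy name is arbitrary; tighten the Resource as needed):
# Hypothetical inline policy; the role name must match your SageMaker execution role
aws iam put-role-policy \
  --role-name AmazonSageMaker-ExecutionRole-20210306T151543 \
  --policy-name ECRPushAccess \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }]
  }'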

Build a docker image on AWS Codebuild based on an image pulled from an ECR of another user: "no basic auth credentials"

I have a line in my Dockerfile like this:
FROM 6*********.dkr.ecr.ap-southeast-1.amazonaws.com/*************:ff03401
This ECR is owned by another user.
As recommended in this question, I am trying to log in by using these commands in the build section of my buildspec.yml, and then immediately pull this docker image:
- aws configure set aws_access_key_id $ECR_ACCESS_KEY
- aws configure set aws_secret_access_key $ECR_SECRET_KEY
- eval aws ecr get-login --no-include-email --region ap-southeast-1 --registry-ids 6***********
- docker pull 6***********.dkr.ecr.ap-southeast-1.amazonaws.com/****************:ff03401
When I look at the Codebuild logs, I see that eval aws ecr get-login... outputs a docker login ... command which, if I run it on my local machine, logs me in successfully, and lets me do the docker pull 6******....
In Codebuild, however, docker pull says:
Error response from daemon: Get https://6**********.dkr.ecr.ap-southeast-1.amazonaws.com/v2/******************/manifests/ff03401: no basic auth credentials
I have also tried adding --profile ecrproduction to the first three commands, without success.
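One thing that stands out, though this is only a guess from the snippet: eval aws ecr get-login ... runs the aws command and prints the docker login command to the log, but never executes that printed command. Wrapping the call in $(...) executes the generated login (CLI v1 syntax, as used elsewhere in this thread):
- eval $(aws ecr get-login --no-include-email --region ap-southeast-1 --registry-ids 6***********)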

Configuring bitbucket pipelines with Docker to connect to AWS

I am trying to set up Bitbucket pipelines to deploy to ECS as here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
These instructions say how to push to Docker Hub, but I want to push the image to Amazon's image repo. I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list, and I can run these commands locally with no problems (the keys are defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say that:
The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files.
So I am not sure why it isn't working. My Bitbucket Pipelines configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli

options:
  docker: true

oidc: true
aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access

pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
Try this:
bitbucket-pipeline.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}

options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me@me.me>

COPY requirements.txt requirements.txt

ENV DEBIAN_FRONTEND noninteractive

ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY

RUN apt-get update && apt-get -y install stuff

ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of awscli; you need to use a Docker image with a more recent awscli.
For my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket.
Small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
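If you can move to awscli v2 instead, the equivalent login line would be (assuming the same AWS_DEFAULT_REGION and AWS_REPO_ID variables from the question):
aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${AWS_REPO_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com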