Docker run using AWS ECR Public Gallery

I'm currently building a Lambda layer using:
#!/usr/bin/env bash
build=("pip install -r requirements.txt -t python/ &&"
  "lots &&"
  "more &&"
  "commands &&"
  "exit")
docker run -v "$PWD/":/var/task \
  "amazon/aws-sam-cli-build-image-python3.7" \
  /bin/sh -c "${build[*]}"
I'm getting throttled by Docker Hub, so I'd like to use the AWS ECR Public Gallery.
I tried:
docker run -v "$PWD/":/var/task \
  "public.ecr.aws/lambda/python:3.7" \
  /bin/sh -c "${build[*]}"
But I get: public.ecr.aws/lambda/python:3.7: No such file or directory
How can I do a docker run and have it pull from the AWS ECR Public Gallery?

Check whether you are already logged in to Docker Hub by looking at ~/.docker/config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  ...
If so, log out:
$ docker logout
Removing login credentials for https://index.docker.io/v1/
Anyone can pull images from the AWS ECR Public Gallery; no login is required.
As a side note, the image you are trying to pull will not have the SAM CLI installed; no official image with the SAM CLI has been published on gallery.ecr.aws yet. You have to bake your own image with the SAM CLI.
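A minimal sketch of that flow (the image tag is the one from the question; assumes an anonymous pull is all you need):
# Remove the Docker Hub credentials, then pull anonymously from ECR Public
docker logout
docker pull public.ecr.aws/lambda/python:3.7
If a stale ECR Public login is the culprit instead, docker logout public.ecr.aws clears it.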

Related

Pushing a docker image to aws ecr gives no basic auth credentials

When I try to push a Docker image to AWS ECR, it fails with the following:
sudo docker push xxxxxxx.dkr.ecr.us-east-2.amazonaws.com/my-app:1.0
7d9a9c94af8d: Preparing
f77d412f54b5: Preparing
629960860aca: Preparing
f019278bad8b: Preparing
8ca4f4055a70: Preparing
3e207b409db3: Waiting
no basic auth credentials
although logging in succeeds:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin xxxx.dkr.ecr.us-east-2.amazonaws.com
Login Succeeded
And the /home/[my user]/.docker/config.json file has the following data:
{
  "auths": {
    "xxxx.dkr.ecr.us-east-2.amazonaws.com": {
      "auth": "QVsVkhaRT...."
    }
  }
}
I am using AWS CLI version 2.3.5:
aws --version
aws-cli/2.3.5 Python/3.8.8 Linux/5.8.0-63-generic exe/x86_64.ubuntu.20 prompt/off
and Docker version 20.10.10:
docker --version
Docker version 20.10.10, build b485636
How can I solve this problem?
You're running sudo docker push.
This means the credentials in your user account won't be used; Docker instead looks for (nonexistent) credentials in the root user's account.
Changing your command to plain docker push should suffice.
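As a sketch, the whole flow run as the same non-root user (repository URI copied from the question):
# Log in and push as the same user so both read the same ~/.docker/config.json
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin xxxxxxx.dkr.ecr.us-east-2.amazonaws.com
docker push xxxxxxx.dkr.ecr.us-east-2.amazonaws.com/my-app:1.0
If you genuinely need sudo, run the docker login under sudo as well so the credentials land in root's config.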

AWS lambda: how can I run aws cli commands in lambda

I want to run AWS CLI commands from Lambda.
I have a pull request event that triggers when the approval state changes, and whenever it changes I need to run an AWS CLI command from Lambda, but the Lambda function says aws not found!
How do I get the status of PRs in my Lambda function?
Create a Lambda function, build an image and push it to ECR, have the Lambda function reference the image, and then test it with an event. This is a good way to run things like aws s3 sync.
Testing locally:
docker run -p 9000:8080 repo/lambda:latest
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
app.py
import subprocess
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def run_command(command):
    try:
        logger.info('Running shell command: "{}"'.format(command))
        result = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
        logger.info(
            "Command output:\n---\n{}\n---".format(result.stdout.decode("UTF-8"))
        )
    except Exception as e:
        logger.error("Exception: {}".format(e))
        return False
    return True

def handler(event, context):
    run_command('aws s3 ls')
Dockerfile (installs awscliv2; a requirements file can be added if needed)
FROM public.ecr.aws/lambda/python:3.9
RUN yum -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.0.30.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install
COPY app.py ${LAMBDA_TASK_ROOT}
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "app.handler" ]
Makefile (make all runs login, build, tag, and push to the ECR repo)
ROOT:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
IMAGE_NAME:=repo/lambda
ECR_TAG:="latest"
AWS_REGION:="us-east-1"
AWS_ACCOUNT_ID:="xxxxxxxxx"
REGISTRY_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${IMAGE_NAME}
REGISTRY_URI_WITH_TAG=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${IMAGE_NAME}:${ECR_TAG}

# Login to AWS ECR registry (must have docker running)
login:
	aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY_URI}

# Build docker image
build:
	docker build --no-cache -t ${IMAGE_NAME}:${ECR_TAG} .

# Tag docker image
tag:
	docker tag ${IMAGE_NAME}:${ECR_TAG} ${REGISTRY_URI_WITH_TAG}

# Push to ECR registry
push:
	docker push ${REGISTRY_URI_WITH_TAG}

# Pull version from ECR registry
pull:
	docker pull ${REGISTRY_URI_WITH_TAG}

# Build docker image and push to AWS ECR registry
all: login build tag push
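Since make hands each recipe to the shell, the quoted defaults above expand cleanly, and any variable can be overridden per invocation. A hypothetical run (account ID and region are placeholders):
make all AWS_ACCOUNT_ID=123456789012 AWS_REGION=us-east-1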
The default Lambda environment doesn't provide the AWS CLI. In fact, the idea of using it there is quite awkward: anything the AWS CLI can do, you can do through an SDK such as boto3, which is provided in that environment.
You can, however, include binaries in your Lambda, if you please, and then execute them.
You can also consider using a container image for your Lambda. You can find information here: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
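As a rough sketch of the boto3 route, here is a handler equivalent in spirit to aws s3 ls (bucket listing only; an illustration, not the asker's code):
import boto3

def handler(event, context):
    # boto3 ships with the Lambda Python runtime, so no extra packaging is needed
    s3 = boto3.client("s3")
    # Roughly what `aws s3 ls` prints: the bucket names
    return [bucket["Name"] for bucket in s3.list_buckets()["Buckets"]]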

Automate Docker Run command on Sagemaker's Notebook Instance

I have a Docker image in AWS ECR. I open my SageMaker notebook instance, go to the terminal, and docker run ....
This is how I start my Docker container.
Now I want to automate this process (running my Docker image on the SageMaker notebook instance) instead of typing the docker run commands.
Can I create a cron job on SageMaker? Or is there any other approach?
Thanks
For this you can run an inline Bash cell in your SageMaker notebook as follows. This will build your Docker image, create the ECR repo if it does not exist, and push the image.
%%sh
# Name of algo -> ECR
algorithm_name=your-algo-name
cd container #your directory with dockerfile and other sm components
chmod +x randomForest-Petrol/train #train file for container
chmod +x randomForest-Petrol/serve #serve file for container
account=$(aws sts get-caller-identity --query Account --output text)
# Region, defaults to us-west-2
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
    aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${fullname}
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
I am contributing this on behalf of my employer, AWS. My contribution is licensed under the MIT license. See here for a more detailed explanation
https://aws-preview.aka.amazon.com/tools/stackoverflow-samples-license/
A SageMaker notebook instance lifecycle configuration script can be used to run a script when you create a notebook instance or each time you start it. In such a script you can access other AWS resources from your notebook at create or start time, for example pull your ECR images and automate starting the Docker container from a shell script. The script can also use cron to schedule certain actions; modify it per your use case.
Refer to more lifecycle config samples in this github page.
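A rough on-start sketch (region, account ID, and repository name are all placeholders; adapt to your image):
#!/bin/bash
# Hypothetical on-start lifecycle configuration: pull the image from ECR and
# start the container each time the notebook instance starts.
set -e
REGION=us-east-1
ACCOUNT=123456789012
IMAGE="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/my-repo:latest"
aws ecr get-login-password --region "${REGION}" | docker login --username AWS --password-stdin "${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com"
docker pull "${IMAGE}"
docker run -d "${IMAGE}"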

How can I solve getting Unauthorized Access 401 error when pulling aws deep learning container with docker?

I tried building a detectron2 image with Docker, in order to use it with AWS SageMaker. The Dockerfile looks like this:
ARG REGION="eu-central-1"
FROM 763104351884.dkr.ecr.$REGION.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04
RUN pip install --upgrade torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
############# Detectron2 section ##############
RUN pip install \
--no-cache-dir pycocotools~=2.0.0 \
--no-cache-dir https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/detectron2-0.4%2Bcu101-cp36-cp36m-linux_x86_64.whl
ENV FORCE_CUDA="1"
# Build D2 only for Volta architecture - V100 chips (ml.p3 AWS instances)
# ENV TORCH_CUDA_ARCH_LIST="Volta"
# Set a fixed model cache directory. Detectron2 requirement
ENV FVCORE_CACHE="/tmp"
############# SageMaker section ##############
COPY container_training/sku-110k /opt/ml/code
WORKDIR /opt/ml/code
ENV SAGEMAKER_SUBMIT_DIRECTORY /opt/ml/code
ENV SAGEMAKER_PROGRAM training.py
WORKDIR /
ENTRYPOINT ["bash", "-m", "start_with_right_hostname.sh"]
The problem is that when I run the docker build command, it fails at pulling the image from the AWS ECR repository. It throws the error
ERROR [internal] load metadata for 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests 1.6.0-gpu-py36-cu101-ubuntu16.04]: 401 Unauthorized
I should mention that I log in successfully before trying to build, and my user has full ECR permissions.
You probably logged in to your private ECR registry but not to the shared ECR registry that hosts the PyTorch base image. Enter your region and account ID below, then execute the following cell to log in to both:
%%bash
REGION=YOUR_REGION
ACCOUNT=YOUR_ACCOUNT_ID
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin 763104351884.dkr.ecr.$REGION.amazonaws.com
# Log in to your private ECR registry
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com
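With both logins in place, the base image metadata should resolve. A hypothetical build invocation (the tag name is illustrative; --build-arg matches the ARG at the top of the Dockerfile):
docker build --build-arg REGION=eu-central-1 -t detectron2-sagemaker .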

how to use aws jenkins plugin "ecrLogin" in a jenkins step

I am trying to push a Docker image that I build in one Jenkins step. I have read some tutorials that use the Amazon ECR plugin (but it seems it is no longer being developed/adopted); the official AWS plugin from Amazon does come with ecrLogin, but I'm not sure how to use it.
Do I need to put this code into a script{} block?
withAWS(credentials: 'my_credentials'){
    my_loging = ecrLogin()
    sh 'docker --login ${my_loging}'
    sh "docker push my_image_tag"
}
or just do it like I would from my local computer:
withAWS(credentials: 'my_credentials'){
    sh "aws ecr get-login-password --region my_region | docker login --username AWS --password-stdin ecr_url"
    sh "docker push ${docker_full_tag}"
}
The 2nd approach is what I've been using, and it works great.
Just make sure that you've properly set up the AWS CLI for the user that Jenkins uses to execute its pipeline/shell commands.
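If you do want the plugin step, note that ecrLogin() returns the docker login command as a string, so you execute that string rather than passing it as a docker --login flag. A hedged sketch (credentials ID and image variable taken from the question):
withAWS(credentials: 'my_credentials'){
    // ecrLogin() yields a `docker login ...` command string; run it with sh
    sh ecrLogin()
    sh "docker push ${docker_full_tag}"
}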