AWS Lambda: how can I run AWS CLI commands in Lambda?

I want to run AWS CLI commands from Lambda.
I have a pull request event that triggers when the approval state changes, and whenever it changes I need to run an AWS CLI command from Lambda, but the Lambda function says aws not found!
How do I get the status of PRs in my Lambda function?

Create a Lambda function, build an image and push it to ECR, have the Lambda function reference the image, and then test it with an event. This is a good way to run things like aws s3 sync.
Testing locally:
docker run -p 9000:8080 repo/lambda:latest
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
app.py
import subprocess
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def run_command(command):
    try:
        logger.info('Running shell command: "{}"'.format(command))
        result = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
        logger.info(
            "Command output:\n---\n{}\n---".format(result.stdout.decode("UTF-8"))
        )
    except Exception as e:
        logger.error("Exception: {}".format(e))
        return False
    return True


def handler(event, context):
    run_command('aws s3 ls')
Dockerfile (installs AWS CLI v2; add a requirements file if needed)
FROM public.ecr.aws/lambda/python:3.9
RUN yum -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.0.30.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install
COPY app.py ${LAMBDA_TASK_ROOT}
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
CMD [ "app.handler" ]
Makefile (make all runs login, build, tag, and push to the ECR repo)
ROOT:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
IMAGE_NAME:=repo/lambda
ECR_TAG:="latest"
AWS_REGION:="us-east-1"
AWS_ACCOUNT_ID:="xxxxxxxxx"
REGISTRY_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${IMAGE_NAME}
REGISTRY_URI_WITH_TAG=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${IMAGE_NAME}:${ECR_TAG}
# Login to AWS ECR registry (must have docker running)
login:
	aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY_URI}

build:
	docker build --no-cache -t ${IMAGE_NAME}:${ECR_TAG} .

# Tag docker image
tag:
	docker tag ${IMAGE_NAME}:${ECR_TAG} ${REGISTRY_URI_WITH_TAG}

# Push to ECR registry
push:
	docker push ${REGISTRY_URI_WITH_TAG}

# Pull version from ECR registry
pull:
	docker pull ${REGISTRY_URI_WITH_TAG}

# Build docker image and push to AWS ECR registry
all: login build tag push

The default Lambda environment doesn't provide the AWS CLI; in fact, using it there is quite awkward. Anything the AWS CLI can do, you can do through an SDK such as boto3, which is provided in that environment.
You can, however, include binaries in your Lambda package and execute them.
You can also consider using a container image for your Lambda. You can find information here: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html.
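For example, the aws s3 ls call from the first answer maps onto a few lines of boto3 (a minimal sketch, assuming the function's role is allowed to list buckets; this is an illustration, not the original poster's code):
import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # Equivalent of `aws s3 ls`: list the bucket names in the account
    buckets = [bucket["Name"] for bucket in s3.list_buckets()["Buckets"]]
    print(buckets)
    return buckets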

Related

Automate Docker Run command on Sagemaker's Notebook Instance

I have a Docker image in AWS ECR. I open my SageMaker notebook instance, go to the terminal, and run docker run....
This is how I start my Docker container.
Now I want to automate this process (running my Docker image on the SageMaker notebook instance) instead of typing the docker run commands.
Can I create a cron job on SageMaker, or is there any other approach?
Thanks
For this you can run an inline Bash cell in your SageMaker notebook as follows. This will take your Docker container, build the image, create the ECR repo if it does not exist, and push the image.
%%sh
# Name of algo -> ECR
algorithm_name=your-algo-name
cd container #your directory with dockerfile and other sm components
chmod +x randomForest-Petrol/train #train file for container
chmod +x randomForest-Petrol/serve #serve file for container
account=$(aws sts get-caller-identity --query Account --output text)
# Region, defaults to us-west-2
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region}|docker login --username AWS --password-stdin ${fullname}
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
I am contributing this on behalf of my employer, AWS. My contribution is licensed under the MIT license. See here for a more detailed explanation
https://aws-preview.aka.amazon.com/tools/stackoverflow-samples-license/
A SageMaker notebook instance lifecycle configuration script can be used to run a script when you create a notebook or when it starts. In such a script you can access other AWS resources from your notebook at create or start time, for example pull your ECR images and automate starting the Docker container with a shell script. This sample shows how you can use cron to schedule certain actions and can be modified for your use case.
Refer to more lifecycle config samples on this GitHub page.
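A minimal sketch of what such an on-start script might look like (the repository name and container name below are placeholders, not from the original post):
#!/bin/bash
# on-start lifecycle configuration sketch: pull a private ECR image and start it
set -e

REGION=$(aws configure get region)
ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
IMAGE="${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com/your-repo:latest"   # placeholder repo

# Log in to the private registry and pull the image
aws ecr get-login-password --region "${REGION}" | docker login --username AWS --password-stdin "${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com"
docker pull "${IMAGE}"

# Start the container in the background
docker run -d --name my-container "${IMAGE}"   # placeholder container name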

How can I solve getting Unauthorized Access 401 error when pulling aws deep learning container with docker?

I tried building a detectron2 image with Docker in order to use it with AWS SageMaker. The Dockerfile looks like this:
ARG REGION="eu-central-1"
FROM 763104351884.dkr.ecr.$REGION.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04
RUN pip install --upgrade torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
############# Detectron2 section ##############
RUN pip install \
--no-cache-dir pycocotools~=2.0.0 \
--no-cache-dir https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/detectron2-0.4%2Bcu101-cp36-cp36m-linux_x86_64.whl
ENV FORCE_CUDA="1"
# Build D2 only for Volta architecture - V100 chips (ml.p3 AWS instances)
# ENV TORCH_CUDA_ARCH_LIST="Volta"
# Set a fixed model cache directory. Detectron2 requirement
ENV FVCORE_CACHE="/tmp"
############# SageMaker section ##############
COPY container_training/sku-110k /opt/ml/code
WORKDIR /opt/ml/code
ENV SAGEMAKER_SUBMIT_DIRECTORY /opt/ml/code
ENV SAGEMAKER_PROGRAM training.py
WORKDIR /
ENTRYPOINT ["bash", "-m", "start_with_right_hostname.sh"]
The problem is that when I run the docker build command, it fails at pulling the image from the AWS ECR repository. It throws the error
ERROR [internal] load metadata for
763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu 0.4s ------ > [internal] load metadata for 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu101-ubuntu16.04:
------ failed to solve with frontend dockerfile.v0: failed to create LLB
definition: unexpected status code [manifests
1.6.0-gpu-py36-cu101-ubuntu16.04]: 401 Unauthorized
I have to mention that I successfully log in before trying to build, and my user has full ECR permissions.
You probably logged in to your private ECR account, but not to the shared ECR registry that Docker needs in order to retrieve the PyTorch base image. Log in to it like this:
Enter your region and account ID below, and then execute the following cell to do it.
%%bash
REGION=YOUR_REGION
ACCOUNT=YOUR_ACCOUNT_ID
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin 763104351884.dkr.ecr.$REGION.amazonaws.com
# log in to your private ECR
aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT.dkr.ecr.$REGION.amazonaws.com

Trying to BUILD & PUSH 'tfrecord-processing' Docker image AWS - User denied

I am following this tutorial right here: https://aws.amazon.com/blogs/machine-learning/training-and-deploying-models-using-tensorflow-2-with-the-object-detection-api-on-amazon-sagemaker/ and I am trying to build and push the tfrecord-processing Docker image by executing the following command:
!sh ./docker/build_and_push.sh $image_name
Everything seems to go fine until the very end:
Step 6/7 : COPY code /opt/program
---> 68bc931b454c
Step 7/7 : ENTRYPOINT ["python3", "/opt/program/prepare_data.py"]
---> Running in 68fa1cac7cae
Removing intermediate container 68fa1cac7cae
---> 769c873f471c
Successfully built 769c873f471c
Successfully tagged tfrecord-processing:latest
Pushing image to ECR 382599840224.dkr.ecr.us-east-2.amazonaws.com/tfrecord-processing:latest
The push refers to repository [382599840224.dkr.ecr.us-east-2.amazonaws.com/tfrecord-processing]
f2a18981: Preparing
0de55568: Preparing
2361f986: Preparing
4b3288d4: Preparing
e55f84c6: Preparing
b0f92c14: Preparing
cf4cd527: Preparing
c1f74e01: Preparing
9e4b0fc9: Preparing
e3b79e0a: Preparing
e43735a0: Preparing
3918ca41: Preparing
768f66a4: Preparing
d332a58a: Preparing
f11cbf29: Preparing
a4b22186: Preparing
afb09dc3: Preparing
b5a53aac: Preparing
c8e5063e: Preparing
9e4b0fc9: Waiting
denied: User: arn:aws:sts::382599840224:assumed-role/AmazonSageMaker-ExecutionRole-20210306T151543/SageMaker is not authorized to perform: ecr:InitiateLayerUpload on resource: arn:aws:ecr:us-east-2:382599840224:repository/tfrecord-processing
Here is the code for build_and_push.sh
#!/usr/bin/env bash
# This script shows how to build the Docker image and push it to ECR to be ready for use
# by SageMaker.
# The argument to this script is the image name. This will be used as the image on the local
# machine and combined with the account and region to form the repository name for ECR.
image=$1
if [[ "$image" == "" ]]
then
echo "Usage: $0 <image-name>"
exit 1
fi
# Get the account number associated with the current IAM credentials
account=$(aws sts get-caller-identity --query Account --output text)
if [[ $? -ne 0 ]]
then
exit 25
fi
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1
if [[ $? -ne 0 ]]
then
aws ecr create-repository --repository-name "${image}" > /dev/null
fi
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
cd docker/
echo "Building image with name ${image}"
docker build --no-cache -t ${image} -f Dockerfile .
docker tag ${image} ${fullname}
echo "Pushing image to ECR ${fullname}"
docker push ${fullname}
# Writing the image name to let the calling process extract it without manual intervention:
echo "${fullname}" > ecr_image_fullname.txt
I guess I need to set some roles for my user, but I'm not sure which or where. Please help.
I am wondering if the problem you are seeing is due to:
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
This is supposed to output the docker login command and execute it directly (as the comment says).
You may want to try it outside of the script and see if it generates any error or constructive message.
A reason why this may not work is that this CLI command (aws ecr get-login) is only available in CLI v1. If you are using CLI v2, then you need to use the aws ecr get-login-password command instead. See here for the full syntax.
[UPDATE] I reached out to the team that wrote the blog/repo and they fixed the command to reflect the AWS CLI v2 syntax. Apparently what happened is that the SM Notebook was updated to include the new CLI after the blog was published and that command needed an update. The repo should have the "fix" now.
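For reference, a sketch of what the CLI v2 replacement for that line could look like, reusing the account and region variables the script already defines (this is the general v2 pattern, not necessarily the exact fix the blog team applied):
# CLI v2 equivalent of `$(aws ecr get-login --region ${region} --no-include-email)`
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com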
Per https://stackoverflow.com/a/50684081/11262633, add Elastic Container Registry to the policy AmazonSageMaker-ExecutionPolicy in IAM.
I had to manually edit the JSON - the Visual Editor did not save my change.
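If you would rather use the CLI than edit the policy JSON by hand, one possible alternative (an assumption on my part, not what the answer above did) is to attach AWS's managed ECR power-user policy to the execution role named in the error message:
# Attach the AWS-managed ECR push/pull policy to the SageMaker execution role
# (role name copied from the error above; adjust it to your account)
aws iam attach-role-policy \
  --role-name AmazonSageMaker-ExecutionRole-20210306T151543 \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser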

Docker run using AWS ECR Public Gallery

I'm currently building a Lambda layer using
#!/usr/bin/env bash
build=("pip install -r requirements.txt -t python/ && "
"lots &&",
"more &&",
"commands &&",
"exit")
docker run -v "$PWD/":/var/task \
"amazon/aws-sam-cli-build-image-python3.7" \
/bin/sh -c "${build[*]}"
I'm getting throttled by dockerhub, so I'd like to use the AWS ECR Public Gallery.
I tried:
docker run -v "$PWD/":/var/task \
"public.ecr.aws/lambda/python:3.7" \
/bin/sh -c "${build[*]}"
But I get public.ecr.aws/lambda/python:3.7: No such file or directory
How can I do a docker run and have it pull from the AWS ECR Public Gallery?
Check whether you are already logged in to Docker Hub in the file ~/.docker/config.json:
{
"auths": {
"https://index.docker.io/v1/": {}
},
...
If yes, then log out via:
$ docker logout
Removing login credentials for https://index.docker.io/v1/
Anyone can pull images from the AWS ECR Public Gallery.
Just a side note: the image you are trying to pull will not have the SAM CLI installed. No official image with the SAM CLI has been published on gallery.ecr.aws yet.
You have to bake your own image with the SAM CLI.
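A minimal sketch of how such an image could be baked (untested; installing aws-sam-cli from PyPI on top of the public Lambda Python image is an assumption, not part of the original answer):
# Dockerfile: public ECR Python base with the SAM CLI added
FROM public.ecr.aws/lambda/python:3.7
RUN pip install aws-sam-cli
# The Lambda base image's entrypoint starts the runtime interface client,
# so clear it when using this image as a plain build container
ENTRYPOINT []
Build and tag it locally, then point the docker run above at your own tag instead of public.ecr.aws/lambda/python:3.7.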

Configuring bitbucket pipelines with Docker to connect to AWS

I am trying to set up Bitbucket pipelines to deploy to ECS as here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
These instructions say how to push to Docker Hub, but I want to push the image to Amazon's image repo. I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list and I can run these commands locally with no problems (the keys are defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say that:
"The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files." So I am not sure why it isn't working. My Bitbucket Pipelines configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli
options:
  docker: true
oidc: true
aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access
pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
Try this:
bitbucket-pipeline.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}
options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me#me.me>
COPY requirements.txt requirements.txt
ENV DEBIAN_FRONTEND noninteractive
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN apt-get update && apt-get -y install stuff
ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of the awscli.
You need to use a Docker image with a more recent awscli.
For my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket.
Small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})