AWS Lambda doesn't automatically pick up the latest image?

I have a Lambda deployed on AWS. The Lambda is deployed as a container image. Whenever we deploy a new image, we have to manually copy-paste the image URI into the Lambda's configuration. Even if the latest image in ECR has the same URI that is already configured in the Lambda, the Lambda keeps using the image from when the configuration was last updated manually. Is there a way to have the Lambda automatically use the latest image deployed to ECR?
Things I have tried:
- Keeping the tag and image name the same during deployment, so the URI of the image stays the same. I then use that URI to configure my Lambda.
- Using "latest" as the tag for my image.
Note: the image is being pushed to ECR by Bitbucket.

This is expected behavior: the Lambda isn't aware that a new image was pushed.
For a function defined as a container image, Lambda resolves the image
tag to an image digest. In Amazon ECR, if you update the image tag to
a new image, Lambda does not automatically update the function.
https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html#update-function-code
After building the image, tag and push it:
docker tag my-image:latest 123456789.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest
docker push 123456789.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest
Also update your Lambda with the new image:
aws lambda update-function-code \
--function-name my-lambda \
--image-uri 123456789.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest
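If you script this in CI, you may also want to wait for the update to complete and confirm which digest the function is now pinned to. A minimal sketch using the standard CLI waiter (same placeholder function name as above):
# Block until Lambda has finished resolving and deploying the new image
aws lambda wait function-updated --function-name my-lambda
# Show the digest-pinned image the function actually runs now
aws lambda get-function --function-name my-lambda \
--query 'Code.ResolvedImageUri' --output text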

The answer Stephan gave guided me to achieve the same thing using Bitbucket Pipelines (my problem needed to be solved on Bitbucket). Here is a code sample:
- pipe: atlassian/aws-lambda-deploy:1.7.0
  variables:
    AWS_DEFAULT_REGION: 'YOUR_LAMBDA_REGION'
    AWS_OIDC_ROLE_ARN: 'ARN_FOR_YOUR_IAM_ROLE'
    FUNCTION_NAME: 'YOUR_FUNCTION'
    COMMAND: 'update'
    IMAGE_URI: 'YOUR_IMAGE_URI'
For this to work, your Lambda has to be set up already, since this code just updates an existing function.

A few other options for this case:
1. Remove the local Docker image, then fetch the new one with docker pull.
2. Compare the SHA256 digests of the images; if the digests differ, they are different image versions. You can pull by digest directly, for example:
docker pull ubuntu@sha256:26c68657ccce2cb0a31b330cb0be2b5e108d467f641c62e13ab40cbec258c68d
3. Use a specific tag instead of latest.
Reference: https://docs.docker.com/engine/reference/commandline/pull/
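If you would rather compare digests on the ECR side instead of locally, a small sketch with the AWS CLI (repository name and tag are placeholders):
# Look up the digest currently behind a tag in ECR
aws ecr describe-images \
--repository-name my-image \
--image-ids imageTag=latest \
--query 'imageDetails[0].imageDigest' --output text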

Related

AWS Lambdas with Docker images: do I have to create a different docker image per lambda?

I have a large number of lambdas, all sharing the same libraries. Due to size constraints I can not package the libraries together with the lambda, nor use Lambda Layers, so I have created a Docker image (let's call it lambda_base:latest) with all the required libraries installed and deployed it to ECR.
Now, for every lambda, I have created a new Docker image based on lambda_base:latest, where the only difference is that it includes the lambda's code, and it is working fine.
My question is: am I proceeding correctly? I would expect to deploy the lambda code on its own and be able to choose lambda_base:latest as its "runtime", instead of whatever image AWS uses to run the lambda, but I can't find how to do that.
Maybe what I am doing is fine, but it feels weird to create an image for every single lambda.
Thanks a lot!
First, your application Docker image would not be stored inside of the Lambda. The Docker image would be stored in AWS ECR, which is the Container Registry that AWS provides for its customers. You would build your image, tag your image and then publish your image to an ECR repository that you create. The image in this ECR repository can be utilized by any AWS Service that accepts a Docker image, whether it is Lambda, ECS, EKS, Batch, etc. It is not something specific to Lambda, in other words.
Second, I would create an ECR repo per application. I would not think of it as 1:1 between Lambda and ECR, but rather 1:1 between application and ECR repo. Think of the ECR repo as the container for a given application. So each repo would have a Dockerfile, which uses the FROM instruction like so:
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
COPY app.py ${LAMBDA_TASK_ROOT}
So this app.py might be MyApp1, and it can import a bunch of other files that comprise MyApp1. This corresponds to an ECR repo you can call my-app-1. Then you will have a second application, where app.py and its imported files are vastly different, so you would have a second ECR repo to hold the layers of that application.
Then for your Lambda function parameters, you will specify the image uri, which refers to the ECR URI. You will specify your package type as "Image". Here is a rudimentary example in Terraform Infrastructure As Code to illustrate the point:
resource "aws_lambda_function" "function" {
function_name = "${var.name_prefix}-my-lambda${var.name_suffix}"
description = "My Lambda Function"
image_uri = var.image_uri
package_type = "Image"
timeout = var.timeout
memory_size = var.memory_size
role = var.role_arn
tags = var.tags
}
The image_uri variable would come from the ECR Repo that was created. So the variable would look something like this:
resource "aws_ecr_repository" "repo" {
name = "Your Repo Name"
}
resource "null_resource" "ecr_image" {
triggers = {
docker_file = md5(file("${path.module}/../../Dockerfile"))
app_file = md5(file("${path.module}/../../app.py"))
}
provisioner "local-exec" {
command = <<EOF
aws ecr get-login-password --region ${var.region} | docker login --username AWS --password-stdin ${local.account_id}.dkr.ecr.${var.region}.amazonaws.com
cd ${path.module}/../../
docker build -f Dockerfile -t ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag} .
docker push ${aws_ecr_repository.repo.repository_url}:${local.ecr_image_tag}
EOF
}
}
data "aws_ecr_image" "lambda_image" {
depends_on = [
"null_resource.ecr_image"
]
repository_name = "Your Repo Name"
image_tag = local.ecr_image_tag
}
output "image_uri" {
value = aws_ecr_repository.repo.repository_url}#${data.aws_ecr_image.lambda_image.id
}
In the above example, I am using Terraform, but you could just as easily reproduce the same scenario in CloudFormation or directly through the AWS CLI.
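For instance, a rough CLI-only sketch of creating the same container-image function (account ID, region, names, and role ARN are all placeholders):
# Create the function from the image previously pushed to ECR
aws lambda create-function \
--function-name my-lambda \
--package-type Image \
--code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app-1:latest \
--role arn:aws:iam::123456789012:role/my-lambda-role \
--timeout 30 \
--memory-size 512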
You don't need to use different images for every Lambda function. It is possible to override the command, so you can have the same image for all functions and just override the command to point to the specific handler for each function. Here are the docs for the Serverless Framework, which describe how you can override the command: https://www.serverless.com/framework/docs/providers/aws/guide/functions/#referencing-container-image-as-a-target
If you are not using the Serverless Framework, you can override it in a similar way in raw CloudFormation or manually via the AWS console.
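For example, with the raw AWS CLI you can point two functions at the same image and only override the command (function and handler names here are hypothetical):
# Both functions share one image; only the handler command differs
aws lambda update-function-configuration \
--function-name my-first-function \
--image-config '{"Command": ["app.first_handler"]}'
aws lambda update-function-configuration \
--function-name my-second-function \
--image-config '{"Command": ["app.second_handler"]}'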

How do I get the docker image ID of a Fargate task using the CLI/SDK?

I want to make sure that the task is running the latest image.
Within the container, I can get the docker image ID (such as 183f9552f0a5) by calling http://169.254.170.2/v2/metadata, however I am looking for a way to get it on my laptop.
Is this possible with AWS CLI or SDK?
You first need to get the task definition ARN for the task using describe-tasks. You can skip this step if you already know the ARN.
aws ecs describe-tasks --cluster CLUSTER_NAME --tasks TASK_ARN
Then you can use describe-task-definition to get the image name.
aws ecs describe-task-definition --task-definition TASKDEF_ARN
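To extract just the image rather than eyeballing the full JSON output, a small sketch with --query (cluster name and ARNs are placeholders; the expressions assume a single-container task):
# Step 1: find the task definition ARN for the running task
aws ecs describe-tasks --cluster my-cluster --tasks TASK_ARN \
--query 'tasks[0].taskDefinitionArn' --output text
# Step 2: read the image from that task definition
aws ecs describe-task-definition --task-definition TASKDEF_ARN \
--query 'taskDefinition.containerDefinitions[0].image' --output text
On Fargate, describe-tasks also reports the digest that was actually pulled under tasks[0].containers[0].imageDigest, which is useful for verifying that the task runs the latest image.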

Is it possible to pull images from ECR without using docker login

I have an ECR repository and an EC2 instance running Docker. What I want to do is pull images without doing docker login first.
Is it possible at all? If yes, what kind of policy should I attach to the EC2 instance and/or the ECR repo? I did a lot of experiments but did not succeed.
And please - no suggestions on how to use aws ecr get-login. My aim is to get rid of it by using IAM policies/roles.
To use an EC2 Role without having to use docker login, https://github.com/awslabs/amazon-ecr-credential-helper can be used.
Place the docker-credential-ecr-login binary on your PATH and set the contents of your ~/.docker/config.json file to be:
{
  "credsStore": "ecr-login"
}
Now commands such as docker pull or docker push will work transparently.
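The credential helper still authenticates under the hood; it just does so transparently with the instance profile's IAM role. That role still needs the usual ECR read permissions, for example via the AWS-managed policy (the role name below is a placeholder):
# Grant the instance role read access to ECR
aws iam attach-role-policy \
--role-name my-ec2-instance-role \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly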
My aim is to get rid of it by using IAM policy/roles.
I don't see how this is possible since some form of authentication is required.

AWS ECS - Ways to deploy containers

The use case is: a developer makes some code changes, and the following things happen automatically -
the build runs, the application artifact is created, a Docker image is generated with the artifact, the image is pushed to the Docker registry, and the AWS ECS tasks and ECS services are updated.
I want to know the ways to achieve the above automation of updating AWS ECS services. So far I have implemented ECS updates from a Jenkins build using:
1. post-build AWS CLI scripts from Jenkins to update ECS
2. a post-build action or pipeline step to invoke an AWS Lambda function. I created a Lambda function in Java to implement that.
Please let me know other ways to achieve the above. Thanks.
I'm continuously deploying Docker containers from CircleCI to AWS ECS.
The outline of the deployment flow is as follows:
Build and tag a new Docker image
Login to AWS ECR and push the image
Update task definitions and services of ECS with ecs-deploy
ecs-deploy is a useful script that updates Docker images in ECS.
https://github.com/silinternational/ecs-deploy
You could use a shell script that calls AWS CLI commands to create CloudFormation stacks, or directly call the create commands in the AWS CLI for the ECR repository, task definition, Events rule and target (for scheduling).
Then you just call this script in your terminal using ./setup.sh and it executes all your commands at once.
aws ecr create-repository \
--repository-name tasks-${TASK_NAME}-${TASK_ENV}
Or, if you want to set up your resources via CloudFormation templates, you can launch them using this command, as long as the template exists at file://name.yml:
aws cloudformation create-stack \
--stack-name stack-name \
--capabilities CAPABILITY_IAM \
--template-body file://name.yml \
--parameters ParameterKey=ParamName,ParameterValue=${PARAM_NAME}
Take a look at Codefresh - https://docs.codefresh.io/docs/amazon-ecs
You can build your pipeline:
Build step
Push to registry
Deploy to ECS
It's that easy.
While there are a ton of CI/CD tools out there, since I am early in my rollout, I decided to write a small script instead of having CI/CD pipelines do it.
Here is a one-click deploy script I wrote using the ecs-deploy script as a dependency to achieve a rolling deploy of a docker image to ECS.
You can run this locally from your dev or build/deployment box or use Jenkins or some local build tool.
#!/bin/bash
# automatically login to AWS
eval $(aws ecr get-login)
# build local docker image and push repo to AWS
docker build -t <yourlocaldockerimagetag> .
docker tag <yourlocaldockerimagetag>:latest <yourECSRepoURL>:latest
docker -D -l debug push <yourECSRepoURL>:latest
# deploy to ECS
ecs-deploy/ecs-deploy -m 50 -k <access-key> -s <secret-key> -r <aws-region> -c <cluster-name> -n <service-name> -i <yourECSRepoURL>:latest
Parameters:
cluster-name: Your cluster name in ECS
service-name: Your service name that you had created in ECS
yourECSRepoURL: your ECR repository URL
yourlocaldockerimagetag: Any local image tag name
access-key: your AWS access key for deployments
secret-key: your AWS secret key
Make sure you install ecs-deploy before this script.
The -m 50 tells it that it can deploy even if the number of nodes drops to 50%. Ideally you would have an extra node to do deployments, but if you can't afford that, this setting ensures that deployments continue to happen.
If you are also using an ELB (load balancer), the default deregistration delay for target groups is 5 minutes, which is a bit excessive. The deregistration delay is the time to wait for existing requests to complete BEFORE ECS sends a SIGTERM or SIGINT to your Docker container. You should lower this by going to Target Groups in the EC2 dashboard and clicking Edit Attributes. Otherwise your deployments may take forever.
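You can also lower it from the CLI instead of the console (the target group ARN is a placeholder):
# Reduce the deregistration delay from the default 300s to 30s
aws elbv2 modify-target-group-attributes \
--target-group-arn TARGET_GROUP_ARN \
--attributes Key=deregistration_delay.timeout_seconds,Value=30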
I think nobody has mentioned CodePipeline from AWS; it integrates easily with many AWS services, including ECS and CodeCommit:
Push commit to CodeCommit Repo, triggering the pipeline execution.
(Optional) Configure a Manual Approval step that needs you to take an action before Build.
Run a CodeBuild Project that builds your Dockerfile and push the image to an ECR Repo.
Run a "Deploy" step that deploys to a specific ECS Service. It updates the services with a new Task Definition that points to the new ECR Image.
I have used this flow with BitBucket also, just configure a BitBucket pipeline that pushes all new code to a CodeCommit Repo as a previous step.
Exactly as @minamiyojo's and @astav's answers describe, we ended up gluing ecs-deploy together with a template engine to power our CD pipeline with some reusable components, which we just open-sourced as well:
https://github.com/GuccioGucci/yoke
Please refer to the Motivation section in the README; hope this helps your scenario too.

How to deploy docker container image updates from AWS ECR to ECS?

I’m new to both Amazon’s ECS and docker, and I don’t know how to deploy new images.
I currently create a new image in ECR with
NAME_TAG=my-image-name:my-tag-v1
ECR=my-acct-number.dkr.ecr.us-east-1.amazonaws.com
docker build -t $NAME_TAG .
docker tag -f $NAME_TAG $ECR/$NAME_TAG
$(aws ecr get-login --region us-east-1) #log in
docker push $ECR/$NAME_TAG
At this point I don't know how to deploy the new container from ECR to my cluster.
I created the cluster, task and service using a Cloud Formation template, but updating the TaskDefinition image to $ECR/$NAME_TAG and running a stack update eventually times out and fails with a “service did not stabilize” error.
If I push to my-image-name:latest, my cluster instances do pull down the new image, but they don’t run it, and in any case I want to avoid using the mysterious latest tag.
How am I supposed to deploy new images to ECS?
You should be able to deploy your image by registering a new task definition every time you deploy.
The task definition lets you set the image version using the "image" attribute:
"image": "my-acct-number.dkr.ecr.us-east-1.amazonaws.com/my-image-name:my-tag-v1"
If you want to use only one task definition, you will have to build your image and tag it with whatever is defined in the definition.
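If you do keep a single tag and just need ECS to pull it again, one option (cluster and service names are placeholders) is to force a new deployment of the service:
# Start replacement tasks, which pull the tag again at launch
aws ecs update-service \
--cluster my-cluster \
--service my-service \
--force-new-deployment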