Centrally update multiple AWS Lambda functions - amazon-web-services

I am creating multiple (dozens of) Lambda functions with identical code and only the env vars changing. Why? To enforce least privilege, both in what resources each function can access and in what can access it.
I want to be able to centrally update the code for all of them without redeploying every such Lambda. I found that Lambda layer updates require redeployment of the functions to take effect.
I see, though, that I can also use containers, with the Lambda pointing to an image in an image registry (ECR). Are the images fetched dynamically at Lambda invocation time (which would enable central updates) or packaged into the Lambda resource at deployment (which would require redeployment just like layers)?

Found the relevant docs:
"After you deploy a container image to a function, the image is read-only. To update the function code, you must first deploy a new image version. Create a new image version, and then store the image in the Amazon ECR repository."
It seems that what I had in mind is prevented by design by AWS Lambda.

Related

Avoid expiring any ECR image if an ECS cluster is still referencing it

We are implementing lifecycle policies to clean up old ECR images but we want to avoid expiring any image if a Fargate ECS cluster is still referencing it. How can we best do that?
I am thinking about adding a "live" tag that is set and unset by the blue-green switch, but there is a problem: two or more environments in our AWS CodePipeline might be using the same image, so I would need to implement some kind of reference counting.
Is there a better way, or should I go with this approach?
Unfortunately, using lifecycle rules to manage images is just not ideal with AWS.
Multi-platform images (amd64 + arm64) are managed in registries by having a top-level manifest that points to other images depending on the platform (this is a bit of a simplification). When you tag your image, only that top-level manifest gets the tag, not the lower-level images. As such, it's possible to accidentally erase images.
As you've discovered, images can be erased even if they are referenced somewhere.
Instead of using lifecycle rules I use a simple script, which in turn includes any logic I want. You can query which ECS services exist and which images they are using, and exclude any image that is still referenced from deletion. This is much simpler if everything is in the same account.
To put this another way: instead of having your services push tags to your images, have your deletion script explicitly confirm that no image currently in use gets deleted. Then you don't have to worry about how many environments are running it.
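A minimal sketch of that approach, assuming a single account and region; boto3, the repository name "my-repo", and the URI construction are my own illustrative choices rather than part of the answer, and pagination plus the per-call limits of the ECS APIs are omitted for brevity:

    # Sketch: delete ECR images only if no ECS service currently references them.
    # Assumes one account/region; "my-repo" is a placeholder; pagination omitted.
    import boto3

    ecr = boto3.client("ecr")
    ecs = boto3.client("ecs")
    REPO = "my-repo"

    def images_in_use():
        """Every container image URI referenced by an ECS service's task definition."""
        in_use = set()
        for cluster in ecs.list_clusters()["clusterArns"]:
            services = ecs.list_services(cluster=cluster)["serviceArns"]
            if not services:
                continue
            for svc in ecs.describe_services(cluster=cluster, services=services)["services"]:
                task_def = ecs.describe_task_definition(
                    taskDefinition=svc["taskDefinition"])["taskDefinition"]
                in_use.update(c["image"] for c in task_def["containerDefinitions"])
        return in_use

    used = images_in_use()
    for detail in ecr.describe_images(repositoryName=REPO)["imageDetails"]:
        host = f"{detail['registryId']}.dkr.ecr.{ecr.meta.region_name}.amazonaws.com"
        refs = {f"{host}/{REPO}@{detail['imageDigest']}"}
        refs.update(f"{host}/{REPO}:{tag}" for tag in detail.get("imageTags", []))
        if refs & used:
            continue  # still referenced by a running service -> keep it
        ecr.batch_delete_image(repositoryName=REPO,
                               imageIds=[{"imageDigest": detail["imageDigest"]}])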

How can you use AWS Lambda scripts to deploy AWS Infrastructure with Terraform

I already have my whole AWS infrastructure set up in Terraform, and everything works fine. So now, instead of deploying it from my local machine by running terraform apply, I want to deploy my infrastructure with an AWS Lambda function, completely serverless. Does anyone know how to do this, or where to read about this concept? I haven't found anything useful on the internet so far.
I think my source code could live in an S3 bucket; the Lambda function would grab it and run terraform against it, with terraform also set up in the function itself, I guess, since terraform is such a small program.
I would attempt that as follows:
Create a Lambda container image which includes the official terraform binary. The actual Lambda function code would use, let's say, Python's python-terraform package to interact with the binary, or invoke the binary directly using subprocess.run (a rough sketch of such a handler is below).
Set up a Lambda execution role with all the permissions needed to create your resources.
Create a Lambda function using the container image.
I haven't tried that personally yet, but I think it is something that should work.
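I haven't verified this either, but the handler inside such an image could look roughly like the sketch below. The binary path, config path, and environment variables are assumptions for illustration, and the backend/state configuration is expected to live in the .tf files themselves:

    # Sketch of a Lambda handler driving a terraform binary bundled in the container image.
    # Assumes the image contains the binary at /opt/terraform and the config at /var/task/tf.
    import os
    import subprocess

    TF_BIN = "/opt/terraform"
    TF_DIR = "/var/task/tf"

    def _tf(*args):
        # Lambda only allows writes under /tmp, so keep terraform's working data there;
        # merge os.environ so the execution role's credentials stay visible to the AWS provider.
        result = subprocess.run(
            [TF_BIN, f"-chdir={TF_DIR}", *args],
            capture_output=True, text=True,
            env={**os.environ, "TF_DATA_DIR": "/tmp/.terraform",
                 "TF_IN_AUTOMATION": "1", "HOME": "/tmp"},
        )
        print(result.stdout, result.stderr)
        result.check_returncode()

    def handler(event, context):
        _tf("init", "-input=false")
        _tf("apply", "-auto-approve", "-input=false")
        return {"status": "applied"}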

Is there a way to containerize a normal AWS Lambda function?

My AWS Lambda functions take input from AWS SNS (topic subscription), and the output goes to CRUD operations in a NoSQL database (such as MongoDB).
So currently I have the SNS & Lambda function set up in AWS Cloud and they are working fine. However, I would like to containerize the Lambda function as well as the MongoDB database and host them on AWS EKS using Docker + a Kubernetes service (so the functions will be a Docker image).
I am totally new to this container thing, and although I searched online, I could not find anything that mentions how to containerize AWS Lambda functions.
Is this possible? If it is, what are the ways to do it?
Thank you.
A Docker environment for AWS Lambda functions already exists: lambci/lambda. If you want to run/test your functions locally, this is the tool normally used for that:
A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.
Since it's open source, you can also modify it if it does not suit your needs.
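For instance, a typical local run mounts your code into the image and names the handler to invoke. The sketch below just shells out to the Docker CLI; the handler module name and test event are placeholders:

    # Sketch: run a function locally inside the lambci/lambda image.
    # Roughly equivalent to:
    #   docker run --rm -v "$PWD":/var/task lambci/lambda:python3.8 handler.my_handler '{"key": "value"}'
    import json
    import os
    import subprocess

    subprocess.run([
        "docker", "run", "--rm",
        "-v", f"{os.getcwd()}:/var/task",   # mount the directory containing your function code
        "lambci/lambda:python3.8",          # pick the image tag matching your Lambda runtime
        "handler.my_handler",               # module.function to invoke
        json.dumps({"key": "value"}),       # test event payload
    ], check=True)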
Lambda already uses Firecracker, a microVM technology, so I'm not really sure why you would need to create a container out of a Lambda function.
The beauty of Lambda/serverless is that you simply write the function code and forget about the rest. If it's all about more control, then look at Knative, which runs on top of K8s.

Google Container Registry images lifecycle

I would like to know if there is a way to set up an object lifecycle in GCP Container Registry.
I would like to keep the last n versions of an image, automatically deleting the older ones as new ones are pushed.
I can't work directly on the underlying Cloud Storage bucket because, with multiple images saved there, the storage objects are not recognizable as individual images.
Seth Vargo, a Google Cloud developer advocate, has released GCRCleaner.
Follow the instructions for setting up a scheduler and a Cloud Run service to clean up the GCR.
Unfortunately, there is no managed lifecycle management of images in GCR like there is in AWS, which allows creating policies to manage images in the registry.
You have to handle this yourself, i.e. with a script that emulates the following behavior and runs periodically.
gcloud container images delete -q --force-delete-tags "${IMAGE}@${digest}"
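A rough sketch of such a script, keeping only the newest N digests; the image path and N are placeholders, it shells out to the gcloud CLI, and error handling is left out:

    # Sketch: keep only the newest KEEP digests of a GCR image and delete the rest.
    # IMAGE and KEEP are placeholders; requires the gcloud CLI to be installed and authenticated.
    import json
    import subprocess

    IMAGE = "gcr.io/my-project/my-image"
    KEEP = 10

    listing = subprocess.run(
        ["gcloud", "container", "images", "list-tags", IMAGE,
         "--sort-by=~TIMESTAMP", "--format=json"],
        capture_output=True, text=True, check=True)

    for entry in json.loads(listing.stdout)[KEEP:]:   # everything older than the newest KEEP
        subprocess.run(
            ["gcloud", "container", "images", "delete", "-q", "--force-delete-tags",
             f"{IMAGE}@{entry['digest']}"],
            check=True)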
Unfortunately, at this time there's no feature that can do this in GCR; however, there's already a feature request created. You can follow it and add comments.
Also check this example, where image deletion at a specific time was implemented.

Can a Lambda container host more than one function at once?

Is it possible for multiple Lambda function definitions to be deployed to the same container instance?
I understand that a given Lambda container will only execute at most one function at a time, but I wanted to understand the composition relationship between functions and the host container.
For example, in the Serverless App project type for Visual Studio with the AWS Toolkit Extensions, it's possible to define multiple functions in a single project, but do these get deployed via CloudFormation into separate containers or a single container representing the project?
I think it might help to separate out the process:
A Lambda deployment is a zip file of code and a matching configuration. In the case of your Serverless App project type, when you have multiple Lambda functions in a project, you're creating multiple deployments.
A Lambda instance is a running version of a deployment hosted inside a container. Only one Lambda instance is allowed in a container; that is an AWS guarantee. This means that you can never get access to code/memory/files outside of the currently running instance (either yours or anyone else's!).
As an optimisation, AWS does re-use instances by freezing and thawing the container, because it's expensive to start a fresh container, copy the deployment, and run the deployment's init code (known as the cold start).
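A small illustration of that reuse: module-level state survives warm invocations of the same instance, but a cold start re-runs the whole module. The counter below is made up purely for demonstration:

    # Illustration: module-level state persists while the same instance is frozen and thawed.
    # A cold start runs this module again, so the counter resets to 0 in a fresh container.
    import os

    invocation_count = 0   # initialised once per container instance

    def handler(event, context):
        global invocation_count
        invocation_count += 1
        return {
            "count": invocation_count,  # keeps growing while this warm instance is reused
            "log_stream": os.environ.get("AWS_LAMBDA_LOG_STREAM_NAME"),  # differs per instance
        }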
There's no official document on the matter, but I will share what I have gathered over the past years from docs/posts/conferences:
Lambda functions are a "framework" (or something like it; that word is the closest I can think of and the only one I have heard from an AWS representative) on top of a containerization service. Every time you call a Lambda function, a container is run (it could be an existing one that was put on hold, which adds a performance boost, or an entirely new one; you can never know which it is).
You could assume the container instance (using "instance" as it is used in ECS, i.e. a host machine) is the same, keeping in mind there are some workarounds for DB connection pooling and the like, but nobody guarantees you that.
CloudFormation will deploy the functions to the same AWS account. This happens at the AWS account level; it is not what runs the functions.
Lambdas are event-driven and only run when they are triggered. Every instance of a Lambda is standalone as far as the user experiences it, and is "deployed in its own container" when triggered.
Maybe something deeper is happening under the abstraction layers but that's how you'll experience it.
I did this by using ExpressJS+NodeJS. Every function is a different ExpressJS route. However, so far I have not been able to do it with Spring Cloud Function.