AWS Lambda Function to Container - amazon-web-services

I have a deployed AWS Lambda function that I would like to run in a container. There are no restrictions on where this container runs (AWS Fargate, EC2, my local machine, etc. are all fine), and no restrictions on which containerization software is used, though Docker is preferred.
I have access to everything about the Lambda function that the Lambda API returns. I do not have access to the raw code.
I looked into AWS SAM, though it appears that I need a copy of the raw code in order to create a container.
Is this possible to do? If so, how can I do it?

Related

Centrally update multiple AWS Lambda functions

I am creating multiple (dozens of) Lambda functions with identical code where only the environment variables change. Why? To apply least privilege both in how they access resources and in how they are accessed themselves.
I want to be able to update the code for them centrally, without redeploying every such Lambda. I found that Lambda layer updates require redeployment of the Lambdas to take effect.
I see, though, that I can also use containers, with the Lambda pointing to an image in an image registry (ECR). Are the images fetched dynamically at Lambda invocation time (which would enable central updates), or packaged into the Lambda resource at deployment (which would require redeployment just like layers)?
Found the relevant docs:
After you deploy a container image to a function, the image is read-only. To update the function code, you must first deploy a new image version. Create a new image version, and then store the image in the Amazon ECR repository.
It seems that what I had in mind is prevented by design in AWS Lambda.
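For what it's worth, the update itself boils down to pushing a new image tag to ECR and then pointing the function at it. A minimal boto3 sketch of that second step (the function name and image URI below are placeholders):

```python
import boto3

# Assumes: the function already uses package type "Image" and the new image tag
# has already been pushed to ECR (e.g. with `docker push`).
# "my-function" and the ECR URI are placeholders, not real resources.
lambda_client = boto3.client("lambda")

response = lambda_client.update_function_code(
    FunctionName="my-function",
    ImageUri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:v2",
)
print(response.get("LastUpdateStatus"))
```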

How can you use AWS Lambda scripts to deploy AWS Infrastructure with Terraform

I have already set up my whole AWS infrastructure in Terraform and everything works fine. So now, instead of deploying it from my local machine by running terraform apply, I want to deploy my infrastructure with an AWS Lambda script, completely serverless. Does anyone know how to do this, or where to read about this concept? I haven't found anything useful on the internet so far.
I think my source code could live in an S3 bucket; the Lambda function would grab it and run it with Terraform, which could also be set up in the function itself, since Terraform is such a small program.
I would attempt that as follows:
Create a Lambda container image that includes the official Terraform binary. The actual Lambda function code would use, let's say, Python's python-terraform package to interact with the binary, or directly invoke the binary with subprocess.run (see the sketch after these steps).
Set up a Lambda execution role with all the permissions needed to create your resources.
Create a Lambda function using the container image.
I haven't tried that personally yet, but I think it is something that should work.
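As a rough illustration of the subprocess.run approach, here is an untested handler sketch. It assumes the image bundles the terraform binary on the PATH, ships the configuration under /var/task/infra, and uses a remote state backend; those names and paths are illustrative only:

```python
import json
import os
import shutil
import subprocess

# Hypothetical, untested sketch. /tmp is the only writable path inside Lambda,
# so the configuration is copied there before running terraform.
SRC_DIR = "/var/task/infra"   # where the image ships the .tf files (assumption)
WORK_DIR = "/tmp/infra"       # writable working directory

def run(*args):
    """Run a terraform subcommand in the working directory and fail loudly."""
    result = subprocess.run(
        ["terraform", *args],
        cwd=WORK_DIR,
        capture_output=True,
        text=True,
        env={**os.environ, "TF_IN_AUTOMATION": "1"},
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr)
    return result.stdout

def lambda_handler(event, context):
    if os.path.exists(WORK_DIR):
        shutil.rmtree(WORK_DIR)
    shutil.copytree(SRC_DIR, WORK_DIR)
    run("init", "-input=false")
    output = run("apply", "-auto-approve", "-input=false")
    return {"statusCode": 200, "body": json.dumps({"output": output[-1000:]})}
```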

Is there a way to containerize a normal AWS Lambda function?

My AWS Lambda functions take input from AWS SNS (topic subscription) and their output performs CRUD operations against a NoSQL database (such as MongoDB).
Currently I have the SNS and Lambda function set up in the AWS cloud and they are working fine. However, I would like to containerize the Lambda function as well as the MongoDB database and host them on AWS EKS using Docker + Kubernetes (so the functions will be a Docker image).
I am totally new to containers, and although I searched online I could not find anything that explains how to containerize AWS Lambda functions.
Is this possible? If it is what are the ways to do it?
Thank you.
A Docker environment for AWS Lambda functions already exists: lambci/lambda. If you want to run/test your functions locally, this is the tool normally used for that:
A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.
Since it's open source, you can also modify it if it does not suit your needs.
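For illustration, this is roughly how the image is used: mount your code into /var/task and pass the handler name and an event. The runtime tag, handler name and payload below are just examples, not values from the question:

```python
# handler.py -- a trivial function to exercise locally.
#
# Typical invocation of the lambci/lambda image (shell command shown as a
# comment; adjust the runtime tag and handler name to your function):
#
#   docker run --rm -v "$PWD":/var/task lambci/lambda:python3.8 \
#       handler.lambda_handler '{"name": "world"}'
#
def lambda_handler(event, context):
    # Echo back a greeting so the local run has something visible to return.
    return {"message": f"hello {event.get('name', 'there')}"}
```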
Lambda already uses Firecracker, a microVM technology, so I'm not really sure why you need to create a container out of Lambda.
The beauty of Lambda/serverless is that you simply write the function code and forget about the rest. If it's all about more control, then look at Knative, which runs on top of Kubernetes.

Can a Lambda container host more than one function at once?

Is it possible for multiple Lambda function definitions to be deployed to the same container instance?
I understand that a given Lambda container will only execute at most one function at a time, but I wanted to understand the composition relationship between functions and the host container.
For example, in the Serverless App project type for Visual Studio with the AWS Toolkit Extensions, it's possible to define multiple functions in a single project, but do these get deployed via CloudFormation into separate containers or a single container representing the project?
I think it might help to separate out the process:
A Lambda deployment is a zip file of code and a matching configuration. In the case of your Serverless App project type, when you have multiple Lambda functions in a project, you're creating multiple deployments.
A Lambda instance is a running version of a deployment hosted inside a container. Only one Lambda instance is allowed in a container; that is an AWS guarantee. This means that you can never get access to code/memory/files outside of the currently running instance (either yours or anyone else's!).
As an optimisation, AWS does re-use instances by freezing and thawing the container. This is because it's expensive to start a fresh container, copy the deployment, and run the deployment's init code (known as a cold start).
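A small, illustrative Python handler makes that split visible: the module-level code runs once per container (the cold start), while the handler body runs on every invocation that lands on that thawed instance. The counter is for demonstration only:

```python
import os
import time

# Module-level code runs only during the cold start, when a new container is
# created; it is then reused by every invocation routed to the same (thawed)
# instance. Never rely on in-memory state surviving between invocations.
COLD_START_AT = time.time()
invocation_count = 0

def lambda_handler(event, context):
    global invocation_count
    invocation_count += 1
    return {
        "cold_start_at": COLD_START_AT,
        "invocations_on_this_instance": invocation_count,
        "process_id": os.getpid(),  # stays the same while the container is reused
    }
```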
There's no official document on the matter, but I will share what I have gathered over the past years from docs/posts/conferences:
Lambda functions are a "framework" (or something like it; that word is the closest I can think of and the only one I have heard from an AWS representative) on top of a containerization service. Every time you call a Lambda function, a container is run (it could be an existing one that was put on hold, which adds a performance boost, or an entirely new one; you can never know which it is).
You could assume the container instance (using "instance" as it is used in ECS, i.e. a host machine) is the same, keeping in mind there are some workarounds for DB connection pooling and the like, but nobody guarantees you that.
CloudFormation will deploy the functions to the same AWS account. This happens at the AWS account level; it does not run the functions.
Lambdas are event-driven and only run when they are triggered. Every instance of a Lambda is standalone as far as the user experiences it, and is "deployed in its own container" when triggered.
Maybe something deeper is happening under the abstraction layers but that's how you'll experience it.
I did this by using ExpressJS+NodeJS. Every function is a different ExpressJS route. However, so far I have not been able to do it with Spring Cloud Function.
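For comparison, a rough Python analogue of that routing idea (purely illustrative, not the answerer's actual ExpressJS code) is a single handler that dispatches on the API Gateway request path, so one deployment serves several logical "functions":

```python
# One deployment, one handler, several logical "functions" selected by the
# API Gateway (REST proxy) request path. Paths and helpers are made up.
def create_user(event):
    return {"statusCode": 201, "body": "user created"}

def list_orders(event):
    return {"statusCode": 200, "body": "[]"}

ROUTES = {
    ("POST", "/users"): create_user,
    ("GET", "/orders"): list_orders,
}

def lambda_handler(event, context):
    key = (event.get("httpMethod"), event.get("path"))
    handler = ROUTES.get(key)
    if handler is None:
        return {"statusCode": 404, "body": "not found"}
    return handler(event)
```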

Accessing files in EC2 from Lambda

I have a few EC2 servers in AWS. Whenever disk usage exceeds a limit, I want to delete some files (maybe the logs folder) on the EC2 instance automatically. I am planning to use Lambda and CloudWatch for this. Can I use Lambda to interact with EC2? If it's not possible, what is an alternative approach to achieve this functionality?
This is not an appropriate use-case for an AWS Lambda function.
AWS Lambda is suitable for tasks where compute is required in response to an event. Your use-case, however, is to manipulate information on an EC2 instance, which does not need cloud compute.
You could run a script on each computer, triggered by a scheduled task.
Alternatively, you could use the Systems Manager Run Command (also known as the EC2 Run Command), which allows you to run commands on multiple Amazon EC2 instances and view the results. This could be used to trigger a local script, or it could pass the whole command to run (including the script). It is purpose-built for the type of task you describe.
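As a sketch of that approach (not a drop-in solution), a Lambda function could send the cleanup command through Run Command with boto3. The instance ID and shell command below are placeholders, and the instances would need the SSM agent plus an appropriate instance profile:

```python
import boto3

# Hedged sketch: send a cleanup command to specific instances via Run Command.
# The instance ID and the shell command are placeholders for illustration.
ssm = boto3.client("ssm")

def lambda_handler(event, context):
    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],   # placeholder instance ID
        DocumentName="AWS-RunShellScript",     # built-in SSM document
        Parameters={
            "commands": ["find /var/log/myapp -name '*.log' -mtime +7 -delete"],
        },
    )
    return {"commandId": response["Command"]["CommandId"]}
```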
AWS Lambda can reach your instances if they are available on the internet. If they are not, it is possible to give AWS Lambda access using a NAT gateway or NAT instance in your VPC.
The problem is that access to your instance does not mean access to the instance's filesystem. To delete the files from Lambda you can use two alternatives:
Configure a network filesystem service on your instances and connect to this service from your Lambda function. On Windows you would just "share" your disks, but in that case you would need some SMB library in your Lambda code, which "I think" does not have native SMB support. Just keep in mind that your security guy will scream out loud when you propose this alternative.
Create a "agent" in your EC2 instances and keep it running as a
Windows Service and call this agent from your lambda function. In
that case, the lambda will start the execution of the agent that
will be responsible for the file deletion.
Another option, is to follow Ramesh's suggestion and create a Powershell script and configure a cron job. To be easy, you can create a Image with this Powershell script and use the image to initialize each instance. The same solution would be applicable to "the agent" solution in the lambda alternantives.
I think that, in any case, you will need to change something in your 150 servers. Using a customized image can help you to simplify this a little bit, but you will not get a solution without some changes.
According to the following thread, you cannot access files inside an EC2 VM unless you expose the files through some other mechanism.
AWS Forum
Quoting from the forum:
If you are talking about the underlying EC2 instance, answer is No, you cannot access those files.
However, as a solution for your problem, you can use a scheduled job to clean up your files depending on your usage. You can use a service or a cron job.
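For example, a minimal cleanup script along those lines, meant to be scheduled with cron or a service on the instance itself (the path, age threshold and usage limit are placeholders to adapt to your log layout):

```python
import os
import shutil
import time

# Illustrative cleanup script for a scheduled task on the instance itself.
# LOG_DIR, MAX_AGE_DAYS and USAGE_LIMIT are placeholders.
LOG_DIR = "/var/log/myapp"
MAX_AGE_DAYS = 7
USAGE_LIMIT = 0.80  # only clean up once the disk is more than 80% full

def disk_usage_fraction(path="/"):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def delete_old_logs():
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for name in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)

if __name__ == "__main__":
    if disk_usage_fraction() > USAGE_LIMIT:
        delete_old_logs()
```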