Can a Lambda container host more than one function at once?

Is it possible for multiple Lambda function definitions to be deployed to the same container instance?
I understand that a given Lambda container will only execute at most one function at a time, but wanted to understand the composition relationship between functions and the host container.
For example, in the Serverless App project type for Visual Studio with the AWS Toolkit Extensions, it's possible to define multiple functions in a single project, but do these get deployed via CloudFormation into separate containers or a single container representing the project?

I think it might help to separate out the process:
A Lambda deployment is a zip file of code plus a matching configuration. In the case of your Serverless App project type, when you have multiple Lambda functions in a project, you're creating multiple deployments.
A Lambda instance is a running version of a deployment hosted inside a container. Only one Lambda instance is allowed per container; that is an AWS guarantee. This means that you can never get access to code/memory/files outside of the currently running instance (either yours or anyone else's!).
As an optimisation, AWS re-uses instances by freezing and thawing the container, because it's expensive to start a fresh container, copy the deployment, and run the deployment's init code (known as the cold start).
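As an illustration (a minimal sketch, not from the original answer), a Python handler can make the freeze/thaw behaviour visible: code outside the handler runs only on a cold start, while a module-level counter keeps incrementing across warm invocations of the same instance.

```python
import time

# Runs once per container instance, i.e. only on a cold start.
COLD_START_AT = time.time()
invocation_count = 0

def handler(event, context):
    global invocation_count
    invocation_count += 1
    return {
        # Same value across warm invocations of this instance,
        # a new value whenever Lambda had to cold start a container.
        "cold_start_at": COLD_START_AT,
        # Grows while the instance is reused; resets to 1 on a cold start.
        "invocation_count": invocation_count,
    }
```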

There isn't an official document on the matter, but I will share what I have gathered over the past years from docs, posts, and conferences:
Lambda functions are a "framework" (or something like it; that word is the closest I can think of and the only one I have heard from an AWS representative) on top of a containerization service. Every time you call a Lambda function, a container is run (it could be an existing one that was put on hold, which adds a performance boost, or an entirely new one; you can never know which it is).
You could assume the container instance (using "instance" as it is used in ECS, i.e. a host machine) is the same, bearing in mind that there are some workarounds for DB connection pooling and the like, but nobody guarantees you that.
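The connection-pooling workaround mentioned above usually amounts to creating the connection at module scope so a warm container reuses it. A minimal sketch (sqlite3 stands in for whatever DB client you actually use, and DB_PATH is a made-up variable):

```python
import os
import sqlite3  # stand-in; in practice this would be your MongoDB/RDS client

_connection = None  # module scope survives freeze/thaw between invocations

def _create_connection():
    # Placeholder for real connection setup (host/credentials from env vars).
    return sqlite3.connect(os.environ.get("DB_PATH", ":memory:"))

def handler(event, context):
    global _connection
    if _connection is None:  # only pay the connection cost on a cold start
        _connection = _create_connection()
    # ... use _connection; leave it open so warm invocations can reuse it ...
    return {"reused_connection": True}
```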

CloudFormation will deploy the functions to the same AWS account. This is happening at the AWS user account level. It's not running the functions.
Lambdas are event-driven and only run when they are triggered. Every instance of a Lambda is standalone as far as the user experiences it, and is "deployed in its own container" when triggered.
Maybe something deeper is happening under the abstraction layers but that's how you'll experience it.

I did this by using ExpressJS+NodeJS. Every function is a different ExpressJS route. However, so far I have not been able to do it with Spring Cloud Function.
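The same "one deployment, many routes" idea can be sketched without Express. This hypothetical Python handler (route names and paths are made up for illustration) dispatches on the method and path of an API Gateway proxy event:

```python
import json

def list_users(event):
    return {"users": []}

def create_user(event):
    return {"created": True}

# One deployment, several logical "functions" selected by route.
ROUTES = {
    ("GET", "/users"): list_users,
    ("POST", "/users"): create_user,
}

def handler(event, context):
    # API Gateway proxy integration puts the method and path on the event.
    key = (event.get("httpMethod"), event.get("path"))
    route = ROUTES.get(key)
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(route(event))}
```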

Related

Does AWS lambda (serverless) support the installation of applications the same way it supports one-off functions?

I understand that using AWS Lambda allows us to submit single functions to a runtime and have them execute when needed. But what about the software these functions depend on? Where do these get installed? Does the installation and configuration happen every time the lambda instance gets spun up? Wouldn't this take a while for larger applications/detailed configurations?
Or does the installed software sit on the server (say on an EC2 instance) and then simply gets called upon as needed by the lambda functions?
There are essentially two ways to manage dependencies of a Lambda function.
Using lambda layers: A Lambda layer is an archive containing additional code, such as libraries, dependencies, or even custom runtimes. When you include a layer in a function, the contents are extracted to the /opt directory in the execution environment. You can include up to five layers per function, which count towards the standard Lambda deployment size limits. Have a look at this article for more details; there is also a short sketch after these two options showing how layer contents appear inside the function.
Using container images: You can package your code and dependencies as a container image using tools such as the Docker command line interface (CLI). You can then upload the image to your container registry hosted on Amazon Elastic Container Registry (Amazon ECR). See the official docs here.
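As a rough sketch of the layer approach, assuming a Python runtime: layer contents are extracted under /opt, and for Python layers the runtime puts the layer's python/ directory on the import path, so the function can import those dependencies without bundling them in its own deployment package.

```python
import os
import sys

def handler(event, context):
    # Layer contents are extracted under /opt; for Python layers,
    # /opt/python ends up on the import path, so `import <your_lib>` works
    # even though the library is not in the function's own zip.
    return {
        "opt_contents": os.listdir("/opt") if os.path.isdir("/opt") else [],
        "layer_paths_on_sys_path": [p for p in sys.path if p.startswith("/opt")],
    }
```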
Because Lambda can scale to zero, it suffers from so-called cold start issues. This means that unless there is a warm, running container instance available, Lambda has to "cold start" a new container, causing some delay, especially for large-footprint application stacks such as JVM-based ones.
Best, Stefan

Is there a way to containerize a normal AWS Lambda function?

My AWS Lambda functions take input from AWS SNS (topic subscription) and the output goes to CRUD operations in a NoSQL database (such as MongoDB).
So currently I have the SNS & Lambda function setup in AWS Cloud and they are working fine. However, I would like to containerize the lambda function as well as the MongoDB database and host them on AWS EKS using Docker + Kubernetes service. (So the functions will be a Docker image)
I am totally new to this container thing, and I searched online but could not find anything that mentions how to containerize AWS Lambda functions.
Is this possible? If it is what are the ways to do it?
Thank you.
A Docker environment for AWS Lambda functions already exists: lambci/lambda. So if you want to run/test your functions locally, this is the tool normally used for that:
A sandboxed local environment that replicates the live AWS Lambda environment almost identically – including installed software and libraries, file structure and permissions, environment variables, context objects and behaviors – even the user and running process are the same.
Since it's open-sourced, you can also modify it if it does not suit your needs.
Lambda already uses Firecracker, a microVM technology, so I'm not really sure why it's required to create a container out of Lambda.
The beauty of Lambda/Serverless is to simply write the function code and forget about the rest. If it's all about more control, then look at Knative, which runs on top of K8S.

What to do with an AWS Elastic Beanstalk environment when not in use?

I have an AWS Elastic Beanstalk env for a dev version of my application. I don't need it to be up all the time, but want to use it every now and again.
What can I do so that I will not be billed for it, but I don't have to keep remaking it over and over?
Thanks!!
You’ll have to stop the instances. Then restart them when you want them.
But, that comes with issues.
Your time is better spent making sure it can be rebuilt over and over again cleanly. Why do you want to avoid that rebuilding? Takes too long to rebuild? Or are there errors each time?
Are you using the eb-cli to make destroy and rebuild quicker and easier? https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3.html
The most cost-effective way is to set up the full environment in CloudFormation (or the eb cli, as @TomHarvey pointed out), which would fully automate provisioning and tearing down all the resources.
This way whenever you need it, you just provision it from scratch. When you are done with it, you just delete it fully and don't pay anything for it when not in use.
Also, since it's CloudFormation, you can parametrize it to easily provision different variants of it. For example, tiny instances without a load balancer for initial testing or development. At other times you can provision a bigger environment if needed.
If you absolutely don't want to do anything manually, you can set up automatic provisioning of your stack on a regular schedule, e.g. only on Mondays during work hours. For this you could use a CloudWatch Events scheduled rule with a Lambda function which would deploy your template automatically, as well as delete it after hours.
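A minimal sketch of such a scheduler function, assuming two scheduled rules invoke it with an "action" field in the event; the stack name and template URL are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation")

STACK_NAME = "dev-environment"                                    # placeholder
TEMPLATE_URL = "https://s3.amazonaws.com/my-bucket/dev-env.yaml"  # placeholder

def handler(event, context):
    # The scheduled rules pass e.g. {"action": "create"} or {"action": "delete"}.
    action = event.get("action")
    if action == "create":
        cfn.create_stack(
            StackName=STACK_NAME,
            TemplateURL=TEMPLATE_URL,
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
    elif action == "delete":
        cfn.delete_stack(StackName=STACK_NAME)
    return {"action": action}
```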

Accessing files in EC2 from Lambda

I have a few EC2 servers in AWS. Whenever the disk space exceeds a limit, I want to delete some files (maybe the logs folder) on the EC2 instance automatically. I am planning to use Lambda and CloudWatch for this. Can I use Lambda to interact with EC2? If that is not possible, what is an alternate approach to achieve this functionality?
This is not an appropriate use-case for an AWS Lambda function.
AWS Lambda is suitable for tasks where compute is required in response to an event. Your use-case, however, is to manipulate information on an EC2 instance, which does not need cloud compute.
You could run a script on each computer, triggered by a Scheduled Task.
Alternatively, you could use the Systems Manager Run Command (also known as the EC2 Run Command), which allows you to run commands on multiple Amazon EC2 instances and view the results. This could be used to trigger a local script, or it could pass the whole command to run (including the script). It is purpose-built for the type of task you describe.
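For instance, a Lambda function (or any script with boto3 credentials and the right IAM permissions) could trigger the cleanup via Run Command along these lines; the instance ID and the cleanup command are placeholders:

```python
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # AWS-RunShellScript is the stock SSM document for running shell commands
    # on Linux instances (use AWS-RunPowerShellScript for Windows).
    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],          # placeholder instance ID
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": ["find /var/log/myapp -name '*.log' -mtime +7 -delete"],
        },
    )
    return {"command_id": response["Command"]["CommandId"]}
```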
AWS Lambda has access to your instances if they are reachable over the internet. If they are not, it is possible to give AWS Lambda access by running it inside your VPC, using a NAT gateway or internet gateway as needed.
The problem is: access to your instance does not mean access to the instance's filesystem. To delete the files from Lambda you have two alternatives:
Configure a network filesystem service on your instances and connect to this service from your Lambda function. On Windows you would just "share" your disks, but in that case you would need some SMB library in your Lambda code, since (I think) it does not have native SMB support. Just keep in mind that your security guy will scream out loud when you propose this alternative.
Create an "agent" on your EC2 instances, keep it running as a Windows Service, and call this agent from your Lambda function. In that case, the Lambda will start the execution of the agent, which will be responsible for the file deletion.
Another option is to follow Ramesh's suggestion: create a PowerShell script and configure a cron job (scheduled task). To make this easy, you can create an image with this PowerShell script and use the image to initialize each instance. The same solution would be applicable to the "agent" option among the Lambda alternatives.
I think that, in any case, you will need to change something on your 150 servers. Using a customized image can help you simplify this a little bit, but you will not get a solution without some changes.
According to the following thread, you cannot access files inside an EC2 VM unless you expose the files to the outside using some other mechanism.
AWS Forum
Quoting from the forum
If you are talking about the underlying EC2 instance, answer is No, you cannot access those files.
However, as a solution for your problem, you can use a scheduled job to clean up your files depending on your usage. You can use a service or a cron job.
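If you go the scheduled-job route, the cleanup script itself can be very small. This is an illustrative sketch (the log directory and threshold are made up) that could be run from cron or a scheduled task on each instance:

```python
import os
import shutil

LOG_DIR = "/var/log/myapp"   # placeholder path
THRESHOLD = 0.90             # clean up when the disk is 90% full

def cleanup():
    usage = shutil.disk_usage("/")
    if usage.used / usage.total < THRESHOLD:
        return
    # Delete log files, oldest first, until we drop below the threshold.
    candidates = (os.path.join(LOG_DIR, name) for name in os.listdir(LOG_DIR))
    files = sorted((p for p in candidates if os.path.isfile(p)), key=os.path.getmtime)
    for path in files:
        os.remove(path)
        usage = shutil.disk_usage("/")
        if usage.used / usage.total < THRESHOLD:
            break

if __name__ == "__main__":
    cleanup()
```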

How to run a service on AWS ECS with container overrides?

On AWS ECS you can run a task, or a service.
If you run a task with run_task(**kwargs), you have the option to override some task options, for example the container environment variables; this way you can configure what's inside the container. That's great.
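For reference, the override I'm referring to looks roughly like this with boto3 (the cluster, task definition, container name, and network settings are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Run a one-off task with overridden environment variables.
ecs.run_task(
    cluster="my-cluster",                       # placeholder
    taskDefinition="my-task:1",                 # placeholder
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
    overrides={
        "containerOverrides": [
            {
                "name": "my-container",         # must match the container name
                "environment": [
                    {"name": "FEATURE_FLAG", "value": "on"},
                ],
            }
        ]
    },
)
```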
Now, I can't find a way to do the same with create_service(**kwargs). You can only specify a task definition, so the created containers run with the configuration specified in the task definition. There is no way to override it.
Is there a way how to modify task in a service, or this is not possible with the AWS ECS service?
This is not possible. If you think about how services work, they create X replicas of the task. All instances of the task have the same parameters, because the purpose is scaling out the task: they should all do the same job. Often the traffic is load-balanced (part of the service configuration), so it is undesirable for a user to get a different response than on the previous request just because they ended up on a task that is configured differently. So the bottom line is: that's by design.
Because parameters are shared, if you need to change a parameter, you create a new definition of the task, and then launch that as a service (or update an existing service).
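Sketched with boto3 (every name, image, and size below is a placeholder), that workflow looks roughly like this: register a new task definition revision with the changed environment, then point the service at it so ECS rolls the tasks over.

```python
import boto3

ecs = boto3.client("ecs")

# Register a new revision of the task definition with the changed environment.
new_def = ecs.register_task_definition(
    family="my-task",                                  # placeholder
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "my-container",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
            "environment": [{"name": "FEATURE_FLAG", "value": "on"}],
        }
    ],
)

# Point the service at the new revision; ECS replaces the running tasks.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    taskDefinition=new_def["taskDefinition"]["taskDefinitionArn"],
)
```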
If you want the tasks to be aware of other tasks (and thus behave differently), for example to write data to different shards of a sharded store, you have to implement that in the task's logic.