I'm trying to develop a data pipeline using AWS Lambda, and I need to know: does it auto-scale immediately, or does it require a warm-up time?
Lambda has a feature called provisioned concurrency. From the docs:
Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.
You can set a value for how many execution environments should be prepared for parallel invocations. This guarantees that your Lambda won't require a warm-up time, as long as the number of parallel executions stays within that value.
Otherwise, there is no guarantee that your function will be warmed up. If you have no provisioned concurrency configured, you will most likely see cold starts.
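If it helps, here's a minimal sketch of configuring provisioned concurrency with boto3 (the function name and alias are hypothetical; note that provisioned concurrency can only be attached to a published version or an alias, not $LATEST):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 10 execution environments initialized for the "prod" alias.
response = lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-pipeline-function",   # hypothetical function name
    Qualifier="prod",                      # a published version or alias
    ProvisionedConcurrentExecutions=10,
)
print(response["Status"])  # "IN_PROGRESS" until the environments are ready
```

Invocations beyond the configured value still run, but on on-demand environments that may cold-start.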
Related
I was trying to set up a Lambda with provisioned concurrency. I enabled this feature for the latest version of my Lambda function.
After that, I ran the function and watched the traces in AWS X-Ray. I saw that my function still has an initialization phase, even though it should be warm with provisioned concurrency.
Right after the first start, I ran it twice and it was warm as expected (but that is the default behaviour: a Lambda stays warm after a first start even without provisioned concurrency).
I waited 15 minutes, started my Lambda again, and it still shows initialization time in the traces. It does not stay warm with provisioned concurrency as expected and always has initialization time.
How can I resolve this?
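One thing worth checking with this kind of symptom (an assumption about the setup, not a diagnosis): provisioned concurrency is attached to a specific published version or alias, and only invocations that target that qualifier use the pre-initialized environments; an unqualified invocation runs against $LATEST and can still cold-start. A minimal sketch of a qualified test invocation with boto3 (function and alias names are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Invoke the alias that carries the provisioned concurrency configuration;
# invoking the bare function name would target $LATEST instead.
lambda_client.invoke(
    FunctionName="my-function:prod",  # hypothetical "function:alias" form
    InvocationType="RequestResponse",
)
```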
I am aware of the cold-start and warm-start in AWS Lambda.
However, I am not sure whether, during a warm start, the Lambda architecture reuses the Firecracker VM in the backend, or whether it runs the invocation in a fresh new VM.
Is there a way to enforce VM level isolation for every invocation through some other AWS solution?
Based on what is stated in the documentation for the Lambda execution context, Lambda tries to reuse the execution context between subsequent executions. This is what leads to cold starts (when a context is spun up) and warm starts (when an existing context is reused):
You typically see this latency when a Lambda function is invoked for the first time or after it has been updated because AWS Lambda tries to reuse the execution context for subsequent invocations of the Lambda function.
This is corroborated by another statement in the documentation for the Lambda Runtime Environment where it's stated that:
When a Lambda function is invoked, the data plane allocates an execution environment to that function, or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment.
A later passage of the same page gives a bit more info on how environments/resources are shared among functions and executions in the same AWS Account:
Execution environments run on hardware virtualized virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. [...] Execution environments are never shared across functions, and microVMs are never shared across AWS accounts.
Additionally, there's another doc page that gives some more details on isolation among environments, but again, no mention of the ability to enforce one execution per environment.
As far as I know, there's no way to force a new execution to use a new environment rather than an existing one. AWS doesn't provide much insight into this, but the wording around the topic suggests that most people actually try to do the opposite of what you're looking for:
When you write your Lambda function code, do not assume that AWS Lambda automatically reuses the execution context for subsequent function invocations. Other factors may dictate a need for AWS Lambda to create a new execution context, which can lead to unexpected results, such as database connection failures.
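In practice that advice usually translates into lazy initialization: cache expensive resources at module level so warm starts reuse them, but create them on demand so a fresh environment still works. A minimal sketch, assuming a hypothetical DynamoDB table whose name arrives via a TABLE_NAME environment variable:

```python
import os
import boto3

# Cached at module level: reused on warm starts, rebuilt on cold starts.
_table = None

def get_table():
    global _table
    if _table is None:
        # Runs again whenever Lambda hands us a fresh execution environment.
        _table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
    return _table

def handler(event, context):
    # An "id" field in the event payload is an assumption for illustration.
    return get_table().get_item(Key={"id": event["id"]}).get("Item")
```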
I would say that if your concern is isolation from other customers/accounts, AWS guarantees isolation by means of virtualisation. Although that isolation is not at the physical level, it might be enough depending on AWS's SLAs and your own SLAs/requirements. If instead you're thinking of building some kind of multi-tenant infrastructure that requires Lambda executions to be isolated from one another, then this service might not be what you're looking for.
I have one CloudWatch Event set per minute which triggers an AWS Lambda. I have set the concurrent executions of the Lambda to 10; however, it only triggers a single instance per minute. I want it to run 10 concurrent instances per minute.
Concurrency in Lambda is managed pretty differently from what you expect.
In your case you want a single CloudWatch Event to trigger multiple instances each minute.
However, concurrency in Lambda works as follows: suppose you have a CloudWatch Event triggering your Lambda, and also other AWS services (e.g. S3 and DynamoDB) triggering the same Lambda. When one of your triggers activates the Lambda, a Lambda instance becomes active and stays occupied until the Lambda finishes its work. During that period of time, the total concurrency units are decreased by one. If at that very moment another trigger activates the Lambda, the total concurrency units are decreased again, and so on for as long as your Lambda instances are executing.
So, in your case there will always be a single event (CloudWatch) triggering a single Lambda instance, which is why the system does not trigger multiple instances; for this kind of trigger, that is the correct behaviour. In other words, increasing the concurrent execution limit to 10 (or whatever) will not, by itself, get you 10 parallel instances per minute.
In order to achieve that, it's probably better to create a Lambda orchestrator which invokes multiple instances of your worker Lambda, and then set the concurrency of that worker Lambda higher than 10 (if you do not want it to throttle). This approach also makes it easier to manage the execution of your multiple instances and to catch errors atomically, with better control over the error flow.
You can refer to this article to understand the Lambda concurrency behaviour. The implementation of a Lambda orchestrator to manage the execution of the multiple instances, on the other hand, is pretty straightforward.
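As a sketch of that orchestrator (the worker function name and the count of 10 are assumptions taken from the question, not a fixed recipe):

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical name of the worker function to fan out to.
WORKER_FUNCTION = "my-worker-function"

def handler(event, context):
    # "Event" invocations are asynchronous and return immediately, so a
    # single CloudWatch Events trigger fans out to 10 parallel executions.
    for i in range(10):
        lambda_client.invoke(
            FunctionName=WORKER_FUNCTION,
            InvocationType="Event",
            Payload=json.dumps({"worker_id": i}).encode(),
        )
    return {"dispatched": 10}
```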
So our project was using Hangfire to dynamically schedule tasks, but with auto-scaling of server instances in mind we decided to do away with it. I was looking for a cloud-native serverless solution and decided to use CloudWatch Events with Lambda. I discovered later on that there is an upper limit on the number of rules that can be created (100 per account), and that wouldn't scale automatically. So now I'm stuck, and any suggestions would be great!
As per the CloudWatch Events documentation, you can request a limit increase:
100 per region per account. You can request a limit increase. For instructions, see AWS Service Limits.

Before requesting a limit increase, examine your rules. You may have multiple rules each matching to very specific events. Consider broadening their scope by using fewer identifiers in your Event Patterns in CloudWatch Events. In addition, a rule can invoke several targets each time it matches an event. Consider adding more targets to your rules.
If you're trying to create a serverless task scheduler, one possible way could be (a minimal sketch follows this list):
A CloudWatch Event that triggers a Lambda function every minute.
The Lambda function reads a DynamoDB table and decides which actions need to be executed at that time.
The Lambda function can then dispatch the execution to other functions or services.
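A minimal sketch of that dispatcher Lambda, assuming a hypothetical DynamoDB table named scheduled-tasks whose partition key due_minute holds a timestamp truncated to the minute, and whose items carry a target_function attribute:

```python
import json
import time
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
lambda_client = boto3.client("lambda")

# Hypothetical table: one item per scheduled task, keyed by the minute
# at which it should run.
TABLE = dynamodb.Table("scheduled-tasks")

def handler(event, context):
    # Current UTC time truncated to the minute, matching the table's key.
    due_minute = time.strftime("%Y-%m-%dT%H:%M", time.gmtime())
    result = TABLE.query(KeyConditionExpression=Key("due_minute").eq(due_minute))
    for task in result["Items"]:
        # Fan each task out asynchronously so a slow task can't block the rest.
        lambda_client.invoke(
            FunctionName=task["target_function"],  # assumed item attribute
            InvocationType="Event",
            Payload=json.dumps(task.get("payload", {}), default=str).encode(),
        )
    return {"dispatched": len(result["Items"])}
```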
So I decided to do as Diego suggested: use CloudWatch Events to trigger a Lambda every minute, which queries DynamoDB to check for the tasks that need to be executed.
I had some concerns regarding the data fetched from DynamoDB (duplicate items in case an execution takes longer than 1 minute), so I decided to set the concurrency of that Lambda to 1.
I also had some concerns about executing those tasks directly from that Lambda itself (timeouts, and tasks at the end of a long list), so instead I push each task to SQS separately, and another Lambda triggered by SQS executes those tasks in parallel. So far the results look good; I'll keep updating this thread if anything comes up.
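For reference, a minimal sketch of that dispatch step (the queue URL is hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL; the worker Lambda is subscribed to this queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/task-queue"

def dispatch(tasks):
    # One message per task, so the SQS-triggered worker Lambda can scale
    # out and process them in parallel.
    for task in tasks:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(task, default=str))
```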
I have an AWS Lambda function set up with a trigger from an SQS queue. Currently the queue has about 1.3m messages available. According to CloudWatch, the Lambda function has only ever reached 431 invocations in a given minute. I have read that Lambda supports 1000 concurrent functions running at a time, so I'm not sure why it would max out at 431 in a given minute. Also, my function only runs for about 5.55s on average, so each of those 1000 available concurrent slots should be turning over multiple times per minute, therefore giving a much higher rate of invocations.
How can I figure out what is going on here and get my Lambda function to process through that SQS queue in a more timely manner?
The 1000 concurrent execution limit you mention assumes that you have provided enough capacity.
Take a look at this, particularly the last bit.
https://docs.aws.amazon.com/lambda/latest/dg/vpc.html
If your Lambda function accesses a VPC, you must make sure that your VPC has sufficient ENI capacity to support the scale requirements of your Lambda function. You can use the following formula to approximately determine the ENI capacity:

Projected peak concurrent executions * (Memory in GB / 3 GB)

Where:

Projected peak concurrent executions – Use the information in Managing Concurrency to determine this value.
Memory – The amount of memory you configured for your Lambda function.

The subnets you specify should have sufficient available IP addresses to match the number of ENIs.

We also recommend that you specify at least one subnet in each Availability Zone in your Lambda function configuration. By specifying subnets in each of the Availability Zones, your Lambda function can run in another Availability Zone if one goes down or runs out of IP addresses.
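To make the formula concrete (numbers chosen purely for illustration): a function configured with 1.5 GB of memory and a projected peak of 1,000 concurrent executions would need roughly 1000 * (1.5 / 3) = 500 ENIs, so the chosen subnets would together need at least 500 free IP addresses.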
Also read this article, which points out many things that might be affecting you: https://read.iopipe.com/5-things-to-know-about-lambda-the-hidden-concerns-of-network-resources-6f863888f656
As a last note, make sure your SQS Lambda trigger has a BatchSize of 10 (the maximum available).
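If you're configuring the trigger programmatically, that's the BatchSize parameter on the event source mapping; a minimal sketch with boto3 (the ARN and function name are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Each invocation receives up to 10 messages from the queue.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
    FunctionName="my-consumer-function",
    BatchSize=10,
)
```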