How does AWS Lambda determine if messages are still in SQS queue? - amazon-web-services

When using AWS Lambda with an SQS queue (as an event source), the documentation says:
If messages are still available, Lambda increases the number of
processes that are reading batches by up to 60 more instances per
minute.
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
My question here is: how does the Lambda service determine whether "messages are still available"?

Answering the "how" question in a slightly different way:
Behind the scenes, Lambda operates a "State Manager" control-plane service that discovers work from the queue. State Manager also manages scaling of the fleet of "Poller" workers that do the actual retrieving, batching, invoking, and deleting.
These implementation details are from the Event Source Mapping section of the re:Invent 2022 video A closer look at AWS Lambda (SVS404-R), which includes an architecture diagram of this polling flow.

One of the calls to the SQS API is to get queue attributes (Java API, others similar). Its response includes the attribute ApproximateNumberOfMessages, so you, or AWS, can determine roughly how many messages are in the queue.
From this, AWS can determine whether it is worth spinning up additional instances. You too can get this information from the queue.
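For illustration, here is a minimal boto3 sketch of reading that attribute yourself; the queue URL is a placeholder:

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URL, for illustration only
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

resp = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=[
        "ApproximateNumberOfMessages",           # messages available for retrieval
        "ApproximateNumberOfMessagesNotVisible"  # messages currently in flight
    ],
)
available = int(resp["Attributes"]["ApproximateNumberOfMessages"])
in_flight = int(resp["Attributes"]["ApproximateNumberOfMessagesNotVisible"])
print(f"available={available}, in flight={in_flight}")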

I imagine it uses the ApproximateNumberOfMessagesVisible metric on the SQS queue to check how many messages are available, and uses that number, plus your batch size configuration, to determine how many more Lambda instances your function needs to be scaled out to.
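As a rough sketch of that reasoning (the metric and namespace are real; the queue name is a placeholder and the scaling formula is only an illustration of the idea, not Lambda's actual algorithm):

import math
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

# Most recent ApproximateNumberOfMessagesVisible datapoint for the queue
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "my-queue"}],  # hypothetical queue name
    StartTime=datetime.now(timezone.utc) - timedelta(minutes=5),
    EndTime=datetime.now(timezone.utc),
    Period=60,
    Statistics=["Maximum"],
)
datapoints = sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])
visible = datapoints[-1]["Maximum"] if datapoints else 0

# Illustrative estimate: enough concurrent batches to drain the backlog in one pass
batch_size = 10
needed = math.ceil(visible / batch_size)
print(f"visible={visible}, rough concurrency needed={needed}")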

I believe the documentation refers to Lambda polling the queue to know whether there are still messages. Read more about it here.
Lambda polls the queue and invokes your Lambda function synchronously
with an event that contains queue messages. Lambda reads messages in
batches and invokes your function once for each batch. When your
function successfully processes a batch, Lambda deletes its messages
from the queue.
Event Source Mapping:
Lambda only sees messages that are visible; the visibility timeout setting on the SQS queue hides a received message from other queue consumers while it is being processed. As an event source, Lambda receives messages from the SQS queue by being mapped to it.
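To make the visibility-timeout point concrete, here is a small sketch (queue URL hypothetical) of setting the queue-level visibility timeout, which is what hides a received message from other consumers while Lambda processes it:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

# Hide each received message from other consumers for 5 minutes; if the
# consumer (here, Lambda) does not delete it within that window, the message
# becomes visible again and can be redelivered.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"VisibilityTimeout": "300"},
)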

As per the documentation you shared, for standard queues, long polling is in effect. Long polling waits up to a configured amount of time for a message to arrive in the queue before the call returns. Refer to the following docs:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/confirm-queue-is-empty.html
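For reference, a minimal long-polling receive in boto3 (queue URL hypothetical); a WaitTimeSeconds of up to 20 makes the call block until a message arrives or the wait expires:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

# Long poll: wait up to 20 seconds for messages instead of returning immediately
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    # Delete after successful processing so the message is not redelivered
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])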

Related

AWS Lambda read from SQS without concurrency

My requirement is like this.
Read from an SQS queue every 2 hours, take all the messages available, and then process them.
Processing includes creating a file with details from the SQS messages and sending it to an SFTP server.
I implemented an AWS Lambda function to achieve point 1. I have a Lambda with an SQS trigger. I have set the batch size to 50 and the batch window to 2 hours. My assumption was that the Lambda would get triggered every 2 hours and 50 messages would be delivered to the Lambda function in one go, and I would create a file for every 50 records.
But I observed that my Lambda function is triggered with a varied number of messages (sometimes 50, sometimes 20, sometimes 5, etc.) even though I have configured the batch size as 50.
After reading some documentation I got to know (I am not sure) that there are 5 long-polling connections which Lambda spawns to read from SQS, and that this is causing the Lambda function to be triggered with a varied number of messages.
My question is:
Is my assumption about 5 parallel connections being established correct? If yes, is there a way I can control it? I want this to happen in a single thread / connection.
If 1 is not possible, what other alternative do I have here? I do not want one file created for every few records. I want one file generated every two hours with all the messages in SQS.
A "SQS Trigger" for Lambda is implemented with the so-called Event Source Mapping integration, which polls, batches and deletes messages from the queue on your behalf. It's designed for continuous polling, although you can disable it. You can set a maximum batch size of up to 10,000 records a function receives (BatchSize) and a maximum of 300s long polling time (MaximumBatchingWindowInSeconds). That doesn't meet your once-every-two-hours requirement.
Two alternatives:
Remove the Event Source Mapping. Instead, trigger the Lambda every two hours on a schedule with an EventBridge rule. Your Lambda is responsible for the SQS ReceiveMessage and DeleteMessageBatch operations. This approach ensures your Lambda will be invoked only once per cron event (see the sketch after the alternatives).
Keep the Event Source Mapping. Process messages as they arrive, accumulating the partial results in S3. Once every two hours, run a second, EventBridge-triggered Lambda, which bundles the partial results from S3 and sends them to the SFTP server. You don't control the number of Lambda invocations.
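Here is a minimal sketch of the first alternative (queue URL hypothetical): an EventBridge-scheduled handler that drains the queue and builds one file per run. Error handling and the SFTP upload are left out.

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # hypothetical

def handler(event, context):
    """Invoked every two hours by an EventBridge schedule rule."""
    bodies = []
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,  # SQS hard limit per ReceiveMessage call
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue drained
        bodies.extend(m["Body"] for m in messages)
        sqs.delete_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {"Id": str(i), "ReceiptHandle": m["ReceiptHandle"]}
                for i, m in enumerate(messages)
            ],
        )
    # Build one file from `bodies` and push it to the SFTP server here.
    return {"messages_collected": len(bodies)}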
Note on scaling:
<Edit (mid-Jan 2023): AWS Lambda now supports SQS Maximum Concurrency>
AWS Lambda now supports setting Maximum Concurrency for the Amazon SQS event source, a more direct and less fiddly way to control concurrency than reserved concurrency. The Maximum Concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. The valid range is 2-1000 concurrent instances.
The create and update Event Source Mapping APIs now have a ScalingConfig option for SQS:
aws lambda update-event-source-mapping \
--uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
--scaling-config '{"MaximumConcurrency":2}' # valid range is 2-1000
</Edit>
With the SQS Event Source Mapping integration you can tweak the batch settings, but ultimately the Lambda service is in charge of Lambda scaling. As the AWS Blog Understanding how AWS Lambda scales with Amazon SQS standard queues says:
Lambda consumes messages in batches, starting at five concurrent batches with five functions at a time. If there are more messages in the queue, Lambda adds up to 60 functions per minute, up to 1,000 functions, to consume those messages.
You could theoretically restrict the number of concurrent Lambda executions with reserved concurrency, but you would risk dropped messages due to throttling errors.
You could try to set the ReservedConcurrency of the function to 1. That may help. See the docs for reference.
A simple solution would be to create a CloudWatch Events rule (similar to a cron job) that triggers your Lambda function every two hours. In the Lambda function, you call ReceiveMessage on the queue until you get all messages, process them, and afterwards delete them from the queue. The drawback is that there may be too many messages to process within Lambda's 15-minute maximum execution time, so that's something you'd have to manage.

How to implement Amazon SQS (fifo)-lambda with message processing EXACTLY ONE BY ONE

I have a use case where I have an Amazon SQS FIFO queue with a Lambda function. I need to make sure that the FIFO queue triggers the Lambda only when the previous Lambda execution has completed (and also that the events come in order). According to the AWS docs, FIFO supports exactly-once processing, but it does not mention anywhere that it will not push more events to the Lambda until the first message is completely processed.
I need to make sure that the next message is processed only when the previous message has been completely processed by the Lambda function.
Is there a way to ensure that message 2 is only processed by the Lambda when message 1 has been completely processed?
FIFO supports exactly-once processing, but it does not mention anywhere
that it will not push more events to the Lambda until the first message
is completely processed.
SQS never pushes anything anywhere. You have to poll SQS for messages. When you configure Lambda integration with SQS, Lambda is actually running a process behind the scenes to poll SQS for you.
AWS FIFO queues allow you to force messages to be processed in order by specifying a Message Group ID. When you specify the same Message Group ID for multiple messages, the FIFO queue will only make one of those messages available at a time, in first-in-first-out order. Only after the first message is removed from the queue is the second message made available, and so on.
In addition to this, you should configure AWS Lambda SQS integration with a Batch Size of 1, so that it doesn't try to wait for multiple messages to be available before processing. And you could configure the Reserved Concurrency on the Lambda function to 1, as mentioned in the other answer, so that only one instance of the Lambda function can be running at a time.
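To illustrate the Message Group ID part, here is a small sketch of sending to a FIFO queue (the queue URL and group ID are hypothetical); messages that share a group ID are handed out strictly one after another:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"  # hypothetical

for i in range(3):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=f"event-{i}",
        MessageGroupId="orders",              # same group => strict in-order delivery
        MessageDeduplicationId=f"event-{i}",  # required unless content-based dedup is enabled
    )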
It is actually pretty easy to do this. The documentation does not call it out because, by default, Lambda simply uses the available account concurrency and handles as many messages in parallel as possible.
You can influence this by setting the reserved concurrency for the lambda function to 1. This will ensure no more than 1 lambda function will be executed at the same time.
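Setting the reserved concurrency to 1 can be done like this (the function name is a placeholder):

import boto3

lambda_client = boto3.client("lambda")

# Cap the function at a single concurrent execution
lambda_client.put_function_concurrency(
    FunctionName="my-function",  # hypothetical name
    ReservedConcurrentExecutions=1,
)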

AWS SQS: Will multiple lambda function workers pick up a copy of single message from multiple servers?

As per aws documentation,
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html#standard-queues-at-least-once-delivery
Amazon SQS stores copies of messages on multiple servers for redundancy and high availability.
My Case:
I have integrated my standard queue with a Lambda function. Whenever a new message arrives in the queue, the Lambda function is invoked.
Since AWS Lambda continues to increase the number of concurrent function executions according to the queue size,
if my queue size is 1000, the number of concurrent executions of the Lambda function will also increase. In that case, is there any chance of multiple workers processing the same message copy by receiving it from multiple servers at the same time?
I have gone through the question:
AWS: multiple instances reading SQS
But there I didn't find the concept of storing message copies on multiple servers addressed.
Even a single instance of your Lambda function may receive a duplicate message. SQS makes a "best effort" to deliver a message only once, but makes no guarantees. If you want a guarantee that a message will be delivered only once, you would need to use SQS FIFO queues, but those don't support Lambda triggers at this time.
The possibility of some code receiving the same message arises from the possibility that simultaneous ReceiveMessage calls might hit separate SQS servers.
While it isn't documented, I would suspect that the mechanism that triggers a Lambda function from an SQS queue (which is quite new) would not have this behavior since AWS is responsible for triggering Lambda, as opposed to an external process calling into SQS.
However, there is no documentation either way to confirm whether it might, or won't, happen.
Standard queues can deliver duplicate messages. The question you have posted describes a standard messaging-queue issue that everyone faces while working with JMS or any other message queue. We also had a similar situation, where the destination service was ECS instead of Lambda. We use a unique identifier in the database to check the status of a submission and verify whether the message has already been processed by another node.
FIFO could have been a choice to solve your problem, but currently it is not supported as a Lambda event source.
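Here is a sketch of the idempotency check described above, using a DynamoDB conditional write keyed on the SQS message ID; the table name is hypothetical. The first worker to claim the ID wins, and duplicate deliveries are skipped.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
TABLE = "processed-messages"  # hypothetical table with partition key "message_id"

def already_processed(message_id: str) -> bool:
    """Atomically claim the message ID; return True if another worker already did."""
    try:
        dynamodb.put_item(
            TableName=TABLE,
            Item={"message_id": {"S": message_id}},
            ConditionExpression="attribute_not_exists(message_id)",
        )
        return False
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return True
        raise

def handler(event, context):
    for record in event["Records"]:
        if already_processed(record["messageId"]):
            continue  # duplicate delivery, skip
        # Process the message body here
        print(record["body"])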

Can I limit concurrent invocations of an AWS Lambda?

I have a Lambda function that’s triggered by a PUT to an S3 bucket.
I want to limit this Lambda function so that it’s only running one instance at a time – I don’t want two instances running concurrently.
I’ve had a look through the Lambda configuration and docs, but I can’t see anything obvious. I can about writing my own locking system, but it would be nice if this was already a solved problem.
How can I limit the number of concurrent invocations of a Lambda?
AWS Lambda now supports concurrency limits on individual functions:
https://aws.amazon.com/about-aws/whats-new/2017/11/set-concurrency-limits-on-individual-aws-lambda-functions/
I would suggest you use Kinesis Streams (or alternatively DynamoDB + DynamoDB Streams, which essentially has the same behavior).
You can see a Kinesis Stream as a queue. The good part is that you can use a Kinesis Stream as a trigger for your Lambda function. So anything that gets inserted into this queue will automatically be passed over to your function, in order. So you will be able to process those S3 events one by one, one Lambda execution after the other (one instance at a time).
In order to do that, you'll need to create a Lambda function with the simple purpose of getting S3 Events and putting them into a Kinesis Stream. Then you'll configure that Kinesis Stream as your Lambda Trigger.
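A sketch of that forwarding function (the stream name is hypothetical): it takes each S3 record from the notification event and puts it into the Kinesis stream, using the object key as the partition key.

import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "s3-put-events"  # hypothetical stream

def handler(event, context):
    """Triggered by S3 PUT notifications; forwards each record to Kinesis."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(record).encode("utf-8"),
            PartitionKey=key,  # same key => same shard => ordered per object
        )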
When you configure the Kinesis Stream as your Lambda Trigger I suggest you to use the following configuration:
Batch size: 1
This means that your Lambda will be called with only one event from Kinesis. You can select a higher number and you'll get a list of events of that size (for example, if you want to process the last 10 events in one Lambda execution instead of 10 consecutive Lambda executions).
Starting position: Trim horizon
This means it'll behave as a queue (FIFO)
A bit more info on AWS May Webinar Series - Streaming Data Processing with Amazon Kinesis and AWS Lambda.
I hope this helps anyone with a similar problem.
P.S. Bear in mind that Kinesis Streams have their own pricing. Using DynamoDB + DynamoDB Streams might be cheaper (or even free due to the non-expiring Free Tier of DynamoDB).
No, this is one of the things I'd really like to see Lambda support, but currently it does not. One of the problems is that if there were a lot of S3 PUT operations happening AWS would have to queue up all the Lambda invocations somehow, and there is currently no support for that.
If you built a locking mechanism into your Lambda function, what would you do with the requests you don't process due to a lock? Would you just throw those S3 notifications away?
The solution most people recommend is to have S3 send the notifications to an SQS queue, and then have your Lambda function scheduled to run periodically, like once a minute, and check if there is an item in the queue that needs to be processed.
Alternatively, have S3 send the notifications to SQS and just have a t2.nano EC2 instance with a single-threaded service polling the queue.
I know this is an old thread, but I ran across it trying to figure out how to make sure my time-sequenced SQS messages were processed in order coming out of a FIFO queue, and not processed simultaneously or out of order by multiple Lambda instances running in parallel.
Per the documentation:
For FIFO queues, Lambda sends messages to your function in the order
that it receives them. When you send a message to a FIFO queue, you
specify a message group ID. Amazon SQS ensures that messages in the
same group are delivered to Lambda in order. Lambda sorts the messages
into groups and sends only one batch at a time for a group. If your
function returns an error, the function attempts all retries on the
affected messages before Lambda receives additional messages from the
same group.
Your function can scale in concurrency to the number of active message
groups.
Link: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
So essentially, as long as you use a FIFO queue and submit your messages that need to stay in sequence with the same MessageGroupID, SQS/Lambda automatically handles the sequencing without any additional settings necessary.
Have the S3 "Put events" cause a message to be placed on the queue (instead of involving a lambda function). The message should contain a reference to the S3 object. Then SCHEDULE a lambda to "SHORT POLL the entire queue".
PS: S3 events cannot trigger a Kinesis Stream... only SQS, SNS, and Lambda (see http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#supported-notification-destinations). Kinesis Streams are expensive and used for real-time event handling.

Read SQS queue from AWS Lambda

I have the following infrastructure:
I have an EC2 instance with a NodeJS+Express process listening on a port for messages (process 1). Every time the process receives a message it sends it to an SQS queue. Then I have another process in the same machine reading the queue using long polling (process 2). When it finds a message in the queue it inserts the data in a MariaDB database sitting on an RDS instance.
(Just to clarify, messages are generated by users, they send a chunk of data which can contain arbitrary information to the endpoint where the process 1 is listening)
Now I want to put the process that reads the SQS (process 2) in a Lambda function so that the process that writes to the queue and the one that reads from the queue are completely independent. The problem is that I don't know if this is possible.
I know that Lambda functions are invoked in response to an event, and the events supported at the moment are S3, SNS, SES, DynamoDB, Kinesis, Cognito, CloudWatch and CloudFormation, but NOT SQS.
I was thinking of using SNS notifications to invoke the Lambda function, so that every time a message is pushed to the queue an SNS notification is fired and invokes the Lambda function, but after playing a bit with it I've realised that it is not possible to create an SNS notification from SQS; it's only possible to write SNS notifications to the queue.
Right now I'm a bit stuck because I don't know how to continue. I have the feeling that it is not possible to create this infrastructure due to the current limitations of the AWS services. Is there another way to do what I want, or am I at a dead end?
Just to extend my question with some research I've made, this GitHub repo shows how to read an SQS queue from a Lambda function, but the Lambda function works only if it is fired from the command line:
https://github.com/robinjmurphy/sqs-to-lambda
In the readme, the author mentions the following:
Update: Lambda now supports SNS notifications as an event source,
which makes this hack entirely unnecessary for SNS notifications. You
might still find it useful if you like the idea of using a Lambda
function to process jobs on an SQS queue.
But I think this doesn't solve my problem: an SNS notification can invoke the Lambda function, but I don't see how I can create a notification when a message is received in the SQS queue.
Thanks
There are a couple of strategies that can be used to connect the dots, (a)synchronously or in a run-sleep-run fashion, to keep data flowing between SNS, SQS and Lambda.
Strategy 1: Have a Lambda function listen to SNS and process messages in real time. [Please note that an SQS queue can subscribe to an SNS topic, which may be helpful for logging / auditing / retry handling.]
Strategy 2: Given that your data is sourced into an SQS queue, you can try using 2 Lambda functions [Feeder & Worker].
The Feeder would be a scheduled Lambda function whose job is to take items
from SQS (if any) and publish them to an SNS topic (and keep doing so); a sketch of it appears below.
The Worker would be subscribed to the SNS topic and would do the actual data processing.
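A rough sketch of the Feeder (queue URL and topic ARN are hypothetical): a scheduled function that drains whatever is in the queue and republishes each message to the SNS topic the Worker is subscribed to.

import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest"  # hypothetical
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:work-items"            # hypothetical

def handler(event, context):
    """Scheduled Feeder: move any pending SQS messages onto the SNS topic."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            sns.publish(TopicArn=TOPIC_ARN, Message=msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])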
We can now use SQS messages to trigger AWS Lambda functions. Moreover, it is no longer required to run a message-polling service or create an SQS-to-SNS mapping.
Further details:
https://aws.amazon.com/blogs/aws/aws-lambda-adds-amazon-simple-queue-service-to-supported-event-sources/
AWS SQS is one of Amazon's oldest products and only supported polling (long and short) up until June 2018. As mentioned in this answer, AWS SQS now supports triggering Lambda functions when new messages arrive in SQS. A complete tutorial for this is provided in this document.
I used to tackle this problem using different mechanisms, and given below are some approaches you can use.
You can develop a simple polling application in Lambda and use AWS CloudWatch to invoke it every 5 minutes or so. You can make this near real-time by using CloudWatch Events to invoke the Lambda at short intervals. Use this tutorial or this tutorial for this purpose. (This could cost more in Lambda invocations.)
You can consider SQS redundant if you don't need to persist the messages or guarantee the order of delivery. You can use AWS SNS (Simple Notification Service) to directly invoke a Lambda function and do whatever processing is required. Use this tutorial for this purpose. This happens in real time. But the main drawback is the limit on the number of Lambdas that can run concurrently per region at a given time, so please read this and understand the limitation before following this approach. Note, however, that standard SNS does not guarantee the order of delivery. SNS can also directly call an HTTP endpoint and store the message in your DB.
I had a similar situation (and now have a working solution deployed). I addressed it in the following manner:
i.e., publishing events to SNS, which then get fanned out to both Lambda and SQS.
NOTE: This is not applicable to the events that have to be processed in a certain order.
There are some gotchas (with possible solutions), such as:
race condition: the Lambda might get invoked before the message is deposited into the queue
the distributed nature of the SQS queue may lead to a receive call returning no messages even though there is a message (see note 1).
The solution to both cases is to long-poll the SQS queue, but this does make your Lambda bill more expensive.
Note 1:
Short poll is the default behavior where a weighted random set of machines is sampled on a ReceiveMessage call. This means only the messages on the sampled machines are returned. If the number of messages in the queue is small (less than 1000), it is likely you will get fewer messages than you requested per ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response; in which case you should repeat the request.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html
We had some similar requirements so we ended up building a library and open sourcing it to help with SQS to Lambda async. I'm not sure if this fills your particular set of requirements, but thought it might be worth a look: https://read.iopipe.com/sqs-lambda-teaming-up-92c4096be49c