I have 3 SQS queues:
HighPQueue1
MediumPQueue2
LowPQueue3
Messages are inserted into the queues based on the API Gateway REST API call. If the message is of high priority, it goes to HighPQueue1. If the message is medium, it goes to MediumPQueue2. If the message is low, it goes to LowPQueue3.
The messages from these 3 queues have to be read in priority order. How can I do that using AWS?
I have thought about creating a Lambda and then checking if message is available first in HighPQueue1, then in MediumPQueue2 and then in LowPQueue3. Would that be the right approach?
I have to trigger an AWS Step Functions execution for each SQS message, depending on the priority. I want to limit my Step Functions to 10 concurrent executions at any given point in time.
You won't be able to use the Lambda integration for this, but you could still use Lambda if you want to start a new invocation every so often. I think the pattern you are suggesting is correct (check high, then medium, then low). Here are some things to keep in mind.
Make sure when you are checking the medium and low queues that you only request one message at a time if it's important that the high queue messages are processed quickly.
If you process any message, start over. In other words, don't make the mistake of processing a high item and then checking the medium queue. Always start over.
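A minimal sketch of that loop, assuming boto3 and hypothetical queue URLs; it checks the queues in priority order, processes at most one message, and then restarts from the high queue:

```python
import time

import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URLs -- substitute your own.
QUEUE_URLS = [
    "https://sqs.us-east-1.amazonaws.com/123456789012/HighPQueue1",
    "https://sqs.us-east-1.amazonaws.com/123456789012/MediumPQueue2",
    "https://sqs.us-east-1.amazonaws.com/123456789012/LowPQueue3",
]

def process(message):
    # Placeholder: start the Step Functions execution for this message here.
    print("processing", message["Body"])

def poll_once():
    """Check the queues in priority order and process at most one message."""
    for url in QUEUE_URLS:
        resp = sqs.receive_message(QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=0)
        for message in resp.get("Messages", []):
            process(message)
            sqs.delete_message(QueueUrl=url, ReceiptHandle=message["ReceiptHandle"])
            return True  # processed something: start over from the high queue
    return False  # all queues were empty

while True:
    if not poll_once():
        time.sleep(1)  # back off briefly when there is nothing to do
```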
Lambda may not be your best option if you are polling queues. You'll effectively have lambda compute running all the time. That still may be okay if this is the only workload running and you are staying within, or close to within, the free tier.
Consider handling multiple requests at the same time. Is there something in your downstream infrastructure that limits you to processing one message at a time? If not, I would skip this model entirely and go with one queue backed by Lambda, running processes in parallel when multiple messages come in.
Currently I have a process where a Lambda (A) gets triggered which has logic to find out which customers another Lambda (B) needs to run for (via a queue). For any run there could be 3k to 4k messages placed on the SQS queue by Lambda A to be picked up by Lambda B for processing. As Lambda B communicates with an external API, its concurrency is set to 10 so as not to overload the API. The whole process completes in 35 to 45 minutes.
My problem is how to tell when all the processing is complete?
If you don't need timely information, you could check out the CloudWatch Metrics that SQS offers, e.g.:
ApproximateNumberOfMessagesVisible
The number of messages available for retrieval from the queue.
Reporting Criteria: A non-negative value is reported if the queue is active.
and
ApproximateNumberOfMessagesNotVisible
The number of messages that are in flight. Messages are considered to be in flight if they have been sent to a client but have not yet been deleted or have not yet reached the end of their visibility window.
Reporting Criteria: A non-negative value is reported if the queue is active.
If the sum of these two metrics hits zero, no messages are in the Queue, and processing should be done.
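As a sketch, the same numbers can also be read on demand straight from the queue with GetQueueAttributes (the attribute names differ slightly from the CloudWatch metric names; the queue URL below is hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/WorkQueue"  # hypothetical

def queue_is_drained(queue_url):
    """Return True when no messages are visible or in flight.

    The values are approximate, so treat a single zero reading as a hint
    and confirm with a second check a little later.
    """
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=[
            "ApproximateNumberOfMessages",            # visible
            "ApproximateNumberOfMessagesNotVisible",  # in flight
        ],
    )["Attributes"]
    visible = int(attrs["ApproximateNumberOfMessages"])
    in_flight = int(attrs["ApproximateNumberOfMessagesNotVisible"])
    return visible + in_flight == 0
```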
If you need more timely information, the producer of the messages could increment a counter item in DynamoDB with the number of messages added, and each Lambda decrements that counter once it's done. You could then add a Lambda to the DynamoDB Stream of that table with a filter and do something when the value changes to zero again. This is, however, much more complex.
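A rough sketch of that counter, assuming a hypothetical DynamoDB table `JobCounters` keyed by `job_id`:

```python
import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "JobCounters"  # hypothetical table with partition key "job_id"

def add_messages(job_id, count):
    """Producer (Lambda A): record how many messages were enqueued."""
    dynamodb.update_item(
        TableName=TABLE,
        Key={"job_id": {"S": job_id}},
        UpdateExpression="ADD #r :n",
        ExpressionAttributeNames={"#r": "remaining"},
        ExpressionAttributeValues={":n": {"N": str(count)}},
    )

def finish_message(job_id):
    """Consumer (Lambda B): atomically decrement after finishing one message."""
    resp = dynamodb.update_item(
        TableName=TABLE,
        Key={"job_id": {"S": job_id}},
        UpdateExpression="ADD #r :n",
        ExpressionAttributeNames={"#r": "remaining"},
        ExpressionAttributeValues={":n": {"N": "-1"}},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["remaining"]["N"])  # 0 means the whole run is done
```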
A third option could be to transform the whole thing into a Step Functions state machine and use a Map state with a concurrency limit to work on the tasks. The drawback is that the length of the list it can work on is limited, as far as I know.
I'm trying to understand how SQS Lambda triggers work when polling for messages from the queue.
Criteria
I'm trying to make sure that not more than 3 messages are processed within a period of 1 second.
Idea
My idea is to set the trigger BatchSize to 3 and set the ReceiveMessageWaitTimeSeconds of the queue to 1 second. Am I thinking about this correctly?
Edit:
I did some digging, and it looks like I can set a concurrency limit on my Lambda. If I set my Lambda concurrency limit to one, that ensures only one batch of messages gets processed at a time. If my Lambda runs for a second, then the next batch of messages gets processed at least a second later. The gotcha here is that the Lambda service auto-scales the number of asynchronous pollers on the queue based on message volume. This means the Lambdas can potentially throttle when a large number of messages comes in. When the Lambdas throttle, the message goes back to the queue until it eventually ends up in the DLQ.
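Purely as a sketch, the three settings described in this edit could be applied with boto3 like so (function name, queue ARN, and queue URL are hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

# Cap the function at one concurrent execution.
lambda_client.put_function_concurrency(
    FunctionName="my-consumer",  # hypothetical
    ReservedConcurrentExecutions=1,
)

# Deliver at most 3 messages per invocation.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:MyQueue",  # hypothetical
    FunctionName="my-consumer",
    BatchSize=3,
)

# Enable long polling on the queue itself.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue",
    Attributes={"ReceiveMessageWaitTimeSeconds": "1"},
)
```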
ReceiveMessageWaitTimeSeconds is used for long polling. It is the length of time, in seconds, for which a ReceiveMessage action waits for messages to arrive (docs). Long polling does not mean that your client will wait for the full length of the time set. If you have it set to one second but the queue already has enough messages, your client will consume them immediately and will try to consume again as soon as processing is completed.
If you want to consume a certain number of messages at a certain rate, you have to do this in your application (for example, consume messages on a scheduled basis). SQS by itself does not provide any kind of rate limiting similar to what you want to accomplish.
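A minimal sketch of such a paced consumer, assuming boto3, a hypothetical queue URL, and a placeholder handler; it receives at most 3 messages per 1-second window:

```python
import time

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue"  # hypothetical

def handle(message):
    # Placeholder for real processing.
    print("got", message["Body"])

while True:
    window_start = time.monotonic()
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=3,  # no more than 3 per window
        WaitTimeSeconds=1,      # long poll briefly when the queue is empty
    )
    for message in resp.get("Messages", []):
        handle(message)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    # Sleep out the remainder of the 1-second window, if any.
    elapsed = time.monotonic() - window_start
    if elapsed < 1.0:
        time.sleep(1.0 - elapsed)
```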
If using SQS as an event source for a Lambda function, is there a way to limit the maximum number of "active" messages to x? So, imagine there's an SQS queue with 1000 messages, but instead of trying to process as many messages as possible (up to the default concurrency limit of 1000), we only want to process up to x messages at the same time. This obviously means that it'll take more time to process all messages, but it would give us the ability to better control e.g. writes to a database.
Also, in case a message can't be processed (due to e.g. an error in the Lambda function), is the message appended to the end of the queue (so all other messages come first), or is there a way to prioritise it after a certain waiting time (visibility timeout)?
Many thanks
As for throttling a queue, you could have added a delivery delay or made it long polling, but as yours is event driven this isn't a choice. So this leaves you with throttling your Lambda to however many concurrent executions you want.
As for the messages which can't be processed, that depends on whether you are using:
- a standard queue, which won't apply any prioritization to which message is picked up next, or
- a FIFO queue, which will try to process the message again, as it would be next in line chronologically.
But if you caught the error, you should send the message straight to a dead-letter queue to prevent unnecessary retries.
Although by throttling you're removing much of the scalability of AWS, which goes against its native architecture. I'd recommend going back to the database and seeing if any work can be improved there instead, to avoid throttling.
From Reserving Concurrency for a Lambda Function - AWS Lambda:
You can configure a function with reserved concurrency to guarantee that it can always reach a certain level of concurrency. Reserving concurrency also limits the maximum concurrency for the function.
...
Your function can't scale out of control – Reserved concurrency also limits your function from using concurrency from the unreserved pool, capping its maximum concurrency. Reserve concurrency to prevent your function from using all the available concurrency in the region, or from overloading downstream resources.
If a message is not processed within the visibility timeout period, it is placed back on the queue. There is no guarantee of ordering of messages in Amazon SQS unless you are using a FIFO queue, which has further limitations on in-flight messages.
I am using SQS and Lambda to process some specific requests. Each request can contain anywhere from 1 message up to hundreds of thousands of messages. It's working fine; the only issue is that small requests sometimes have to wait for large requests that are already in the queue (because all concurrent Lambdas are taken, and I don't want to increase my Lambda concurrency). So I'm thinking of having two queues, one for small requests and one for large requests, so the small requests can be processed faster. But the challenge is how to assign the Lambda concurrency to each queue. Right now I set the Lambda concurrency to 30, but if a large request comes in, all 30 Lambdas would be busy. Is there any way to tell Lambda to split its concurrency (let's say 20 for large and 10 for small requests) based on the SQS queue that triggers it? Or is there any other best practice to implement this kind of requirement?
You can have two copies of your function: one for large requests with 20 reserved concurrency, and a second for small requests with 10 reserved concurrency.
Each function is triggered by its corresponding queue - this is the most common approach to taking care of priority messages.
However, the downside is that you always reserve 10 concurrency even if the priority message queue is empty.
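A minimal sketch of that split with boto3 (function names and queue ARNs are hypothetical; both functions would deploy the same handler code):

```python
import boto3

lambda_client = boto3.client("lambda")

# Split the 30 total: 20 for large requests, 10 for small ones.
lambda_client.put_function_concurrency(
    FunctionName="process-large-requests",  # hypothetical
    ReservedConcurrentExecutions=20,
)
lambda_client.put_function_concurrency(
    FunctionName="process-small-requests",  # hypothetical
    ReservedConcurrentExecutions=10,
)

# Wire each copy to its own queue.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:LargeRequests",  # hypothetical
    FunctionName="process-large-requests",
)
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:SmallRequests",  # hypothetical
    FunctionName="process-small-requests",
)
```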
Is there any way to tell lambda to use concurrent lambda partially
No, once deployed they will run as configured.
Plus, I don't think this should be solved at the Lambda level. You can control your active queue length by having a multi-tiered queue. An overly simplified solution would be:
- Create 2 wait queues, one each for large & small messages.
- Create one active queue which feeds messages into your Lambda consumer.
- Producers send requests to the wait queues.
- Write logic to move messages from the wait queues to the active queue, as in the sketch below. This piece of code should have the logic to distribute messages based on your business requirements.
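A rough sketch of that mover, assuming boto3 and hypothetical queue URLs; this version keeps the active queue topped up to a fixed backlog while favouring the small-request wait queue:

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue URLs.
SMALL_WAIT = "https://sqs.us-east-1.amazonaws.com/123456789012/SmallWaitQueue"
LARGE_WAIT = "https://sqs.us-east-1.amazonaws.com/123456789012/LargeWaitQueue"
ACTIVE = "https://sqs.us-east-1.amazonaws.com/123456789012/ActiveQueue"

MAX_ACTIVE = 30  # keep at most this many messages in the active queue

def move_one(source_url):
    """Move a single message from a wait queue to the active queue."""
    resp = sqs.receive_message(QueueUrl=source_url, MaxNumberOfMessages=1)
    for message in resp.get("Messages", []):
        sqs.send_message(QueueUrl=ACTIVE, MessageBody=message["Body"])
        sqs.delete_message(QueueUrl=source_url, ReceiptHandle=message["ReceiptHandle"])
        return True
    return False

def top_up():
    """Refill the active queue, preferring small requests over large ones."""
    attrs = sqs.get_queue_attributes(
        QueueUrl=ACTIVE,
        AttributeNames=["ApproximateNumberOfMessages", "ApproximateNumberOfMessagesNotVisible"],
    )["Attributes"]
    backlog = int(attrs["ApproximateNumberOfMessages"]) + int(attrs["ApproximateNumberOfMessagesNotVisible"])
    while backlog < MAX_ACTIVE:
        if not (move_one(SMALL_WAIT) or move_one(LARGE_WAIT)):
            break  # both wait queues are empty
        backlog += 1
```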
I understand the concept of delay queue of Amazon SQS, but I wonder why it is useful.
What's the usage of SQS delay queue?
Thanks
One use case which I can think of is distributed applications with eventual-consistency semantics. The system consuming the message may have a dependency, like a correlation identifier, that needs to be available, and hence may need to wait for a certain guaranteed duration before seeing the correlation data. In this case, it makes sense for the message to be delayed for a certain duration.
Like you, I was confused as to a use case for delay queues, until I stumbled across one in my own work. My application needs to have an internal queue with each item waiting at least one minute between each check for completion.
So instead of having to manage a "last-checked-time" on every object, I just shove the object's ID into an SQS queue message with a delay of 60 seconds, and my main loop then becomes a simple long-poll against the queue.
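That re-enqueue is a one-liner with boto3 (queue URL and message body are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/RecheckQueue"  # hypothetical

# The message stays invisible for 60 seconds, then shows up for the main loop.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody="object-1234",  # hypothetical object ID
    DelaySeconds=60,
)
```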
A few off the top of my head:
Emails - Let's say you have a service that sends reminder emails triggered from queue messages. You'd have to delay enqueueing the message in that case.
Race conditions - Delivery delays can be used to overcome race conditions in distributed systems. For example, a service could insert a row into a table and send a message about its availability to other services. They can't use the new entry just yet, so you have to delay publishing the SQS message.
Handling retries - Sometimes if a message fails you want to retry with exponential backoff. This requires re-enqueuing the message with longer delays, as in the sketch below.
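A minimal sketch of that retry pattern, assuming boto3 and a hypothetical queue; note that SQS caps DelaySeconds at 900 (15 minutes):

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/RetryQueue"  # hypothetical

def requeue_with_backoff(payload, attempt):
    """Re-enqueue a failed message, doubling the delay on each attempt."""
    delay = min(2 ** attempt, 900)  # SQS allows at most 900 seconds of delay
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"payload": payload, "attempt": attempt + 1}),
        DelaySeconds=delay,
    )
```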
I've built a suite of APIs to make queue message scheduling easy. You can call our APIs to schedule queue messages, and to cancel, edit, and check on the status of such messages. Think of it like a scheduler microservice.
www.schedulerapi.com
If you are looking for a solution, let me know. I've built these schedulers before at work for delivering emails at high scale, so I have experience with similar use cases.
One use-case can be:
Think of a time-critical operation like a scheduled equity trade order.
Suppose one of your systems fetches all the orders scheduled in the next 60 minutes and puts them in a queue (to be fetched by another subsystem).
If you sent these orders directly, they would be visible in the queue immediately and would be processed in whatever order they arrive.
But most likely they would not execute at the exact time (hour:minute:second) the customer wanted, and this would impact the outcome.
So to solve this, the first subsystem adds a delay in seconds (the difference between the current time and the execution time), so each message becomes visible only after that much delay, i.e. at the exact time the user wanted.
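A minimal sketch of computing that delay, assuming boto3 and a hypothetical queue; one caveat worth noting is that SQS caps DelaySeconds at 900 (15 minutes), so orders further out than that would need a shorter fetch window or a re-enqueue hop:

```python
from datetime import datetime, timezone

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/TradeOrders"  # hypothetical

def enqueue_order(order_body, execute_at):
    """Hide the order until its scheduled execution time."""
    delay = int((execute_at - datetime.now(timezone.utc)).total_seconds())
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=order_body,
        DelaySeconds=max(0, min(delay, 900)),  # SQS allows 0-900 seconds
    )
```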