I want to allow only 200 messages in the queue.
All others should move to the dead-letter queue.
We simply don't have the capacity to process more messages, due to our dependency on other services.
I don't think it is possible to limit the number of messages in a queue. You can set a limit to the size of a message in a queue but not the number of messages.
Source: SetQueueAttributes
You definitely can't limit the number of messages in the queue.
What is the nature of your application? Maybe there is a better solution if we knew more about why you need to limit the queue size...
SQS does not have such a limiting feature.
So don't try to do it at the SQS level. Instead, implement this limiting logic as you're pulling messages from the queue.
Keep track of the messages you pull from the queue and send to the 3rd party service. Once you hit your limit (of 20?), then junk the message.
Have a counter of messages that are "being processed".
1. Pull a message from the queue.
2. Check the counter; if it's less than 20, increment the counter and send the message to the 3rd-party service.
3. When the 3rd-party service call returns, decrement the counter.
4. When the check in step 2 finds the counter at 20, junk the message.
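In Python, the counter logic above could be sketched roughly like this (the limit of 20 and the `send_to_third_party` / `junk` callables are stand-ins for your own code):

```python
import threading

LIMIT = 20  # maximum number of messages "being processed" at once (assumed)

class InFlightLimiter:
    """Counter of messages currently being processed (steps 1-4 above)."""
    def __init__(self, limit):
        self.limit = limit
        self._count = 0
        self._lock = threading.Lock()

    def try_acquire(self):
        # Step 2: if the counter is below the limit, increment and proceed.
        with self._lock:
            if self._count < self.limit:
                self._count += 1
                return True
            return False  # step 4: already at the limit

    def release(self):
        # Step 3: the 3rd-party call returned; decrement the counter.
        with self._lock:
            self._count -= 1

def handle(message, limiter, send_to_third_party, junk):
    """Body of the pull loop: forward the message or junk it."""
    if limiter.try_acquire():
        try:
            send_to_third_party(message)
        finally:
            limiter.release()
    else:
        junk(message)
```

This is only a sketch of the throttling idea; in a real consumer the message would come from `ReceiveMessage` and "junk" would mean deleting it (or letting the redrive policy move it).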
UPDATE:
If you don't have delayed visibility configured for the messages, you could check ApproximateNumberOfMessagesVisible via the CloudWatch API before allowing a message to go through.
The number of messages available for retrieval from the queue.
Units: Count
Valid Statistics: Average
If you do have a delayed visibility greater than 0, you could do two checks, the second using ApproximateNumberOfMessagesNotVisible.
If this solution doesn't work (yes, this seems a bit much), you could query NumberOfMessagesSent and NumberOfMessagesDeleted and compute the number of messages still in the queue from the difference.
So the (pseudo) "code" would look like:
if (ApproximateNumberOfMessagesVisible < 200)
    // send message

// or:
var remaining = NumberOfMessagesSent - NumberOfMessagesDeleted;
if (remaining < 200)
    // send message
HERE is the documentation of the above calls
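As a rough Python sketch of the pseudocode above — the metric values here are made up; in practice they would come from the CloudWatch GetMetricStatistics API (or from SQS GetQueueAttributes):

```python
QUEUE_LIMIT = 200  # the limit from the question

def may_send(visible, not_visible=0):
    """Gate from the pseudocode: allow a send only while the (approximate)
    number of messages in the queue is below the limit. `not_visible`
    covers the second check when delayed visibility is in play."""
    return (visible + not_visible) < QUEUE_LIMIT

def remaining_from_counters(sent, deleted):
    """Alternative gate: estimate messages still in the queue from the
    NumberOfMessagesSent / NumberOfMessagesDeleted counters."""
    return sent - deleted

# Example with made-up metric values:
assert may_send(visible=150, not_visible=30)       # 180 < 200 -> send
assert not may_send(visible=190, not_visible=20)   # 210 >= 200 -> hold
assert remaining_from_counters(sent=500, deleted=350) == 150
```

Keep in mind these metrics are approximate and delayed, so this only limits the queue size loosely, not as a hard guarantee.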
Information Check?
After a second look, I do not believe the below configuration will solve this problem. I will leave it in the answer until confirmed incorrect.
I believe this is possible during the set up process by adjusting the Dead Letter Queue Settings
In the SQS set up you will see: Use Redrive Policy that states:
Send messages into a dead letter queue after exceeding the Maximum Receives.
And just below that: Maximum Receives that states:
The maximum number of times a message can be received before it is sent to the Dead Letter Queue.
This setting should send the overflow to a secondary queue, which is an optional value. In other words, you could enable the Redrive Policy, leave the Dead Letter Queue blank, and set Maximum Receives to 200.
Currently I have a process where a Lambda (A) gets triggered; it contains logic to work out which customers another Lambda (B) needs to run for (via a queue). For any run there could be 3k to 4k messages placed on the SQS queue by Lambda A to be picked up by Lambda B for processing. As Lambda B communicates with an external API, its concurrency is set to 10 so as not to overload the API. The whole process completes in 35 to 45 minutes.
My problem is how to tell when all the processing is complete?
If you don't need timely information, you could check out the CloudWatch Metrics that SQS offers, e.g.:
ApproximateNumberOfMessagesVisible
The number of messages available for retrieval from the queue.
Reporting Criteria: A non-negative value is reported if the queue is active.
and
ApproximateNumberOfMessagesNotVisible
The number of messages that are in flight. Messages are considered to be in flight if they have been sent to a client but have not yet been deleted or have not yet reached the end of their visibility window.
Reporting Criteria: A non-negative value is reported if the queue is active.
If the sum of these two metrics hits zero, no messages are in the Queue, and processing should be done.
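A minimal sketch of that check in Python, assuming you have some way of fetching the two metric values (here simulated with canned samples rather than a live CloudWatch call):

```python
import time

def wait_until_drained(fetch_counts, poll_seconds=0, max_polls=100):
    """Poll until visible + not-visible messages hit zero.
    fetch_counts() returns (visible, not_visible); in a real system it
    would query CloudWatch (or SQS GetQueueAttributes). Returns True if
    the queue drained within max_polls, else False."""
    for _ in range(max_polls):
        visible, not_visible = fetch_counts()
        if visible + not_visible == 0:
            return True
        time.sleep(poll_seconds)
    return False

# Simulated queue that drains over three polls:
samples = iter([(40, 10), (5, 3), (0, 0)])
assert wait_until_drained(lambda: next(samples)) is True
```

Because both metrics are approximate and reported with a delay, you may want to require the sum to stay at zero for a couple of consecutive polls before declaring the run finished.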
If you need more timely information, the producer of the messages could increment a counter item in DynamoDB with the number of messages added, and each Lambda decrements that counter once it's done. You could then add a Lambda to the DynamoDB Stream of that table with a filter and do something when the value changes to zero again. This is, however, much more complex.
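A hedged sketch of the DynamoDB counter idea — the table name, key, and attribute names are assumptions, and the functions just build the UpdateItem request shapes a producer and a consumer Lambda would send (shown as plain dicts so the shapes can be inspected without AWS access):

```python
TABLE = "job-progress"  # hypothetical table name

def increment_request(job_id, messages_added):
    """Producer (Lambda A): add the number of messages enqueued for this run."""
    return {
        "TableName": TABLE,
        "Key": {"job_id": {"S": job_id}},
        "UpdateExpression": "ADD remaining :n",
        "ExpressionAttributeValues": {":n": {"N": str(messages_added)}},
    }

def decrement_request(job_id):
    """Consumer (Lambda B): subtract one when a message is finished."""
    return {
        "TableName": TABLE,
        "Key": {"job_id": {"S": job_id}},
        "UpdateExpression": "ADD remaining :n",
        "ExpressionAttributeValues": {":n": {"N": "-1"}},
        "ReturnValues": "UPDATED_NEW",
    }

# Example request shape for a run of 3500 messages:
req = increment_request("run-2024-01-01", 3500)
assert req["ExpressionAttributeValues"][":n"]["N"] == "3500"
```

A Lambda attached to the table's DynamoDB Stream (with an event filter) would then fire when `remaining` reaches zero. The ADD action makes both updates atomic, which is what makes the counter safe under 10 concurrent consumers.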
A third option could be to transform the whole thing into a Step Function and use a Map state with a parallelization factor to work on the tasks. The drawback is that the length of the list it can work on is limited, afaik.
I have a use case where I need to know how many times an SQS message has been read in my code.
For example, we read a message from SQS, but for some reason/exception we can't process it. The same message becomes available in the queue again after the visibility timeout.
This creates an endless loop. Is there a way to know how many times a particular SQS message has been read and returned to the queue?
I am aware this can be handled via a dead letter queue. Since that requires more effort, I am checking whether there is any other option.
I don't want to retry a message if it fails more than x times; I want to delete it instead. Is that possible in SQS?
You can do this manually by looking at the ApproximateReceiveCount attribute of your messages; see this question on how to do so. You just need to implement the logic to read the count and decide whether to try processing the message or to delete it. Note, however, that the receive count is affected by more than just programmatically processing messages: viewing messages in the console increments it too.
That being said, a DLQ is a premade solution for exactly this use case. It's not a lot of additional work: all you have to do is create another SQS queue, set it as the DLQ of your processing queue, and set the number of retries. The DLQ then handles all your redrive logic; instead of deleting messages after n failures, they're moved to the DLQ, where you can manually look at them to understand why they're failing, set metric alarms on the queue, and, if you want, manually redrive the messages into your processing queue. Or just ignore them until they age out of the queue based on its retention policy. The important thing is that the DLQ gives you the option of seeing which messages failed after the fact, while deleting them outright does not.
When calling ReceiveMessage(), you can specify a list of AttributeNames that you would like returned.
One of these attributes is ApproximateReceiveCount, which returns "the number of times a message has been received across all queues but not deleted".
It is an 'approximate' count due to the highly parallel nature of SQS -- it is possible that the count is slightly off if a message was processed around the same time as this request.
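For illustration, a small Python sketch of that decision, assuming messages in the boto3 `receive_message` response shape and an arbitrary threshold of 3:

```python
MAX_RECEIVES = 3  # assumed threshold; pick whatever suits your retry budget

def should_give_up(message):
    """Decide, from the ApproximateReceiveCount attribute returned by
    ReceiveMessage (when requested via AttributeNames), whether to delete
    a message instead of retrying it."""
    attrs = message.get("Attributes", {})
    count = int(attrs.get("ApproximateReceiveCount", "1"))
    return count >= MAX_RECEIVES

# Shape of messages as returned by boto3's receive_message:
fresh = {"Body": "...", "Attributes": {"ApproximateReceiveCount": "1"}}
stale = {"Body": "...", "Attributes": {"ApproximateReceiveCount": "4"}}
assert should_give_up(fresh) is False
assert should_give_up(stale) is True
```

In the consumer you would call DeleteMessage when `should_give_up` returns True, and otherwise process (or re-release) the message as usual.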
I don't understand the metrics I'm looking at for my SQS non-FIFO queue (images below), so I'm hoping someone can help me. The attached images show how I've configured the metrics; they are the sums of the number of messages sent and the number of messages deleted over the lifetime of this SQS queue (the queue is less than 1 week old, but I've set the metrics period to 2 weeks).
It's my understanding that NumberOfMessagesSent refers to the number of messages that have been successfully enqueued and NumberOfMessagesDeleted to the number of messages that have been successfully dequeued. Given that line of thinking, I would expect NumberOfMessagesDeleted to always be <= NumberOfMessagesSent, but this is clearly not the case.
What am I missing here?
For every message you consume you have a receipt handle. You can call DeleteMessage using this handle multiple times; each call is recorded as successful, increasing the value of the NumberOfMessagesDeleted metric.
In fact, the AWS docs provide two examples of when NumberOfMessagesDeleted can be larger than expected:
In case of multiple consumers for the same queue:
If the message is not processed before the visibility timeout expires, the message becomes available to other consumers that can process it and delete it again, increasing the value of the NumberOfMessagesDeleted metric.
Calling DeleteMessage multiple times for the same message:
If the message is processed and deleted but you call the DeleteMessage action again using the same receipt handle, a success status is returned, increasing the value of the NumberOfMessagesDeleted metric.
The second one may occur if you have a bug in your code. For example, the library you use automatically deletes the message after it is received, but you also attempt to delete the message manually.
Furthermore, non-FIFO SQS queues may encounter message duplications, which can also increase the number of messages deleted.
I am using SQS queues in two places in my Spring Boot application:
1. In one queue, I would like messages to be routed to the DLQ when the number of receives for a given message is >= 3.
2. For the second queue, I don't want to configure a DLQ.
In (1) and (2), however, I would like to delete the message from the DLQ and the normal queue, respectively, after 3 receives.
As of now, I cannot find any configuration in SQS that allows me to delete a message from the queue after a certain number of receives.
Maybe I am missing something. Could anyone please help?
There is no mechanism for "automated" deletion of messages from an SQS queue after a given number of unsuccessful receives, if you don't want to use a DLQ.
Without a DLQ, SQS will keep messages in the queue until they expire. So if you want this behaviour, you have to build it yourself: store the number of times the message has been received, e.g. in DynamoDB, and upon the third receive have the consumer explicitly delete the message from the queue.
You can explore SQS message attributes. Once you have received the message, delete it from the queue and send it back with an added message attribute stating how many times it has been received.
Ref: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-java-send-message-with-attributes.html
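A rough sketch of that attribute-based counting in Python — the attribute name and the retry limit are arbitrary choices, and the actual DeleteMessage/SendMessage calls are omitted; the functions only build the message-attribute payloads in the SQS wire shape:

```python
RETRY_ATTRIBUTE = "receive-count"  # hypothetical custom attribute name
MAX_RETRIES = 3                    # assumed limit

def bump_retry_attribute(attributes):
    """Build the MessageAttributes for the re-sent copy of a failed
    message, incrementing the retry counter carried in a custom
    message attribute (SQS Number attributes are strings on the wire)."""
    current = int(attributes.get(RETRY_ATTRIBUTE, {}).get("StringValue", "0"))
    updated = dict(attributes)
    updated[RETRY_ATTRIBUTE] = {"DataType": "Number",
                                "StringValue": str(current + 1)}
    return updated

def exhausted(attributes):
    """True once the message has been received MAX_RETRIES times;
    at that point, delete it instead of re-sending."""
    current = int(attributes.get(RETRY_ATTRIBUTE, {}).get("StringValue", "0"))
    return current >= MAX_RETRIES
```

Note this approach creates a brand-new message each time (new message ID, receive count reset), which is exactly why the counter has to travel in an attribute.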
Currently we want to pull down an entire FIFO queue, process the contents, and, if there are any issues, release messages back into the queue.
The problem is that AWS currently only gives us 10 messages, and won't give us 10 more (bulk retrieval in SQS means multiple requests of at most 10 messages each) until we delete or release the first 10.
We need to get more than 10 though. Is this not possible? We understand we can set the group_id to a random string, and that allows processing more, but then the order isn't guaranteed, which defeats the purpose of FIFO.
I managed to reproduce your results -- I could retrieve 10 messages, but then running the same command again would not return another set of messages.
The relevant documentation seems to be:
While messages with a particular MessageGroupId are invisible, no more messages belonging to the same MessageGroupId are returned until the visibility timeout expires. You can still receive messages with another MessageGroupId as long as it is also visible.
I suspect (just a theory!) that this is to preserve the ordering of messages... If a client asked for a set of messages and they are still being processed, there is the chance that the messages might be returned to the queue. Therefore, no further messages are provided until the original messages are deleted or pass their visibility timeout.
This is only a behaviour of FIFO queues.
It seems that you will need to receive and delete all messages to be able to access them all. I would suggest:
Receive one (or more) message.
Process it. If everything worked, delete the message.
If there were problems, push the message to a new queue.
Once the queue is empty, you would need to read from the new queue and send them back to the original queue (which should preserve ordering).
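The suggested loop could be sketched like this in Python, with the SQS operations passed in as plain callables so the control flow is visible without any AWS plumbing:

```python
def drain(receive_one, delete, push_to_retry_queue, process):
    """Receive, process, and delete messages one at a time; on failure,
    push the message to a secondary queue and still delete it from the
    original, so the FIFO queue keeps releasing further messages.
    Returns the number of failed messages."""
    failures = 0
    while True:
        message = receive_one()          # stand-in for ReceiveMessage
        if message is None:              # queue empty: done
            return failures
        try:
            process(message)
            delete(message)              # stand-in for DeleteMessage
        except Exception:
            failures += 1
            push_to_retry_queue(message) # stand-in for SendMessage to new queue
            delete(message)              # remove from the original either way

# Simulated run over five messages, one of which fails:
queue = ["m1", "m2", "bad", "m4", "m5"]
retry = []
def fake_process(m):
    if m == "bad":
        raise ValueError(m)
assert drain(lambda: queue.pop(0) if queue else None,
             lambda m: None, retry.append, fake_process) == 1
assert retry == ["bad"]
```

Once the original queue is empty, the retry queue can be drained back into it; since sends happen in order, the relative ordering of the failed messages is preserved.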
If you frequently require more capabilities than Amazon SQS provides, you could consider using Amazon MQ, a managed message broker service for ActiveMQ. It has many more capabilities (but is accordingly less 'simple').
If you set another MessageGroupId, you can get another 10 messages, even if you don't release or delete the previous ones.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html
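For illustration, a producer could spread messages across a few group IDs like this (the group count and naming scheme are arbitrary; note that ordering is then only guaranteed within each group, which is the trade-off the question mentions):

```python
def assign_group(index, groups=4):
    """Assign a MessageGroupId round-robin, so up to `groups` batches of
    10 messages can be in flight at once on a FIFO queue. FIFO ordering
    holds per group, not across groups."""
    return f"group-{index % groups}"

# Messages 0..7 spread over 4 groups:
assert [assign_group(i) for i in range(4)] == \
    ["group-0", "group-1", "group-2", "group-3"]
```

A natural variant is to derive the group ID from a real entity (customer ID, order ID) so that messages that actually need mutual ordering always land in the same group.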