Currently we are using RabbitMQ for reliable message delivery, and we plan to move to SQS. RabbitMQ monitors its consumers over TCP: when a consumer of a queue goes down, it automatically re-queues the message for processing.
Will SQS monitor all its slaves? Will the message become visible in the queue again if one of its consumers goes down while processing it?
I tried to find this in the documentation, but could not find anything.
If by 'slaves', you mean SQS consumers, then no, SQS does not monitor the consumers from the queue.
In a nutshell, SQS works like this:
A consumer requests a message from the queue.
SQS sends the message to the consumer to process and makes that message temporarily invisible to other consumers.
When the consumer is finished processing the message, it sends a DeleteMessage request back to SQS, and SQS removes that item from the queue.
If a consumer does not send the DeleteMessage request soon enough (within the configurable visibility timeout), then SQS will automatically make the message visible in the queue again.
So SQS doesn't monitor your consumers, but if a consumer requests messages and does nothing with them, they will eventually end up back in the queue to be processed by someone else.
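The cycle above can be sketched with a toy in-memory model (this is an illustration of the behaviour, not the real SQS API; the class and method names are my own):

```python
import time

class MiniQueue:
    """Toy in-memory model of the SQS receive/delete cycle (not real SQS)."""
    def __init__(self, visibility_timeout=2.0):
        self.visibility_timeout = visibility_timeout
        self.messages = {}         # msg_id -> body
        self.invisible_until = {}  # msg_id -> timestamp when it reappears

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for msg_id, body in self.messages.items():
            if self.invisible_until.get(msg_id, 0) <= now:
                # Message is handed to a consumer and hidden from others.
                self.invisible_until[msg_id] = now + self.visibility_timeout
                return msg_id, body
        return None

    def delete(self, msg_id):
        # The consumer calls this only after successful processing.
        self.messages.pop(msg_id, None)
        self.invisible_until.pop(msg_id, None)

q = MiniQueue(visibility_timeout=2.0)
q.send("m1", "hello")
first = q.receive(now=0.0)       # consumer A gets the message
hidden = q.receive(now=1.0)      # consumer B sees nothing: message is invisible
reappeared = q.receive(now=3.0)  # A never deleted it, so it comes back
```

If consumer A had called `q.delete("m1")` before the timeout expired, the third receive would return nothing: the message is gone for good.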
But if your queue doesn't have any consumers, then sooner or later (14 days max), the messages will be deleted altogether (or sent to a dead-letter-queue if you set that up).
It is usually a good idea to set up your queue consumers in an Auto Scaling group, with a health check that verifies each instance is running and processing properly. If an instance fails a health check, it is terminated and a new instance is spun up to continue working through the queue. Optionally, you can spin up extra instances when the SQS queue grows, to meet peak demand.
Related
I had a question regarding SQS services.
If you have an SQS queue with multiple consumers and long polling enabled for a FIFO-type queue, which consumer gets preference for delivery? Is it based on which one started polling first, or is it random? Also, are there any good readings on this?
Thanks in advance!
The number of consumers does not impact operation of the Amazon SQS queue. When a consumer requests messages from a FIFO queue, they will be given the earliest unprocessed message(s).
There is an additional Message Group ID on each message. While a message with a particular Message Group ID is being processed, no further messages with the same Message Group ID will be provided. This ensures that those messages are processed in-order.
Long polling simply means that if no messages are available, SQS will wait up to 20 seconds before returning an empty response. The wait time is configured as a default on the queue, and you can override it on each individual request.
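The Message Group ID behaviour can be illustrated with a toy model (my own sketch, not the SQS API): while a group has a message in flight, receives skip that group and serve the next one, so per-group ordering is preserved.

```python
from collections import OrderedDict, deque

class MiniFifoQueue:
    """Toy model of FIFO-queue message groups (illustration, not real SQS)."""
    def __init__(self):
        self.groups = OrderedDict()  # group_id -> deque of pending messages
        self.in_flight = set()       # group_ids currently being processed

    def send(self, group_id, body):
        self.groups.setdefault(group_id, deque()).append(body)

    def receive(self):
        # Hand out the earliest message from a group with nothing in flight.
        for group_id, msgs in self.groups.items():
            if msgs and group_id not in self.in_flight:
                self.in_flight.add(group_id)
                return group_id, msgs[0]
        return None

    def delete(self, group_id):
        # Finishing a message unblocks the next one in the same group.
        self.groups[group_id].popleft()
        self.in_flight.discard(group_id)

q = MiniFifoQueue()
q.send("orders", "order-1")
q.send("orders", "order-2")
q.send("users", "user-1")

a = q.receive()      # earliest message overall: ("orders", "order-1")
b = q.receive()      # "orders" is in flight, so ("users", "user-1")
q.delete("orders")   # order-1 done; the group is unblocked
c = q.receive()      # now ("orders", "order-2")
```

Note that `order-2` can never be handed out before `order-1` is deleted, which is exactly the in-order guarantee described above.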
I have a task generator to generate task messages to SQS queue and a bunch of workers to poll the SQS queue to process the task. In this case, is there any benefit to let the task generator to publish messages to a SNS topic first, and then the SQS queue subscribes to the SNS topic? I assume directly publish to SQS queue is enough.
Assuming you don't need to fan out the messages to different types of workers, and your workers all do the same job, then no, you don't.
Each worker can take and process one message.
One thing to be aware of is the visibility timeout before messages become visible on SQS again, i.e. not configuring the timeout correctly could cause another worker to process the same message.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it.

Visibility Timeout

Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. For information about configuring visibility timeout for a queue using the console …
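If a job regularly takes longer than the visibility timeout, the consumer can extend the timeout while it is still working (SQS exposes this as the ChangeMessageVisibility action). A back-of-the-envelope timeline, with illustrative numbers of my own choosing:

```python
# Toy timeline (in seconds) showing why a long-running consumer may need to
# extend the visibility timeout while working (the "heartbeat" pattern).
DEFAULT_VISIBILITY = 30   # SQS default visibility timeout
processing_time = 70      # this job takes longer than the default

# Without extending: the message reappears while we are still working.
duplicate_risk = processing_time > DEFAULT_VISIBILITY   # True

# With a heartbeat: extend visibility every 20 s while still processing
# (each extension is analogous to one ChangeMessageVisibility call).
hidden_until = DEFAULT_VISIBILITY
elapsed = 0
while elapsed < processing_time:
    step = min(20, processing_time - elapsed)
    elapsed += step
    if elapsed < processing_time:
        hidden_until = elapsed + DEFAULT_VISIBILITY

safe = hidden_until >= processing_time  # message stayed invisible throughout
```

With the heartbeat, the message stays invisible until after processing finishes, so no second consumer picks it up mid-job.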
Did anyone notice an issue while consuming a large number of messages from SQS queues using a Spring Boot listener? Some of the messages are going directly to the dead-letter queue.
The messages in the dead-letter queue show a MessageDeliveryCount of 6.
Please check the visibility timeout on the source queue, together with the maxReceiveCount in its redrive policy. When processing time exceeds the visibility timeout, the message becomes visible again and its receive count increments; once the receive count exceeds maxReceiveCount, the message is moved to the DLQ. A MessageDeliveryCount of 6 suggests a maxReceiveCount of 5.
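The redrive mechanics can be traced with a small sketch (maxReceiveCount of 5 is my assumed value, matching the delivery count reported above):

```python
# Toy sketch of the SQS redrive policy: a message whose receive count
# exceeds maxReceiveCount is moved to the dead-letter queue.
MAX_RECEIVE_COUNT = 5  # illustrative redrive-policy value

def receive_with_redrive(receive_count, max_receive_count=MAX_RECEIVE_COUNT):
    """One failed receive: bump the count, move to 'dlq' once it's exceeded."""
    receive_count += 1
    if receive_count > max_receive_count:
        return receive_count, "dlq"
    return receive_count, "queue"

count, location = 0, "queue"
while location == "queue":
    # Each iteration models a receive whose processing fails, so the
    # message becomes visible again after the visibility timeout.
    count, location = receive_with_redrive(count)
# After the sixth receive the message lands in the DLQ, which matches
# a MessageDeliveryCount of 6 with maxReceiveCount = 5.
```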
I tried implementing an AWS SQS Queue to minimise the database interaction from the backend server, but I am having issues with it.
I have one consumer process that looks for messages from one SQS queue.
A JSON message is placed in the SQS queue when Clients click on a button in a web interface.
A backend job in the app server picks up the JSON message from the SQS queue, deletes the message from the queue and processes it.
To test the functionality, I implemented the logic for one client. It was running fine. However, when I added 3 more clients it was not working properly: the SQS queue was backed up with 500 messages, even though the backend job was reading from the queue correctly.
Do I need to increase the number of backend jobs or the number of client SQS queues? Right now all the clients send their messages to the same queue.
How do I calculate the number of backend jobs required? Also, is there any setting to make SQS work faster?
Having messages stored in a queue is good - in fact, that's the purpose of using a queue.
If your backend systems cannot consume messages at the rate that they are produced, the queue will act as a buffer to retain the messages until they can be processed. A good example is this AWS re:Invent presentation where a queue is shown with more than 200 million messages: Building Elastic, High-Performance Systems with Amazon SQS and Amazon SNS
If it is important to process the messages quickly, then scale your consumers to match the rate of message production (or faster, so you can consume backlog).
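One way to estimate the number of backend jobs (this is my own back-of-the-envelope reasoning, essentially Little's law, with example numbers) is to multiply the message arrival rate by the average per-message processing time, then add headroom so you can also drain a backlog:

```python
import math

# Assumed example figures -- measure your own rates before sizing anything.
arrival_rate = 50      # messages per second produced by the clients
processing_time = 0.2  # seconds one worker needs to handle one message
headroom = 1.5         # extra capacity factor for draining backlog / spikes

# Steady state needs arrival_rate * processing_time workers just to keep up;
# anything less and the queue depth grows without bound.
workers_needed = math.ceil(arrival_rate * processing_time * headroom)
# 50 msg/s * 0.2 s = 10 workers to break even; 15 with 1.5x headroom.
```

As for making SQS itself "faster": the queue is rarely the bottleneck. Throughput scales with the number of concurrent consumers, and receiving messages in batches (up to 10 per ReceiveMessage call) reduces per-message overhead.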
You mention that your process "picks up the JSON message from the SQS queue, deletes the message from the queue and processes it". Please note that best practice is to receive a message from the queue, process it and then delete it (after it is fully processed). This way, if your process fails, the message will automatically reappear on the queue after a defined invisibility period. This makes your application more resilient to failure.
Is it possible to dump an SQS queue to make room for "urgent" messages, and then restore the dump to keep the queue on track?
I am not talking about AWS CLI commands, but about any possibility of doing it.
Of course I could create a new SQS queue and change the application to poll that new queue, but that would have some implications.
No, it's not possible. The design pattern I've seen AWS recommend when you want to have "high priority" messages is this:
Create 2 queues, one for high-priority messages and one for regular-priority messages.
Have your application always scan the high-priority queue first to check for new messages.
If you don't receive any messages from the high-priority queue, scan the regular-priority queue for messages.
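The two-queue pattern above can be sketched as a simple polling loop. This is a toy in-memory model; with real SQS, each deque would be a ReceiveMessage call against a separate queue URL (typically with a short wait time on the high-priority queue so regular messages aren't starved of polling time):

```python
from collections import deque

# Toy stand-ins for the two queues in the pattern described above.
high_priority = deque(["urgent-1"])
regular = deque(["normal-1", "normal-2"])

def next_message():
    """Always drain the high-priority queue before the regular one."""
    if high_priority:
        return high_priority.popleft()
    if regular:
        return regular.popleft()
    return None  # both queues empty

order = [next_message() for _ in range(4)]
# urgent-1 is served first even though the regular queue is longer.
```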
AWS SQS does not provide a priority based queue at the moment. But you can do certain implementations and build a priority queue for your application (consumer). Following are some implementations you can use.
1) As #Markb mentioned, you can create two SQS queues, one for high-priority messages and the other for regular messages. Make sure the application polls the high-priority queue first and then moves on to the regular queue.
2) If using a single queue, have a few worker threads on the application side that collect all the messages from SQS, inspect them to see which ones have higher priority, and process those first.
3) Use a combination of SQS and SNS. Send all the regular messages to the SQS queue. If there are high-priority messages, send them to SNS to direct them to a specific endpoint in your application. On the application (consumer) side, have an endpoint that listens for high-priority messages coming from SNS, and a process that polls the SQS queue to retrieve the regular messages.