Trying to write some tests for my AWS SQS queue and its associated dead-letter queue. In my tests I want to somehow force a message from the queue to its DLQ, then read from the DLQ to check that the message is there.
Reading from the DLQ is no problem. But does anyone know a quick and easy way to programmatically force an SQS queue to send a message to its associated DLQ?
The dead-letter queue is simply an SQS queue, so you can send a message to it just as you would to any other queue.
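For example, a minimal boto3 sketch of sending a test message straight to the DLQ; the queue name `my-queue-dlq` and the payload are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical DLQ name -- the DLQ is addressed like any other queue.
dlq_url = sqs.get_queue_url(QueueName="my-queue-dlq")["QueueUrl"]

sqs.send_message(
    QueueUrl=dlq_url,
    MessageBody='{"orderId": 123}',  # whatever payload your test expects to find
)
```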
The DLQ is configured when you create your normal queue: you pass the ARN of the queue that will be used as the DLQ.
When you configure the DLQ you also set the maxReceiveCount ("Maximum receives" in the console), which is the number of times a message can be delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for the queue, Amazon SQS moves the message to the dead-letter queue.
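As a rough illustration of that configuration, here is a boto3 sketch that creates a DLQ and a source queue whose RedrivePolicy points at it; the queue names and the maxReceiveCount of 3 are made-up values:

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical names; the DLQ must exist before the source queue can reference it.
dlq_url = sqs.create_queue(QueueName="my-queue-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The source queue points at the DLQ via its RedrivePolicy.
sqs.create_queue(
    QueueName="my-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        )
    },
)
```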
If you want to test the process of sending messages to the DLQ, force an error in your tests while processing the queue's messages so that a message ends up in the DLQ; this is the best way to verify that failing messages really reach the queue.
A message can be sent to the DLQ in the following ways:
You explicitly send the message to the DLQ, for example when you hit an error and do not want to process or delete the message at that time.
You receive the message more times than maxReceiveCount without processing it (reading and then deleting it from the queue); Amazon SQS then concludes that you are having problems with that message and automatically moves it to the DLQ for you (e.g. maxReceiveCount equals 1 and you read the message twice without deleting it). A sketch of this approach follows below.
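A minimal boto3 sketch of the second approach, assuming a hypothetical queue named `my-queue` with maxReceiveCount set to 1:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="my-queue")["QueueUrl"]  # hypothetical name

# With maxReceiveCount = 1, receiving the message repeatedly without deleting it
# pushes its ReceiveCount past the limit and SQS moves it to the DLQ.
for _ in range(2):
    sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=0,  # make the message visible again immediately
        WaitTimeSeconds=5,
    )
    # Intentionally no delete_message call -- the receive count keeps growing.
```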
To understand more about DLQs, take a look here: Amazon SQS dead-letter queues.
Related
I have a task generator that sends task messages to an SQS queue and a group of workers that poll the SQS queue to process the tasks. In this case, is there any benefit to having the task generator publish messages to an SNS topic first, with the SQS queue subscribed to that topic? I assume publishing directly to the SQS queue is enough.
Assuming you don't need to fan out the messages to different types of workers, and your workers all do the same job, then no, you don't.
Each worker can take and process one message.
One thing to be aware of is the timeout before a message becomes visible on SQS again, i.e. not configuring the visibility timeout correctly could cause another worker to process the same message.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html
When a consumer receives and processes a message from a queue, the message remains in the queue. Amazon SQS doesn't automatically delete the message. Because Amazon SQS is a distributed system, there's no guarantee that the consumer actually receives the message (for example, due to a connectivity issue, or due to an issue in the consumer application). Thus, the consumer must delete the message from the queue after receiving and processing it.

Visibility Timeout

Immediately after a message is received, it remains in the queue. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. For information about configuring visibility timeout for a queue using the console, see the documentation linked above.
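As a rough sketch of keeping workers from stepping on each other, a worker can set the visibility timeout when it receives a message and extend it if processing runs long; the queue name `task-queue` and the timeout values here are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="task-queue")["QueueUrl"]  # hypothetical name

# Give this worker 5 minutes before the message becomes visible to other workers.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    VisibilityTimeout=300,
    WaitTimeSeconds=20,  # long polling
)

for msg in resp.get("Messages", []):
    # ... process the task ...
    # If processing runs long, extend the timeout so no other worker picks it up.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=600,
    )
    # Delete only after the work has completed successfully.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```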
We currently have an SQS queue for processing incoming data. Is there a recommended way to manage two DLQs for one queue?
If there is a parsing error in the incoming data, I want to move the message directly into a "userInput" DLQ, without redrives.
If our Mongo is at maxConnections, or any other error occurs, the configured redrive policy should take effect.
Do I have to put the message into the DLQ manually for the first scenario, or is there a better way?
Thanks!
An Amazon SQS queue only has one Dead Letter Queue.
If a message is read from an SQS queue more than a defined number of times, the message can be moved to the Dead Letter Queue for later processing. However, there is no control over which conditions send a message to the Dead Letter Queue. It is based solely on a message being retrieved more than maxReceiveCount times.
See: Amazon SQS dead-letter queues
Please note that SQS itself does not process the message. Rather, you will have an app or an AWS Lambda function that reads the message from the queue and processes the message. Therefore, you could program your desired functionality (checking incoming data, responding to Mongo maxConnections) into the code that is processing the message from SQS. If it detects such a problem, that program could send the message to a specific queue, and then delete the original message from the source SQS queue.
This would have the same behaviour as having "multiple DLQs", except that your code is responsible for the logic of moving the messages to these queues, rather than Amazon SQS doing it.
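A rough sketch of that consumer logic in boto3; the queue names and the `parse`/`process` helpers are hypothetical stand-ins for your own code:

```python
import boto3

sqs = boto3.client("sqs")
# Hypothetical queue names for illustration.
source_url = sqs.get_queue_url(QueueName="incoming-data")["QueueUrl"]
user_input_url = sqs.get_queue_url(QueueName="incoming-data-user-input-dlq")["QueueUrl"]

resp = sqs.receive_message(QueueUrl=source_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)

for msg in resp.get("Messages", []):
    try:
        record = parse(msg["Body"])  # hypothetical parser for your incoming data
    except ValueError:
        # Parsing errors go straight to the "userInput" queue, with no redrives.
        sqs.send_message(QueueUrl=user_input_url, MessageBody=msg["Body"])
        sqs.delete_message(QueueUrl=source_url, ReceiptHandle=msg["ReceiptHandle"])
        continue

    # Other failures (e.g. Mongo at maxConnections) simply leave the message
    # undeleted, so the normal redrive policy moves it to the real DLQ.
    process(record)  # hypothetical processing step
    sqs.delete_message(QueueUrl=source_url, ReceiptHandle=msg["ReceiptHandle"])
```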
SQS supports only a single DLQ.
Alternatively, you could let the consumer of the queue handle your first case: if there is a parsing error in the incoming data, the consumer moves the message to another queue itself.
The second case, the redrive policy, is handled automatically, and the message is moved to the real DLQ after maxReceiveCount is exceeded.
You can have only one DLQ for a queue.
However, you could subscribe a lambda function to that one DLQ.
The Lambda function could process the "bad" messages and distribute them to other queues. That way you effectively get additional DLQs, with the function filtering the messages into them.
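A minimal sketch of such a Lambda handler, assuming the DLQ is configured as the function's SQS event source; the target queue URLs and the routing rule are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical target queue URLs, e.g. normally read from environment variables.
PARSE_ERRORS_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/parse-errors"
INFRA_ERRORS_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/infra-errors"

def handler(event, context):
    # The Lambda is triggered by the single DLQ; each record is one dead message.
    for record in event["Records"]:
        body = record["body"]
        # Hypothetical routing rule: inspect the payload and pick a target queue.
        target = PARSE_ERRORS_URL if "parse_error" in body else INFRA_ERRORS_URL
        sqs.send_message(QueueUrl=target, MessageBody=body)
```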
Did anyone notice an issue when consuming a large number of messages from SQS queues using a Spring Boot listener? Some of the messages are going directly to the dead-letter queue.
The messages in the dead-letter queue show a MessageDeliveryCount of 6.
Please check the visibilityTimeout attribute of the queue. When processing time exceeds it, messages become visible and are delivered again, and they will eventually move to the DLQ.
I need a way for an admin to place a message back for reprocessing after reviewing it in the dead-letter queue. We are using both AWS SQS and ActiveMQ for different pieces of the system. Assume there was some connectivity problem that prevented the message from being processed and that it has since been resolved.
There is no command to send a message from an Amazon SQS Dead Letter Queue back to the original queue. In fact, there is no command to send messages between any queues.
Your application will need to send a new message to the queue, then delete the 'dead' message from the DLQ.
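A minimal boto3 sketch of that manual redrive; the queue names `orders` and `orders-dlq` are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")
# Hypothetical queue names.
source_url = sqs.get_queue_url(QueueName="orders")["QueueUrl"]
dlq_url = sqs.get_queue_url(QueueName="orders-dlq")["QueueUrl"]

resp = sqs.receive_message(QueueUrl=dlq_url, MaxNumberOfMessages=10, WaitTimeSeconds=5)

for msg in resp.get("Messages", []):
    # Re-submit the payload as a brand-new message on the original queue...
    sqs.send_message(QueueUrl=source_url, MessageBody=msg["Body"])
    # ...then delete the dead copy so it is not replayed twice.
    sqs.delete_message(QueueUrl=dlq_url, ReceiptHandle=msg["ReceiptHandle"])
```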
We can send/receive messages to/from an AWS SQS queue, but can we update the content of a message that is already in the queue? If so, how?
Once a message has been sent to an SQS queue (standard or FIFO), the message is immutable. Additionally, it isn't possible to ask SQS for a specific message by its ID.
The message is essentially inaccessible until received by a consumer.
(Viewing messages in the AWS console might seem to be an exception, but it isn't -- the console acts as a consumer, receives messages, and then resets their visibility timeout so they return to the queue for subsequent redelivery.)