I have a queue set up with a redrive policy that automatically sends failed messages to a Dead Letter Queue. When looking at the Dead Letter Queue, there is a button to "Start DLQ redrive" which should allow me to reprocess the failed messages in the original queue. Unfortunately, this button is not enabled on my queue and I cannot figure out why.
relevant article here: https://aws.amazon.com/blogs/compute/introducing-amazon-simple-queue-service-dead-letter-queue-redrive-to-source-queues/
This button is not available for FIFO queues.
There is a note about this in the AWS documentation: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue-redrive.html
I have an SQS queue (Q) that receives messages via the onFailure "destinations" setting from a Lambda function (F). The Lambda function is triggered by an EventBridge event bus rule.
My question is: Can I configure the redrive policy of queue Q so that I can redrive messages directly to function F?
Currently, I have set the redrive allow policy to allowAll, but the "Start DLQ Redrive" button is disabled in the console. Looking at the configuration settings for the redrive allow policy, I get the feeling that only other queues can be a target for a redrive.
What confuses me about this is that my goal here was to use the onFailure option of the "destinations" feature. Destinations can only be used when a function is invoked asynchronously, and queues trigger Lambdas synchronously. So if I were to put a queue in front of my Lambda function F so that it could be a target for a redrive, then I would not be able to use the onFailure destination.
It's not possible to send an event payload from queue Q to Lambda F with redrive. Redrive works by sending messages from the DLQ back to the source queue, not directly to a Lambda target. Consider, too, that the SQS message structure differs from that of EventBridge events, which would confuse your Lambda.
Check out EventBridge event replay as an alternative, or add a Lambda that periodically reads from the DLQ and resubmits the events.
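A rough sketch of that second option, assuming a Python script or Lambda with boto3; the queue URL and event bus name are placeholders, and the exact shape of the onFailure payload depends on your setup:

```python
import json
import boto3

# Drain the DLQ and re-publish the original events to EventBridge.
sqs = boto3.client("sqs")
events = boto3.client("events")

DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq"  # placeholder
EVENT_BUS = "my-event-bus"  # placeholder

def redrive_dlq():
    while True:
        resp = sqs.receive_message(QueueUrl=DLQ_URL, MaxNumberOfMessages=10, WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            # A Lambda onFailure destination record wraps the original event under
            # requestPayload; adjust this extraction to your actual payload shape.
            body = json.loads(msg["Body"])
            original_event = body.get("requestPayload", body)
            events.put_events(Entries=[{
                "EventBusName": EVENT_BUS,
                "Source": original_event.get("source", "redrive"),
                "DetailType": original_event.get("detail-type", "redriven-event"),
                "Detail": json.dumps(original_event.get("detail", original_event)),
            }])
            # Remove the message from the DLQ once it has been re-published.
            sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])
```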
I'm trying to write some tests for my AWS SQS queue and its associated dead-letter queue. In my tests I want to somehow force a message from the queue to its DLQ, then read from the DLQ to see if the message is there.
Reading from the DLQ is no problem. But does anyone know a quick and easy way I can programmatically force an SQS queue to send a message to its associated DLQ?
The Dead Letter Queue is simply an SQS queue, so you could send a message to it like you would to any other queue.
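For example, a minimal sketch with boto3 (the queue URL is a placeholder):

```python
import boto3

# The DLQ is an ordinary queue, so a plain send_message works.
sqs = boto3.client("sqs")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq",  # placeholder
    MessageBody="message that failed processing",
)
```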
The DLQ is configured when you create your normal queue, and you need to pass the ARN of the queue that will be used as the DLQ.
When you configure your DLQ you set the maxReceiveCount ("Maximum receives" in the console), which is the number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
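As an illustration, a redrive policy can be attached to an existing queue like this (ARN and URL below are placeholders):

```python
import json
import boto3

# Attach a DLQ to an existing queue by setting its RedrivePolicy attribute.
sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # placeholder
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq",  # placeholder
            "maxReceiveCount": "3",
        })
    },
)
```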
If you want to test the process of sending messages to DLQs, you need your tests to force an error in the processing of the queue's messages so that a message is sent to the DLQ; this is the best way to verify that errors are going to the queue correctly.
Messages end up in the DLQ in the following ways:
You explicitly send a message to the DLQ, if you find some error and do not want to process the message or delete it at that time.
If you read a message more times than the maxReceiveCount without processing it (reading it and deleting it from the queue), the AWS SQS service will understand that you are having problems with that message and will automatically send it to the DLQ for you (e.g. maxReceiveCount equals 1 and you read the message twice without deleting it); see the sketch below.
To understand more about DLQs, take a look here: Amazon SQS dead-letter queues.
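A minimal sketch of the second approach with boto3, assuming the queue URL below (a placeholder) and a redrive policy with maxReceiveCount = 1:

```python
import boto3

# Receive a message repeatedly without deleting it until its ReceiveCount
# exceeds the queue's maxReceiveCount, at which point SQS moves it to the DLQ.
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

for _ in range(2):  # one more receive than maxReceiveCount
    sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        VisibilityTimeout=0,  # make the message immediately receivable again
        WaitTimeSeconds=1,
    )
```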
I have an SQS queue and a Lambda which reads messages from the queue. I have a Dead Letter Queue (DLQ) configured and working: I can see failed messages being delivered to the DLQ, and its "Messages Available" count in the AWS Console goes up.
When AWS moves the messages to the DLQ, will this be logged in CloudWatch?
By "logged", I mean an entry/row/event created in CloudWatch that:
indicates that a message has been delivered to a DLQ
ideally specifies:
DLQ name
some message/event unique id
lambda that was processing the message
optionally specifies the message body/payload
We currently have an SQS queue for processing incoming data. Is there a recommended way for managing two DLQs for one queue?
if there is a parsing error of the incoming data, then I want to move the message directly into a "userInput" DLQ, without redrives
if our Mongo is at maxConnections, or any other error occurs, then the configured redrive policy should take place
Do I have to put the message manually into the DLQ for the first scenario, or is there a better way?
Thanks!
An Amazon SQS queue only has one Dead Letter Queue.
If a message is read from an SQS queue more than a defined number of times, the message can be moved to the Dead Letter Queue for later processing. However, there is no control over what conditions will send the message to the Dead Letter Queue; it is simply based on a message being retrieved more times than the maxReceiveCount.
See: Amazon SQS dead-letter queues
Please note that SQS itself does not process the message. Rather, you will have an app or an AWS Lambda function that reads the message from the queue and processes the message. Therefore, you could program your desired functionality (checking incoming data, responding to Mongo maxConnections) into the code that is processing the message from SQS. If it detects such a problem, that program could send the message to a specific queue, and then delete the original message from the source SQS queue.
This would have the same behaviour as having "multiple DLQs", except that your code is responsible for the logic of moving the messages to these queues, rather than Amazon SQS doing it.
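A minimal sketch of that idea, assuming a Lambda consumer written in Python with boto3; the "userInput" queue URL and the process function are placeholders:

```python
import json
import boto3

sqs = boto3.client("sqs")
USER_INPUT_DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/user-input-dlq"  # placeholder

def process(data):
    # Placeholder for the real work, e.g. writing to Mongo; may raise on failure.
    ...

def handler(event, context):
    for record in event["Records"]:
        try:
            data = json.loads(record["body"])  # parsing step
        except json.JSONDecodeError:
            # Bad user input: park it in the dedicated queue, no redrives wanted.
            sqs.send_message(QueueUrl=USER_INPUT_DLQ_URL, MessageBody=record["body"])
            continue  # message counts as handled and will be deleted from the source queue
        # Any other failure raises, so the normal redrive policy eventually
        # moves the message to the configured DLQ.
        process(data)
```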
SQS supports only a single DLQ.
Alternatively, what you could do is let the consumer of the **Queue** handle your first case: "if there is a parsing error of the incoming data", let the consumer move the message to another queue.
The second case will be handled automatically by the redrive policy, and the message will be moved to the real DLQ after the maxReceiveCount is exceeded.
You can have only one DLQ for a queue.
However, you could subscribe a lambda function to that one DLQ.
The Lambda function could process the "bad" messages and distribute them to other DLQ queues. That way you could have additional DLQs, with the function filtering the messages into them.
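A rough sketch of such a router Lambda subscribed to the single DLQ; the queue URLs and the classification rule below are placeholders to adapt:

```python
import json
import boto3

sqs = boto3.client("sqs")
PARSE_ERROR_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/parse-error-dlq"      # placeholder
GENERIC_ERROR_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/generic-error-dlq"  # placeholder

def handler(event, context):
    for record in event["Records"]:
        body = record["body"]
        try:
            json.loads(body)              # classify: is the payload even parseable?
            target = GENERIC_ERROR_QUEUE  # parseable -> treat as a processing failure
        except json.JSONDecodeError:
            target = PARSE_ERROR_QUEUE    # unparseable -> user-input problem
        sqs.send_message(QueueUrl=target, MessageBody=body)
        # On success, Lambda's SQS integration deletes the record from the DLQ.
```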
I need a way for an admin to place a message back for reprocessing after reviewing it in the dead-letter queue. We are using both AWS SQS and ActiveMQ for different pieces of the system. Assume there was some connectivity problem that prevented the message from being processed, and that it has since been resolved.
There is no command to send a message from an Amazon SQS Dead Letter Queue back to the original queue. In fact, there is no command to send messages between any queues.
Your application will need to send a new message to the queue, then delete the 'dead' message from the DLQ.
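A minimal sketch of that pattern with boto3 (queue URLs are placeholders):

```python
import boto3

# Copy reviewed messages from the DLQ back to the source queue, then delete them from the DLQ.
sqs = boto3.client("sqs")
DLQ_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq"        # placeholder
SOURCE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"   # placeholder

resp = sqs.receive_message(QueueUrl=DLQ_URL, MaxNumberOfMessages=10, WaitTimeSeconds=2)
for msg in resp.get("Messages", []):
    # Re-submit the message body to the original queue for reprocessing...
    sqs.send_message(QueueUrl=SOURCE_URL, MessageBody=msg["Body"])
    # ...and only then remove the 'dead' copy from the DLQ.
    sqs.delete_message(QueueUrl=DLQ_URL, ReceiptHandle=msg["ReceiptHandle"])
```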