AWS SQS multiple DLQs

We currently have an SQS queue for processing incoming data. Is there a recommended way to manage two DLQs for one queue?
If there is a parsing error in the incoming data, I want to move the message directly into a "userInput" DLQ, without any redrives.
If our Mongo is at maxConnections, or any other error occurs, the configured redrive policy should take effect.
Do I have to put the message into the DLQ manually for the first scenario, or is there a better way?
Thanks!

An Amazon SQS queue only has one Dead Letter Queue.
If a message is read from an SQS queue more than a defined number of times, the message can be moved to the Dead Letter Queue for later processing. However, there is no control over which conditions send the message to the Dead Letter Queue; it is based purely on a message being received more than maxReceiveCount times.
See: Amazon SQS dead-letter queues
Please note that SQS itself does not process the message. Rather, you will have an app or an AWS Lambda function that reads the message from the queue and processes the message. Therefore, you could program your desired functionality (checking incoming data, responding to Mongo maxConnections) into the code that is processing the message from SQS. If it detects such a problem, that program could send the message to a specific queue, and then delete the original message from the source SQS queue.
This would have the same behaviour as having "multiple DLQs", except that your code is responsible for the logic of moving the messages to these queues, rather than Amazon SQS doing it.
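
As a sketch of what that consumer logic could look like, here is a minimal example using the AWS SDK for Java (v1). The queue URLs and the process method are assumptions for illustration; parse failures go straight to the "userInput" queue, while transient errors are left for the redrive policy to handle:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class IncomingDataConsumer {

    // Hypothetical queue URLs; substitute your own.
    private static final String SOURCE_QUEUE = "https://sqs.eu-central-1.amazonaws.com/123456789012/incoming";
    private static final String USER_INPUT_DLQ = "https://sqs.eu-central-1.amazonaws.com/123456789012/userInput-dlq";

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    public void poll() {
        for (Message message : sqs.receiveMessage(new ReceiveMessageRequest(SOURCE_QUEUE)).getMessages()) {
            try {
                process(message.getBody());
                // Success: remove the message from the source queue.
                sqs.deleteMessage(SOURCE_QUEUE, message.getReceiptHandle());
            } catch (IllegalArgumentException parseError) {
                // Permanent failure: copy to the "userInput" queue and delete
                // the original so the redrive policy never sees it again.
                sqs.sendMessage(USER_INPUT_DLQ, message.getBody());
                sqs.deleteMessage(SOURCE_QUEUE, message.getReceiptHandle());
            } catch (RuntimeException transientError) {
                // Transient failure (e.g. Mongo at maxConnections): do nothing.
                // The message reappears after the visibility timeout, and the
                // configured redrive policy moves it to the real DLQ once
                // maxReceiveCount is exceeded.
            }
        }
    }

    // Hypothetical processing: parse the body, then write to Mongo.
    private void process(String body) {
        if (!body.startsWith("{")) {
            throw new IllegalArgumentException("parsing error");
        }
        // ... write to Mongo here; may throw a RuntimeException on maxConnections
    }
}
```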

SQS supports only a single DLQ.
Alternatively, you could let the consumer of the **queue** handle your first case: if there is a parsing error in the incoming data, the consumer moves the message to another queue.
The second case will be handled automatically by the redrive policy, with the message moved to the real DLQ once maxReceiveCount is exceeded.

You can have only one DLQ for a queue.
However, you could subscribe a Lambda function to that one DLQ.
The Lambda function could process the "bad" messages and distribute them to other DLQ queues. That way you could have additional DLQs for which the function filters the messages.
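
A minimal sketch of such a routing Lambda, assuming the AWS SDK for Java (v1), the aws-lambda-java-events library, and hypothetical target queue URLs and filter condition:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

// Fans messages out from the single real DLQ to purpose-specific queues.
public class DlqRouter implements RequestHandler<SQSEvent, Void> {

    // Hypothetical target queue URLs; substitute your own.
    private static final String PARSE_ERROR_QUEUE = "https://sqs.eu-central-1.amazonaws.com/123456789012/parse-errors";
    private static final String OTHER_ERROR_QUEUE = "https://sqs.eu-central-1.amazonaws.com/123456789012/other-errors";

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage record : event.getRecords()) {
            String body = record.getBody();
            // Hypothetical filter: route on some property of the message body.
            String target = body.contains("\"parseError\"") ? PARSE_ERROR_QUEUE : OTHER_ERROR_QUEUE;
            sqs.sendMessage(target, body);
        }
        // Returning normally lets Lambda delete the batch from the DLQ.
        return null;
    }
}
```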

Related

AWS SQS how does SQS know that processing of a message failed?

I'm reading the AWS documentation on dead-letter queues and the redrive policy, and the documentation mentions: "The redrive policy specifies the source queue, the dead-letter queue, and the conditions under which Amazon SQS moves messages from the former to the latter if the consumer of the source queue fails to process a message a specified number of times".
However, even though the documentation mentions "message processing failed" several times, I do not understand how SQS detects a message processing failure (and thus triggers a redrive or a move to the dead-letter queue).
From what I understand, consumer applications call receiveMessage to retrieve the message from SQS, then process the message. The processing function is not passed in to receiveMessage as a lambda. So how does SQS know that message processing has failed?
When a client (e.g. a Lambda function) gets a message from the queue, it has limited time to call DeleteMessage. Each message also has a visibility timeout. If the message is not deleted by the client within the visibility timeout, SQS "assumes" that the processing failed.
Such messages can then be forwarded to the DLQ, depending on how many failed attempts you have configured the queue to tolerate.
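
In other words, deleting the message is the only "success" signal SQS understands. A minimal consumer sketch in Java (AWS SDK v1) that makes this explicit, with a hypothetical queue name and doWork method:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class Worker {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = sqs.getQueueUrl("my-queue").getQueueUrl(); // hypothetical queue name

        for (Message message : sqs.receiveMessage(queueUrl).getMessages()) {
            try {
                doWork(message.getBody());
                // Explicit "success" signal: delete the message.
                sqs.deleteMessage(queueUrl, message.getReceiptHandle());
            } catch (Exception e) {
                // No DeleteMessage call: once the visibility timeout expires,
                // SQS treats this attempt as a failure and re-delivers the
                // message; after maxReceiveCount failed attempts the redrive
                // policy moves it to the DLQ.
            }
        }
    }

    // Hypothetical processing that may throw on failure.
    private static void doWork(String body) {
        // ...
    }
}
```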

How to get SQS messages that have already been processed?

I have what I think is a common problem: I need to persist all the messages from a queue, even after they've been processed and removed from the queue.
I'm using SQS as my queue system, and my first solution to this problem was to persist every message sent to SQS in DynamoDB.
But I did a local test using Redis as the queue system and found that it solves the problem by saving some 'metadata' from each message that is sent to the queue. Example:
A message with ID = 'asdas-q1223-dasdacc-3222dd' is sent to the queue
It is processed by a random service
It is removed from the queue
After that, I can perform an action like this to get the data for a specific message that has been removed from the queue:
getJob(jobId: string)
I just need the same behavior in SQS. Does SQS offer anything like this?
Once a message has been deleted from an Amazon SQS queue, it is no longer available. This includes any metadata associated with the message.
If you wish to save the message, or information about the message, you would need to do it while processing the message, before it is deleted from the queue.
However, an alternative approach would be to send the messages to an Amazon SNS topic. Then, you could subscribe two Amazon SQS queues to the Amazon SNS topic. One queue would be used in the normal existing way. The other queue could be used to 'save' the message. For example, the Amazon SQS queue could trigger an AWS Lambda function and that function could store the message somewhere (eg in a database or in an Amazon S3 object). It won't have details about how the message was processed in the 'existing' queue, but it will have a copy of the message. It will, however, be a 'separate' message, so it will have a different Message ID.
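
The first suggestion, persisting the message while processing it, could look like the following sketch (AWS SDK for Java v1; the queue URL and the ProcessedMessages DynamoDB table are assumptions). A later getJob-style lookup would then be a GetItem on the table keyed by messageId:

```java
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class ArchivingConsumer {
    private static final String QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"; // hypothetical
    private static final String TABLE = "ProcessedMessages"; // hypothetical table keyed on messageId

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

        for (Message message : sqs.receiveMessage(QUEUE_URL).getMessages()) {
            // 1. Persist the message (and any metadata) BEFORE deleting it;
            //    after DeleteMessage, SQS retains nothing about the message.
            Map<String, AttributeValue> item = new HashMap<>();
            item.put("messageId", new AttributeValue(message.getMessageId()));
            item.put("body", new AttributeValue(message.getBody()));
            dynamo.putItem(TABLE, item);

            // 2. Process the message here, then delete it as usual.
            sqs.deleteMessage(QUEUE_URL, message.getReceiptHandle());
        }
    }
}
```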

Force a message from SQS Queue to its dead letter queue?

I'm trying to write some tests for my AWS SQS queue and its associated dead-letter queue. In my tests, I want to somehow force a message from the queue to its DLQ, then read from the DLQ to see if the message is there.
Reading from the DLQ is no problem. But does anyone know a quick and easy way I can programmatically force an SQS queue to send a message to its associated DLQ?
The Dead Letter Queue is simply an SQS queue, so you could send a message to it like you would any other queue.
The DLQ is configured when you create your normal queue; you pass the ARN of the queue that will be used as the DLQ.
When you configure your DLQ you set the maxReceiveCount ("Maximum receives" in the console), which is the number of times a message is delivered to the source queue before being moved to the dead-letter queue. When the ReceiveCount for a message exceeds the maxReceiveCount for a queue, Amazon SQS moves the message to the dead-letter queue.
If you want to test the process of sending messages to DLQs, force an error in the processing of the queue's messages in your tests; this is the best way to check that failing messages actually end up in the DLQ.
Messages can end up in the DLQ in the following ways:
You explicitly send a message to the DLQ, if you find some error and do not want to process or delete the message at that time.
If you read a message more times than maxReceiveCount without processing it (reading and deleting it from the queue), the AWS SQS service will understand that you are having problems with that message and will automatically send it to the DLQ for you (e.g. maxReceiveCount equals 1 and you read the message twice without deleting it).
To understand more about DLQs, take a look here: Amazon SQS dead-letter queues.
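
For the automated-test case, here is a sketch (AWS SDK for Java v1, hypothetical queue names) that triggers the second mechanism: it receives the message repeatedly without deleting it, resetting the visibility timeout to zero each time so the test does not have to wait, until SQS redrives the message to the DLQ:

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

// Test helper: receive a message repeatedly without deleting it,
// so SQS moves it to the configured DLQ.
public class DlqTest {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = sqs.getQueueUrl("my-queue").getQueueUrl();   // hypothetical
        String dlqUrl = sqs.getQueueUrl("my-queue-dlq").getQueueUrl(); // hypothetical
        int maxReceiveCount = 3; // must match the queue's redrive policy

        sqs.sendMessage(queueUrl, "poison message");

        // Each receive without a DeleteMessage counts as a failed attempt.
        // On the final attempt, SQS moves the message to the DLQ instead
        // of delivering it.
        for (int i = 0; i <= maxReceiveCount; i++) {
            for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
                // Make the message visible again immediately so the next
                // receive does not wait for the visibility timeout.
                sqs.changeMessageVisibility(queueUrl, m.getReceiptHandle(), 0);
            }
        }

        // The message should now be in the DLQ.
        for (Message m : sqs.receiveMessage(dlqUrl).getMessages()) {
            System.out.println("DLQ received: " + m.getBody());
        }
    }
}
```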

How to keep messages on SQS after triggering a Lambda

In my application we are using SQS to queue messages to be processed by another module. SQS doesn't send a notification that a message has arrived, and I don't want to make my application check it every "X" amount of time. So I'm trying to use a Lambda trigger to make an HTTP request to my module and have it poll messages from SQS when a message arrives.
The problem is that SQS deletes the sent messages if there is no error in the Lambda function (as far as I know). Forcing an error just to keep the messages in the queue can't be right. So I need a way to keep messages on SQS after the Lambda is triggered.
Maybe I should move the code that processes the message into the Lambda function, but I'm looking for ways to keep it where it is.
Anyone have some guidance?
Thanks in advance
SQS queues are built so that each message is delivered to one consumer and removed once processed, so what you are seeing is the intended functionality.
However, there is a solution for this exact scenario, although it will require you to update your architecture.
The solution is to use a fanout architecture.
You would instead publish to an SNS topic, which has your SQS queue subscribed to it. Then create additional SQS queues as parallel channels (one per unique Lambda).
Add each Lambda function as the consumer of its own SQS queue, each with its own processing.
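
A sketch of setting that fanout up with the AWS SDK for Java (v1); the topic and queue names are placeholders. Topics.subscribeQueue is a helper in the SDK that both creates the subscription and sets the queue policy allowing SNS to deliver:

```java
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.util.Topics;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class FanoutSetup {
    public static void main(String[] args) {
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // Hypothetical names; substitute your own.
        String topicArn = sns.createTopic("incoming-messages").getTopicArn();
        String queueA = sqs.createQueue("lambda-a-queue").getQueueUrl();
        String queueB = sqs.createQueue("lambda-b-queue").getQueueUrl();

        // Subscribe both queues to the topic.
        Topics.subscribeQueue(sns, sqs, topicArn, queueA);
        Topics.subscribeQueue(sns, sqs, topicArn, queueB);

        // One publish is delivered independently to both queues; each Lambda
        // consumes its own copy without affecting the other.
        sns.publish(topicArn, "hello fanout");
    }
}
```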

Is there any way to explicitly send an event message to the dead-letter queue from inside an AWS Lambda function on a certain condition?

I'm trying to send an S3 event message to RabbitMQ by invoking an AWS Lambda function. I have configured SQS as my dead-letter queue (DLQ).
I know the message is sent to the DLQ when there is a failure in the invocation of the Lambda, or in situations like timeouts or resource constraints.
My question is: I want to send the event message to the DLQ from inside the Lambda function on a certain condition, for example if RabbitMQ is down or some other condition of interest.
Is there any possibility of doing this? Should I throw an exception, or is there a better approach to sending the event message to the DLQ?
I'm using Java for development and connecting to RabbitMQ from my Lambda function.
The DLQ is simply an SQS queue, so you could send a message to it like you would any other queue. You would want it to be formatted the same way that Lambda natively puts messages in the DLQ, so that whatever processing you have on the DLQ performs the same way for all messages. You would want to ensure that you treat the Lambda as successfully executed in this case, though, so that the normal DLQ process doesn't pick up the same message twice.
In the DLQ setting of a Lambda you specify an SNS topic or an SQS queue. In your setup you have configured the DLQ to be an SQS queue. This is a regular SQS queue. Using the SQS Java SDK you can post a message to that SQS queue.
Here are a few references:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-send-message.html
To get the Queue URL you can use these:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueUrl.html
Or through Java:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQSClient.html#getQueueUrl-java.lang.String-
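
Putting that together, here is a sketch of a Lambda handler (AWS SDK for Java v1 plus the aws-lambda-java-events library) that explicitly sends the event to its own DLQ when RabbitMQ is unreachable and then returns normally, so the built-in DLQ mechanism does not deliver the same event a second time. The queue name and the RabbitMQ helper methods are assumptions:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class S3ToRabbitFunction implements RequestHandler<S3Event, String> {

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    @Override
    public String handleRequest(S3Event event, Context context) {
        // Look up the same queue that is configured as this Lambda's DLQ
        // (hypothetical queue name).
        String dlqUrl = sqs.getQueueUrl("lambda-dlq").getQueueUrl();

        if (!rabbitIsUp()) {
            // Explicitly park the event in the DLQ ourselves, then return
            // normally so Lambda treats the invocation as successful and the
            // built-in DLQ mechanism does not deliver the same event twice.
            sqs.sendMessage(dlqUrl, event.toJson());
            return "sent to DLQ";
        }

        publishToRabbit(event.toJson()); // hypothetical RabbitMQ publish
        return "ok";
    }

    private boolean rabbitIsUp() { /* hypothetical health check */ return true; }
    private void publishToRabbit(String json) { /* ... */ }
}
```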