I have the below function handler code.
public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
{
    foreach (var message in evnt.Records)
    {
        // Do work
        // If a message processed successfully, delete the SQS message
        // If a message failed to process, throw an exception
    }
}
It is very confusing: although I don't handle validation logic for creating records in my database (no "already exists" check), I see database records with the same ID created twice, meaning the same message was processed more than once!
In my code, I delete the message after successful processing, or throw an exception on failure, assuming that all remaining ordered messages will simply go back to the queue and become visible for any consumer to reprocess. But I can now see the code failing because the same records are created twice for an event that succeeded.
Does AWS SQS FIFO provide exactly-once delivery, or am I missing some kind of retry-processing policy?
This is how I delete the message upon successful processing.
var deleteMessageRequest = new DeleteMessageRequest
{
    QueueUrl = _sqsQueueUrl,
    ReceiptHandle = message.ReceiptHandle
};
var deleteMessageResponse =
    await _amazonSqsClient.DeleteMessageAsync(deleteMessageRequest, cancellationToken);
if (deleteMessageResponse.HttpStatusCode != HttpStatusCode.OK)
{
    throw new AggregateSqsProgramEntryPointException(
        $"Amazon SQS DELETE ERROR: {deleteMessageResponse.HttpStatusCode}\r\nQueueURL: {_sqsQueueUrl}\r\nReceiptHandle: {message.ReceiptHandle}");
}
The documentation is very explicit about this:
"FIFO queues provide exactly-once processing, which means that each
message is delivered once and remains available until a consumer
processes it and deletes it."
They also mention protecting your code from retries, which seems contradictory for an exactly-once delivery queue type, and then I see the following in their documentation, which adds to the confusion.
Exactly-once processing.
Unlike standard queues, FIFO queues don't
introduce duplicate messages. FIFO queues help you avoid sending
duplicates to a queue. If you retry the SendMessage action within the
5-minute deduplication interval, Amazon SQS doesn't introduce any
duplicates into the queue.
Consumer retries (how is this possible?):
If the consumer detects a failed ReceiveMessage action, it can retry
as many times as necessary, using the same receive request attempt ID.
Assuming that the consumer receives at least one acknowledgement
before the visibility timeout expires, multiple retries don't affect
the ordering of messages.
This turned out to be entirely our application's error, caused by how we treat the Event Sourcing aggregate endpoints, which are not thread-safe.
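In practice the safe assumption is at-least-once delivery, so the consumer should be idempotent. A minimal sketch of the idea (plain Python; the in-memory `RecordStore` is a hypothetical stand-in for a real database with a unique-key constraint, not our actual code):

```python
# Idempotent consumer sketch: SQS can redeliver a message (e.g. when the
# visibility timeout expires before the delete), so the handler must
# tolerate seeing the same message twice. The "database" here is a
# hypothetical in-memory dict standing in for a store with a unique key.

class RecordStore:
    def __init__(self):
        self._records = {}

    def create_if_absent(self, record_id, payload):
        """Insert the record only if its ID is unseen; True on insert."""
        if record_id in self._records:
            return False  # duplicate delivery: skip, do not raise
        self._records[record_id] = payload
        return True


def handle_message(store, message):
    # Key on a stable business ID carried in the message body (e.g. the
    # aggregate ID), so a redelivery maps onto the same record.
    return store.create_if_absent(message["id"], message["body"])


store = RecordStore()
msg = {"id": "order-42", "body": "create order"}
assert handle_message(store, msg) is True    # first delivery creates the record
assert handle_message(store, msg) is False   # redelivery is a no-op, not an error
```

The key design choice is that a duplicate is treated as a silent no-op rather than an error, so a redelivered message can still be deleted successfully.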
Related
How do I configure visibility timeout so that a message in SQS can be read again?
I have Amazon SQS as a message queue. Messages are being sent by multiple applications. I am now using a Spring listener to read messages from the queue, as below:
public DefaultMessageListenerContainer jmsListenerContainer() {
    SQSConnectionFactory sqsConnectionFactory = SQSConnectionFactory.builder()
            .withAWSCredentialsProvider(awsCredentialsProvider)
            .withEndpoint(environment.getProperty("aws_sqs_url"))
            .withNumberOfMessagesToPrefetch(10).build();

    DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
    dmlc.setConnectionFactory(sqsConnectionFactory);
    dmlc.setDestinationName(environment.getProperty("aws_sqs_queue"));
    dmlc.setMessageListener(queueListener);
    return dmlc;
}
The queueListener class implements javax.jms.MessageListener, whose onMessage() method does the further processing.
I have also configured a scheduler to read the queue again after a certain period of time. It uses receiveMessage() from com.amazonaws.services.sqs.AmazonSQS.
As soon as a message reaches the queue, the listener reads it. I want to read the message again after a certain period of time, i.e. through the scheduler, but once a message has been read by the listener it does not become visible or readable again. According to Amazon's SQS developer guide the default visibility timeout is 30 seconds, but the message is not becoming visible even after 30 seconds. I have tried setting a custom visibility timeout in the SQS queue parameter console, but it's not working.
For information, nobody is deleting the message from the queue.
I only have a passing familiarity with Amazon SQS, but I can say that typically in messaging use-cases when a consumer receives and acknowledges the message then that message is removed (i.e. deleted) from the queue. Given that your Spring application is receiving the message I would suspect it is also acknowledging the message and therefore removing it from the queue which prevents your scheduler from receiving it later. Note that Spring's DefaultMessageListenerContainer uses JMS' AUTO_ACKNOWLEDGE mode by default.
This documentation from Amazon essentially states that if a message is acknowledged in a JMS context that it is "deleted from the underlying Amazon SQS queue."
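The effect described above can be sketched with a toy model (illustrative Python, not the JMS or SQS API): once the listener's acknowledgement deletes the message, a later poll finds nothing, and the visibility timeout never comes into play.

```python
class ToySqsQueue:
    """Toy queue: acknowledged (deleted) messages are gone for good;
    the visibility timeout only matters for unacknowledged messages."""

    def __init__(self):
        self.messages = []

    def send(self, body):
        self.messages.append(body)

    def receive_and_ack(self):
        """Models a JMS listener in AUTO_ACKNOWLEDGE mode: receiving the
        message also acknowledges it, which deletes it from the queue."""
        return self.messages.pop(0) if self.messages else None


q = ToySqsQueue()
q.send("report-request")

assert q.receive_and_ack() == "report-request"   # listener consumes it
assert q.receive_and_ack() is None               # the scheduler later finds nothing
```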
My application consists of:
1 Amazon SQS message queue
n workers
The workers have the following logic:
1. Wait for message from SQS queue
2. Perform task described in message
3. Delete the message from the SQS queue
4. Go to (1)
I want each message to be received by only one worker to avoid redundant work.
Is there a mechanism to mark a message as "in progress" using SQS, so that other pollers do not receive it?
Alternatively, is it appropriate to delete the message as soon as it is received?
1. Wait for message from SQS queue
2. Delete the message from the SQS queue
3. Perform task described in message
4. Go to (1)
If I follow this approach, is there a way to recover received but unprocessed messages in case a worker crashes (step (3) fails)?
An SQS message is considered to be "inflight" after it has been received from a queue by a consumer but not yet deleted. These messages are not visible to other consumers.
In SQS messaging, a message is "inflight" if:
you, the consumer, have received it, and
the visibility timeout has not expired, and
you have not deleted it.
SQS is designed so that you can call ReceiveMessage and a message is given to you for processing. You have some amount of time (the visibility timeout) to process this message. During this visibility timeout, if ReceiveMessage is called again, the message you are currently working with will not be returned to any worker. It is hidden.
Once the visibility timeout expires, the message can again be returned by future ReceiveMessage calls. This is what happens if the consumer fails in some way. If processing succeeds, you delete the message.
The number of messages hidden from ReceiveMessage calls is the "inflight" count. By default, an SQS queue allows a maximum of 120,000 inflight messages.
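The receive/hide/reappear cycle above can be sketched with a toy in-memory queue (illustrative Python; names and the integer clock are not the real SQS API):

```python
import itertools

class ToyQueue:
    """Toy model of SQS visibility: a received message is hidden until
    its visibility timeout expires or it is deleted."""

    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.clock = 0                    # fake seconds, advanced by hand
        self.messages = {}                # receipt_handle -> [body, invisible_until]
        self._handles = itertools.count()

    def send(self, body):
        self.messages[next(self._handles)] = [body, 0]

    def receive(self):
        for handle, entry in self.messages.items():
            body, invisible_until = entry
            if invisible_until <= self.clock:              # currently visible
                entry[1] = self.clock + self.visibility_timeout  # hide it
                return handle, body
        return None                                        # everything inflight

    def delete(self, handle):
        self.messages.pop(handle, None)


q = ToyQueue(visibility_timeout=30)
q.send("task-1")
handle, body = q.receive()
assert q.receive() is None          # inflight: hidden from other consumers
q.clock += 31                       # consumer "crashed"; timeout expires
assert q.receive() is not None      # the message reappears for retry
```

On the success path you would call q.delete(handle) before the timeout, and the message would never reappear.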
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html
A ReceiptHandle string is returned with each received message. It expires based on the queue's visibility timeout.
You can use this ReceiptHandle to delete the message from the queue.
I have a Node.js application that polls messages from an AWS SQS queue and processes them.
However, some of these messages are not relevant to my service, and they should be filtered out and not processed.
I am wondering if I can do this filtering in AWS before receiving these irrelevant messages.
For example, if a message does not have the attribute data.name, then it should be deleted before reaching my application.
Filtering these messages before sending them to the queue is not possible (according to my client).
No, that is not possible without polling the message itself. You would need some other consumer polling the messages and returning them to the queue (by not calling DeleteMessage on the received receipt handle) if they meet your requirements. That would be overkill in most cases, depending on the ratio of "good" to "bad" messages, and you would still have to process the "good" messages twice.
A better way would be to set up an additional consumer and two queues. The producer sends messages to the first queue, which is polled by a first consumer whose sole purpose is to filter messages and send the "good" ones to a second queue, which is then polled by your current consumer application. But again, this is much more costly.
If you can't filter messages before sending them to the queue, then filter them in your consuming application, or you will have to pay extra for this functionality.
I have created an Amazon SNS topic. I have one Amazon SQS queue subscribed to the topic.
I have created a default SQS queue (not a FIFO queue).
I am using the sqs-consumer API for long polling the SQS queue.
const app = Consumer.create({
    queueUrl: 'https://sqs.us-east-2.amazonaws.com/xxxxxxxxxxxx/xxxxxxxxxxx',
    handleMessage: async (message) => {
        console.log(message);
    },
    sqs: new AWS.SQS({ apiVersion: '2012-11-05' })
});

app.on('error', (err) => {
    console.error(err.message);
});

app.on('processing_error', (err) => {
    console.error(err.message);
});

app.on('timeout_error', (err) => {
    console.error(err.message);
});

app.start();
When I run this JS file from a single terminal with node sqs_client.js, everything works perfectly and messages arrive in the proper order.
But if I open another terminal window and run node sqs_client.js again, the order of incoming messages becomes very random: newer messages may arrive in the first or second terminal window, in any order.
Why is this happening? And is there any way to arrange things so that I can get the same message in both terminal windows at the same time?
You ask: "Is there any way...that I can get the same message in both the terminal windows at the same time."
This is not the way Amazon SQS operates. The general flow of Amazon SQS is:
Messages are sent to the queue
The messages sit in the queue for up to 14 days (the maximum retention period; the default is 4 days)
A consumer calls ReceiveMessages(), asking for up to 10 messages at a time
When a message is received, it is marked as invisible
When a consumer has finished processing the message, the consumer calls DeleteMessage() to remove the message from the queue
If the consumer does not call DeleteMessage() within the visibility timeout period, the message will reappear on the queue and be available for another consumer to receive
Thus, messages are intentionally only available to one consumer at a time. Once a message is grabbed, it is specifically not available for other consumers to receive.
If your requirement is for two consumers to receive the same message, then you will need to redesign your architecture. You do not provide enough details to recommend a particular approach, but options include using multiple Amazon SQS queues or sending messages directly via Amazon SNS rather than Amazon SQS.
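The fan-out option mentioned above (an SNS topic with one SQS queue per consumer) can be sketched as a toy model; the class and method names are illustrative, not the real AWS API:

```python
class ToyTopic:
    """Toy SNS topic: every published message is copied into every
    subscribed queue, so each consumer receives its own copy."""

    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, body):
        for queue in self.queues:
            queue.append(body)   # each queue gets an independent copy


# One queue per terminal window / consumer process.
queue_a, queue_b = [], []
topic = ToyTopic()
topic.subscribe(queue_a)
topic.subscribe(queue_b)
topic.publish("event-1")

# Both consumers now see the same message, because each polls its own queue.
assert queue_a == ["event-1"]
assert queue_b == ["event-1"]
```

This is exactly why SNS-to-multiple-SQS fan-out is the usual answer when every consumer must see every message, while a single shared SQS queue is the answer when each message should be processed by exactly one worker.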
The SQS product page at: https://aws.amazon.com/sqs/faqs/
Q: Does Amazon SQS provide message ordering?
Yes. FIFO (first-in-first-out) queues preserve the exact order in
which messages are sent and received. If you use a FIFO queue, you
don't have to place sequencing information in your messages. For more
information, see FIFO Queue Logic in the Amazon SQS Developer Guide.
Standard queues provide a loose-FIFO capability that attempts to
preserve the order of messages. However, because standard queues are
designed to be massively scalable using a highly distributed
architecture, receiving messages in the exact order they are sent is
not guaranteed.
Are you perhaps using a standard queue, and thus getting only loose FIFO ordering?
I have an SQS FIFO queue to which we send a bunch of IDs for processing on the other end. We have 4 workers digesting the messages. Once a worker receives a message, it deletes the message and stores the IDs until it hits a limit, before performing actions.
What I've noticed is that some IDs are received more than once, even though each ID is only sent once. Is that normal?
Your current process appears to be:
A worker pulls (Receives) a message from a queue
It deletes the message
It performs actions on the message
This is not the recommended way to use a queue because the worker might fail after it has deleted the message but before it has completed the action. Thus, the message would be "lost".
The recommended way to use a queue would be:
Pull a message from the queue (makes the message temporarily invisible)
Process the message
Delete the message
This way, if the worker fails while processing the message, it will automatically "reappear" on the queue after the invisibility period. The worker can also send a "still working" signal to keep the message invisible for longer while it is being processed.
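The difference between the two orderings shows up clearly in a toy model: delete-after-process leaves a crashed worker's message recoverable, while delete-first loses it (illustrative Python, not the SQS API; visibility timeouts are omitted for brevity):

```python
def run_worker(queue, process, delete_before_processing):
    """Take one message from a toy queue (a plain list); `process`
    may raise to simulate the worker crashing mid-task."""
    message = queue[0]
    if delete_before_processing:
        queue.pop(0)                 # delete first (the risky ordering)
    try:
        process(message)
    except RuntimeError:
        return                       # worker "crashed" during processing
    if not delete_before_processing:
        queue.pop(0)                 # delete only after success


def crash(message):
    raise RuntimeError("worker died mid-task")


# Delete first: a crash loses the message for good.
q = ["task"]
run_worker(q, crash, delete_before_processing=True)
assert q == []                       # message gone, work never done

# Process first: the message survives the crash and can be retried.
q = ["task"]
run_worker(q, crash, delete_before_processing=False)
assert q == ["task"]                 # message still recoverable
```

In real SQS the "still recoverable" case is handled by the visibility timeout: the undeleted message simply reappears once the timeout expires.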
Amazon SQS FIFO queues provide exactly-once processing. This means that a message will only be delivered once. (However, if the invisibility period expires before the message is deleted, it will be provided again.)
You say that "some ids are received more than once". I would recommend adding debug code to try and understand the circumstances in which this happens, since it should not be happening if the messages are deleted within the invisibility period.