AWS SQS message delete on acknowledge with spring cloud aws messaging - amazon-web-services

I have an SQS FIFO queue set up with a consumer and a producer. As soon as my consumer receives a message, the message is deleted from the queue, and only then does my code perform its operations. If anything fails at that point, the message is lost. I want the message to stay in the queue and be deleted only after I acknowledge it. How can I acknowledge a message and delete it only on acknowledgement? Here is my consumer code:
@SqsListener(value = "${queueName}")
public void receiveMessage(final msgDTO msgDTO,
        @Header("SenderId") final String senderId) {
    log.info("Received message: {}, having SenderId: {}", msgDTO, senderId);
    // do some operation
    if (operationSuccess) {
        // TODO ACKNOWLEDGEMENT
    }
}
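With Spring Cloud AWS Messaging, the usual approach (a sketch, assuming the spring-cloud-aws-messaging artifact is on the classpath; `operationSuccess` is the asker's placeholder) is to set `deletionPolicy = SqsMessageDeletionPolicy.NEVER` and add an `Acknowledgment` parameter to the listener method. The framework then leaves the message on the queue until you call `acknowledge()`:

```java
import org.springframework.cloud.aws.messaging.listener.Acknowledgment;
import org.springframework.cloud.aws.messaging.listener.SqsMessageDeletionPolicy;
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;
import org.springframework.messaging.handler.annotation.Header;

@SqsListener(value = "${queueName}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void receiveMessage(final msgDTO msgDTO,
        @Header("SenderId") final String senderId,
        Acknowledgment acknowledgment) {
    log.info("Received message: {}, having SenderId: {}", msgDTO, senderId);
    // do some operation
    if (operationSuccess) {
        // delete the message from the queue only after success
        acknowledgment.acknowledge();
    }
    // on failure, do not acknowledge: the message becomes visible again
    // after the visibility timeout and is redelivered
}
```

Alternatively, `SqsMessageDeletionPolicy.ON_SUCCESS` deletes the message automatically only when the listener method returns without throwing, which avoids the explicit `acknowledge()` call.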

Related

AWS SQS pause consumer

Let's say that for some reason I want to pause consuming messages from my SQS queue, e.g. a service on the client side will be down for maintenance. Can I pause from my SQS listener?
@SqsListener(value = "${aws.sqs.listener.myqueue}", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void processMessage(MyObj myObj) {
// do something with myObj
// if db is down then throw error and pause reading from SQS
}
I have come across some posts that suggest a kill switch, but this is not ideal for production, as it keeps consuming the messages and reposting them to the queue.
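One approach (a sketch, assuming Spring Cloud AWS Messaging; the logical queue name is whatever `${aws.sqs.listener.myqueue}` resolves to): the framework's `SimpleMessageListenerContainer` exposes `stop(String logicalQueueName)` and `start(String logicalQueueName)`, so polling for a single queue can be paused and resumed without a kill switch.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer;
import org.springframework.stereotype.Service;

@Service
public class SqsPauseService {

    @Autowired
    private SimpleMessageListenerContainer listenerContainer;

    // stop polling the given logical queue, e.g. while the DB is down
    public void pause(String logicalQueueName) {
        if (listenerContainer.isRunning(logicalQueueName)) {
            listenerContainer.stop(logicalQueueName);
        }
    }

    // resume polling once the dependency is healthy again
    public void resume(String logicalQueueName) {
        if (!listenerContainer.isRunning(logicalQueueName)) {
            listenerContainer.start(logicalQueueName);
        }
    }
}
```

While the container is stopped for that queue, no new messages are received, so nothing is consumed and re-queued in the meantime.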

Is AWS SQS FIFO queue really exactly-once delivery?

I have the below function handler code.
public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
{
foreach (var message in evnt.Records)
{
// Do work
// If a message processed successfully delete the SQS message
// If a message failed to process throw an exception
}
}
It is very confusing: even though I don't yet have validation logic for record creation in my database, I see database records with the same ID created twice, meaning the same message was processed more than once!
In my code, I delete the message after successful processing, or throw an exception upon failure, assuming the remaining ordered messages will just go back to the queue and become visible for any consumer to reprocess. But I can now see the code failing, because the same records are created twice for an event that succeeded.
Is AWS SQS FIFO exactly-once delivery, or am I missing some kind of retry-processing policy?
This is how I delete the message upon successful processing.
var deleteMessageRequest = new DeleteMessageRequest
{
QueueUrl = _sqsQueueUrl,
ReceiptHandle = message.ReceiptHandle
};
var deleteMessageResponse =
await _amazonSqsClient.DeleteMessageAsync(deleteMessageRequest, cancellationToken);
if (deleteMessageResponse.HttpStatusCode != HttpStatusCode.OK)
{
throw new AggregateSqsProgramEntryPointException(
$"Amazon SQS DELETE ERROR: {deleteMessageResponse.HttpStatusCode}\r\nQueueURL: {_sqsQueueUrl}\r\nReceiptHandle: {message.ReceiptHandle}");
}
The documentation is very explicit about this
"FIFO queues provide exactly-once processing, which means that each
message is delivered once and remains available until a consumer
processes it and deletes it."
They also mention protecting your code against retries, which seems contradictory for an exactly-once delivery queue type, and then I see the below in their documentation, which is confusing.
Exactly-once processing.
Unlike standard queues, FIFO queues don't
introduce duplicate messages. FIFO queues help you avoid sending
duplicates to a queue. If you retry the SendMessage action within the
5-minute deduplication interval, Amazon SQS doesn't introduce any
duplicates into the queue.
Consumer retries (how's this possible)?
If the consumer detects a failed ReceiveMessage action, it can retry
as many times as necessary, using the same receive request attempt ID.
Assuming that the consumer receives at least one acknowledgement
before the visibility timeout expires, multiple retries don't affect
the ordering of messages.
This turned out to be entirely our application's error, in how we treat the event-sourcing aggregate endpoints, because they were not thread-safe.
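The practical takeaway from threads like this: even with FIFO deduplication on the send side, a message can be delivered again if the visibility timeout expires mid-processing, so consumers should be idempotent. A minimal sketch in plain Java (no AWS SDK; the in-memory ID set is a hypothetical stand-in for a unique constraint or conditional write in your database):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {

    // IDs of messages already processed; in production this would be
    // a unique constraint or conditional write in the database
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    /**
     * Runs the work only if this message ID has not been seen before.
     * Returns true if processed, false if it was a duplicate delivery.
     */
    public boolean process(String messageId, Runnable work) {
        // Set.add() is atomic here: only one delivery of a given ID wins
        if (!processedIds.add(messageId)) {
            return false; // duplicate delivery, skip silently
        }
        work.run();
        return true;
    }
}
```

With this guard, a redelivered message is detected and skipped instead of creating the same database record twice.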

When should I delete messages in SQS?

My application consists of:
1 Amazon SQS message queue
n workers
The workers have the following logic:
1. Wait for message from SQS queue
2. Perform task described in message
3. Delete the message from the SQS queue
4. Go to (1)
I want each message to be received by only one worker to avoid redundant work.
Is there a mechanism to mark a message as "in progress" using SQS, so that other pollers do not receive it?
Alternatively, is it appropriate to delete the message as soon as it is received?
1. Wait for message from SQS queue
2. Delete the message from the SQS queue
3. Perform task described in message
4. Go to (1)
If I follow this approach, is there a way to recover received but unprocessed messages in case a worker crashes (step (3) fails)?
This question is specific to Spring, which contains all sorts of magic.
An SQS message is considered to be "inflight" after it is received from a queue by a consumer, but not yet deleted from the queue. These messages are not visible to other consumers.
In SQS messaging, a message is considered "in flight" if:
You the consumer have received it, and
the visibility timeout has not expired and
you have not deleted it.
SQS is designed so that you can call ReceiveMessage and a message is given to you for processing. You have some amount of time (the visibility timeout) to perform the processing on this message. During this "visibility" timeout, if you call ReceiveMessage again, no worker will be returned the message you are currently working with. It is hidden.
Once the visibility timeout expires the message will be able to be returned to future ReceiveMessage calls. This could happen if the consumer fails in some way. If the process is successful, then you can delete the message.
The number of messages hidden from ReceiveMessage calls is the "in flight" count. By default, an SQS queue allows a maximum of 120,000 in-flight messages.
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html
A ReceiptHandle string is returned with each received message. It expires based on the queue's visibility timeout.
You can use this ReceiptHandle to delete the message from the queue.
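The receive → process → delete cycle and the visibility-timeout behavior described above can be simulated in plain Java. This is not the AWS SDK; `InMemoryQueue` is a hypothetical stand-in that mimics ReceiveMessage (hide the message and hand out a receipt handle), visibility-timeout expiry (the message reappears), and DeleteMessage (the message is gone for good):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class InMemoryQueue {

    private final Deque<String> visible = new ArrayDeque<>();
    // receipt handle -> message body, for messages currently "in flight"
    private final Map<String, String> inFlight = new HashMap<>();
    private int nextHandle = 0;

    public void send(String body) {
        visible.addLast(body);
    }

    /** Mimics ReceiveMessage: hides the message and returns a receipt handle. */
    public String receive() {
        String body = visible.pollFirst();
        if (body == null) {
            return null; // queue empty, or all messages are in flight
        }
        String handle = "rh-" + (nextHandle++);
        inFlight.put(handle, body);
        return handle;
    }

    public String body(String handle) {
        return inFlight.get(handle);
    }

    /** Mimics visibility-timeout expiry: the message returns to the queue. */
    public void expire(String handle) {
        String body = inFlight.remove(handle);
        if (body != null) {
            visible.addFirst(body);
        }
    }

    /** Mimics DeleteMessage: the message is removed for good. */
    public void delete(String handle) {
        inFlight.remove(handle);
    }

    public int visibleCount() {
        return visible.size();
    }
}
```

This shows why deleting only after successful processing is safe: if the worker crashes mid-task (never calls `delete`), the expiry path returns the message to the queue for another worker.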

SQS Receive Missing Messages from the Queue

I have AWS infrastructure set up so that every update to a DynamoDB entry ends up in an SQS FIFO queue with deduplication enabled. I also have a test covering this scenario: I purge the queue (it can receive updates from the other tests in the suite, and purging it before the test avoids having to poll through a large number of messages before receiving the correct ones), update DynamoDB, and check that those entries are received when polling the queue. This test is flaky: sometimes it fails because not all the updates I sent are received from the queue.
The queue has only one consumer, which is the test I have written, so there is no other consumer taking these messages.
I checked the queue through the AWS console: it is empty at the end of the test, and it doesn't contain the missing messages when the test times out at the configured TIMEOUT value.
My queue configuration in CDK
public Queue createSqsQueue() {
return new Queue(this, "DynamoDbUpdateSqsQueue", QueueProps.builder()
.withContentBasedDeduplication(true)
.withFifo(true)
.withQueueName("DynamoDbUpdateSqsQueue.fifo")
.withReceiveMessageWaitTime(Duration.seconds(20))
.build());
}
My Receive Message Code
private void assertExpectedDynamoDbUpdatesAreReceived() {
    List<String> expectedDynamoDbUpdates = getExpectedDynamoDbUpdates();
    List<String> actualDynamoDBUpdates = newArrayList();
    boolean allDynamoDbUpdatesReceived = false;
    stopWatch.start();
    while (!allDynamoDbUpdatesReceived && stopWatch.getTime() < TIMEOUT) {
        List<String> receivedDynamoDbUpdates =
            AmazonSQSClientBuilder.standard().build()
                .receiveMessage(queueUrl).getMessages().stream()
                .map(this::processAndDelete)
                .collect(Collectors.toList());
        actualDynamoDBUpdates.addAll(receivedDynamoDbUpdates);
        if (actualDynamoDBUpdates.containsAll(expectedDynamoDbUpdates)) {
            allDynamoDbUpdatesReceived = true;
        }
    }
    stopWatch.stop();
    assertThat(allDynamoDbUpdatesReceived).isTrue();
}
The issue was not in receiving the messages. It was in purging the queue. According to the purge queue documentation (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-purge-queue.html)
The message deletion process takes up to 60 seconds. We recommend
waiting for 60 seconds regardless of your queue's size
Simply adding a 60-second wait after purging, before sending the updates, fixed the issue.

When I am trying to poll the same Amazon SQS from two different terminal tabs, I am not getting same message in both of them

I have created an Amazon SNS topic. I have one Amazon SQS queue subscribed to the topic.
I have created a default SQS queue (not a FIFO queue).
I am using sqs-consumer API for long polling the SQS queue.
const app = Consumer.create({
queueUrl: 'https://sqs.us-east-2.amazonaws.com/xxxxxxxxxxxx/xxxxxxxxxxx',
handleMessage: async (message) => {
console.log(message);
},
sqs: sqs // new AWS.SQS({apiVersion: '2012-11-05'})
});
app.on('error', (err) => {
console.error(err.message);
});
app.on('processing_error', (err) => {
console.error(err.message);
});
app.on('timeout_error', (err) => {
console.error(err.message);
});
app.start();
When I run this JS file from a single terminal with node sqs_client.js, everything works perfectly fine and messages arrive in the proper order.
But if I open another terminal window and run node sqs_client.js again, the order of incoming messages becomes very random: newer messages may arrive in the first or the second terminal window, in any order.
Why is this happening? And is there any way to prevent it, so that I can get the same message in both terminal windows at the same time?
You ask: "Is there any way...that I can get the same message in both the terminal windows at the same time."
This is not the way Amazon SQS operates. The general flow of Amazon SQS is:
Messages are sent to the queue
The messages sit in the queue for up to 14 days (the retention period is configurable)
A consumer calls ReceiveMessages(), asking for up to 10 messages at a time
When a message is received, it is marked as invisible
When a consumer has finished processing the message, the consumer calls DeleteMessage() to remove the message from the queue
If the consumer does not call DeleteMessage() within the visibility timeout period, the message will reappear on the queue and will be available for a consumer to receive
Thus, messages are intentionally only available to one consumer at a time. Once a message is grabbed, it is specifically not available for other consumers to receive.
If your requirement is for two consumers to receive the same message, then you will need to redesign your architecture. You do not provide enough details to recommend a particular approach, but options include using multiple Amazon SQS queues or sending messages directly via Amazon SNS rather than Amazon SQS.
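The fan-out option mentioned above (an SNS topic with one SQS queue per consumer, so each consumer receives its own copy of every message) can be sketched in plain Java. `Topic` here is a hypothetical stand-in illustrating the pattern, not an AWS SDK class:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class Topic {

    // one queue per subscribed consumer, as in an SNS -> SQS fan-out
    private final List<Queue<String>> subscribers = new ArrayList<>();

    /** Mimics subscribing an SQS queue to an SNS topic. */
    public Queue<String> subscribe() {
        Queue<String> queue = new ArrayDeque<>();
        subscribers.add(queue);
        return queue;
    }

    /** Mimics Publish: every subscribed queue receives its own copy. */
    public void publish(String message) {
        for (Queue<String> queue : subscribers) {
            queue.add(message);
        }
    }
}
```

Each terminal window would then poll its own queue, so both see every message, unlike two consumers competing on a single queue.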
The SQS product page at: https://aws.amazon.com/sqs/faqs/
Q: Does Amazon SQS provide message ordering?
Yes. FIFO (first-in-first-out) queues preserve the exact order in
which messages are sent and received. If you use a FIFO queue, you
don't have to place sequencing information in your messages. For more
information, see FIFO Queue Logic in the Amazon SQS Developer Guide.
Standard queues provide a loose-FIFO capability that attempts to
preserve the order of messages. However, because standard queues are
designed to be massively scalable using a highly distributed
architecture, receiving messages in the exact order they are sent is
not guaranteed.
Are you perhaps using a standard queue and thus getting only loose FIFO ordering?