Suppose I want to bulk-send three messages, A, B, C, in that order, to a FIFO SQS queue. I could use the bulk SendMessageBatch call.
We provide the ordered list [A,B,C] to the API call.
Suppose that the sends of A and C return HTTPCode 200, but B does not.
Given that C was sent (and delivered) after B was sent (and not delivered), what happens? Does C still get enqueued? If I resend B, or re-issue the bulk send within the visibility timeout period, and all three messages succeed this time, will the queue read A, C, B?
I am using boto3 as my driver.
More detail from the docs:
You can use SendMessageBatch to send up to 10 messages to the specified queue by assigning either identical or different values to each message (or by not assigning values at all). This is a batch version of SendMessage. For a FIFO queue, multiple messages within a single batch are enqueued in the order they are sent.
The result of sending each message is reported individually in the response. Because the batch request can result in a combination of successful and unsuccessful actions, you should check for batch errors even when the call returns an HTTP status code of 200.
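For illustration, here is a minimal boto3 sketch of this scenario (the queue URL, group ID and deduplication IDs are placeholders): it batch-sends A, B and C to a FIFO queue and inspects the per-entry results, since an individual entry such as B can fail even when the call itself returns HTTP 200.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/test.fifo"  # placeholder

response = sqs.send_message_batch(
    QueueUrl=QUEUE_URL,
    Entries=[
        {"Id": "a", "MessageBody": "A", "MessageGroupId": "g1", "MessageDeduplicationId": "a"},
        {"Id": "b", "MessageBody": "B", "MessageGroupId": "g1", "MessageDeduplicationId": "b"},
        {"Id": "c", "MessageBody": "C", "MessageGroupId": "g1", "MessageDeduplicationId": "c"},
    ],
)

# Even with an HTTP 200 response, individual entries can fail, so check both lists.
for ok in response.get("Successful", []):
    print("sent:", ok["Id"])
for err in response.get("Failed", []):
    print("failed:", err["Id"], err.get("Code"), err.get("Message"))
```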
A related question:
I created a test SQS FIFO queue, test.fifo.
Then:
I sent three messages to it with message-group-id = A, and message bodies A1, A2 and A3 respectively, using the AWS Console (via the Send and receive messages button).
Using the AWS Console again, I polled for messages with 10-second polling, twice in a row. Each time, I saw all three messages in the results, and I could open and view the message bodies of all of them.
Without deleting any message after receiving, how was I able to see all messages with the same message-group-id at once? Isn't that a violation of the FIFO nature of the queue?
Isn't that a violation of the FIFO nature of the queue?
No, of course not.
FIFO's main guarantee is around ordering, which would have been preserved in the console and you would have seen the messages in the same order you sent them.
It guarantees exactly-once processing, not exactly-one-message-being-received-at-a-time.
You can receive multiple messages at once, as mentioned in the FIFO docs:
It is possible to receive up to 10 messages in a single call using the MaxNumberOfMessages request parameter of the ReceiveMessage action. These messages retain their FIFO order and can have the same message group ID. Thus, if there are fewer than 10 messages available with the same message group ID, you might receive messages from another message group ID, in the same batch of 10 messages, but still in FIFO order.
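As a small illustration of that (assuming a FIFO queue URL of your own), a single ReceiveMessage call in boto3 can return several messages from the same message group, still in the order they were sent:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/test.fifo"  # placeholder

resp = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,              # up to 10 messages in one call
    AttributeNames=["MessageGroupId"],   # so we can see which group each message belongs to
)
for msg in resp.get("Messages", []):
    print(msg["Attributes"]["MessageGroupId"], msg["Body"])
```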
I understand that standard SQS uses "at least once" delivery while FIFO messages are delivered exactly once. I'm trying to weigh standard queues vs FIFO for my application, and one factor is how long it takes for the duplicated message to arrive.
I intend to consume messages from SQS then post the data I received to an idempotent third-party API. I understand that with standard SQS, there's always a risk of me overwriting more recent data with the old duplicated data.
For example:
Message A arrives, I post it onwards.
Message A duplicate arrives, I post it onwards.
Message B arrives, I post it onwards.
All fine ✓
On the other hand:
Message A arrives, I post it onwards.
Message B arrives, I post it onwards.
Message A duplicate arrives - I post it and overwrite the latest data, which was B! ✖
I want to measure this risk, i.e. I want to know how long the duplicate message should take to arrive. Will the duplicate message take roughly the same amount of time to arrive as the original message?
Maybe it's useful to understand how message duplication occurs. As far as I know this isn't documented in the official docs, but instead it's my mental model of how it works. This is an educated guess.
Whenever you send a message to SQS (SendMessage API), this message arrives at the SQS webservice endpoint, which is one of probably thousands of servers. This endpoint receives your message, duplicates it one or more times and stores these duplicates on more than one SQS server. After it has received confirmation from at least two SQS servers, it acknowledges to the client that the message has been received.
When you call the ReceiveMessage API, only a subset of the SQS servers that handle your queue are queried for messages. When a message is returned, these servers tell their peers that this message is currently in flight, and the visibility timeout starts. This doesn't happen instantaneously, as it's a distributed system. While this ReceiveMessage call takes place, another consumer might also issue a ReceiveMessage call and happen to query one of the servers that hold a replica of the message before it's marked as in flight. That server hands out the message, and now you have two consumers working on it.
This is just one scenario, which is the result of this being a distributed system.
There are a couple of edge cases that can happen as the result of network issues, e.g. when the SQS response to the initial SendMessage gets lost and the client thinks the message didn't arrive and sends it again - poof, you got another duplicate.
The point being: things fail in weird and complex ways. That makes measuring the risk of a delayed duplicate difficult. If your use case can't handle duplicate and out-of-order messages, you should go for FIFO, but that will inherently limit your throughput. Alternatives are based on distributed locking mechanisms and keeping track of which messages you have already processed, which are complex tools for solving a complex problem.
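As one concrete (and purely illustrative) reading of that last point, a DynamoDB conditional write can serve as the "have I already processed this message?" check; the table name, key schema and handle() function below are assumptions, not part of the answer above:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table with partition key "message_id".
table = boto3.resource("dynamodb").Table("processed-messages")

def handle(message):
    """Placeholder for the real business logic."""
    print("processing", message["MessageId"])

def process_once(message):
    try:
        # The conditional put succeeds only the first time this message ID is seen.
        table.put_item(
            Item={"message_id": message["MessageId"]},
            ConditionExpression="attribute_not_exists(message_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # duplicate delivery: skip it
        raise
    handle(message)
```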
I am using SQS queues in two places in my Spring Boot application:
In one queue (1), I would like messages to be routed to a DLQ when the number of receives for a given message is >= 3.
For the second queue (2), I don't want to configure a DLQ.
In (1) and (2), however, I would like the message to be deleted from the DLQ and the normal queue respectively after it has been received 3 times.
As of now, I cannot find any configuration in SQS that allows me to delete a message from the queue after a certain number of receives.
Maybe, I am missing something. Could anyone please help here?
There is no mechanism for "automated" deletion of messages from an SQS queue after a given number of unsuccessful receives if you don't want to use a DLQ.
Without a DLQ, SQS will keep messages in the queue until they expire. Thus, if you want this behaviour, you have to create your own solution: store the number of times the message has been received, e.g. in DynamoDB, and then, upon the third receive, have the consumer explicitly delete the message from the queue.
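A rough boto3 sketch of that approach might look like the following; the queue URL and DynamoDB table name are placeholders:

```python
import boto3

sqs = boto3.client("sqs")
# Hypothetical table with partition key "message_id".
table = boto3.resource("dynamodb").Table("message-receive-counts")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    # Atomically increment this message's receive counter.
    result = table.update_item(
        Key={"message_id": msg["MessageId"]},
        UpdateExpression="ADD receive_count :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    if int(result["Attributes"]["receive_count"]) >= 3:
        # Third receive: give up and remove the message from the queue.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
    # Otherwise, attempt normal processing and delete on success as usual.
```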
You can explore SQS message attributes. Once you have received the message, delete it from the queue and send it back with an added message attribute stating how many times you have received it.
Ref: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-java-send-message-with-attributes.html
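A hedged boto3 sketch of that attribute-based approach (the queue URL and the ReceiveCount attribute name are placeholders, not from the answer above):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

resp = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, MessageAttributeNames=["ReceiveCount"]
)
for msg in resp.get("Messages", []):
    attrs = msg.get("MessageAttributes", {})
    count = int(attrs.get("ReceiveCount", {}).get("StringValue", "0")) + 1

    # Always remove this delivery; we either drop the message for good or
    # re-queue it with the updated counter.
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if count < 3:
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=msg["Body"],
            MessageAttributes={
                "ReceiveCount": {"DataType": "Number", "StringValue": str(count)}
            },
        )
```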
I'm learning AWS SQS and I've sent 6 messages to a FIFO queue, all with the same GroupId. But when I poll for messages, I can only receive 2 of them. Why? I set MaxNumberOfMessages=10 using the boto3 API, but I still only get 2. How can I receive all of the messages?
(The console shows 5 messages available, but I can only receive 2.)
I tried deleting one of the two received messages and polling again. The deleted one is gone and I received a new message, but in total it's still only 2 messages.
Using an Amazon SQS FIFO queue means that you want to receive messages in order. It will also try to ensure ordering within a Message Group.
This means that, if some messages for a given Message Group ID are currently being processed ("in flight"), no more messages for that Message Group will be provided since an earlier message might be returned to the queue if not fully processed. This could result in messages being processed out-of-order.
From Using the Amazon SQS message group ID - Amazon Simple Queue Service:
To interleave multiple ordered message groups within a single FIFO queue, use message group ID values (for example, session data for multiple users). In this scenario, multiple consumers can process the queue, but the session data of each user is processed in a FIFO manner.
When messages that belong to a particular message group ID are invisible, no other consumer can process messages with the same message group ID.
Therefore, your choices are:
Don't use a FIFO queue, or
Use different Message Group IDs (see the sketch below), or
Be happy with what it is doing, because that is the desired FIFO behaviour.
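For the second option, a small boto3 sketch (queue URL, group IDs and bodies are placeholders): each logical stream gets its own MessageGroupId, so one in-flight group does not block consumption of the others.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/test.fifo"  # placeholder

for user_id, body in [("user-1", "A1"), ("user-2", "B1"), ("user-1", "A2")]:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        MessageGroupId=user_id,                       # ordering is preserved per group
        MessageDeduplicationId=f"{user_id}-{body}",   # needed unless content-based dedup is enabled
    )
```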
From AWS Docs:
The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned).
Just as the docs say, you can get fewer messages. You have to call ReceiveMessage multiple times, usually in a loop. You can also increase WaitTimeSeconds so that ReceiveMessage does not return immediately if there are no messages.
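For example, a minimal boto3 loop along those lines (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/test.fifo"  # placeholder

messages = []
while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,   # upper bound only; fewer may be returned
        WaitTimeSeconds=10,       # long polling, so empty responses are rarer
    )
    batch = resp.get("Messages", [])
    if not batch:
        break  # nothing more to fetch right now
    messages.extend(batch)

print(f"received {len(messages)} messages")
```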
So I am building a small application that uses SQS. I have a simple handler process that determines whether a given message is considered processed, marked for retry (to be re-queued), or unable to be processed (and should be sent to a dead-letter queue).
However, based on the docs, it would appear the only way to truly send a message to the DLQ is by using a redrive policy, which operates on the number of receives a message has racked up. Because of the nature of my application, I could have several valid retries if my process isn't ready to handle a given message, but there are also times I may want to dead-letter a message I have just received. Does AWS/Boto3 not provide a way to mark a specific message for the DLQ?
I know I could just send the message myself to another queue I consider my own DLQ; I would just rather use AWS's built-in tools for this.
I don't believe there is any limitation that would prevent you from sending the message to the dead-letter queue yourself.
So just read the message from the queue; if you know it needs to go to the DLQ directly, send it to the DLQ and remove it from the regular queue.
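A short boto3 sketch of that manual approach; both queue URLs and the can_process() check are placeholders for your own setup and logic:

```python
import boto3

sqs = boto3.client("sqs")
MAIN_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"      # placeholder
MY_DLQ = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue-dlq"      # placeholder

def can_process(message):
    """Placeholder for your handler's own decision logic."""
    return False

resp = sqs.receive_message(QueueUrl=MAIN_QUEUE, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    if not can_process(msg):
        # Send the message to your own dead-letter queue...
        sqs.send_message(QueueUrl=MY_DLQ, MessageBody=msg["Body"])
        # ...and remove it from the regular queue.
        sqs.delete_message(QueueUrl=MAIN_QUEUE, ReceiptHandle=msg["ReceiptHandle"])
```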