EventBridge - Use FIFO SQS for deduplication - amazon-web-services

I need some events to be delivered exactly once, but I have no control over the message processor (so I can't make the recipient idempotent).
Is it possible to route events from EventBridge to a FIFO SQS queue for deduplication, and from the FIFO queue to the recipient (a Lambda in another account)? Would this achieve exactly-once delivery?

Can you dynamically set the MessageGroupId from the payload content when you set an SQS FIFO queue as a target for your EventBridge rule?
From the current setup I see that it can only be a hardcoded value.

EventBridge (EB) has at-least-once delivery, which means you can get more than one event of the same type. But if this is not an issue, and your only concern is SQS, then yes, EB supports SQS FIFO targets:
EventBridge lets you set a variety of targets—such as Amazon SQS standard and FIFO queues—which receive events in JSON format.
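As a point of reference, here is a minimal boto3 sketch of attaching an SQS FIFO queue as a target of an existing EventBridge rule (the rule name and queue ARN are placeholders). Note that the MessageGroupId under SqsParameters is a static string, which matches the observation above that it cannot be derived dynamically from the event payload.

```python
import boto3

events = boto3.client("events")

# Placeholder names/ARNs for illustration only.
RULE_NAME = "orders-to-fifo"
FIFO_QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:orders.fifo"

# Attach the FIFO queue as a target of an existing EventBridge rule.
# SqsParameters.MessageGroupId is a fixed string here; it is not taken
# from the event payload.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[
        {
            "Id": "fifo-queue-target",
            "Arn": FIFO_QUEUE_ARN,
            "SqsParameters": {"MessageGroupId": "orders"},
        }
    ],
)
```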

It is possible, but you have to configure the FIFO queue. It can detect duplicates based on the body of the message. See the docs:
FIFO queues help you avoid sending duplicates to a queue. If you retry
the SendMessage action within the 5-minute deduplication interval,
Amazon SQS doesn't introduce any duplicates into the queue.
The link goes on to state what configurations are required, so be sure to check it out.
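For completeness, a minimal boto3 sketch of creating such a queue (the queue name is made up; FIFO queue names must end in .fifo). ContentBasedDeduplication tells SQS to deduplicate on a hash of the message body within the 5-minute interval quoted above.

```python
import boto3

sqs = boto3.client("sqs")

response = sqs.create_queue(
    QueueName="events-dedup.fifo",  # illustrative name; must end in ".fifo"
    Attributes={
        "FifoQueue": "true",
        # Deduplicate by the SHA-256 hash of the message body within the
        # 5-minute deduplication interval.
        "ContentBasedDeduplication": "true",
    },
)
print(response["QueueUrl"])
```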

Related

AWS SNS invoking Lambda multiple times

My AWS application does not allow duplicates. In my application (fan-out), SNS triggers multiple Lambda services. Since SNS follows at-least-once delivery, there is a chance of triggering the same service multiple times.
Would using SNS FIFO fix the duplicate issue, or is there a better alternative?
With SNS FIFO we can subscribe only SQS; is there an alternative to trigger Lambda directly?
My intention is SNS ==> different Lambdas (based on input message type, without duplicates).
Thanks in advance,
Anil
Yes, if you provide a deduplication ID or if you enable content-based message deduplication on the topic. Also, you will have to have an SQS FIFO queue.
The AWS docs has this to say about the deduplication:
Amazon SNS FIFO topics and Amazon SQS FIFO queues support message deduplication, which provides exactly-once message delivery and processing as long as the following conditions are met:
The subscribed SQS FIFO queue exists and has permissions that allow the Amazon SNS service principal to deliver messages to the queue.
The SQS FIFO queue consumer processes the message and deletes it from the queue before the visibility timeout expires.
The Amazon SNS subscription topic has no message filtering. When you configure message filtering, SNS FIFO topics support at-most-once delivery, as messages can be filtered out based on your subscription filter policies.
There are no network disruptions that prevent acknowledgment of the message delivery.
The answer should be obvious for this one. No, at this point in time you can have only SQS FIFO as a subscriber for the topic. The AWS documentation is pretty specific on this:
To fan out messages from Amazon SNS FIFO topics to AWS Lambda functions, extra steps are required. First, subscribe Amazon SQS FIFO queues to the topic. Then configure the queues to trigger the functions.
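To illustrate the shape of those extra steps, here is a hedged boto3 sketch (topic and queue names are invented): create the FIFO topic with content-based deduplication, create the FIFO queue, and subscribe it. The queue access policy that grants the SNS service principal permission to deliver messages is omitted for brevity, as is wiring the queue up as the Lambda event source.

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Create a FIFO topic with content-based deduplication.
topic = sns.create_topic(
    Name="domain-events.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)

# Create the FIFO queue that will subscribe to the topic.
queue_url = sqs.create_queue(
    QueueName="domain-events-consumer.fifo",
    Attributes={"FifoQueue": "true"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue to the topic. The queue policy must also allow
# sns.amazonaws.com to send messages to it (not shown here).
sns.subscribe(TopicArn=topic["TopicArn"], Protocol="sqs", Endpoint=queue_arn)

# The Lambda function is then configured with this queue as its event source.
```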

What is an alternative in AWS for sending messages from SNS to SQS FIFO?

I have this situation where I am using Amazon SNS + SQS in order to handle domain events.
Basically, on a domain event I publish a message to SNS, and two SQS queues are subscribed to the topic. I noticed SQS supports FIFO but SNS doesn't, so I am trying to find out how to simultaneously deliver message A to multiple SQS FIFO queues.
What I had so far
Publish Message A to SNS
Distribute Message A to SQS 1 and SQS 2
All I can think of now is
Publish message A to SQS A
Use code to pull message A from SQS and publish it to SQS 1 and SQS 2
Not really the atomic process I was looking for...
Is there an alternative to this approach?
Today, we launched Amazon SNS FIFO topics, which can fan out messages to multiple Amazon SQS FIFO queues!
https://aws.amazon.com/about-aws/whats-new/2020/10/amazon-sns-introduces-fifo-topics-with-strict-ordering-and-deduplication-of-messages/
You can think about using an AWS Kinesis Data Stream. One of its features is ordering.
From the FAQ: https://aws.amazon.com/kinesis/data-streams/faqs/
When should I use Amazon Kinesis Data Streams, and when should I use Amazon SQS?
Ordering of records. For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements.
You can then process events from Kinesis into your SQS queues.
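If you go that route, a minimal boto3 sketch of publishing a record (stream name and payload are illustrative): records that share a partition key land on the same shard, which is what preserves their relative order.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Records with the same PartitionKey go to the same shard and keep their order.
kinesis.put_record(
    StreamName="domain-events",  # illustrative stream name
    PartitionKey="order-12345",
    Data=json.dumps({"type": "OrderCreated", "orderId": "12345"}).encode("utf-8"),
)
```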
If your goal is to have a message be pushed to two Amazon SQS FIFO queues, I'd recommend:
Have Amazon SNS trigger an AWS Lambda function
The Lambda function can send the same message to both Amazon SQS queues
It is effectively doing the fan-out via Lambda rather than SNS.
The Lambda function might also be able to extract a Message Group ID that it can provide with the SQS message, which will enable parallel processing of messages while maintaining FIFO within the message group. For example, all messages coming from a particular source will be FIFO, but can be processed in parallel with messages from other sources. It's a very powerful capability that would not be available just by having Amazon SNS forward the message.
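A rough sketch of such a fan-out function in Python/boto3 (the queue URLs and the sourceId field are assumptions for illustration, and the published message is assumed to be JSON):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder destination queues.
QUEUE_URLS = [
    "https://sqs.us-east-1.amazonaws.com/123456789012/consumer-a.fifo",
    "https://sqs.us-east-1.amazonaws.com/123456789012/consumer-b.fifo",
]

def handler(event, context):
    # SNS delivers the published message in each record's Sns.Message field.
    for record in event["Records"]:
        body = record["Sns"]["Message"]
        payload = json.loads(body)  # assumes the message body is JSON
        # Derive the group ID from the payload so messages from the same
        # source stay ordered while different sources process in parallel.
        group_id = payload.get("sourceId", "default")
        for url in QUEUE_URLS:
            sqs.send_message(
                QueueUrl=url,
                MessageBody=body,
                MessageGroupId=group_id,
                # Reuse the SNS message ID so a retried invocation does not
                # enqueue duplicates within the deduplication interval.
                MessageDeduplicationId=record["Sns"]["MessageId"],
            )
```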

Simulating message persistence in SNS using SQS

We are evaluating SNS for our messaging requirements to integrate multiple applications. We have a single producer that publishes messages to multiple topics on SNS. Each topic has 2-5 subscribers. In the event of subscriber failures (e.g. down for maintenance), I have a few questions on the recommended strategy of using SQS queues per consumer:
Is it possible to configure SNS to push to SQS only in the event of a failure to deliver the message to a subscriber? Dumping all the messages into the SQS queue creates a problem for the consumer, which has to analyze every message in the queue when it restarts.
In the event of a subscriber failure, it can read messages from the SQS queue on restart, but how would it know that it missed messages from SNS while it was overloaded?
Any suggestions on handling subscriber failures are welcome.
Thanks!
No, it is not possible to "configure SNS to push to SQS only in the event of a failure".
Rather than trying to recover a message after a failure, you can configure the Amazon SNS retry policies.
From Setting Amazon SNS Delivery Retry Policies for HTTP/HTTPS Endpoints:
You can use delivery policies to control not only the total number of retries, but also the time delay between each retry. You can specify up to 100 total retries distributed among four discrete phases. The maximum lifetime of a message in the system is one hour. This one hour limit cannot be extended by a delivery policy.
So, you don't need to worry as long as the destination is back online within an hour.
If it is likely to be offline for more than an hour, you will need to find a way to store and "replay" the messages, possibly by inspecting CloudWatch Logs.
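If you do want to stretch the retries as far as that one-hour limit allows, the delivery policy is set on the HTTP/HTTPS subscription itself. A hedged boto3 sketch follows; the subscription ARN and the numbers are placeholders, and the linked page documents the full set of policy fields.

```python
import json
import boto3

sns = boto3.client("sns")

# Illustrative retry policy for an HTTP/HTTPS subscription.
retry_policy = {
    "healthyRetryPolicy": {
        "numRetries": 50,               # up to 100 total retries
        "minDelayTarget": 20,           # seconds
        "maxDelayTarget": 600,          # seconds
        "backoffFunction": "exponential",
    }
}

sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:123456789012:my-topic:subscription-id",
    AttributeName="DeliveryPolicy",
    AttributeValue=json.dumps(retry_policy),
)
```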
Or, here's another idea...
Push initially to SQS. Have an AWS Lambda function triggered by SQS. The Lambda function can do the 'push' that would normally be done by SNS. If it fails, then the standard SQS invisibility process will retry it later, eventually going to a Dead Letter Queue.
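A minimal sketch of wiring up that dead-letter queue with boto3 (the queue URL and DLQ ARN are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")

SOURCE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/inbound"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:inbound-dlq"

# After maxReceiveCount failed receives (e.g. the Lambda 'push' keeps failing
# and the visibility timeout keeps expiring), SQS moves the message to the DLQ.
sqs.set_queue_attributes(
    QueueUrl=SOURCE_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "5"}
        )
    },
)
```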

Are SNS messages fanned out to SQS queues keeping the order?

The AWS FAQs for SNS says:
Q: Will messages be delivered to me in the exact order they were published?
The Amazon SNS service will attempt to deliver messages from the
publisher in the order they were published into the topic. However,
network issues could potentially result in out-of-order messages at
the subscriber end.
Does it apply to SQS consumers, especially a FIFO SQS queue? I have a use case where one of the consumers needs to maintain the order in which the messages were sent. If this is not the case, I would need to use something else.
Amazon SNS does not currently support delivering messages to SQS FIFO queues. This is documented here.
Important
Amazon SNS isn't currently compatible with FIFO queues.
So, since SNS does not guarantee order, and standard SQS queues do not guarantee order, you have no guarantee of message delivery order when using SNS to fan out messages to SQS.
As of yesterday, SNS also supports strict message ordering and deduplication with FIFO topics.
https://aws.amazon.com/about-aws/whats-new/2020/10/amazon-sns-introduces-fifo-topics-with-strict-ordering-and-deduplication-of-messages/

SQS Logging for insertion and removal from queue

I'm using Amazon SQS for my application in a producer/consumer context. I want to enable queue-level logging where I can see items put on the queue and later removed from it. How can I do that?
I have read the following:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/logging-using-cloudtrail.html
However, that doesn't suffice for my use case. Are we not allowed to do this with AWS queues?
What you're trying to achieve is not possible with just SQS. Possible solutions include:
Implement some middleware API between your producer and the SQS queue. The API layer would log requests from the producer.
Use Kinesis instead of SQS. Kinesis allows you to replay/analyze records created in the last 24 hours.
Implement logging in the consumer.
Use a Lambda function that will (with the help of CloudWatch Event Rule triggers) read the SQS queue once a minute, log the records, and put them in another SQS queue for later processing by the consumer.
Use a different type of queue that allows logging. For example, Redis has the MONITOR command for that.
In addition to Sergey Kovalev's answer, one now has the option for Lambda functions to be triggered by SQS events.
You simply select the SQS queue you want as the event source for your Lambda function.
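From there, a bare-bones handler that logs each delivered message could look like the sketch below (field names follow the standard SQS event record format; this is not a complete solution).

```python
import json

def handler(event, context):
    # Each record is one SQS message delivered by the event source mapping.
    for record in event["Records"]:
        print(json.dumps({
            "messageId": record["messageId"],
            "body": record["body"],
            "sentTimestamp": record["attributes"].get("SentTimestamp"),
        }))
    # Returning without raising lets Lambda delete the messages from the
    # queue, which effectively records their removal as well.
```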
I understand your pain. I also had an issue where SQS was not behaving as expected, and I was looking for logs to understand the problem.
SQS doesn't publish logs, and all SQS APIs are synchronous, so the client gets the appropriate response right away.
The solutions mentioned above are workarounds.
Among them, logging at the producer and the consumer might not help much. In my case I did have logging at the producer and the consumer, but what exactly SQS ran into, and when, was still not visible.