I have an AWS setup where an S3 bucket is configured with an SNS event notification on putObject, which is expected to push a message to its subscribed SQS queue.
I wanted to gather information regarding the SLA that AWS guarantees for the S3-to-SNS event trigger and the subsequent SNS-to-SQS message delivery. I am interested in two independent numbers:
1. SLA for SNS notification trigger upon S3 event.
2. SLA for SNS to SQS message delivery.
I read about the message delivery guarantees in the AWS FAQs but could not find any information regarding an SLA, if there is one.
I would like to know if it's possible to persist all unacknowledged messages from an SNS topic to an S3 file within a certain time window. These messages don't necessarily need to follow the original order in the S3 file; a timestamp attribute is enough.
If all you want is to save all messages published to your SNS topic in an S3 bucket, then you can simply subscribe the AWS Event Fork Pipeline for Storage & Backup to your SNS topic:
https://docs.aws.amazon.com/sns/latest/dg/sns-fork-pipeline-as-subscriber.html#sns-fork-event-storage-and-backup-pipeline
**Jan 2021 update:** SNS now supports Kinesis Data Firehose as a native subscription type. https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-sns-adds-support-for-message-archiving-and-analytics-via-kineses-data-firehose-subscriptions/
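For completeness, a minimal sketch of creating such a Firehose subscription with the AWS SDK for JavaScript v3 (all ARNs below are hypothetical placeholders; the delivery stream itself would be configured to land in your S3 bucket):

```typescript
import { SNSClient, SubscribeCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

async function subscribeFirehose() {
  // All ARNs are placeholders; substitute your own resources.
  await sns.send(new SubscribeCommand({
    TopicArn: "arn:aws:sns:us-east-1:123456789012:my-topic",
    Protocol: "firehose",
    Endpoint: "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-archive-stream",
    Attributes: {
      // IAM role that grants SNS permission to put records into the stream.
      SubscriptionRoleArn: "arn:aws:iam::123456789012:role/sns-firehose-role",
    },
  }));
}
```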
There is no in-built capability to save messages from Amazon SNS to Amazon S3.
However, this week AWS introduced Dead Letter Queues for Amazon SNS.
From Amazon SNS Adds Support for Dead-Letter Queues (DLQ):
You can now set a dead-letter queue (DLQ) to an Amazon Simple Notification Service (SNS) subscription to capture undeliverable messages. Amazon SNS DLQs make your application more resilient and durable by storing messages in case your subscription endpoint becomes unreachable.
Amazon SNS DLQs are standard Amazon SQS queues.
So, if Amazon SNS is unable to deliver a message, it can automatically send it to an Amazon SQS queue. You can later review/process those failed messages. For example, you could create an AWS Lambda function that is triggered when a message arrives in the Dead Letter Queue. The function could then store the message in Amazon S3.
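A minimal sketch of such a function, assuming the AWS SDK for JavaScript v3, a hypothetical archive bucket, an SQS trigger wired to the DLQ, and s3:PutObject permission on the bucket:

```typescript
import { SQSHandler } from "aws-lambda";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const BUCKET = "my-dlq-archive-bucket"; // hypothetical bucket name

// Invoked by the SQS trigger on the dead-letter queue; archives each
// undeliverable SNS message as its own S3 object.
export const handler: SQSHandler = async (event) => {
  for (const record of event.Records) {
    await s3.send(new PutObjectCommand({
      Bucket: BUCKET,
      Key: `failed-messages/${record.messageId}.json`, // one object per message
      Body: record.body,
      ContentType: "application/json",
    }));
  }
};
```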
I am using a Splunk Technical Add-on that will be pulling messages from an SQS queue. Although the TA suggests forwarding S3 events to an SNS topic with an SQS queue subscribed to it, it is also possible for S3 to forward directly to SQS.
Would SNS change what S3 sends to it in any way, or would it be a fully transparent transport to SQS?
Yes, by default, S3 → SQS and S3 → SNS → SQS will result in two different data structures/formats inside the SQS message body.
This is because an SNS subscription provides metadata with each message delivered -- the SNS MessageId, a Signature to validate authenticity, a Timestamp of when SNS originally accepted the message, and other attributes. The original message is encoded as a JSON string inside the Message attribute of this outer JSON structure.
So with SQS direct, you would extract the S3 event with (in JavaScript, for example)...
```javascript
// Direct S3 -> SQS: the SQS message body is the S3 event itself.
const s3event = JSON.parse(sqsbody);
```
...but with SNS to SQS...
```javascript
// S3 -> SNS -> SQS: parse the SNS envelope, then parse its Message field.
const s3event = JSON.parse(JSON.parse(sqsbody).Message);
```
You can disable the additional structures and have SNS send only the original payload by enabling raw message delivery on the SQS subscription to the SNS topic.
https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
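A minimal sketch of enabling it on an existing subscription with the AWS SDK for JavaScript v3 (the subscription ARN is whatever your SQS subscription's ARN is):

```typescript
import { SNSClient, SetSubscriptionAttributesCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

async function enableRawDelivery(subscriptionArn: string) {
  await sns.send(new SetSubscriptionAttributesCommand({
    SubscriptionArn: subscriptionArn,
    AttributeName: "RawMessageDelivery",
    AttributeValue: "true", // SNS now delivers only the original payload
  }));
}
```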
With raw message delivery enabled, the contents become the same, for both S3 → SQS and S3 → SNS → SQS.
The downside is that raw message delivery discards potentially useful troubleshooting information, like the SNS message ID and the SNS-issued timestamp.
On the other hand, if the receiving service (the SQS consumer) assumes messages always arrive via SNS and expects to find the SNS data structure in the SQS message body, then sending direct S3 → SQS will leave the consumer with a message body that does not match its expectations.
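If a consumer has to tolerate both delivery paths, it can inspect the body before parsing. A small sketch; the check relies on the standard Type and Message fields SNS adds to its envelope:

```typescript
// Extracts the S3 event whether the message arrived via S3 -> SQS directly
// or via S3 -> SNS -> SQS without raw message delivery.
function extractS3Event(sqsBody: string): unknown {
  const parsed = JSON.parse(sqsBody);
  if (parsed.Type === "Notification" && typeof parsed.Message === "string") {
    // SNS envelope: the S3 event is a JSON string inside the Message field.
    return JSON.parse(parsed.Message);
  }
  // Direct delivery: the body is already the S3 event.
  return parsed;
}
```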
I am using Amazon SES for a project and have set up a receipt rule to send messages from SES to SNS. The SNS topic has my API endpoint as a subscriber, but to ensure that I do not miss any messages, I have also set up an SQS queue and subscribed it to the SNS topic.
With this setup I receive each SES email twice (once pushed from SNS and once when polling SQS). Is there a way to send only the failed SNS messages to the SQS queue, so that I don't always have to check for duplicates?
(Screenshots omitted: the receipt rule that sends SES messages to SNS, and the SQS queue subscribed to the SNS topic.)
With your setup, it's not possible to do so. But there are other approaches you can try. They may add complexity to your application, but they're worth considering:
SES to SNS to SQS: send all messages from SNS to SQS, and poll SQS for them. If processing a message fails, move it to a dead-letter queue (itself a standard SQS queue) and poll that queue separately from time to time to look for failed messages. This makes the messages more durable, but is slightly inefficient due to the extra polling.
SES to SNS with a delivery policy: you can avoid SQS entirely and have SNS track the delivery status of each message, retrying when delivery fails. You can define the retry policy as needed (see the SNS delivery-policy documentation). Once the retries are exhausted, the message is discarded as a total failure. You can also log the delivery status of each attempt to Amazon CloudWatch Logs.
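For that second approach, a retry policy can be attached to the HTTP/S subscription as a DeliveryPolicy attribute. A sketch with the AWS SDK for JavaScript v3; the retry numbers here are hypothetical and should be tuned to your endpoint:

```typescript
import { SNSClient, SetSubscriptionAttributesCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

async function setRetryPolicy(subscriptionArn: string) {
  // Hypothetical retry numbers; tune to what your endpoint can tolerate.
  const deliveryPolicy = {
    healthyRetryPolicy: {
      numRetries: 10,
      minDelayTarget: 20,             // seconds before the first retry
      maxDelayTarget: 600,            // ceiling on the backoff delay, in seconds
      backoffFunction: "exponential",
    },
  };

  await sns.send(new SetSubscriptionAttributesCommand({
    SubscriptionArn: subscriptionArn,
    AttributeName: "DeliveryPolicy",
    AttributeValue: JSON.stringify(deliveryPolicy),
  }));
}
```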
Short answer: no.
If you've subscribed an SQS queue to an SNS topic, it will receive all messages published to that topic. SNS doesn't know which messages were successfully processed by your API endpoint, so it can't selectively send only the failures to SQS.
I want to be notified when a file is uploaded to my S3 bucket. I know I can have an SQS message or an SNS notification sent. What I need is a message sent to multiple SQS queues. Is this possible?
You can configure an SNS topic that will get the message when there is an upload to the S3 bucket.
Then subscribe all the SQS queues to that SNS topic.
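A minimal sketch of that setup with the AWS SDK for JavaScript v3 (the bucket name, topic ARN, and queue ARNs are placeholders; the SNS topic's access policy must separately allow the S3 bucket to publish to it):

```typescript
import { SNSClient, SubscribeCommand } from "@aws-sdk/client-sns";
import { S3Client, PutBucketNotificationConfigurationCommand } from "@aws-sdk/client-s3";

const sns = new SNSClient({});
const s3 = new S3Client({});

// Hypothetical resources; replace with your own.
const topicArn = "arn:aws:sns:us-east-1:123456789012:s3-uploads";
const queueArns = [
  "arn:aws:sqs:us-east-1:123456789012:consumer-a",
  "arn:aws:sqs:us-east-1:123456789012:consumer-b",
];

async function fanOutUploads() {
  // Subscribe every queue to the topic.
  for (const queueArn of queueArns) {
    await sns.send(new SubscribeCommand({
      TopicArn: topicArn,
      Protocol: "sqs",
      Endpoint: queueArn,
    }));
  }

  // Point the bucket's object-created notifications at the topic.
  await s3.send(new PutBucketNotificationConfigurationCommand({
    Bucket: "my-upload-bucket",
    NotificationConfiguration: {
      TopicConfigurations: [
        { TopicArn: topicArn, Events: ["s3:ObjectCreated:*"] },
      ],
    },
  }));
}
```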
You can use the S3 event notification feature with either SNS or SQS: http://aws.amazon.com/blogs/aws/s3-event-notification/
AWS now lets you configure an S3 event to notify an SQS queue directly, but only one queue, and it must be a standard queue, not FIFO.
You can add more than one filter suffix or filter prefix, but they all point at that same standard queue.
If you want to notify more than one queue (standard or FIFO), you should put SNS in the middle: the S3 event goes to the SNS topic, and all the SQS queues subscribe to that topic. You can also add multiple Lambda functions and HTTP endpoints (for example, on EC2 instances) as subscribers.
I know I can configure an Amazon S3 bucket to publish events to an SQS queue and to an SNS topic.
But is it possible to configure the bucket to publish the event to SQS first, and then, once the message has been sent to SQS, have the bucket publish the event to SNS (publishing these events synchronously, in a sense)?
An Amazon S3 bucket can publish a notification to one of:
Amazon Simple Notification Service (SNS)
Amazon Simple Queue Service (SQS)
AWS Lambda
However, SNS can also send a message to SQS. (More accurately, an SQS queue can be added as a subscriber to an SNS topic.)
Therefore, you could choose to send the event to SNS, which can then pass it along to an SQS queue. This is a good way to "fork" the event, sending it to multiple SNS subscribers.
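One detail worth noting: for the SNS → SQS hop to work, the SQS queue's access policy must allow the SNS topic to send messages to it, or nothing will be delivered. A sketch of granting that with the AWS SDK for JavaScript v3 (queue URL, queue ARN, and topic ARN are placeholders):

```typescript
import { SQSClient, SetQueueAttributesCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Hypothetical resources; substitute your own queue and topic.
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";
const queueArn = "arn:aws:sqs:us-east-1:123456789012:my-queue";
const topicArn = "arn:aws:sns:us-east-1:123456789012:my-topic";

async function allowSnsDelivery() {
  const policy = {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Principal: { Service: "sns.amazonaws.com" },
        Action: "sqs:SendMessage",
        Resource: queueArn,
        // Limit the grant to this one topic.
        Condition: { ArnEquals: { "aws:SourceArn": topicArn } },
      },
    ],
  };

  await sqs.send(new SetQueueAttributesCommand({
    QueueUrl: queueUrl,
    Attributes: { Policy: JSON.stringify(policy) },
  }));
}
```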