S3 - file uploaded - message in multiple SQS queues

I want to be notified when a file is uploaded to my S3 bucket. I know I can have an SQS message or an SNS notification sent. What I need is a message sent to multiple SQS queues. Is it possible?

You can configure an SNS topic that receives a message whenever there is an upload to the S3 bucket.
Then subscribe all the SQS queues to that SNS topic.
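A minimal boto3 sketch of that fan-out, assuming the topic, the queues, and the bucket already exist (all names and ARNs below are placeholders):

    import boto3

    sns = boto3.client("sns")
    s3 = boto3.client("s3")

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:s3-uploads"  # placeholder
    QUEUE_ARNS = [
        "arn:aws:sqs:us-east-1:123456789012:queue-a",  # placeholder
        "arn:aws:sqs:us-east-1:123456789012:queue-b",  # placeholder
    ]

    # Subscribe every queue to the topic. Each queue's access policy must
    # also allow sns.amazonaws.com to SendMessage to it (see the policy
    # sketch in the last answer of this section).
    for queue_arn in QUEUE_ARNS:
        sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=queue_arn)

    # Point the bucket's event notifications at the topic instead of a queue.
    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",  # placeholder
        NotificationConfiguration={
            "TopicConfigurations": [
                {"TopicArn": TOPIC_ARN, "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )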

You can use S3 event notifications with either SNS or SQS as the destination: http://aws.amazon.com/blogs/aws/s3-event-notification/

AWS now lets you add an S3 event notification that targets one SQS queue directly, only one, and it must be a standard queue, not FIFO.
You can add more than one filter suffix or filter prefix, but they all deliver to that same standard SQS queue (see the sketch after this answer).
If you want to notify more than one queue (standard or FIFO), you should put SNS in the middle: the S3 event goes to SNS, and all the SQS queues subscribe to that SNS topic. You can also add multiple Lambda functions and EC2 instances as subscribers.
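For the direct S3 → SQS case described above, multiple suffix filters all delivering to the same standard queue might look like this (a sketch; the bucket name and queue ARN are placeholders):

    import boto3

    s3 = boto3.client("s3")
    QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:uploads"  # standard queue, placeholder

    # Two filter rules, but both notification configurations target the
    # same standard queue; the filters must not overlap.
    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",  # placeholder
        NotificationConfiguration={
            "QueueConfigurations": [
                {
                    "QueueArn": QUEUE_ARN,
                    "Events": ["s3:ObjectCreated:*"],
                    "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".jpg"}]}},
                },
                {
                    "QueueArn": QUEUE_ARN,
                    "Events": ["s3:ObjectCreated:*"],
                    "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".png"}]}},
                },
            ]
        },
    )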

Related

Question on SQS and S3: is it possible to wire SQS to publish straight to S3?

I am guessing the answer would be a big no, but is there a way to publish SQS messages straight to an S3 bucket?
I know the pattern is SQS → Lambda → S3. I was wondering whether there is a way to publish from SQS straight to an S3 bucket.
Amazon SQS does not 'publish' messages.
Instead, applications call SendMessage() to put a message on an Amazon SQS queue, and consumers retrieve messages by calling ReceiveMessage(). Once they have finished processing a message, they call DeleteMessage().
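To illustrate that API flow, and the SQS → consumer → S3 pattern from the question, a minimal polling sketch (the queue URL and bucket name are placeholders):

    import uuid

    import boto3

    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

    while True:
        # Long-poll for up to 10 messages at a time.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            # "Process" each message by writing its body to S3.
            s3.put_object(
                Bucket="my-archive-bucket",  # placeholder
                Key=f"messages/{uuid.uuid4()}.json",
                Body=msg["Body"].encode("utf-8"),
            )
            # Delete only after processing succeeds.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])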

How to save failed messages from Amazon SNS into Amazon S3

I would like to know if it's possible to persist all unacknowledged messages from an SNS topic to an S3 file within a certain time window. These messages don't necessarily need to keep their original order in the S3 file; a timestamp attribute is enough.
If all you want is to save all messages published to your SNS topic in an S3 bucket, then you can simply subscribe the AWS Event Fork Pipeline for Storage & Backup to your SNS topic:
https://docs.aws.amazon.com/sns/latest/dg/sns-fork-pipeline-as-subscriber.html#sns-fork-event-storage-and-backup-pipeline
Jan 2021 update: SNS now supports Kinesis Data Firehose as a native subscription type. https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-sns-adds-support-for-message-archiving-and-analytics-via-kineses-data-firehose-subscriptions/
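With the native Firehose support, archiving to S3 becomes a single subscription call. A sketch, assuming a Firehose delivery stream that writes to S3 and an IAM role SNS can assume (all ARNs are placeholders):

    import boto3

    sns = boto3.client("sns")

    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",  # placeholder
        Protocol="firehose",
        Endpoint="arn:aws:firehose:us-east-1:123456789012:deliverystream/to-s3",  # placeholder
        Attributes={
            # Role that lets SNS put records into the delivery stream.
            "SubscriptionRoleArn": "arn:aws:iam::123456789012:role/sns-firehose-role"  # placeholder
        },
    )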
There is no in-built capability to save messages from Amazon SNS to Amazon S3.
However, this week AWS introduced Dead Letter Queues for Amazon SNS.
From Amazon SNS Adds Support for Dead-Letter Queues (DLQ):
You can now set a dead-letter queue (DLQ) to an Amazon Simple Notification Service (SNS) subscription to capture undeliverable messages. Amazon SNS DLQs make your application more resilient and durable by storing messages in case your subscription endpoint becomes unreachable.
Amazon SNS DLQs are standard Amazon SQS queues.
So, if Amazon SNS is unable to deliver a message, it can automatically send it to an Amazon SQS queue. You can later review/process those failed messages. For example, you could create an AWS Lambda function that is triggered when a message arrives in the Dead Letter Queue. The function could then store the message in Amazon S3.
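A sketch of such a Lambda function, assuming the DLQ is configured as the function's SQS event source (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "failed-sns-messages"  # placeholder

    def handler(event, context):
        # With an SQS event source, Lambda delivers messages in batches
        # under "Records"; returning without error lets Lambda delete them.
        for record in event["Records"]:
            s3.put_object(
                Bucket=BUCKET,
                Key=f"dlq/{record['messageId']}.json",
                Body=record["body"].encode("utf-8"),
            )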

S3 notifications invoke an SQS queue in another region

I have an S3 bucket in the US West (N. California) region and an SQS queue configured in us-east-1. Is there any way to configure an SQS queue in another region to be invoked by an S3 event?
I am able to set up event notifications when:
- S3 and SQS are in the same region and the same account
- S3 and SQS are in the same region but in different accounts
Not working:
- S3 and SQS are in different regions
Can someone help me, please?
The Amazon SQS queue must be in the same region as your Amazon S3 bucket.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/setup-event-notification-destination.html
It isn't a supported configuration for S3 to reach across a regional boundary to send a notification to SQS. The reason why is not specifically stated in the documentation.
But there is a workaround.
An S3 bucket in us-west-1 can send an event notification to an SNS topic in us-west-1, and an SQS queue in us-east-1 can subscribe to an SNS topic in us-west-1... so S3 (us-west-1) → SNS (us-west-1) → SQS (us-east-1) is the solution here. After subscribing the queue to the topic, you may want to enable the "raw message delivery" option on the subscription; otherwise, SNS adds an outer wrapper to the original event notification payload and the message format will differ from what you expect.
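The subscription step in boto3 might look like this, assuming the topic (us-west-1) and queue (us-east-1) already exist (ARNs are placeholders):

    import boto3

    # The SNS client talks to the region where the topic lives.
    sns = boto3.client("sns", region_name="us-west-1")

    sub = sns.subscribe(
        TopicArn="arn:aws:sns:us-west-1:123456789012:s3-events",        # placeholder
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:123456789012:s3-events-queue",  # placeholder
        ReturnSubscriptionArn=True,
    )

    # Deliver the original S3 payload without the SNS JSON wrapper.
    sns.set_subscription_attributes(
        SubscriptionArn=sub["SubscriptionArn"],
        AttributeName="RawMessageDelivery",
        AttributeValue="true",
    )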
Cross-region Amazon S3 events do not work for Amazon SNS, so they probably won't work for Amazon SQS either. See: S3 Notifications invoke sns topic in other region
Some options:
- Trigger an AWS Lambda function that pushes a message into the SQS queue in the other region, or
- Use cross-region replication to replicate the bucket to the other region and trigger an event from there.
You can now send S3 notifications to EventBridge, forward them from there to an EventBridge event bus in another region, and from there route them to SQS.
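A sketch of that route: the bucket turns on EventBridge delivery, and a rule in the bucket's region forwards matching events to an event bus in the destination region, where another rule can target the queue (all names and ARNs are placeholders):

    import json

    import boto3

    s3 = boto3.client("s3")
    events = boto3.client("events", region_name="us-west-1")  # bucket's region

    # Send the bucket's S3 events to EventBridge.
    s3.put_bucket_notification_configuration(
        Bucket="my-bucket",  # placeholder
        NotificationConfiguration={"EventBridgeConfiguration": {}},
    )

    # Match object-created events and forward them to the remote event bus.
    events.put_rule(
        Name="s3-object-created",  # placeholder
        EventPattern=json.dumps({
            "source": ["aws.s3"],
            "detail-type": ["Object Created"],
            "detail": {"bucket": {"name": ["my-bucket"]}},
        }),
    )
    events.put_targets(
        Rule="s3-object-created",
        Targets=[{
            "Id": "cross-region-bus",
            "Arn": "arn:aws:events:us-east-1:123456789012:event-bus/default",  # placeholder
            "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-cross-region",  # placeholder
        }],
    )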
It's supported now: S3 and SNS in the same region, with SQS in another region.

Will AWS S3 event forwarding to SQS end up being a different message than S3 to SNS to SQS?

I am using a Splunk Technical Add-on that will be pulling messages from an SQS queue. Although the TA suggests forwarding S3 events to an SNS topic with an SQS queue subscribed to it, it is also possible for S3 to forward directly to SQS.
Would SNS change what S3 sends through it, or would it be a fully transparent transport method to SQS?
Yes, by default, S3 → SQS and S3 → SNS → SQS will result in two different data structures/formats inside the SQS message body.
This is because an SNS subscription provides metadata with each message delivered -- the SNS MessageId, a Signature to validate authenticity, a Timestamp of when SNS originally accepted the message, and other attributes. The original message is encoded as a JSON string inside the Message attribute of this outer JSON structure.
So with SQS direct, you would extract the S3 event with (in Python)...
    s3_event = json.loads(sqs_body)
...but with SNS to SQS...
    s3_event = json.loads(json.loads(sqs_body)["Message"])
You can disable the additional structures and have SNS send only the original payload by enabling raw message delivery on the SQS subscription to the SNS topic.
https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
With raw message delivery enabled, the contents become the same, for both S3 → SQS and S3 → SNS → SQS.
The downside to raw message delivery is that you lose potentially useful troubleshooting information, like the SNS message ID and the SNS-issued timestamp.
On the other hand, if the receiving service (the SQS consumer) assumes the messages always come via SNS and expects to find the SNS data structure in the SQS message body, then sending S3 → SQS directly will leave the consumer with a message body that does not match its expectations.
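A defensive consumer can accept both shapes by checking for the SNS envelope before unwrapping (a sketch; the field names follow the formats described above):

    import json

    def extract_s3_event(sqs_body: str) -> dict:
        """Return the S3 event whether or not SNS wrapped the SQS body."""
        body = json.loads(sqs_body)
        if body.get("Type") == "Notification" and "Message" in body:
            # S3 -> SNS -> SQS without raw message delivery: unwrap the envelope.
            return json.loads(body["Message"])
        # S3 -> SQS direct, or SNS with raw message delivery enabled.
        return body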

Configure S3 bucket to publish events synchronously

I know I can configure an Amazon S3 bucket to publish events to an SQS queue and to an SNS topic.
But is it possible to configure the bucket to publish the event to SQS first and then, once the message has been sent to SQS, publish the event to SNS (publishing these events synchronously, in a sense)?
An Amazon S3 bucket can publish a notification to one of:
- Amazon Simple Notification Service (SNS)
- Amazon Simple Queue Service (SQS)
- AWS Lambda
However, SNS can also send a message to SQS. (More accurately, an SQS queue can be added as a subscriber to an SNS topic.)
Therefore, you could choose to send the event to SNS, which can on-send the event to an SQS queue. This is a good way to "fork" the event, sending it to multiple SNS subscribers.
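One practical detail when adding an SQS queue as an SNS subscriber: the queue's access policy must allow the topic to send to it. A sketch (ARNs and URL are placeholders):

    import json

    import boto3

    sqs = boto3.client("sqs")

    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:s3-events"               # placeholder
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder
    QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:my-queue"                # placeholder

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            # Only this topic may deliver to the queue.
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }],
    }

    sqs.set_queue_attributes(
        QueueUrl=QUEUE_URL, Attributes={"Policy": json.dumps(policy)}
    )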