I have the following pipeline:
Lambda #1 -> SNS -> SQS -> Lambda #2
Lambda #1 publishes messages in batch to SNS, which propagates them to its subscriptions, in this case an SQS queue.
SQS then invokes Lambda #2 via event source mapping, passing along the message from Lambda #1.
This entire pipeline works, but by the time the payload reaches Lambda #2 it's double-stringified: if I send the message {"foo": "bar"}, I get an event like this:
{
"Records": [
{
...
"body": "{\n \"Type\" : \"Notification\",\n \"MessageId\" : \"some id\",\n \"TopicArn\" : \"arn:aws:sns:us-west-2:xxx:topicName\",\n \"Message\" : \"{\\\"foo\\\": \\\"bar\\\"}\",\n
... rest of SNS payload}",
... rest of SQS payload
}
]
}
It seems the SNS notification gets stringified and sent to SQS as the message body, which is then handed to Lambda.
Is this expected, or did I configure something incorrectly?
After a bit of digging, it turns out you need to enable RawMessageDelivery: https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
To avoid having Amazon Kinesis Data Firehose, Amazon SQS, and HTTP/S endpoints process the JSON formatting of messages, Amazon SNS allows raw message delivery:
- When you enable raw message delivery for Amazon Kinesis Data Firehose or Amazon SQS endpoints, any Amazon SNS metadata is stripped from the published message and the message is sent as is.
- When you enable raw message delivery for HTTP/S endpoints, the HTTP header x-amz-sns-rawdelivery with its value set to true is added to the message, indicating that the message has been published without JSON formatting.
- When you enable raw message delivery for HTTP/S endpoints, the message body, client IP, and the required headers are delivered. Any message attributes you specify are not sent.
- When you enable raw message delivery for Kinesis Data Firehose endpoints, the message body is delivered. Any message attributes you specify are not sent.
Try setting up your SNS-to-SQS/Lambda integration using this template:
https://carova.io/snippets/serverless-aws-sqs-queue-subscribed-to-sns-topic
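If you're configuring the subscription yourself rather than via a template, raw message delivery is a single attribute on the SNS subscription. A minimal sketch with the AWS CLI (the subscription ARN below is a placeholder):

```shell
# Enable raw message delivery on an existing SNS -> SQS subscription.
# Replace the subscription ARN with your own.
aws sns set-subscription-attributes \
  --subscription-arn arn:aws:sns:us-west-2:123456789012:topicName:subscription-id \
  --attribute-name RawMessageDelivery \
  --attribute-value true
```

After this, the SQS message body contains only the original published message, with no SNS envelope around it.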
Related
I'm using the AWS SDK v3 to push a message to SNS which is then subscribed to by an SQS Queue.
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const snsClient = new SNSClient({});

await snsClient.send(new PublishCommand({
  Message: JSON.stringify(payload),
  TopicArn: process.env.SNS_TOPIC_ARN,
}));
I want to delay individual messages. Is this possible if I'm pushing them via SNS or do I have to rework it and push directly to SQS?
You can only control the delay of individual messages when sending them to the Amazon SQS queue directly, via the DelaySeconds parameter.
It is not possible to specify this value when the message is delivered via an Amazon SNS topic to the Amazon SQS queue.
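A minimal sketch of sending directly to SQS with a per-message delay, assuming @aws-sdk/client-sqs is installed and an SQS_QUEUE_URL environment variable; buildDelayedMessage is a hypothetical helper, not part of the SDK:

```javascript
// Build SendMessage parameters with a per-message delay.
// SQS accepts DelaySeconds between 0 and 900 (15 minutes), so the
// helper clamps the value into that range.
function buildDelayedMessage(queueUrl, payload, delaySeconds) {
  const clamped = Math.min(900, Math.max(0, Math.floor(delaySeconds)));
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(payload),
    DelaySeconds: clamped,
  };
}

// Usage with the SDK (requires AWS credentials):
// import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
// const sqs = new SQSClient({});
// await sqs.send(new SendMessageCommand(
//   buildDelayedMessage(process.env.SQS_QUEUE_URL, { foo: "bar" }, 45)
// ));
```

Note that 900 seconds is the hard upper bound; for longer delays you'd need a different mechanism (e.g. EventBridge Scheduler or Step Functions).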
I would like to know if it's possible to persist all unacknowledged messages from an SNS topic to an S3 file within a certain time window. The messages don't need to preserve their original order in the S3 file; a timestamp attribute is enough.
If all you want is to save every message published to your SNS topic in an S3 bucket, you can simply subscribe the AWS Event Fork Pipeline for Storage & Backup to your topic:
https://docs.aws.amazon.com/sns/latest/dg/sns-fork-pipeline-as-subscriber.html#sns-fork-event-storage-and-backup-pipeline
** Jan 2021 Update: SNS now supports Kinesis Data Firehose as a native subscription type. https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-sns-adds-support-for-message-archiving-and-analytics-via-kineses-data-firehose-subscriptions/
There is no in-built capability to save messages from Amazon SNS to Amazon S3.
However, this week AWS introduced Dead Letter Queues for Amazon SNS.
From Amazon SNS Adds Support for Dead-Letter Queues (DLQ):
You can now set a dead-letter queue (DLQ) to an Amazon Simple Notification Service (SNS) subscription to capture undeliverable messages. Amazon SNS DLQs make your application more resilient and durable by storing messages in case your subscription endpoint becomes unreachable.
Amazon SNS DLQs are standard Amazon SQS queues.
So, if Amazon SNS is unable to deliver a message, it can automatically send it to an Amazon SQS queue. You can later review/process those failed messages. For example, you could create an AWS Lambda function that is triggered when a message arrives in the Dead Letter Queue. The function could then store the message in Amazon S3.
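A minimal sketch of such a function, assuming a Lambda triggered by the SNS dead-letter queue, @aws-sdk/client-s3 installed, and a DLQ_BUCKET environment variable; buildS3PutParams is a hypothetical helper:

```javascript
// Build S3 PutObject parameters for one SQS record from the DLQ.
// Each record body from an SNS DLQ still contains the SNS envelope,
// so the timestamp and message ID can be used in the object key.
function buildS3PutParams(bucket, record) {
  const envelope = JSON.parse(record.body);
  const key = `sns-dlq/${envelope.Timestamp || "unknown"}-${
    envelope.MessageId || record.messageId}.json`;
  return { Bucket: bucket, Key: key, Body: record.body };
}

// Usage inside the handler (requires AWS credentials):
// import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
// const s3 = new S3Client({});
// export const handler = async (event) => {
//   for (const record of event.Records) {
//     await s3.send(new PutObjectCommand(
//       buildS3PutParams(process.env.DLQ_BUCKET, record)
//     ));
//   }
// };
```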
I am using a Splunk Technical Add-on that pulls messages from an SQS queue. Although the TA suggests having S3 forward events to an SNS topic with an SQS subscription, S3 can also forward directly to SQS.
Would SNS make any change to what S3 sends through it? Or would it be a fully transparent transport to SQS?
Yes, by default, S3 → SQS and S3 → SNS → SQS will result in two different data structures/formats inside the SQS message body.
This is because an SNS subscription provides metadata with each message delivered -- the SNS MessageId, a Signature to validate authenticity, a Timestamp of when SNS originally accepted the message, and other attributes. The original message is encoded as a JSON string inside the Message attribute of this outer JSON structure.
So with SQS direct, you would extract the S3 event with (pseudocode)...
s3event = JSON.parse(sqsbody)
...but with SNS to SQS...
s3event = JSON.parse(JSON.parse(sqsbody).Message)
You can disable the additional structures and have SNS send only the original payload by enabling raw message delivery on the SQS subscription to the SNS topic.
https://docs.aws.amazon.com/sns/latest/dg/sns-large-payload-raw-message-delivery.html
With raw message delivery enabled, the contents become the same, for both S3 → SQS and S3 → SNS → SQS.
The downside to raw message delivery is that you lose potentially useful troubleshooting information, like the SNS message ID and the SNS-issued timestamp.
On the other hand, if the receiving service (the SQS consumer) assumes the messages are always coming via SNS and expects to find the SNS data structure in the SQS message body, then sending direct S3 → SQS will result in the consumer finding that the message body from SQS does not match its expectations.
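One way to make the consumer robust to both shapes is to detect the SNS envelope and unwrap it only when present. A minimal sketch (extractS3Event is a hypothetical helper, not an AWS API):

```javascript
// Unwrap an SQS message body that may or may not carry an SNS envelope,
// so the same consumer handles S3 -> SQS and S3 -> SNS -> SQS.
function extractS3Event(sqsBody) {
  const parsed = JSON.parse(sqsBody);
  // SNS envelopes (without raw message delivery) are marked by
  // Type: "Notification" and carry the original payload in Message.
  if (parsed.Type === "Notification" && typeof parsed.Message === "string") {
    return JSON.parse(parsed.Message);
  }
  return parsed; // direct S3 -> SQS, or raw message delivery enabled
}
```

With this in place, enabling or disabling raw message delivery no longer breaks the consumer.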
I have a requirement to publish messages from SNS to Kinesis. I have found that this is not possible directly via a subscription, the way it is for SNS to SQS. I would need to write a Lambda function that receives messages from SNS and publishes them to Kinesis.
Is there any other way to publish records from SNS to Kinesis directly?
Thanks
Amazon SNS is a publish/subscribe model.
Messages sent to SNS can be subscribed from:
http/s: delivery of JSON-encoded message via HTTP POST
email: delivery of message via SMTP
email-json: delivery of JSON-encoded message via SMTP
sms: delivery of message via SMS
sqs: delivery of JSON-encoded message to an Amazon SQS queue
application: delivery of JSON-encoded message to an EndpointArn for a mobile app and device.
lambda: delivery of JSON-encoded message to an AWS Lambda function.
Other options: See Otavio's answer below!
See: Subscribe - Amazon Simple Notification Service
Of these, the only one that could be used to send to Amazon Kinesis is AWS Lambda. You would need to write a Lambda function that sends the message to a Kinesis stream.
To clarify: Your Lambda function will not "fetch from SNS". Rather, the Lambda function will be triggered by SNS, with the message being passed as input. Your Lambda function will then need to send the message to Kinesis.
Your only other alternative is to change the system that currently sends the message to SNS and have it send the message to Kinesis instead.
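A minimal sketch of such a forwarder, assuming @aws-sdk/client-kinesis is installed and a KINESIS_STREAM environment variable; snsMessageToKinesisRecord is a hypothetical helper:

```javascript
// Build a Kinesis PutRecord input from one SNS event record.
function snsMessageToKinesisRecord(streamName, snsRecord) {
  const { Message, MessageId } = snsRecord.Sns;
  return {
    StreamName: streamName,
    Data: Buffer.from(Message),  // Kinesis expects binary data
    PartitionKey: MessageId,     // spreads messages across shards
  };
}

// Usage inside the SNS-triggered handler (requires AWS credentials):
// import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis";
// const kinesis = new KinesisClient({});
// export const handler = async (event) => {
//   for (const record of event.Records) {
//     await kinesis.send(new PutRecordCommand(
//       snsMessageToKinesisRecord(process.env.KINESIS_STREAM, record)
//     ));
//   }
// };
```

Using the SNS MessageId as the partition key distributes records evenly; if downstream ordering per entity matters, a key derived from the message content would be a better choice.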
Good news! As of January 2021, Amazon SNS has added support for message archiving and analytics via Kinesis Data Firehose subscriptions. You can now load SNS messages into S3, Redshift, Elasticsearch, MongoDB, Datadog, Splunk, New Relic, and more. The SNS documentation has the details.
What's the easiest way to save/log every message published on an AWS SNS topic? I thought there might be a magic setting to automatically push them to S3 or a database, or maybe a database service supporting the HTTP destination automatically, but that doesn't seem to be the case. Maybe it needs to be done via a Lambda function?
The purpose is just for basic diagnostics and debugging while setting up some SNS publishing. I don't really care about high scale or fast querying, just want to log and perform basic queries on all the activity for a few minutes at a time.
You can set up a subscription to push your SNS messages to an SQS queue. Delivery is automatic and does not require any code.
According to the docs, SNS can publish to:
http – delivery of JSON-encoded message via HTTP POST
https – delivery of JSON-encoded message via HTTPS POST
email – delivery of message via SMTP
email-json – delivery of JSON-encoded message via SMTP
sms – delivery of message via SMS
sqs – delivery of JSON-encoded message to an Amazon SQS queue
application – delivery of JSON-encoded message to an EndpointArn for a mobile app and device.
lambda – delivery of JSON-encoded message to an AWS Lambda function.
So yes, a simple approach would be to trigger a Lambda function that writes the messages to S3 or CloudWatch Logs.
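For quick diagnostics, logging to CloudWatch is the least setup: anything a Lambda writes to stdout lands in its log group. A minimal sketch (formatSnsRecord is a hypothetical helper):

```javascript
// Format one SNS event record as a single log line.
function formatSnsRecord(snsRecord) {
  const { MessageId, Timestamp, Message } = snsRecord.Sns;
  return `${Timestamp} ${MessageId} ${Message}`;
}

// Lambda handler subscribed to the SNS topic:
// export const handler = async (event) => {
//   for (const record of event.Records) {
//     console.log(formatSnsRecord(record));
//   }
// };
```

You can then use CloudWatch Logs Insights to run basic queries over the captured messages, which fits the "few minutes at a time" debugging use case.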