How to know once all the SNS subscribers received the message? - amazon-web-services

Let's say I have set up an AWS SNS topic with 3 subscribers. I'd like to know when all of the subscribers have received/processed the message, in order to mark that message as processed by all 3 and to generate some metrics.
Is there a way to do this?

You can log delivery status for SNS topics to CloudWatch, but only for certain endpoint types (for some, such as SMS or email, AWS has no reliable way of knowing whether a message was actually received).
The endpoint types you can log are:
HTTP
Lambda
SQS
Custom Application (must be configured to tell AWS that the message is received)
To set up logging in SNS:
In the SNS console, click "Edit Topic"
Expand "delivery status logging"
Then you can configure which protocols to log and the necessary permissions to do so.
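If you prefer to script the setup, the same delivery status settings can be applied with boto3 by setting the topic's feedback attributes. A minimal sketch, assuming a Lambda subscription and an existing IAM role that SNS can assume to write to CloudWatch Logs (both ARNs below are placeholders):

import boto3

sns = boto3.client("sns")

# Placeholder ARNs -- replace with your topic and feedback IAM role.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic"
FEEDBACK_ROLE_ARN = "arn:aws:iam::123456789012:role/SNSFeedback"

# Enable delivery status logging for Lambda subscriptions on the topic.
# Equivalent attributes exist for HTTP, SQS and Application endpoints
# (e.g. HTTPSuccessFeedbackRoleArn, SQSFailureFeedbackRoleArn).
for name, value in [
    ("LambdaSuccessFeedbackRoleArn", FEEDBACK_ROLE_ARN),
    ("LambdaFailureFeedbackRoleArn", FEEDBACK_ROLE_ARN),
    ("LambdaSuccessFeedbackSampleRate", "100"),  # log 100% of successful deliveries
]:
    sns.set_topic_attributes(TopicArn=TOPIC_ARN, AttributeName=name, AttributeValue=value)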
Once you're logging to CloudWatch, you can draw metrics from there.
If you need to be notified when the subscribers have received the messages, you could set up a subscription filter in CloudWatch Logs to send the relevant log events to a Lambda function, in which you would implement custom logic to notify you appropriately.
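For example, a subscription filter that forwards the delivery status log events to a Lambda function could be created like this (the log group name and function ARN are placeholders; check the actual delivery status log group name in your account, and note that the Lambda's resource policy must allow CloudWatch Logs to invoke it):

import boto3

logs = boto3.client("logs")

# Placeholder names -- the delivery status log group for your topic and
# the Lambda function that should receive matching log events.
LOG_GROUP = "sns/us-east-1/123456789012/my-topic/Failure"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:notify-on-delivery"

logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="sns-delivery-events",
    filterPattern="",          # an empty pattern matches every log event
    destinationArn=LAMBDA_ARN,
)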

I mean successful processing by the consumer
Usually your consumers would have to indicate this somehow. This is use-case specific, therefore it's difficult to speculate on an exact solution.
But just to give an example, a popular pattern is the request-response messaging pattern. Here, your consumers would use an SQS queue to publish the outcome of the message processing. The producer(s) would poll that queue for these messages and consequently know which messages were processed correctly and which were not.
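As a rough sketch of that pattern (the queue URL and the message shape are assumptions for illustration, not an AWS convention): each consumer reports its outcome to a shared reply queue, and the producer polls that queue to track which messages were processed.

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical reply queue shared by the consumers and the producer.
REPLY_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/processing-results"

def report_outcome(message_id, success):
    """Called by a consumer after it has processed an SNS message."""
    sqs.send_message(
        QueueUrl=REPLY_QUEUE_URL,
        MessageBody=json.dumps({"messageId": message_id, "success": success}),
    )

def collect_outcomes():
    """Called by the producer to learn which messages were processed."""
    response = sqs.receive_message(
        QueueUrl=REPLY_QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10
    )
    for msg in response.get("Messages", []):
        outcome = json.loads(msg["Body"])
        print(outcome["messageId"], "ok" if outcome["success"] else "failed")
        sqs.delete_message(QueueUrl=REPLY_QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])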

Related

AWS Step Functions connect and visualize sns + sqs + lambda

I have a situation in which a message is published to an SNS topic, an SQS queue is subscribed to receive all messages sent to the topic, and then a Lambda polls the SQS queue for messages and performs some processing (in this case saving to DynamoDB). Then there is another Lambda that gets triggered off the DynamoDB change stream.
For a particular event (identified by a unique id) it would be nice to be able to visualize where the event is in this process (i.e. whether it is in the first queue, whether it is in a DLQ, etc.). I know I can use X-Ray or structured logs combined with queries to find this information, but I was wondering if I could use Step Functions to do this instead.
I know I can send messages to SNS with Step Functions, and I know I can have the next step after that be receiving messages from SQS and invoking a Lambda. However, this seems to just call "receive message" once on the queue, and doesn't connect to the specific event (which would be the point of the visualization).
Does anyone know if this is possible with Step Functions, or would I need to build my own UI or rely on X-Ray or CloudWatch Logs Insights queries?
Thanks

Alert on Lambda failure with detailed info

I have a CloudWatch alarm set up on all Lambdas, sending data to an SNS topic,
using the metric
sum(errors) across all functions.
I get the notification as expected, but there is no information in it to identify which of my Lambdas triggered the alarm, or in other words which one failed.
If I set up the alarm individually on each Lambda, then I get the information on which one failed under Dimensions. But I have a lot of them and plan to add more, so this process will become painful.
How can I leverage CloudWatch to alert me on all Lambda failures and also provide info on which Lambda failed and the error message?
Should this be implemented in a different way?
The AWS Cloud Operations & Migrations Blog has a post published on this topic.
Instead of using CloudWatch Alarms as you are doing now, you can use a CloudWatch Logs subscription. Whenever a log entry matches a specific pattern that you specify, it will trigger a new Lambda function that can notify you however you choose. In the blog post, the Lambda uses SNS to send an email notification.
You can control what information gets included in the body of the notification by adjusting what the Lambda function sends to SNS. The log group name, log stream, and the error message itself can be included.
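A minimal sketch of such a function, assuming the CloudWatch Logs subscription filter targets it and an SNS topic already exists for the notification (the topic ARN is a placeholder). CloudWatch Logs delivers the matched events as base64-encoded, gzipped JSON, which includes the log group, log stream and the matched lines:

import base64
import gzip
import json
import boto3

sns = boto3.client("sns")

# Placeholder topic used for the notification e-mail.
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:lambda-error-alerts"

def handler(event, context):
    # Decode the CloudWatch Logs payload: base64 -> gzip -> JSON.
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))

    lines = [
        "Log group:  " + payload["logGroup"],
        "Log stream: " + payload["logStream"],
        "",
    ] + [e["message"] for e in payload["logEvents"]]

    sns.publish(
        TopicArn=ALERT_TOPIC_ARN,
        Subject=("Errors in " + payload["logGroup"])[:100],  # SNS subject is limited to 100 chars
        Message="\n".join(lines),
    )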

Polling an AWS SQS queue for messages with certain attributes

I have set up a standard queue with AWS SQS and I want to poll this queue for messages containing a specific attribute, preferably using the boto3 library in Python. I know that boto3 has a method receive_message() which polls messages from the queue. However, I want to only get those messages which contain a specific attribute. A naive approach is to iterate through the receive_message() output and check if a message contains the attribute, but I was wondering if there is another solution to this problem.
You can't filter messages with SQS alone; however, you can do that with SNS.
You can publish the messages to an SNS topic. The message filtering feature of SNS enables each endpoint subscribed to an SNS topic to receive only the subset of topic messages it is interested in. So you can ensure only the relevant messages with specific attributes are enqueued to the consumer's queue.
Refer to Filter Messages Published to Topics and SNS subscription filtering policies.
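For illustration, a subscription with a filter policy could be created like this (the ARNs and the "priority" attribute are made-up examples); the producer must then publish its messages with a matching message attribute:

import json
import boto3

sns = boto3.client("sns")

# Placeholder ARNs for the topic and the consumer's queue.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:high-priority-orders"

# Only messages published with the message attribute priority=high are
# delivered to this queue; SNS drops everything else for this subscription.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=QUEUE_ARN,
    Attributes={"FilterPolicy": json.dumps({"priority": ["high"]})},
)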
Selective polling is not supported by the SQS ReceiveMessage API. An alternative is to let your EC2 instance send messages containing different attributes into different SQS queues.

Simulating message persistence in SNS using SQS

We are evaluating SNS for our messaging requirements to integrate multiple applications. We have a single producer that publishes messages to multiple topics on SNS. Each topic has 2-5 subscribers. In the event of subscriber failures (e.g. down for maintenance), I have a few questions on the recommended strategy of using SQS queues per consumer.
Is it possible to configure SNS to push to SQS only in the event of a failure to deliver the message to a subscriber? Dumping all the messages into the SQS queue creates a problem for the consumer, which has to analyze all messages in the queue when it restarts.
In the event of a subscriber failure, it can read messages from the SQS queue on restart, but how would it know that it missed messages from SNS when it was overloaded?
Any suggestions on handling subscriber failures are welcome.
Thanks!
No, it is not possible to "configure SNS to push to SQS only in event of failure".
Rather than trying to recover a message after a failure, you can configure the Amazon SNS retry policies.
From Setting Amazon SNS Delivery Retry Policies for HTTP/HTTPS Endpoints:
You can use delivery policies to control not only the total number of retries, but also the time delay between each retry. You can specify up to 100 total retries distributed among four discrete phases. The maximum lifetime of a message in the system is one hour. This one hour limit cannot be extended by a delivery policy.
So, you don't need to worry as long as the destination is back online within an hour.
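For reference, the retry policy is a JSON document set on the HTTP/S subscription; a sketch with boto3 follows (the subscription ARN and the numbers are illustrative only, and the policy can stretch retries within the one-hour window but not extend it):

import json
import boto3

sns = boto3.client("sns")

# Placeholder subscription ARN for the HTTP/S endpoint.
SUBSCRIPTION_ARN = "arn:aws:sns:us-east-1:123456789012:orders:11111111-2222-3333-4444-555555555555"

delivery_policy = {
    "healthyRetryPolicy": {
        "numRetries": 50,            # total retries
        "numNoDelayRetries": 3,      # immediate retries
        "numMinDelayRetries": 3,
        "numMaxDelayRetries": 10,
        "minDelayTarget": 20,        # seconds
        "maxDelayTarget": 600,       # seconds
        "backoffFunction": "exponential",
    }
}

sns.set_subscription_attributes(
    SubscriptionArn=SUBSCRIPTION_ARN,
    AttributeName="DeliveryPolicy",
    AttributeValue=json.dumps(delivery_policy),
)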
If it is likely to be offline for more than an hour, you will need to find a way to store and "replay" the messages, possibly by inspecting CloudWatch Logs.
Or, here's another idea...
Push initially to SQS. Have an AWS Lambda function triggered by SQS. The Lambda function can do the 'push' that would normally be done by SNS. If it fails, then the standard SQS invisibility process will retry it later, eventually going to a Dead Letter Queue.
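A sketch of that idea, assuming the SQS queue is configured as an event source for the Lambda function and has a redrive policy pointing at a dead-letter queue (the endpoint URL is a placeholder): if the push fails, the handler raises, the message becomes visible in the queue again, and after enough failed receives SQS moves it to the DLQ.

import json
import urllib.request

# Placeholder for the endpoint that would otherwise have been an SNS subscriber.
ENDPOINT_URL = "https://example.com/webhook"

def handler(event, context):
    # One invocation may carry several SQS records; raising on any failure
    # returns the batch to the queue so it is retried later.
    for record in event["Records"]:
        req = urllib.request.Request(
            ENDPOINT_URL,
            data=json.dumps({"message": record["body"]}).encode(),
            headers={"Content-Type": "application/json"},
        )
        # Raises on network errors or non-2xx responses, which eventually
        # routes the message to the dead-letter queue.
        urllib.request.urlopen(req, timeout=10)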

Step functions - read from SQS?

There will be thousands of messages in the SQS queue.
Is it possible for Step Functions to wait until OrderId: 123 (JSON) is in the SQS queue and then execute the Lambda function when that specific OrderId is received?
Edit:
I want Step Functions to call the Lambda function at regular intervals until it manages to retrieve a message with a particular attribute. The OrderId attribute will be in the message body. For example:
{
  "OrderId": 1235,
  "Items": [{"Id": 1, "Name": "Item 1"}]
}
No, it is not possible to selectively retrieve a message from Amazon SQS.
Your application can request to receive 1-10 messages from SQS, but cannot request specific messages.
See: Finding certain messages in SQS
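To make the limitation concrete, receive_message only lets you choose how many messages to fetch, not which ones; any matching on OrderId has to happen client-side after the messages arrive (the queue URL is a placeholder):

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"

# ReceiveMessage can return 1-10 messages, but has no parameter that
# selects messages by attribute or body content.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # long polling
)

for msg in response.get("Messages", []):
    if json.loads(msg["Body"]).get("OrderId") == 1235:
        print("Found the order:", msg["Body"])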
You could send the messages to Amazon SNS instead and then Filter Messages with Amazon SNS using attributes, also subscribing the SQS queue to the SNS topic to send a copy of each message to SQS, but this is starting to get too complicated.
You should probably re-architect the solution to have incoming orders trigger the next activity, rather than searching for a given order response.