I have set up a standard queue with AWS SQS and I want to poll this queue for messages containing a specific attribute, preferably using the boto3 library in Python. I know that boto3 has a receive_message() method that polls messages from the queue. However, I only want to get the messages that contain a specific attribute. A naive approach is to iterate through the receive_message() output and check whether each message contains the attribute, but I was wondering if there is another solution to this problem.
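For reference, a minimal sketch of the naive approach described above (the queue URL and attribute name are placeholders):

```python
import boto3

# Placeholders -- substitute your own queue URL and attribute name.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
ATTRIBUTE = "event_type"

sqs = boto3.client("sqs")

# Naive approach: receive a batch, then filter client-side on a message attribute.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MessageAttributeNames=[ATTRIBUTE],  # request the attribute (or ["All"])
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,                 # long polling
)

for message in response.get("Messages", []):
    if ATTRIBUTE in message.get("MessageAttributes", {}):
        print("matched:", message["Body"])
        # Only delete messages you have actually handled.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    # Non-matching messages become visible again after the visibility timeout.
```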
You can't filter messages with SQS alone; however, you can do that with SNS.
You can publish the messages to an SNS topic. The message filtering feature of SNS enables endpoints subscribed to an SNS topic to receive only the subset of topic messages they are interested in. So you can ensure only the relevant messages with specific attributes are enqueued to the consumer's queue.
Refer to Filter Messages Published to Topics and SNS subscription filtering policies.
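For illustration, a minimal boto3 sketch of attaching such a filter policy, assuming the queue is already subscribed to the topic (the subscription ARN and attribute values are placeholders):

```python
import json
import boto3

sns = boto3.client("sns")

# Placeholder ARN of the existing SQS subscription on the topic.
SUBSCRIPTION_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic:subscription-id"

# Only messages published with event_type = "order_created" reach this queue.
sns.set_subscription_attributes(
    SubscriptionArn=SUBSCRIPTION_ARN,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"event_type": ["order_created"]}),
)
```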
Selective polling is not supported by the SQS ReceiveMessage API. An alternative is to have your producer (e.g., your EC2 instance) send messages containing different attributes to different SQS queues.
Related
Let's say I have set up an AWS SNS topic with 3 subscribers. I'd like to know when all of the subscribers have received/processed the message, in order to mark that message as processed by all 3 and to generate some metrics.
Is there a way to do this?
You can log delivery status for SNS topics to CloudWatch, but only for certain endpoint types (AWS has no reliable way of knowing whether some messages were received, such as with SMS or email).
The endpoint types you can log are:
HTTP
Lambda
SQS
Custom Application (must be configured to tell AWS that the message is received)
To set up logging in SNS:
In the SNS console, click "Edit Topic"
Expand "delivery status logging"
Then you can configure which protocols to log and the necessary permissions to do so.
Once you're logging to CloudWatch, you can draw metrics from there.
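If you prefer to script it, a hedged boto3 sketch that enables delivery status logging for SQS endpoints (the topic and IAM role ARNs are placeholders; the role must allow SNS to write to CloudWatch Logs):

```python
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic"             # placeholder
FEEDBACK_ROLE_ARN = "arn:aws:iam::123456789012:role/SNSFeedbackRole"  # placeholder

# Enable delivery status logging for SQS endpoints on this topic.
for name, value in [
    ("SQSSuccessFeedbackRoleArn", FEEDBACK_ROLE_ARN),
    ("SQSFailureFeedbackRoleArn", FEEDBACK_ROLE_ARN),
    ("SQSSuccessFeedbackSampleRate", "100"),  # log 100% of successful deliveries
]:
    sns.set_topic_attributes(TopicArn=TOPIC_ARN, AttributeName=name, AttributeValue=value)
```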
If you need to be notified when the subscribers have received the messages, you could set up a subscription filter within CloudWatch to send the relevant log events to a Lambda function, in which you would implement custom logic to notify you appropriately.
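A sketch of that wiring with boto3, assuming the delivery-status log group and a notification Lambda already exist (the log group name, Lambda ARN, and filter pattern are assumptions; the Lambda must also permit CloudWatch Logs to invoke it):

```python
import boto3

logs = boto3.client("logs")

# Placeholders: the log group SNS writes delivery status to, and your Lambda.
LOG_GROUP = "sns/us-east-1/123456789012/my-topic"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:notify-on-delivery"

logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="sns-delivery-success",
    filterPattern='{ $.status = "SUCCESS" }',  # forward only successful deliveries
    destinationArn=LAMBDA_ARN,
)
```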
I mean successful processing by the consumer
Usually your consumers would have to indicate this somehow. This is use-case specific, therefore it's difficult to speculate on exact solutions.
But just to give an example, a popular pattern is the request-response messaging pattern, as sketched below. Here, your consumers would use an SQS queue to publish the outcome of the message processing. The producer(s) would poll that queue to get these messages, subsequently knowing which messages were correctly processed and which were not.
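For illustration, a minimal sketch of a consumer reporting processing outcomes to a separate response queue that the producer polls (the queue URLs are placeholders):

```python
import json
import boto3

sqs = boto3.client("sqs")

WORK_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"        # placeholder
RESPONSE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/results-queue"  # placeholder

resp = sqs.receive_message(QueueUrl=WORK_QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for message in resp.get("Messages", []):
    try:
        # ... process the message body here ...
        outcome = {"message_id": message["MessageId"], "status": "processed"}
    except Exception as exc:
        outcome = {"message_id": message["MessageId"], "status": "failed", "error": str(exc)}

    # Report the outcome so the producer can poll the response queue.
    sqs.send_message(QueueUrl=RESPONSE_QUEUE_URL, MessageBody=json.dumps(outcome))
    sqs.delete_message(QueueUrl=WORK_QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```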
I have this situation where I am using Amazon SNS + SQS in order to handle domain events.
Basically, on a domain event I publish a message to SNS, and two SQS queues are subscribed to the SNS topic. I noticed that SQS supports FIFO but SNS doesn't, so I am trying to figure out how to simultaneously deliver message A to multiple SQS FIFO queues.
What I had so far
Publish Message A to SNS
Distribute Message A to SQS 1 and SQS 2
All I can think of now is
Publish message A to SQS A
Use code to pull message A from SQS and publish it to SQS 1 and SQS 2
Not really the atomic process I was looking for...
Is there an alternative to this approach?
Today, we launched Amazon SNS FIFO topics, which can fan out messages to multiple Amazon SQS FIFO queues!
https://aws.amazon.com/about-aws/whats-new/2020/10/amazon-sns-introduces-fifo-topics-with-strict-ordering-and-deduplication-of-messages/
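A hedged boto3 sketch of the FIFO fan-out (names are placeholders; FIFO topic and queue names must end in .fifo, and the queue access policy that allows the topic to send to the queue is omitted here):

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Create a FIFO topic and two FIFO queues (names are placeholders).
topic = sns.create_topic(
    Name="domain-events.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)

for name in ("consumer-1.fifo", "consumer-2.fifo"):
    queue = sqs.create_queue(QueueName=name, Attributes={"FifoQueue": "true"})
    arn = sqs.get_queue_attributes(
        QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    # Note: each queue's access policy must also allow this topic to SendMessage.
    sns.subscribe(TopicArn=topic["TopicArn"], Protocol="sqs", Endpoint=arn)

# Publishing once fans out to both FIFO queues, preserving order per message group.
sns.publish(
    TopicArn=topic["TopicArn"],
    Message="Message A",
    MessageGroupId="order-123",
)
```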
You could consider using Amazon Kinesis Data Streams. One of its features is ordering.
From the FAQ: https://aws.amazon.com/kinesis/data-streams/faqs/
When should I use Amazon Kinesis Data Streams, and when should I use Amazon SQS?
Ordering of records. For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements.
You can then process events from Kinesis and forward them to your SQS queues.
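For illustration, a minimal sketch of publishing to Kinesis with a partition key, which is what preserves ordering per key (the stream name and key are placeholders):

```python
import json
import boto3

kinesis = boto3.client("kinesis")

STREAM_NAME = "domain-events"  # placeholder

# Records sharing a partition key land on the same shard and keep their order.
kinesis.put_record(
    StreamName=STREAM_NAME,
    Data=json.dumps({"event": "Message A"}).encode("utf-8"),
    PartitionKey="order-123",
)
```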
If your goal is to have a message be pushed to two Amazon SQS FIFO queues, I'd recommend:
Have Amazon SNS trigger an AWS Lambda function
The Lambda function can send the same message to both Amazon SQS queues
It is effectively doing the fan-out via Lambda rather than SNS.
The Lambda function might also be able to extract a Message Group ID that it can provide with the SQS message, which will enable parallel processing of messages while maintaining FIFO within the message group. For example, all messages coming from a particular source will be FIFO, but can be processed in parallel with messages from other sources. It's a very powerful capability that would not be available just by having Amazon SNS forward the message.
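A hedged sketch of such a Lambda handler, assuming an SNS trigger and two existing FIFO queues (the queue URLs and the way the message group is derived are assumptions):

```python
import boto3

sqs = boto3.client("sqs")

# Placeholders: the two FIFO queues the function fans out to.
QUEUE_URLS = [
    "https://sqs.us-east-1.amazonaws.com/123456789012/queue-1.fifo",
    "https://sqs.us-east-1.amazonaws.com/123456789012/queue-2.fifo",
]


def handler(event, context):
    # An SNS-triggered Lambda receives one or more records under "Records".
    for record in event["Records"]:
        message = record["Sns"]["Message"]
        # Assumption: the message group is carried in a "source" message attribute.
        group_id = (
            record["Sns"].get("MessageAttributes", {}).get("source", {}).get("Value", "default")
        )

        for queue_url in QUEUE_URLS:
            sqs.send_message(
                QueueUrl=queue_url,
                MessageBody=message,
                MessageGroupId=group_id,
                MessageDeduplicationId=record["Sns"]["MessageId"],
            )
```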
I am not sure this is possible, so I am just writing my question here. I am working with an SNS/SQS architecture where messages go through an SNS topic and are then delivered to the SQS queues that are subscribed to that topic. I want to set timers on some specific messages. Is it possible to do this when they are routed by the SNS topic to the SQS queue?
I don't think this is possible, especially since you only want it "on some specific message".
There is a default Delay setting on an SQS queue, but that cannot be applied to only some messages.
There is no capability to specify Delay settings on a message going from Amazon SNS to Amazon SQS. Your only choice might be to send it to a different queue using Amazon SNS Message Filtering.
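If that route works for you, a hedged sketch: give a dedicated "delayed" queue a default delay and use a filter policy so only the flagged messages are routed to it (the queue URL, subscription ARN, and the "delay" attribute are assumptions):

```python
import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

DELAYED_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/delayed-queue"  # placeholder
DELAYED_SUBSCRIPTION_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic:sub-id"       # placeholder

# Every message delivered to this queue waits 5 minutes before becoming visible.
sqs.set_queue_attributes(QueueUrl=DELAYED_QUEUE_URL, Attributes={"DelaySeconds": "300"})

# Route only messages published with delay=true to the delayed queue's subscription.
sns.set_subscription_attributes(
    SubscriptionArn=DELAYED_SUBSCRIPTION_ARN,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps({"delay": ["true"]}),
)
```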
There is a limitation in SQS when it comes to multiple consumers processing messages in parallel (i.e. m1 to m10 picked by process 1, m11 to m20 picked by process 2, and so on) without duplication. Since this is not supported by SQS, I am thinking of using SNS + SQS (a list of subscribed queues), where each process listens to its own queue and processes records.
Is there an option between SNS and SQS, such as round-robin, so that SNS distributes messages to the SQS queues in a round-robin fashion and each queue receives unique messages without duplication across queues?
Thanks in advance!
Regards,
Kumar
If you don't want your SNS publish to go to all subscribers (queues), look into SNS Message Filtering. Message filtering allows you to define logic controlling which subscribers receive a given message.
By default, a subscriber of an Amazon SNS topic receives every message published to the topic. To receive only a subset of the messages, a subscriber assigns a filter policy to the topic subscription.
A filter policy is a simple JSON object. The policy contains attributes that define which messages the subscriber receives. When you publish a message to a topic, Amazon SNS compares the message attributes to the attributes in the filter policy for each of the topic's subscriptions. If there is a match between the attributes, Amazon SNS sends the message to the subscriber. Otherwise, Amazon SNS skips the subscriber without sending the message to it. If a subscription lacks a filter policy, the subscription receives every message published to its topic.
Unless you are using SQS FIFO queues, your assumption about SQS not supporting multiple parallel consumers is not correct.
Standard SQS queues do support multiple parallel consumers.
Regarding SQS FIFO queues: they don't serve messages from the same message group to more than one consumer at a time. However, if your FIFO queue has multiple message groups, you can take advantage of parallel consumers, allowing Amazon SQS to serve messages from different message groups to different consumers.
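For example, publishing with different MessageGroupId values lets consumers work in parallel while order is preserved within each group (the queue URL is a placeholder):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # placeholder

# Messages for different customers go to different message groups; SQS keeps
# FIFO order within each group but can hand different groups to different consumers.
for customer_id, payload in [("cust-1", "m1"), ("cust-2", "m11"), ("cust-1", "m2")]:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=payload,
        MessageGroupId=customer_id,
        MessageDeduplicationId=f"{customer_id}-{payload}",
    )
```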
In RabbitMQ, one can create an exchange, then bind it to multiple queues, each with a routing key. This enables messaging architectures like this:
message_x
/ | \
foo-msg_q bar-msg_q msg-logger_q
Clients publish messages to the message_x exchange, which routes only messages with routing key "foo" to the foo-msg_q queue, only messages with the routing key "bar" to the bar-msg_q queue, and all messages to msg-logger_q queue.
I'm having trouble figuring out how to do this in AWS. My first thought was to set up permissions on the individual queues to accept messages based on subject, but the only available fields for permission conditions are:
aws:CurrentTime
aws:EpochTime
aws:MultiFactorAuthAge
aws:principaltype
aws:SecureTransport
aws:SourceArn
aws:SourceIp
aws:UserAgent
aws:userid
aws:username
None of these seem like they can be influenced by any message I publish to the message_x topic.
Is it possible to do something like this when using Amazon Simple Notification Service to fan out to multiple Simple Queue Service queues, with each queue receiving a subset of messages published to the topic?
This is possible by using message attribute filtering in SNS. After you subscribe different SQS queues to an SNS topic, you can specify attributes to filter on by using the SNS API SetSubscriptionAttributes. This allows messages with different attributes to be routed to the correct SQS queue.
This is also not limited to SQS queues; it applies to any subscription type on an SNS topic. For example, a single SNS topic can publish one set of messages to Lambda and another set to SQS.
SDK Reference:
http://docs.aws.amazon.com/sns/latest/api/API_SetSubscriptionAttributes.html
More details are given here with examples:
https://aws.amazon.com/blogs/compute/simplify-pubsub-messaging-with-amazon-sns-message-filtering/
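On the publish side, message attributes are what the filter policies match against. A minimal sketch (the topic ARN and the routing_key attribute are placeholders standing in for the RabbitMQ routing key):

```python
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:message_x"  # placeholder

# The "routing_key" attribute plays the role of RabbitMQ's routing key; each
# subscription's filter policy decides whether it receives this message.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message="hello",
    MessageAttributes={
        "routing_key": {"DataType": "String", "StringValue": "foo"}
    },
)
```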
EDIT
I cannot delete an accepted answer, so see the answer below for the now-correct answer, since this feature has been released.
Original (now incorrect) Answer (for posterity):
No it's not possible. Would be a great feature for them to add though.
The only alternative I know is to create a topic for each routing rule and then publish to the correct topic. It's not pretty, but it accomplishes the task. If you have a lot of rules, you might need more than the 3000 topics they allow. You can request an increase in topic limit from AWS on their website by following the instructions here: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ses_quota.
AWS now supports filtering on SNS subscribers. Each subscriber can set its own policy to filter for the messages it needs and discard the others. If you don't set any policy on a subscriber, it will get all messages. Refer to the link below:
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/