While searching through the ordering features of Pub/Sub, I stumbled upon the fact that ordering is preserved only within the same region.
Suppose I have ordered Pub/Sub subscriptions whose subscribers run outside of GCP. Each subscriber runs in a different datacenter, with a different provider, in another region.
How can I specify that those subscriptions will consume from a specific region?
Is there an option on an ordered subscription to specify a region?
If not, how does Pub/Sub decide which region my application is located in, given that it is provisioned in another datacenter, with another provider? Is the assigned region going to change?
The ordering is preserved on the publish side only within a region. In other words, if you are publishing messages to multiple regions, only messages within the same region will be delivered in a consistent order. If your messages were all published to the same region, but your subscribers are spread across regions, then the subscribers will receive all messages in order. If you want to guarantee that your publishes all go to the same region to ensure they are in order, then you can use the regional service endpoints.
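For illustration, here is a minimal sketch with the Python client library of pinning all publishes to one regional endpoint with message ordering enabled; the project, topic, region, and ordering key are placeholders, not values from the question:

```python
from google.cloud import pubsub_v1

# Route every publish through a single regional service endpoint so that
# ordering is preserved end to end (placeholder region: europe-west1).
publisher = pubsub_v1.PublisherClient(
    client_options={"api_endpoint": "europe-west1-pubsub.googleapis.com:443"},
    publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True),
)

topic_path = publisher.topic_path("my-project", "my-ordered-topic")

# Messages that share an ordering key are delivered to subscribers in publish order.
for i in range(3):
    publisher.publish(topic_path, f"event-{i}".encode("utf-8"), ordering_key="user-123")
```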
As SQS is a distributed queue, does it replicate messages within the same region or across different regions? The architecture diagram in the AWS docs shows the message being replicated, but it does not say whether that happens within one region or across regions.
Use case:
I'm setting up a queue in region X, but it might be accessed from a region at the other end of the world. If there are two workers running, one in region X and one in region Y, do both get data from the same region X queue, or could each read from a replica in the region nearest to them?
For example, the worker in region X gets a message from the region X queue and, before this information reaches region Y to update the queue, the worker in region Y reads the same message from a replicated region Y queue.
P.S.: I know SQS has at-least-once semantics, but I want to know the semantics in the above use case.
SQS is a regional service that is highly available within a single region. There is no cross-region replication capability. You can definitely access the queue from different regions; just initialize the SQS client with the correct destination region.
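For example, with boto3 (Python) a worker in any region would point its client at the queue's home region; the region, account id, and queue URL below are placeholders:

```python
import boto3

# Always target the region the queue was created in (region X), no matter
# where this worker itself is running.
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)

for message in response.get("Messages", []):
    # ... process the message, then delete it so other workers do not see it again
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```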
As a standard practice for AWS services, the data resides within the region that you create the service in.
There are exceptions, but these require you as the user to take an action to allow it, such as copying an AMI or enabling S3 replication.
If the queue is consumed from multiple regions, consumers always access the regional endpoint of the SQS queue rather than an endpoint in their own region.
As SQS is a queueing service, if you have workers distributed across regions, the likelihood is that each item is removed from the queue and processed in a single region (although strictly the guarantee is at-least-once delivery).
If you're trying to have the message consumed in multiple regions, it would be better to consider a fanout-based approach whereby each region's workers consume from their own SQS queue as opposed to sharing one.
For more information take a look at the https://aws.amazon.com/getting-started/hands-on/send-fanout-event-notifications/ documentation.
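A rough sketch of that fan-out wiring with boto3, assuming a queue per region already exists; all ARNs are placeholders, and each queue's access policy must separately allow the topic to send to it:

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")
topic_arn = "arn:aws:sns:us-east-1:123456789012:orders"

# One SQS queue per region; each region's workers consume only their own queue.
regional_queue_arns = [
    "arn:aws:sqs:us-east-1:123456789012:orders-us",
    "arn:aws:sqs:eu-west-1:123456789012:orders-eu",
]

# Subscribe every regional queue to the shared topic.
for queue_arn in regional_queue_arns:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# A single publish delivers a copy of the message to each subscribed queue.
sns.publish(TopicArn=topic_arn, Message='{"order_id": 42}')
```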
Let's say code running in region A publishes a message. Can Cloud Functions in regions B and C subscribe to such events?
Yes, by default. Pub/Sub is a global service. If the publisher and the subscriber are in the same region, there is no reason for the message to change region.
In the cross-region case, the message is forwarded to the subscriber's region and then consumed. You don't see this mechanism; it's automatic and managed by Pub/Sub.
However, if you have legal constraints, you can limit the regions available to Pub/Sub.
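For example, one way to express such a restriction is a per-topic message storage policy (an organization-level resource location policy is another option); this is only a sketch, and the project, topic, and regions are placeholders:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "eu-only-topic")

# Restrict where Pub/Sub may persist messages published to this topic.
publisher.create_topic(
    request={
        "name": topic_path,
        "message_storage_policy": {
            "allowed_persistence_regions": ["europe-west1", "europe-west4"],
        },
    }
)
```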
I am trying to reach a point in a system architecture on top of AWS infrastructure where a message is published from a data processor and sent to multiple subscribers (clients). This message contains information that some, but not all, clients want to receive.
Very similar question > Routing messages from Amazon SNS to SQS with filtering
To do this message filtering I have turned to the FilterPolicy functionality provided by SNS, using one topic. The system has now reached a point where clients have more granular and specific filtering rules, so I am running into the filtering limits of AWS SNS.
See more about the SNS filter policy here https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html (section “Filter policy constraints”)
One example of my limitation is the number of filter values in a policy; the above link states a maximum of 150 values. Right now my subscribers are interested in receiving messages with a specific attribute value, but this one attribute can have several hundred or even thousands of different values.
I also cannot group these values, since they represent non-sequential identities.
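For reference, the kind of filter policy I am describing looks roughly like this; the attribute name, values, and ARNs are illustrative, and the policy is applied with boto3:

```python
import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")
subscription_arn = "arn:aws:sns:us-east-1:123456789012:data-topic:subscription-id"

# Each client subscription lists the exact attribute values it cares about;
# with hundreds or thousands of possible values this list cannot fit within
# the 150-value limit per filter policy.
filter_policy = {
    "client_id": ["id-0001", "id-0002", "id-0003"],
}

sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="FilterPolicy",
    AttributeValue=json.dumps(filter_policy),
)
```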
I am seeking guidance on an architectural solution that would allow me to keep using AWS SNS. I am limited to AWS infrastructure services, so no RabbitMQ for me.
I would like to create SQS queues in 2 different AWS regions. Is there a way to set up synchronization between both queues? When data is read off a queue in either region, the message must not be available for consumption in the other. If one of the regions goes down, the consumer must start reading from the next message in the available region. Does AWS support this out of the box, or are there patterns available to support this use case?
No, this is not a feature of Amazon SQS.
It would be quite difficult to implement because you cannot request a specific message off a queue. So, if a message is retrieved in one region, there is no way to delete that message in a different region. You would need to operate a database to keep track of the messages, which sort of defeats the whole purpose of the queue.
Amazon SQS is a multi-AZ service that can survive failure of an Availability Zone, but resides in a single region.
You can use Amazon SNS to fan out messages to multiple SQS queues, even in different regions. Details here: Sending Amazon SNS messages to an Amazon SQS queue or AWS Lambda function in a different Region.
However, this results in duplicate messages across those regions, which does not satisfy your requirement that when data is read off a queue in either region, the message must not be available for consumption in the other.
The Google Cloud Pub/Sub documentation about load balancing in pull delivery says:
Multiple subscribers can make pull calls to the same "shared" subscription. Each subscriber will receive a subset of the messages.
My concern is about the last sentence. Can I decide how the topic is partitioned? In other words, can I decide how the subsets are grouped?
For instance, in the AWS Kinesis service I can choose the partition key of the stream, in my case the user id. As a consequence, a consumer receives all the messages of a subset of users; or, from another point of view, all the messages of one user are consumed by the same consumer. The message stream of one user is not distributed between different consumers.
I want to do this kind of partitioning with the Google Pub/Sub service. Is that possible?
There is currently no way for the subscriber to specify a partition or set of keys for which they should receive messages in Google Cloud Pub/Sub, no. The only way to set up this partition would be to use separate topics.
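For illustration, here is a publish-side sketch of that separate-topics workaround: hash the user id onto a fixed set of topics so one user's messages always land on the same topic, and attach each consumer's subscription to exactly one of them. The project, topic names, and partition count are assumptions, not part of the Pub/Sub API:

```python
import zlib
from google.cloud import pubsub_v1

NUM_PARTITIONS = 4
publisher = pubsub_v1.PublisherClient()

def topic_for_user(user_id: str) -> str:
    # Stable hash so the same user always maps to the same topic.
    partition = zlib.crc32(user_id.encode("utf-8")) % NUM_PARTITIONS
    return publisher.topic_path("my-project", f"events-partition-{partition}")

def publish_event(user_id: str, payload: bytes) -> None:
    # The user id is also attached as a message attribute for the consumer's benefit.
    publisher.publish(topic_for_user(user_id), payload, user_id=user_id)

# Each consumer then pulls from a subscription on exactly one
# events-partition-N topic and so sees every message for its subset of users.
```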