Is GCP Pub/Sub region specific? - google-cloud-platform

Let's say code running in region A publishes a message. Can Cloud Functions in regions B and C subscribe to such events?

By default, yes. Pub/Sub is a global service. If the publisher and the subscriber are in the same region, there is no reason for the message to change region.
In the cross-region case, the message is forwarded to the subscriber's region and then consumed. You don't see this mechanism; it's automatic and managed by Pub/Sub.
However, if you have legal constraints, you can limit the regions available to Pub/Sub with a message storage policy.
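As an illustration, here is a minimal sketch (assuming the google-cloud-pubsub Python client and hypothetical project/topic names) of creating a topic whose messages may only be persisted in specific regions:

    from google.cloud import pubsub_v1

    # Hypothetical project and topic names, for illustration only.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "my-topic")

    # Restrict message persistence to specific regions, e.g. for compliance.
    topic = publisher.create_topic(
        request={
            "name": topic_path,
            "message_storage_policy": {
                "allowed_persistence_regions": ["europe-west1", "europe-west4"],
            },
        }
    )
    print(f"Created topic with storage policy: {topic.name}")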

Related

AWS CloudWatch logs in multiple regions

I created a Lambda function in us-east-1 and an SNS topic to send notifications to a Slack channel.
Now I also want to use logs from a service in us-west-2 to trigger the notifications, but I can't because they are in different regions.
What's the best way to handle this? I could just copy the Lambda function/SNS topic into us-west-2, but that seems redundant...
Thanks
I decided to go with separate Lambda functions in each region, since Network Manager is only available in us-west-2 and the messages being processed will be specific to that region.
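If anyone takes the same route, a rough boto3 sketch of the per-region wiring (hypothetical topic name; each region gets its own topic, and the Lambda deployed in that region subscribes to its local topic):

    import boto3

    # One SNS topic per region; the Lambda deployed in that region
    # subscribes to its local topic. Topic name is a placeholder.
    for region in ["us-east-1", "us-west-2"]:
        sns = boto3.client("sns", region_name=region)
        topic = sns.create_topic(Name="slack-notifications")
        print(region, topic["TopicArn"])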

Why does GCP Pub/Sub publish a message twice?

I have a Google Cloud Function subscribed to a topic. Cloud Scheduler publishes a message to the topic every 5 minutes. The problem is that the Cloud Function sometimes gets invoked a second time, 90 seconds after the first invocation.
The acknowledgement deadline on the subscription is 600 seconds.
So I can't figure out why GCF is invoked twice within 90 seconds by GCP Pub/Sub.
Is the 90-second delay before the second invocation related to something?
Your duplicate could either be on the publish side or on the subscribe side. If the duplicate messages have different message IDs, then your duplicates are generated on the publish side. This could be caused by retries on the publish side in response to retryable errors. If the messages have the same message ID, then the duplication is on the subscribe side within Pub/Sub.
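To see which case you are in, you can log the message ID from the function itself. A minimal sketch for a first-generation Python Cloud Function with a background (Pub/Sub) trigger, where context.event_id carries the Pub/Sub message ID:

    import base64

    def handler(event, context):
        # For a Pub/Sub-triggered background function, context.event_id
        # is the Pub/Sub message ID: the same ID on both invocations means
        # the duplication is on the subscribe side; different IDs mean the
        # message was published twice.
        payload = base64.b64decode(event["data"]).decode("utf-8")
        print(f"message_id={context.event_id} payload={payload}")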
Cloud Pub/Sub offers at-least-once delivery semantics. That means it is possible for duplicates to occur, even if you acknowledge the message and even if the acknowledgement deadline has not passed. If you want stronger guarantees around delivery, you can use Pub/Sub's exactly once feature, which is currently in public preview. However, this will require you to set up your Cloud Function with an HTTP trigger and to create a push subscription in Pub/Sub that points to the address of the function because there is no way to set the exactly once setting on a subscription created by Cloud Functions.
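As a sketch of the self-managed subscription part (hypothetical names, google-cloud-pubsub Python client; shown here without a push config — add a push_config pointing at your HTTP-triggered function if you go the push route described above):

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    topic_path = subscriber.topic_path("my-project", "my-topic")
    subscription_path = subscriber.subscription_path("my-project", "my-sub")

    # Creating the subscription yourself lets you set options that the
    # subscription auto-created by Cloud Functions does not expose.
    with subscriber:
        subscriber.create_subscription(
            request={
                "name": subscription_path,
                "topic": topic_path,
                "enable_exactly_once_delivery": True,
                "ack_deadline_seconds": 600,
            }
        )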

Pub/Sub Ordering and Multi-Region

While searching for the ordering features of Pub/Sub, I stumbled upon the fact that ordering is only preserved within the same region.
Suppose I have ordered Pub/Sub subscribers outside of GCP.
Each subscriber runs in a different datacenter, on a different provider, in another region.
How can I specify that those subscriptions will consume from a specific region?
Is there an option on an ordered subscription to specify a region?
If not, how does Pub/Sub decide which region my application is located in, given that it is provisioned in another datacenter, on another provider? Is the assigned region going to change?
The ordering is preserved on the publish side only within a region. In other words, if you are publishing messages to multiple regions, only messages within the same region will be delivered in a consistent order. If your messages were all published to the same region, but your subscribers are spread across regions, then the subscribers will receive all messages in order. If you want to guarantee that your publishes all go to the same region to ensure they are in order, then you can use the regional service endpoints.
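A minimal sketch of pinning publishes to one regional endpoint with ordering enabled (hypothetical project, topic, and ordering key, using the google-cloud-pubsub Python client):

    from google.cloud import pubsub_v1

    # Publish through a regional service endpoint so all messages for an
    # ordering key land in the same region.
    publisher = pubsub_v1.PublisherClient(
        publisher_options=pubsub_v1.types.PublisherOptions(
            enable_message_ordering=True,
        ),
        client_options={"api_endpoint": "us-east1-pubsub.googleapis.com:443"},
    )
    topic_path = publisher.topic_path("my-project", "my-topic")

    future = publisher.publish(topic_path, b"payload", ordering_key="customer-123")
    print(future.result())  # message ID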

Limit/block MQTT publishes from a thing in AWS IoT

In AWS IoT Core, we created a few things and attached a policy allowing each thing to publish to certain topics.
The question is whether it is possible to limit a thing's publishes, for example to 1000 publishes per day per thing. This is not the AWS per-second publish limit but our own customized limit on things.
Is there any way to do this in an AWS IoT security policy on the thing's certificate, or on the topic? This should be a first level of limiting that rejects over-publishing before it reaches the rules engine.
AWS offers a service called IoT Device Defender that allows you to collect metrics for individual clients; see https://aws.amazon.com/iot-device-defender/
You can define a threshold for the number of published messages in a given time interval and configure automatic alerts if the threshold is exceeded. These alerts can be published to an SNS topic, so you can trigger further automatic actions like disabling the client's certificate to block further messages.
The relevant metrics can be found in the "Detect" section of AWS IoT Device Defender, under "Create security profile" > "Cloud-side metrics" (messages sent, messages received, message size, ...).
Note that collecting these metrics is not free.
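A rough boto3 sketch of such a security profile (profile name, behavior name, and ARNs are placeholders), alerting when a client sends more than 1000 messages in a day:

    import boto3

    iot = boto3.client("iot", region_name="us-east-1")

    # Alert via SNS when a single client exceeds 1000 published
    # messages within 24 hours. ARNs are placeholders.
    iot.create_security_profile(
        securityProfileName="publish-limit",
        behaviors=[
            {
                "name": "TooManyMessagesSent",
                "metric": "aws:num-messages-sent",
                "criteria": {
                    "comparisonOperator": "greater-than",
                    "value": {"count": 1000},
                    "durationSeconds": 86400,
                },
            }
        ],
        alertTargets={
            "SNS": {
                "alertTargetArn": "arn:aws:sns:us-east-1:123456789012:defender-alerts",
                "roleArn": "arn:aws:iam::123456789012:role/defender-sns-role",
            }
        },
    )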
There is no built-in way to limit publishes per client.
Deactivating the certificate is the only way to enforce a per-client limit, e.g. setting a client's certificate to inactive once it has published 1000 times in a day.
You have to track the publish count per client yourself, and then switch the certificate's state between active and inactive.
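For example, a boto3 sketch of the deactivation step (the counting logic is up to you; the certificate ID is hypothetical):

    import boto3

    iot = boto3.client("iot", region_name="us-east-1")

    def block_client(certificate_id: str) -> None:
        # An INACTIVE certificate can no longer connect or publish.
        # Set it back to ACTIVE to unblock the client.
        iot.update_certificate(certificateId=certificate_id, newStatus="INACTIVE")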

Fault Tolerant Clustered Queues - SQS

I would like to create SQS queues in 2 different AWS regions. Is there a way to set up synchronization between both queues? When a message is read off a queue in either region, it must no longer be available for consumption. If one region goes down, the consumer must start reading from the next message in the available region. Does AWS support this out of the box, or are there patterns available to support this use case?
No, this is not a feature of Amazon SQS.
It would be quite difficult to implement because you cannot request a specific message off a queue. So, if a message is retrieved in one region, there is no way to delete that message in a different region. You would need to operate a database to keep track of the messages, which sort of defeats the whole purpose of the queue.
Amazon SQS is a multi-AZ service that can survive failure of an Availability Zone, but resides in a single region.
You can use Amazon SNS to fan out messages to multiple SQS queues, even in different regions. Details here: Sending Amazon SNS messages to an Amazon SQS queue or AWS Lambda function in a different Region.
However, this results in duplicate messages across those regions, so it does not satisfy your requirement that "when data is read off a queue in either region, the message must not be available for consumption".
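For completeness, a boto3 sketch of the cross-region fan-out wiring (both ARNs are placeholders), keeping the duplicate-message caveat above in mind:

    import boto3

    # SNS topic in us-east-1 fanning out to an SQS queue in us-west-2.
    # Both ARNs are placeholders; the queue's access policy must also
    # allow sns.amazonaws.com to send messages to it.
    sns = boto3.client("sns", region_name="us-east-1")
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:123456789012:orders",
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-west-2:123456789012:orders-replica",
    )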