I am designing an application that runs in multiple regions, say R1 and R2.
Files are submitted to a multi-region Cloud Storage bucket. A PUT event in the bucket publishes a notification that either triggers a Cloud Function directly or goes to a Pub/Sub topic.
I want 80% of processing to be done by R1, and 20% by R2.
Approach 1:
Have two Cloud Functions: CF-R1 and CF-R2.
How do I ensure that 80% of the storage bucket notifications trigger CF-R1 and 20% trigger CF-R2?
Approach 2:
Have a Pub/Sub topic that captures the notifications from the storage bucket.
Is it possible to configure CF-R1 and CF-R2 on the topic so that I can split the traffic?
Or is there any other approach to handle this scenario?
Approach 1: Use a load balancer with URL maps
You could use a Cloud Function or Cloud Run and put a load balancer with a URL map in front of it (announced in June in this blog post - see documentation).
If you use the load balancer, you can send the notification to the balancer directly or via Pub/Sub with a push subscription.
Note that the load balancer is a separate product, so take a close look at its usage and pricing.
Approach 2: Several Pub/Sub subscriptions with filters
I think the second option could be viable. It's a bit crazy to do for your case, but it will work.
Google now has in beta the option to apply a filter to a Pub/Sub subscription when you create it.
Then you can have a Cloud Function (or a Cloud Run service) reacting to the Pub/Sub notifications it receives on its own subscription.
With this beta feature, you can filter on message attribute values (equals, not equals, and hasPrefix).
The trick here is to have enough information in the message to distribute it between the functions the way you want, because you cannot change the filter after you create the subscription.
If you can pass that information from your app, or as part of the filename, this approach is easy.
If not, I guess the crc32 might carry enough information for the filter you need.
But filters have a 128-character limit, which you hit with this:
hasPrefix(attributes.crc32,"A") OR hasPrefix(attributes.crc32,"B") OR hasPrefix(attributes.crc32,"C") OR hasPrefix(attributes.crc32,"D") OR hasPrefix(attributes.crc32,"E")
The filter above covers almost 10% of the possible CRC32 values. Not bad for some simple cases, but not great for you, since you would have to configure a lot of subscriptions.
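For illustration, here is a minimal sketch of how two filtered push subscriptions could be created with the Python client library, assuming the shard is encoded as a prefix of the filename (so it shows up in the standard objectId attribute of the storage notification); the project, topic, and endpoint names are placeholders:

from google.cloud import pubsub_v1

project_id = "my-project"  # placeholder
topic = f"projects/{project_id}/topics/file-notifications"  # placeholder topic fed by the bucket
subscriber = pubsub_v1.SubscriberClient()

# Subscription for CF-R1: files whose name starts with "r1/".
subscriber.create_subscription(
    request={
        "name": f"projects/{project_id}/subscriptions/cf-r1-sub",
        "topic": topic,
        "filter": 'hasPrefix(attributes.objectId, "r1/")',
        "push_config": {"push_endpoint": "https://cf-r1-trigger-url.example.com"},  # placeholder
    }
)

# Subscription for CF-R2: everything else.
subscriber.create_subscription(
    request={
        "name": f"projects/{project_id}/subscriptions/cf-r2-sub",
        "topic": topic,
        "filter": 'NOT hasPrefix(attributes.objectId, "r1/")',
        "push_config": {"push_endpoint": "https://cf-r2-trigger-url.example.com"},  # placeholder
    }
)

To approximate an 80/20 split, the uploader would have to choose the "r1/" prefix for roughly 80% of the files (for example, based on a hash of the filename), since the filter itself cannot express percentages.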
I have a Google Cloud Storage trigger set up on a Cloud Function with max instances of 5, firing on the google.storage.object.finalize event of a Cloud Storage bucket. The docs state that these events are "based on" Cloud Pub/Sub.
Does anyone know:
Is there any way to see configuration of the topic or subscription in the console, or through the CLI?
Is there any way to get the queue depth (or equivalent)?
Is there any way to clear events?
No, no, and no. When you plug a Cloud Function into a Cloud Storage event, everything is handled behind the scenes by Google; you see nothing and you can't interact with anything.
However, you can change the notification mechanism. Instead of plugging your Cloud Function directly into the Cloud Storage event, plug a Pub/Sub topic into your Cloud Storage event.
From there, you have access to YOUR Pub/Sub: monitor the queue, purge it, create the subscriptions that you want, and so on.
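As a rough sketch (assuming the google-cloud-storage client library; bucket and topic names are placeholders), the notification configuration could be created like this - the same thing can also be done with gsutil notification create:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Route OBJECT_FINALIZE events to your own Pub/Sub topic instead of
# the hidden topic used by the direct Cloud Functions trigger.
notification = bucket.notification(
    topic_name="my-gcs-events",          # placeholder topic in the same project
    event_types=["OBJECT_FINALIZE"],
    payload_format="JSON_API_V1",
)
notification.create()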
The recommended way to work with storage notifications is to use Pub/Sub.
Legacy storage notifications still work, but with Pub/Sub you can "peek" into the message queue and clear it if you need to.
Also, you can process Pub/Sub events with Cloud Run, which is easier to develop and test (it's just a web service), easier to deploy (just a container), and can process several requests in parallel without paying more (great when you get a lot of requests together).
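For example, a minimal Cloud Run service that handles the push delivery of such a storage notification could look like this sketch (Flask; the message fields follow the standard Pub/Sub push envelope, anything beyond that is an assumption):

import base64
import json

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_push():
    envelope = request.get_json()
    if not envelope or "message" not in envelope:
        return "Bad Request: no Pub/Sub message", 400

    message = envelope["message"]
    # Storage notification details arrive as message attributes;
    # the full object metadata is a base64-encoded JSON payload.
    attributes = message.get("attributes", {})
    payload = json.loads(base64.b64decode(message.get("data", "")).decode() or "{}")

    print(f"Event {attributes.get('eventType')} for object {payload.get('name')}")

    # Returning 2xx acknowledges the message; anything else triggers a retry.
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)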
Where do Pub/Sub storage notifications go?
You can see where the storage notifications go with the gsutil command:
% gsutil notification list gs://__bucket_name__
projects/_/buckets/__bucket_name__/notificationConfigs/1
Cloud Pub/Sub topic: projects/__project_name__/topics/__topic_name__
Filters:
Event Types: OBJECT_FINALIZE
Is there any way to get the queue depth (or equivalent)?
In Pub/Sub you can have many subscriptions to a topic.
If there is no subscription, messages get lost.
To send data to a Cloud Function or a Cloud Run service, you set up a push subscription.
In my experience, you won't be able to see what happened because it is faster than you can click: you'll find the queue empty 99.9999% of the time.
You can check the "queue" depth in the console (Pub/Sub -> choose your topic -> choose the subscription).
If you need to troubleshoot this, set up a second subscription with a time to live low enough that it does not use a lot of space (you'll be billed for it).
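If you want to peek at the messages sitting in such a troubleshooting subscription, a small sketch with the Python client (the subscription name is a placeholder) could be:

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = "projects/my-project/subscriptions/debug-sub"  # placeholder

# Pull a few messages to inspect them; since we don't ack them,
# they will be redelivered on this subscription later.
response = subscriber.pull(request={"subscription": subscription, "max_messages": 10})
for received in response.received_messages:
    print(received.message.message_id, received.message.attributes)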
Is there any way to clear events?
You can purge the messages from the Pub/Sub subscription, but...
... if you're using a push subscription against a Cloud Function, it will be much faster than you can "click".
If you need it, it is in the web console (open the Pub/Sub subscription and click the vertical "..." at the top right).
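If you'd rather purge programmatically, a hedged sketch using seek-to-now (which, as far as I know, is what the console's purge button does) would be:

import datetime

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = "projects/my-project/subscriptions/my-sub"  # placeholder

# Seeking to the current time marks every message published before
# "now" as acknowledged, which effectively empties the subscription.
subscriber.seek(
    request={
        "subscription": subscription,
        "time": datetime.datetime.now(tz=datetime.timezone.utc),
    }
)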
I want to get notified if/when any VM is created in my infrastructure on GCP.
I see a Google library that can give me the list of VMs.
I can (probably) create a function that uses this code,
schedule that function, and check for differences.
But are storage-like triggers available for Compute Engine?
Also, is there any other solution?
You have a third solution. You can use Cloud Run instead of Cloud Functions (the migration is very easy, let me know if you have issues).
With Cloud Run, you can use a trigger (the Eventarc feature), a new feature (still in preview) based on audit logs. It's very similar to the first solution proposed by LundinCast, but it's set up automatically by the Cloud Run trigger feature.
So, deploy your service on Cloud Run, then configure a trigger on the v1.compute.instances.insert API, select your region or make the trigger global, and that's all! Your service will be triggered when a new instance is created.
As you can see in my screenshot, you will be asked to activate the audit logs to be able to use this feature. Because it's built in, it's done automatically for you!
Using a Logging sink and a Pub/Sub-triggered Cloud Function
First, export the relevant logs to a Pub/Sub topic of your choice by creating a Logging sink. Capture the logs created automatically during VM creation with the following log filter:
resource.type="gce_instance"
(protoPayload.methodName="beta.compute.instances.insert" OR protoPayload.methodName="compute.instances.insert")
Next, create a Cloud Function that triggers every time a new log entry is sent to the Pub/Sub topic. You can process this new message as per your needs.
Note that with this option you'll have to handle the notification yourself (for example, by sending an email). It is useful, though, if you want to send different notifications based on some condition, or if you want to perform additional actions apart from the notification.
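As a rough sketch of that Cloud Function (the payload fields are the standard audit-log ones, but treat the exact paths, and the function name, as assumptions):

import base64
import json

def notify_vm_created(event, context):
    """Background Cloud Function triggered by the Pub/Sub topic of the Logging sink."""
    entry = json.loads(base64.b64decode(event["data"]).decode())

    proto_payload = entry.get("protoPayload", {})
    vm_name = proto_payload.get("resourceName", "unknown")
    creator = proto_payload.get("authenticationInfo", {}).get("principalEmail", "unknown")

    # Replace this print with your own notification (email, Slack webhook, ...).
    print(f"New VM created: {vm_name} by {creator}")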
Using a log-based metric and a Cloud Monitoring alert
You can use a log-based metric that filters logs for Compute Engine VM creation and set an alert on that metric to get notified.
First, create a counter log-based metric with a log filter similar to the one in the previous method, which will report a data point to Cloud Monitoring every time a new VM instance is created.
Then go to Cloud Monitoring and create an alert based on that metric that triggers every time a new data point is reported.
This option is the easiest to set up and supports various notification channels out-of-the-box.
Going along with LundinCast's answer.
Cloud Run --
Would have used it if there had not been a zone issue for me (though I conclude this from a POC I did).
Easy setup.
Containerised apps, so probably more code to maintain.
Public URL for the app.
Out-of-the-box support for requirements like mine.
Cloud Function --
Sink setup for triggers can be time-consuming for a first-timer.
Easy coding and maintenance.
I am trying to send IoT commands using a push subscription, for two reasons. First, my devices are often on unstable connections, so going through Pub/Sub gives me retries and I don't have to wait for the QoS 1 timeout at the time I send the message (I still need that timeout because I log it for later use). Second, the push subscription can act as a load balancer. To my understanding, if multiple consumers listen to the same push subscription, each will receive a subset of the messages, effectively balancing my workload. I have observed this balancing behavior on pull subscriptions, and I want to know:
Do push subscriptions act the same?
Is this a reliable way to balance a workload?
Am I guaranteed that these commands will be executed at most once if there are, let's say, 15 instances listening to that subscription?
Here's a diagram of what I'm trying to achieve:
The idea here is that I only interact with IoT Core when instances receive a subset of the devices to handle (when the push subscription triggers). Also note that I don't need perfect one-instance-per-device balancing; I just need the workload to be split in a roughly equal manner.
EDIT: The question wasn't clear so I rewrote it.
I think you are a bit confused about the concepts behind Pub/Sub. In general, you publish messages to a topic for one or multiple subscribers. I prefer to compare Pub/Sub with a magazine that is being published by a big publishing company. People who like the magazine can get a copy of that magazine by means of a subscription. Then when a new edition of that magazine arrives, a copy is being sent to the magazine subscribers, having exactly the same content among all subscribers.
For Pub/Sub you can create multiple push subscriptions for a topic, up to the maximum of 10,000 subscriptions per topic (also per project). You can read more about those quotas in the documentation. Those push subscriptions can contain different endpoints, in your case, representing your IoT devices. Referring back to the publishing company example, those push endpoints can be seen as the addresses of the subscribers.
Here is an example IoT Core architecture, which focuses on processing data from your devices into a store. The other way around could also work: send a message (including the device/registry ID) from your front-end to a Cloud Function wrapped in API Gateway; this Cloud Function then publishes the message to a topic, which delivers the message to another Cloud Function that sends it to the device using the MQTT protocol. I worked out both flows for you below, loosely coupled so that if anything goes wrong with your device or processing, the data is not lost; a sketch of the command-forwarding function follows the second flow.
Device to storage:
Device
IoT Core
Pub/Sub
Cloud Function / Dataflow
Storage (BigQuery etc.)
Front-end to device:
Front-end (click a button)
API Gateway / Cloud Endpoints
Cloud Function (send command to pub/sub)
Pub/Sub
Cloud Function (send command to device with MQTT)
Device (execute the command)
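As an illustration of the last two steps of this flow, here is a minimal sketch of the Cloud Function that relays a Pub/Sub message to a device; it assumes the google-cloud-iot client and IoT Core's send-command API (IoT Core then delivers the command over MQTT), and all IDs and attribute names are placeholders:

import base64

from google.cloud import iot_v1

def send_command(event, context):
    """Pub/Sub-triggered Cloud Function that relays a command to an IoT Core device."""
    payload = base64.b64decode(event["data"])                      # the command body
    device_id = event.get("attributes", {}).get("deviceId", "")    # assumed attribute

    client = iot_v1.DeviceManagerClient()
    device_path = client.device_path(
        "my-project", "europe-west1", "my-registry", device_id  # placeholders
    )

    # IoT Core publishes this on the device's MQTT commands topic.
    client.send_command_to_device(
        request={"name": device_path, "binary_data": payload}
    )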
I have an API Gateway endpoint that I would like to limit access to. For anonymous users, I would like to set both daily and monthly limits (based on IP address).
AWS WAF has the ability to set rate limits, but the interval for them is a fixed 5 minutes, which is not useful in this situation.
API Gateway has the ability to add usage plans with longer term rate quotas that would suit my needs, but unfortunately they seem to be based on API keys, and I don't see a way to do it by IP.
Is there a way to accomplish what I'm trying to do using AWS Services?
Is it maybe possible to use a usage plan and automatically generate an API key for each user who wants to access the API? Or is there some other solution?
Without more context on your specific use-case, or the architecture of your system, it is difficult to give a “best practice” answer.
Like most things tech, there are a few ways you could accomplish this. One way would be to use a combination of CloudWatch API logging, Lambda, DynamoDB (with Streams) and WAF.
At a high level (and regardless of this specific need) I'd protect my API using WAF and the AWS security automations quickstart, found here, and associate it with my API Gateway as guided in the docs here. Once my WAF is set up and associated with my API Gateway, I'd enable CloudWatch API logging for API Gateway, as discussed here. Now that I have things set up, I'd create two Lambdas.
The first will parse the CloudWatch API logs and write the data I'm interested in (IP address and request time) to a DynamoDB table. To avoid unnecessary storage costs, I'd set the TTL on the records I write to the DynamoDB table to twice whatever my analysis's time window is, i.e., if I'm looking to limit it to 1,000 requests per month, I'd set the TTL on my DynamoDB records to 2 months. From there, my CloudWatch API log group will have a subscription filter that sends log data to this Lambda, as described here.
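A hedged sketch of that first Lambda (CloudWatch Logs subscriptions deliver gzipped, base64-encoded batches; the table name, key names, and the way the IP is extracted from the log line are all assumptions):

import base64
import gzip
import json
import time

import boto3

TABLE_NAME = "api-requests"          # placeholder
TTL_SECONDS = 60 * 24 * 3600         # roughly 2 months, twice the 1-month window

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

def handler(event, context):
    # CloudWatch Logs subscription filters deliver gzipped, base64-encoded batches.
    data = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))

    now = int(time.time())
    with table.batch_writer() as batch:
        for log_event in data["logEvents"]:
            entry = json.loads(log_event["message"])         # assumes a JSON access-log format
            batch.put_item(
                Item={
                    "ip": entry["ip"],                        # partition key (assumed field name)
                    "request_time": log_event["timestamp"],   # sort key, epoch milliseconds
                    "ttl": now + TTL_SECONDS,                 # DynamoDB TTL attribute
                }
            )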
My second Lambda is going to be doing the actual analysis and handling what happens when my metric is exceeded. This Lambda is going to be triggered by the write event to my DynamoDB table, as described here. I can have this Lambda run whatever analysis I want, but I’m going to assume that I want to limit access to 1000 requests per month for a given IP. When the new DynamoDB item triggers my Lambda, the Lambda is going to query the DynamoDB table for all records that were created in the preceding month from that moment, and that contain the IP address. If the number of records returned is less than or equal to 1000, it is going to do nothing. If it exceeds 1000 then the Lambda is going to update the WAF WebACL, and specifically UpdateIPSet to reject traffic for that IP, and that’s it. Pretty simple.
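And a sketch of the second Lambda; note I'm using WAFv2 here instead of WAF Classic's UpdateIPSet, so the call names differ slightly from the description above, and the table keys, IP set identifiers, and the 1,000-per-month threshold are assumptions:

import time

import boto3
from boto3.dynamodb.conditions import Key

LIMIT = 1000
TABLE_NAME = "api-requests"                                          # placeholder
IP_SET = {"Name": "blocked-ips", "Scope": "REGIONAL", "Id": "xxxx"}  # placeholders

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)
waf = boto3.client("wafv2")

def handler(event, context):
    one_month_ago_ms = int((time.time() - 30 * 24 * 3600) * 1000)

    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        ip = record["dynamodb"]["Keys"]["ip"]["S"]

        # Count this IP's requests over the preceding month.
        count = table.query(
            KeyConditionExpression=Key("ip").eq(ip)
            & Key("request_time").gte(one_month_ago_ms),
            Select="COUNT",
        )["Count"]

        if count > LIMIT:
            # Add the IP to the blocked IP set referenced by the WAF WebACL rule.
            current = waf.get_ip_set(**IP_SET)
            addresses = set(current["IPSet"]["Addresses"]) | {f"{ip}/32"}
            waf.update_ip_set(
                **IP_SET,
                Addresses=list(addresses),
                LockToken=current["LockToken"],
            )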
With the above process I have near real-time monitoring of requests to my API Gateway, in a very efficient, cost-effective, scalable manner that can be deployed entirely serverless.
This is just one way to handle this; there are definitely other ways you could accomplish it, say with Kinesis and Elasticsearch, or by analyzing CloudTrail events instead of logs, or by using a third-party solution that integrates with AWS, or something else.
It looks like CloudWatch gives customers 10 custom metrics under the free plan, then each additional one costs $0.50. Does anyone know how to enforce that PutMetric calls are accepted only for a fixed set of custom metrics?
I'm interested in limiting the custom metrics coming from mobile clients or possibly adding a layer of protection against abuse.
Is the only solution to implement my own service which does the validation against a whitelist?
One option you could look at is placing AWS API Gateway in front of CloudWatch and making the calls through the API.
This example shows you how to do this for S3, but there's no reason why you couldn't do something similar for CloudWatch.
This shows you how to do it for DynamoDB: https://aws.amazon.com/blogs/compute/using-amazon-api-gateway-as-a-proxy-for-dynamodb/
I ended up running a simple Tomcat service that validates metrics against a whitelist (stored in S3) and publishes them to CloudWatch.