I want to create an alert in Monitoring and Logging, and I want my script (which I already have in my Cloud repository) to be executed when that alert is triggered. How could I do that?
This is possible with Cloud Functions. A trigger is the declaration of which occurrence should cause your function to execute.
You can use Google Cloud Pub/Sub triggers: when an event occurs on the system, a message is published to a Pub/Sub topic that is specified when the function is deployed. Every message published to this topic will then trigger function execution, with the message contents passed as input data.
You can find the steps to implement this solution in the “Alert-based event” guide; a minimal sketch follows.
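As a rough illustration, assuming a 1st-gen background Cloud Function subscribed to the alert's Pub/Sub topic (the names on_alert and run_my_script are hypothetical placeholders for your own entry points), the function could look like this:

import base64

def run_my_script(payload):
    # Hypothetical stand-in for the script you already keep in your repository.
    print(f"Running my script with: {payload}")

def on_alert(event, context):
    """Triggered by a message published to the alert's Pub/Sub topic."""
    data = base64.b64decode(event["data"]).decode("utf-8") if "data" in event else ""
    run_my_script(data)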
I'm writing a GCP Cloud Function. Is there a feature to handle a batch of messages put on a Pub/Sub topic? I mean, a single run of the Cloud Function could handle around 10-30 messages put on the queue. From the examples I have seen, the Cloud Function gets invoked for each message, but in AWS I have seen the option to batch multiple messages into one Lambda invocation.
With the traditional method (Cloud Functions + Pub/Sub delivering messages via push), you won't be able to work with batches, since every event will trigger the function.
You could instead use a different trigger, for example Cloud Scheduler, to invoke the Cloud Function periodically and pull all the messages waiting in the queue (pull mechanism), as in the sketch below: https://cloud.google.com/pubsub/docs/pull
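A minimal sketch of that pull-based batching in Python, assuming an existing subscription and the google-cloud-pubsub library (project and subscription IDs are placeholders):

from google.cloud import pubsub_v1

def process_batch(request=None):
    """Scheduler-invoked function that drains up to 30 messages per run."""
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path("my-project", "my-subscription")

    response = subscriber.pull(
        request={"subscription": subscription_path, "max_messages": 30}
    )

    ack_ids = []
    for received in response.received_messages:
        print(f"Processing: {received.message.data}")
        ack_ids.append(received.ack_id)

    # Acknowledge only what was actually processed.
    if ack_ids:
        subscriber.acknowledge(
            request={"subscription": subscription_path, "ack_ids": ack_ids}
        )
    return "ok"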
I want to get notified if/when any VM is created in my infra on GCP.
I see a Google library that can give me the list of VMs.
I can (probably) create a function that uses this code,
schedule that function, and check for differences.
But are storage-like triggers available for Compute?
Also, is there any other solution?
You have a third solution. You can use Cloud Run instead of Cloud Functions (the migration is very easy; let me know if you have issues).
With Cloud Run, you can use a trigger (the Eventarc feature), a new feature (still in preview) based on audit logs. It's very similar to the first solution proposed by LundinCast, but it's set up automatically by the Cloud Run trigger feature.
So, deploy your service on Cloud Run, then configure a trigger on the v1.compute.instances.insert API, select your region or make the trigger global, and that's all! Your service will be triggered when a new instance is created.
As you can see in my screenshot, you will be asked to activate audit logs to be able to use this feature. Because it's built in, it's done automatically for you!
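For reference, a minimal sketch of what the Cloud Run service could look like in Python, assuming Flask (the payload fields mirror the audit log entry and should be verified against what the trigger actually delivers):

import os
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_vm_insert():
    """Receives the audit-log event forwarded by the Cloud Run trigger."""
    payload = request.get_json(silent=True) or {}
    proto = payload.get("protoPayload", {})
    caller = proto.get("authenticationInfo", {}).get("principalEmail", "unknown")
    resource = proto.get("resourceName", "unknown")
    print(f"Instance insert on {resource} by {caller}")
    return ("", 204)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))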
Using a Logging sink and a Pub/Sub-triggered Cloud Function
First, export the relevant logs to a Pub/Sub topic of your choice by creating a Logging sink. Include the logs created automatically during VM creation with the following log filter (note the OR: consecutive filter lines are ANDed together, and no single entry matches both method names):
resource.type="gce_instance"
(protoPayload.methodName="beta.compute.instances.insert" OR protoPayload.methodName="compute.instances.insert")
Next, create a Cloud Function that triggers every time a new log entry is sent to the Pub/Sub topic. You can process the new message as per your needs.
Note that with this option you'll have to handle the notification yourself (for example, by sending an email). It is useful, though, if you want to send different notifications based on some condition or perform additional actions apart from the notification, as in the sketch below.
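A minimal sketch of such a function, assuming a 1st-gen background Cloud Function (send_notification is a hypothetical stand-in for your email, Slack, or other integration):

import base64
import json

def send_notification(text):
    # Hypothetical: replace with your actual email/Slack/etc. integration.
    print(f"NOTIFY: {text}")

def on_vm_created(event, context):
    """Triggered by a log entry exported to the Pub/Sub topic."""
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    proto = entry.get("protoPayload", {})
    vm = proto.get("resourceName", "unknown instance")
    who = proto.get("authenticationInfo", {}).get("principalEmail", "unknown user")
    send_notification(f"{vm} was created by {who}")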
Using a log-based metric and a Cloud Monitoring alert
You can use a log-based metric that filters logs for Compute Engine VM creation and set an alert on that metric to get notified.
First, create a counter log-based metric with a log filter similar to the one in the previous method, which will report a data point to Cloud Monitoring every time a new VM instance is created.
Then go to Cloud Monitoring and create an alert based on that metric that triggers every time a new data point is reported.
This option is the easiest to set up and supports various notification channels out-of-the-box.
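If you prefer to script the alert instead of using the console, a rough sketch with the google-cloud-monitoring client library could look like this. Everything below is an assumption used for illustration: vm-creation-count stands in for the log-based metric created above, and you would still need to attach a notification channel.

from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()

# Fire whenever the (hypothetical) log-based metric reports any data point.
condition = monitoring_v3.AlertPolicy.Condition(
    display_name="VM instance created",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'metric.type="logging.googleapis.com/user/vm-creation-count" '
            'AND resource.type="gce_instance"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=0,
        duration=duration_pb2.Duration(seconds=0),
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="Notify on VM creation",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
    # notification_channels=["projects/my-project/notificationChannels/..."],
)

created = client.create_alert_policy(
    name="projects/my-project", alert_policy=policy
)
print(f"Created alert policy {created.name}")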
Going along with LundinCast's answer:
Cloud Run --
Would have used it if it hadn't been for a zone issue in my case (I conclude this from a PoC I did).
Easy setup.
Containerised apps, so probably more code to maintain.
Public URL for the app.
Out-of-the-box support for requirements like mine.
Cloud Function --
Sink setup for triggers can be time-consuming for a first-timer.
Easy coding and maintenance.
I started a GCE VM with a Docker image that runs a Pub/Sub subscriber, which handles the messages and starts some big computational work (long-running).
When we are ready to deploy new code, how do we ensure all currently running jobs are finished (i.e., make the deploy block until the tasks finish)? What's the best practice here?
I believe you can look at Google Cloud Functions. With them, you can create a function that responds to specific events without the need to manage a server or runtime environment.
In particular, it is feasible to subscribe a Cloud Function to a Pub/Sub topic so that every message published to this topic triggers custom code execution, with the message contents passed as input data (the google.pubsub.topic.publish event type).
So you could compose a function subscribed to the same Pub/Sub topic as the message consumer from your example, which checks the status of the long-running job and triggers the desired deployment once the condition matches; see the sketch below.
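A rough sketch of that idea, where everything is hypothetical: jobs_still_running() stands in for however you track job state, and trigger_deployment() for your actual deploy hook:

import base64

def jobs_still_running():
    # Hypothetical: query wherever you track job state
    # (Firestore document, database row, in-flight counter, ...).
    return False

def trigger_deployment():
    # Hypothetical: call Cloud Build, a deploy webhook, etc.
    print("All jobs finished, starting deployment.")

def maybe_deploy(event, context):
    """Triggered on the same Pub/Sub topic the worker subscribes to."""
    message = base64.b64decode(event.get("data", b"")).decode("utf-8")
    print(f"Saw message: {message}")
    if not jobs_still_running():
        trigger_deployment()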
I have scheduled an HTTP-call-type job using Google Cloud Scheduler. How do I send out an email alert if the job fails?
I have read the Cloud Scheduler documentation and googled around, but the answer is not obvious. I also attempted a Stackdriver alert policy, but couldn't find the corresponding metric for the failed log entry.
I expect that an email notification can be configured to be sent out if the scheduled job fails.
One way to handle this is to create a new log-based metric with this filter:
resource.type="cloud_scheduler_job" severity != INFO
Then you can create an alert based on this new metric.
I use a workaround to solve my own problem.
Since my Cloud Scheduler job makes an HTTP call to my Cloud Function,
I use Stackdriver to create an alert that monitors my function executions with status != ok. Any time the function fails, an email alert is sent to my inbox. The alert condition looks roughly like the filter below.
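For reference, a sketch of the Cloud Monitoring filter behind that alert condition (the exact metric and label names should be verified in the Metrics Explorer):
metric.type="cloudfunctions.googleapis.com/function/execution_count" AND resource.type="cloud_function" AND metric.label.status != "ok"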
This solves my problem for the time being.
Nevertheless, perhaps Cloud Scheduler could provide such an enhancement, sending alerts as part of the job configuration.
Thank you.
You can use log-based metrics in Stackdriver along with a notification channel to get an email when your job fails.
October 2022: You no longer need to create a metric for this, you can skip that step and create an alert directly from Logs Explorer after entering the query already described:
resource.type="cloud_scheduler_job" severity != INFO
I use Cloud Pub/Sub and Cloud Functions.
Now, I want to publish a message to a topic which will trigger a background cloud function.
But I want to trigger my cloud function after a specific duration, like 30 seconds later.
How can I do this?
Update:
Here is my architecture; is it correct?
Now, I want to publish a message to a topic which will trigger a background cloud function. But I want to trigger my cloud function after a specific duration, like 30 seconds later.
If you set up Pub/Sub to trigger Cloud Functions on publish events, Cloud Functions will be triggered almost immediately. There is no method to insert a delay.
You will need to implement your code as several major steps:
Set up a Pub/Sub topic and subscription. Do not trigger Cloud Functions on new messages; messages will just sit waiting for delivery. Send messages to this topic.
Create a Cloud Function that processes the Pub/Sub subscription: pull messages and process them.
Use another service such as Cloud Tasks, Cloud Scheduler or App Engine Tasks to trigger your Cloud Function after your desired delay.
You can use Cloud Tasks to schedule some work to happen on a delay.
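A minimal sketch of the Cloud Tasks approach in Python, assuming an existing queue and an HTTP-triggered Cloud Function (project, location, queue name, and URL are placeholders):

from datetime import datetime, timedelta, timezone
from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

# Schedule delivery for 30 seconds from now.
schedule_time = timestamp_pb2.Timestamp()
schedule_time.FromDatetime(datetime.now(timezone.utc) + timedelta(seconds=30))

task = {
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "url": "https://us-central1-my-project.cloudfunctions.net/my-function",
        "headers": {"Content-Type": "application/json"},
        "body": b'{"message": "delayed hello"}',
    },
    "schedule_time": schedule_time,
}

response = client.create_task(request={"parent": parent, "task": task})
print(f"Created task {response.name}")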