I want to get notified if/when any VM is created in my GCP infrastructure.
I see a Google library that can give me a list of VMs.
I can (probably) create a function that uses this code, schedule that function, and check for differences.
But are Cloud Storage-style triggers available for Compute Engine?
Also, is there any other solution?
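For reference, a minimal sketch of the "list VMs and diff" idea I have in mind, using the google-cloud-compute client library (the project ID is a placeholder):

```python
# Sketch: list all VM names across zones; diff against the previous run
# to detect newly created instances. Project ID is a placeholder.
from google.cloud import compute_v1

def list_instance_names(project_id: str) -> set[str]:
    client = compute_v1.InstancesClient()
    names = set()
    # aggregated_list walks every zone in the project.
    for zone, scoped_list in client.aggregated_list(project=project_id):
        for instance in scoped_list.instances:
            names.add(f"{zone}/{instance.name}")
    return names

# Running this on a schedule and comparing with the previous run's
# output would reveal newly created VMs.
print(list_instance_names("my-project"))
```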
You have a third solution. You can use Cloud Run instead of Cloud Functions (the migration is very easy; let me know if you have issues).
With Cloud Run, you can use a trigger (the Eventarc feature), a new feature (still in preview) based on the audit logs. It's very similar to the first solution proposed by LundinCast, but it's set up automatically by the Cloud Run trigger feature.
So, deploy your service on Cloud Run, then configure a trigger on the v1.compute.instances.insert API, select your region or make the trigger global, and that's all! Your service will be triggered when a new instance is created.
When you configure the trigger, you will be asked to activate the audit logs to be able to use this feature. Because it's built in, it's done automatically for you!
Using a Logging sink and a Pub/Sub-triggered Cloud Function
First, export the relevant logs to a Pub/Sub topic of your choice by creating a Logging sink. Include the logs created automatically during VM creation with the following log filter:
resource.type="gce_instance"
(protoPayload.methodName="beta.compute.instances.insert" OR protoPayload.methodName="compute.instances.insert")
Next, create a Cloud Function that triggers every time a new log entry is sent to the Pub/Sub topic. You can process each new message as per your needs.
Note that with this option you'll have to handle the notification yourself (for example, by sending an email). It is useful, though, if you want to send different notifications based on some condition, or if you want to perform additional actions apart from the notification.
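For illustration, a minimal sketch of such a function (Python background-function signature; the notification step is left as a stub):

```python
# Sketch: a Pub/Sub-triggered Cloud Function that decodes the exported
# audit-log entry. The notification logic is a stub to fill in yourself.
import base64
import json

def on_vm_created(event, context):
    """Triggered by each log entry exported to the Pub/Sub topic."""
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    proto = entry.get("protoPayload", {})
    creator = proto.get("authenticationInfo", {}).get("principalEmail", "unknown")
    resource = proto.get("resourceName", "unknown")
    # Replace this print with your own notification (email, Slack, ...).
    print(f"New VM {resource} created by {creator}")
```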
Using a log-based metric and a Cloud Monitoring alert
You can use a log-based metric that filters logs for Compute Engine VM creation and set an alert on that metric to get notified.
First, create a counter log-based metric with a log filter similar to the one in the previous method; it will report a data point to Cloud Monitoring every time a new VM instance is created.
Then go to Cloud Monitoring and create an alert based on that metric that triggers every time a new data point is reported.
This option is the easiest to set up and supports various notification channels out-of-the-box.
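If you'd rather create the metric programmatically than in the console, here is a hedged sketch with the google-cloud-logging client (metric name and filter are placeholders):

```python
# Sketch: create the counter log-based metric; names are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")
metric = client.metric(
    "vm-creation-count",  # placeholder metric name
    filter_='resource.type="gce_instance" protoPayload.methodName="compute.instances.insert"',
    description="Counts Compute Engine instance creations",
)
metric.create()
# Then create the alerting policy on this metric in Cloud Monitoring.
```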
Going along with LundinCast's answer.
Cloud Run --
I would have used it if it hadn't been for a zone issue in my case, though I conclude this from a POC I did.
Easy setup.
Containerised apps, so probably more code to maintain.
Public URL for the app.
Out-of-the-box support for requirements like mine.
Cloud Function --
Sink setup for triggers can be time-consuming for a first-timer.
Easy coding and maintenance.
I want to be able to programmatically trigger alerts in Google Cloud Monitoring. Basically I have a watchdog that I want to execute certain actions based on multiple criteria. One of those actions is that I want it to trigger a new alert in Google Cloud Monitoring.
Is there a smooth way to do this?
So far my best guess is:
Set up an alert policy on a custom metric (like isTriggeringAlert > 0)
Write the actions to a log ("[ALERT]: ....") and use Cloud Monitoring to catch that log
Both work, but I was wondering if there is a programmatic way to trigger alerts instead. I haven't found anything in the Python SDK for Cloud Monitoring (just how to create monitoring policies).
Regards,
Niklas
This feature has been requested, but isn't available yet.
As a workaround, you can try writing appropriate data into timeSeries using this API.
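For reference, a minimal sketch of that workaround with the Python Cloud Monitoring client library, assuming your alert policy watches a custom metric like the isTriggeringAlert one you describe:

```python
# Sketch: write a data point to a custom metric so that an alert policy
# watching it fires. Project and metric names are placeholders.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder project

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/isTriggeringAlert"  # your custom metric
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
# Writing a value above the alert policy's threshold makes the alert fire.
series.points = [
    monitoring_v3.Point({"interval": interval, "value": {"int64_value": 1}})
]
client.create_time_series(name=project_name, time_series=[series])
```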
I would like to receive a notification on my Notification Channel every time a build on master fails in Cloud Build.
There were mentions of using the Log Viewer, but it seems there is no immediate way of accessing the branch.
Is there another way to create a Monitoring alert or metric that is specific to master?
An easy solution might be to define a logging metric and link an alerting trigger to it.
1. Configure Slack alerting in Notification channels of GCP.
2. Define your logging metric trigger in Logs-based Metrics. Make a Counter with Units 1 and filter using the logging query language:
resource.type="build"
severity=ERROR
or
resource.type="build"
textPayload=~"^ERROR:"
3. Create an Alerting Policy with the metric you've just defined and link the trigger to the Slack notification channel you configured in step 1.
You can also create Cloud Build notifications that send updates to your desired channels, such as Slack or your SMTP server. Cloud Build publishes messages to a Pub/Sub topic when your build's state changes, such as when your build is created or when it transitions to a working state.
I just went through the pain of trying to get the official GCP Slack integration via Cloud Run working. It was too cumbersome and didn't let me customize what I wanted.
The best solution I see is to set up Cloud Build to send Pub/Sub messages to the cloud-builds topic. With that, you can use the repo below, which I just made public, to filter on the specific branch you want by looking at the data_json['substitutions']['BRANCH_NAME'] field.
https://github.com/Ucnt/gcp-cloud-build-slack-notifier
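For illustration, the core of that filtering idea as a hedged sketch: a Pub/Sub-triggered Cloud Function subscribed to the cloud-builds topic (the notification call is left as a stub):

```python
# Sketch: react only to failed builds on master from the cloud-builds topic.
import base64
import json

def on_build_event(event, context):
    """Triggered by Cloud Build status messages on the cloud-builds topic."""
    build = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    branch = build.get("substitutions", {}).get("BRANCH_NAME")
    if build.get("status") == "FAILURE" and branch == "master":
        # Replace with your Slack webhook / notification of choice.
        print(f"Build {build.get('id')} failed on master")
```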
How would our organization log, audit, and alert on any code changes (add, change, delete) to Google Cloud Functions in a way that would survive an external audit? We've figured out how to do so on AWS (a combination of CloudTrail and CloudWatch Events/Amazon EventBridge) and on Azure (the Audit log and Alerts under the Monitor service, although this is not as reliable as the AWS solution because some events do not seem to be picked up; Azure even has a nice new service in preview called Application Change Analysis, but it does not alert, and its data goes away when a function is deleted instead of reporting that it has been deleted).
But how do we do the same thing with Google Cloud Functions? How would we log and audit the creation, update, and deletion of Cloud Functions and Cloud Function code? And how would we go further and receive an alert whenever any of those events occurs, just as we have proven can happen with AWS and (kind of, at least) with Azure? Thank you!
You can use the Cloud Functions audit logs. You can export these logs to Pub/Sub and then do what you want with each event:
Store them in BigQuery for the history
Send an alert (email, Slack message, ...)
Act: for example, perform a rollback to the previous code stored in the source repository
...
It all depends on your security process and what you want to do with the events.
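As a starting point for the export, a sink filter along these lines should capture create, update, and delete operations (the exact method names can vary by API version, so verify them against your own audit logs):
resource.type="cloud_function"
protoPayload.methodName=~"(CreateFunction|UpdateFunction|DeleteFunction)"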
Is there a solution/service available on GCP along the lines of AWS Systems Manager?
My end goal is to run a shell script on a GCP VM on specific events.
For example, on AWS, via EventBridge I was able to trigger a Lambda function, and the function in turn triggered an SSM command on a specific VM.
Is this possible on GCP?
There isn't a Systems Manager equivalent in GCP.
A Pub/Sub subscription from the VMs/compute units which triggers a lambda function (a Cloud Function in GCP) is a suboptimal solution and different from what Systems Manager accomplishes.
I don't know what kind of events you have in mind that would trigger running a script, but you can check out the tutorial on how to run a function using Pub/Sub. It shows how to use scheduler-based events, but it's possible to use non-scheduled triggers:
Events are things that happen within your cloud environment that you might want to take action on. These might be changes to data in a database, files added to a storage system, or a new virtual machine instance being created. Currently, Cloud Functions supports events from the following providers:
HTTP
Cloud Storage
Cloud Pub/Sub
Cloud Firestore
Firebase (Realtime Database, Storage, Analytics, Auth)
Stackdriver Logging: forward log entries to a Pub/Sub topic by creating a sink. You can then trigger the function.
And here you can read about how to implement those triggers.
For example, this documentation explains how to use storage-based triggers with Pub/Sub.
If you provide more details of what exactly you want to achieve (what events have to trigger what) then I can point you to a more direct solution.
The approach depends on the exact use case you have at hand. One common architecture option is to use Pub/Sub with Cloud Functions: based on messages published to Pub/Sub topics, Cloud Functions that perform the operations of interest can be triggered/invoked in the same cloud project.
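To get back to the original goal of running a shell script on the VM itself, one rough sketch (not a Systems Manager replacement) is a small agent on the VM that pulls from a Pub/Sub subscription and runs the script on each message; the project, subscription name, and script path below are placeholders:

```python
# Sketch: a tiny agent running on the VM that executes a shell script
# whenever an event message arrives. Names and paths are placeholders.
import subprocess
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "run-script-sub")

def callback(message):
    # The message body could carry arguments for the script if needed.
    subprocess.run(["/opt/scripts/on_event.sh"], check=False)
    message.ack()

# Blocks forever, running the script for each published event.
subscriber.subscribe(subscription, callback=callback).result()
```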
In AWS it was possible to use CloudWatch to trigger callback Lambda functions on events.
Is it possible in GCE to automatically tag servers with the user who created them, based on the activity logs? Google Cloud Functions seem to only be able to run a non-public callback based on GCS events.
How would I do this?
As a matter of fact, there are four types of triggers for Google Cloud Functions. But none of them is useful in this case.
There is a way to automatically do so, though.
You can create an application in App Engine that uses the Stackdriver Logging client library, for example the Python one.
Then you can schedule a task using a cron job which triggers the application. The application can use the client library to review the logs, search for compute.instance.insert (instance creation) and the "actor" or "user" who performed it, and finally add a label to the existing resource.
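A rough sketch of that flow, assuming the google-cloud-logging and google-api-python-client packages; the project ID, filter, and label key are placeholders, and the audit-log field names should be verified against your own logs:

```python
# Sketch: label each newly created VM with its creator, based on audit logs.
from google.cloud import logging as cloud_logging
from googleapiclient import discovery

PROJECT = "my-project"  # placeholder project ID

log_client = cloud_logging.Client(project=PROJECT)
compute = discovery.build("compute", "v1")

log_filter = (
    'resource.type="gce_instance" '
    'protoPayload.methodName="v1.compute.instances.insert"'
)

for entry in log_client.list_entries(filter_=log_filter):
    payload = entry.payload or {}  # protoPayload of the audit entry, as a dict
    creator = payload.get("authenticationInfo", {}).get("principalEmail", "")
    name = payload.get("resourceName", "").split("/")[-1]
    zone = entry.resource.labels.get("zone", "")
    if not (creator and name and zone):
        continue
    instance = compute.instances().get(
        project=PROJECT, zone=zone, instance=name).execute()
    labels = instance.get("labels", {})
    # Label values may only contain lowercase letters, digits, "_" and "-".
    labels["created-by"] = creator.split("@")[0].lower().replace(".", "-")
    compute.instances().setLabels(
        project=PROJECT, zone=zone, instance=name,
        body={"labels": labels,
              "labelFingerprint": instance["labelFingerprint"]},
    ).execute()
```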