In Google Cloud Logging (née Stackdriver), can I create a log-based metric for logs aggregated at the folder or organization level? I want a log-based metric for a certain audit event across many projects.
This isn't currently supported. You can export logs to a different project, but a metric can't encapsulate more than one project.
If you think that functionality should be available, you can file a Feature Request in the Public Issue Tracker.
Related
I'd like to use an OpenCensus metric in a Cloud Monitoring (Stackdriver) Alert Policy.
When I try to click the Add button, I get a "This query must contain a resource type." error. The odd thing is that I can view this metric in MQL and can chart it.
In the MQL charts that use this metric, the Resource: field is blank, yet the charts work fine. On metric hover, the MQL charts show resource types of knative_broker, dataflow_job, aws_rds_database, k8s_control_plane_component, aws_lambda_function, and 36 more.
What Resource type should be used to alert on Open Census metrics in Cloud Monitoring alerts?
In Cloud Monitoring, application-specific metrics are typically called "custom metrics". You can create custom metrics with OpenCensus; for detailed information, see Custom Metrics with OpenCensus. With a Stackdriver exporter defined, the goal is to create an alert policy on the aggregation of two metrics. Refer to Alert policy metric thresholds with Stackdriver and OpenCensus for more information.
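As a hedged sketch of the original question's fix: OpenCensus metrics exported by the Stackdriver exporter typically appear under `custom.googleapis.com/opencensus/...`, and the alert condition's filter must pin down an explicit `resource.type` (often `global` for custom metrics, though your exporter may write against another monitored resource). The metric name `request_latency` below is hypothetical:

```python
# Build the Monitoring filter string an alert-policy condition needs.
# Assumption: the OpenCensus metric is exported as
# custom.googleapis.com/opencensus/<name>; adjust resource_type to match
# the monitored resource your exporter actually writes to.

def alert_filter(metric_name: str, resource_type: str = "global") -> str:
    """Return a filter usable in an alert policy's threshold condition."""
    return (
        f'metric.type="custom.googleapis.com/opencensus/{metric_name}"'
        f' AND resource.type="{resource_type}"'
    )

print(alert_filter("request_latency"))
```

Including the `resource.type` clause explicitly is what resolves the "This query must contain a resource type." error when the UI can't infer it.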
I want to be able to programmatically trigger alerts in Google Cloud Monitoring. Basically I have a watchdog that I want to execute certain actions based on multiple criteria. One of those actions is that I want it to trigger a new alert in Google Cloud Monitoring.
Is there a smooth way to do this?
So far my best guess is:
Setup an alert policy on a custom metric (like isTriggeringAlert>0)
Write the actions to a log ("[ALERT]: ....") and use Cloud Monitoring to catch that log
Both work, but I was wondering if there is a programmatic way to trigger an alert instead? I haven't found anything in the Python SDK for Cloud Monitoring (just how to create monitoring policies).
Regards,
Niklas
This feature has been requested, but isn't available yet.
As a workaround, you can write appropriate data points into a custom metric's time series using the timeSeries.create API and alert on that metric.
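A minimal sketch of that workaround, using the custom metric name `isTriggeringAlert` from the question and the REST form of `timeSeries.create` (this only builds the request body; sending it requires an authenticated POST to `v3/projects/PROJECT_ID/timeSeries`):

```python
import json
import time

# Push a data point to a custom metric so an alert policy on it
# (e.g. isTriggeringAlert > 0) fires. Per the proto3 JSON mapping,
# int64 values are encoded as strings in the request body.

def build_time_series_body(value: int, end_time: float) -> dict:
    """Request body for POST v3/projects/PROJECT_ID/timeSeries."""
    return {
        "timeSeries": [{
            "metric": {"type": "custom.googleapis.com/isTriggeringAlert"},
            "resource": {"type": "global"},
            "points": [{
                "interval": {
                    "endTime": time.strftime(
                        "%Y-%m-%dT%H:%M:%SZ", time.gmtime(end_time)),
                },
                "value": {"int64Value": str(value)},
            }],
        }]
    }

body = build_time_series_body(1, time.time())
print(json.dumps(body, indent=2))
```

Writing a `1` triggers the policy; writing `0` afterwards lets the condition clear.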
Is it possible to create alerts for configuration activities?
On the dashboard of my GCP project, I'm able to see the history of activities. However, for security reasons, I would like to be able to receive notifications when certain activities happen, e.g. Set IAM policy on project, deleting instance of project, etc. Is this possible?
I have looked into "metric-based alerting policies", but I'm only able to create alerts for uptime checks. Not sure what else to look for.
You are on the right path. You need to create a log-based metric and then create an alert that fires when the counter crosses a threshold (1, for example).
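A sketch of those two steps as REST payloads; the metric name and log filter are illustrative, not the only ones that work:

```python
import json

# Step 1: a log-based counter metric (Logging API: projects.metrics.create)
# counting SetIamPolicy audit entries. Step 2's alert policy then thresholds
# the resulting user metric logging.googleapis.com/user/<name>.
log_metric = {
    "name": "iam-policy-changes",
    "description": "Audit log entries for SetIamPolicy",
    "filter": 'protoPayload.methodName="SetIamPolicy"',
}

# Filter for the alert policy's condition (add a resource.type clause
# matching the resource that emits the logs).
alert_condition_filter = (
    'metric.type="logging.googleapis.com/user/iam-policy-changes"'
)

print(json.dumps(log_metric, indent=2))
print(alert_condition_filter)
```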
A more straightforward solution is now available: in one step, you can use log-based alerts, which let you set alerts on any log type and content. This feature is in preview and was announced a few days ago.
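For illustration, a log-based alert is an ordinary `alertPolicies.create` call whose condition is a `conditionMatchedLog` instead of a metric threshold; this sketch only builds the request body (display names and the filter are placeholders):

```python
import json

# Log-based alert policy body for projects.alertPolicies.create.
# conditionMatchedLog alerts directly on matching log entries, with no
# log-based metric in between; log-based alerts require an
# alertStrategy.notificationRateLimit.
policy = {
    "displayName": "SetIamPolicy detected",
    "combiner": "OR",
    "conditions": [{
        "displayName": "IAM policy change logged",
        "conditionMatchedLog": {
            "filter": 'protoPayload.methodName="SetIamPolicy"',
        },
    }],
    "notificationChannels": [],  # fill in channel resource names
    "alertStrategy": {
        "notificationRateLimit": {"period": "300s"},
    },
}
print(json.dumps(policy, indent=2))
```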
I was curious if Stackdriver metrics are only available via the API or is there a way to send them through Pub/Sub? I'm currently not seeing any of the metrics listed here for Compute Engine in my Pub/Sub output.
I did create a sink for all gce vm instances to export from Stackdriver logging in Pub/Sub and I'm not seeing any of them.
There are a few different types of signals that Stackdriver organizes: metrics, logs, traces, and errors, plus derived signals like incidents and error groups. Logs can be exported via Pub/Sub using sinks. Metrics, traces, and errors can only be pulled via the API today.
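For example, a sink exporting GCE VM instance logs to Pub/Sub boils down to a payload like this (Logging API `projects.sinks.create`; the project and topic IDs are placeholders):

```python
import json

# Sink definition routing gce_instance log entries to a Pub/Sub topic.
# After creation, grant the sink's writerIdentity service account
# roles/pubsub.publisher on the topic, or no entries will be delivered --
# a missing grant is a common reason a sink appears to export nothing.
sink = {
    "name": "gce-to-pubsub",
    "destination": "pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID",
    "filter": 'resource.type="gce_instance"',
}
print(json.dumps(sink, indent=2))
```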
What is the best way to set up Dataflow resource monitoring and alerting on Dataflow errors?
Is it custom log based metrics only?
Checked Cloud Monitoring - Dataflow is not listed there - no metrics available.
Checked Error Reporting - it is empty too, despite a few of my flows failing.
What do I miss?
Update March 2017: Stackdriver Monitoring integration with Dataflow is now in Beta. Review the user docs, and listen to Dataflow engineers talk about it at GCP Next.
For the time being you could set up alerts based on Dataflow logs (go to Stackdriver Logging and set up alerts there). We are also working on better alerting using Stackdriver Monitoring and will post an announcement to our Big Data blog when it's in beta.