I have an application running on a Google Compute Engine instance (ps -ef | grep myapp).
I want to set up an alert in Google Cloud Stackdriver that fires when the application goes down and/or comes back up.
How can I achieve this?
You can create an alert in Stackdriver Monitoring. You will have the option to have the alert either displayed as an incident on the Stackdriver Monitoring console or sent as a notification via email.
Before configuring your alerts, you will need to choose a metric type on which the alert will be based.
If you cannot find a metric type that matches your needs, you will need to create your own custom metric.
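For a process up/down check like yours, one hedged option is to publish a heartbeat as a custom metric and alert on it. The sketch below assumes the google-cloud-monitoring Python client; the metric name, project id, instance id and zone are placeholders to adapt:

    # A sketch, not a drop-in solution: write 1 when the process is found,
    # 0 otherwise; run it on a schedule (e.g. cron) and alert on the metric.
    import subprocess
    import time

    from google.cloud import monitoring_v3

    PROJECT_ID = "my-project"                       # assumption: your project id
    METRIC_TYPE = "custom.googleapis.com/myapp/up"  # hypothetical metric name

    def app_is_running() -> bool:
        # Equivalent of `ps -ef | grep myapp`, via pgrep's exit code.
        return subprocess.call(["pgrep", "-f", "myapp"],
                               stdout=subprocess.DEVNULL) == 0

    client = monitoring_v3.MetricServiceClient()
    series = monitoring_v3.TimeSeries()
    series.metric.type = METRIC_TYPE
    series.resource.type = "gce_instance"
    series.resource.labels["project_id"] = PROJECT_ID
    series.resource.labels["instance_id"] = "1234567890123"  # assumption
    series.resource.labels["zone"] = "us-central1-a"         # assumption

    interval = monitoring_v3.TimeInterval({"end_time": {"seconds": int(time.time())}})
    point = monitoring_v3.Point(
        {"interval": interval, "value": {"int64_value": int(app_is_running())}}
    )
    series.points = [point]
    client.create_time_series(name=f"projects/{PROJECT_ID}", time_series=[series])

You could then create an alerting policy on that metric (for example, condition: value below 1, or metric absence) with an email notification channel attached.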
I also found this approach, which collects and sends the logs from the serial port to the Stackdriver logs. You will then be able to manipulate the collected data as you see fit.
I wrote some code to automate the training procedure on our company's VM instances.
As you probably know, GCP sometimes cannot provide a machine at the moment you request one and fails with an 'out of resource' exception.
So I'd like to monitor which of my machines turned on successfully and which did not.
If there is some way to show this in BigQuery, that would be great.
Thanks.
Using the Cloud Monitoring (Stackdriver) functionality is a good way to monitor all your VMs.
Here is a detailed guide to implementing Monitoring on a Compute Engine instance.
Hope you find it useful.
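Since you mentioned BigQuery: one hedged way to get the data there is to route the relevant Compute Engine audit log entries into a BigQuery dataset with a log sink, then query successes and failures with SQL. A minimal sketch, assuming the google-cloud-logging Python client; the project, dataset, sink name and filter are placeholders:

    # Sketch: create a sink that exports instance start/insert audit log
    # entries (successful calls and errors alike) to a BigQuery dataset.
    from google.cloud import logging as gcp_logging

    client = gcp_logging.Client(project="my-project")  # assumption
    destination = "bigquery.googleapis.com/projects/my-project/datasets/vm_logs"

    # Hypothetical filter: start/insert operations on GCE instances; failed
    # operations (e.g. resource exhaustion) carry a protoPayload.status.
    log_filter = (
        'resource.type="gce_instance" AND '
        'protoPayload.methodName=("v1.compute.instances.start" OR '
        '"v1.compute.instances.insert")'
    )

    sink = client.sink("vm-start-sink", filter_=log_filter, destination=destination)
    sink.create()

Note that you still need to grant the sink's writer identity access to the dataset before entries start flowing.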
You can use Google Cloud's activity logs too:
Activity logging is enabled by default for all Compute Engine projects.
You can see your project's activity logs through the Logs Viewer in the Google Cloud Console:
In the Cloud Console, go to the Logging page.
Once in the Logs Viewer, select and filter your resource type from the first drop-down list. From the All logs drop-down list, select compute.googleapis.com/activity_log to see Compute Engine activity logs.
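If you would rather query these activity logs programmatically than through the drop-downs, here is a minimal sketch assuming the google-cloud-logging Python client (the project id in the code and in the filter are placeholders):

    # Sketch: read Compute Engine activity log entries with the same filter
    # the Logs Viewer builds from the drop-down selections.
    from google.cloud import logging as gcp_logging

    client = gcp_logging.Client(project="my-project")  # assumption
    log_filter = (
        'resource.type="gce_instance" AND '
        'logName="projects/my-project/logs/compute.googleapis.com%2Factivity_log"'
    )
    for entry in client.list_entries(filter_=log_filter, page_size=10):
        print(entry.timestamp, entry.payload)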
Here is the Official documentation.
I've created a RabbitMQ Kubernetes cluster using Google's one-click deployment. I checked "Enable Stackdriver Metrics Exporter" and created the cluster. My problem is that Google is charging for every custom metric created.
I need to disable the Stackdriver Metrics Exporter.
Has anyone had the same issue and disabled this exporter? If so, how can I disable it without destroying the cluster?
If nothing else is running on this Kubernetes cluster besides RabbitMQ, you can disable the “Stackdriver Kubernetes Engine Monitoring” function of the cluster:
In the Cloud Console, go to the Kubernetes Engine > Kubernetes clusters page.
Click your cluster, then click Edit.
Set the “Stackdriver Kubernetes Engine Monitoring” drop-down value to Disabled.
Click Save.
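If you prefer an API call over the console, the same change can likely be made with the GKE client library; a hedged sketch assuming the google-cloud-container Python client, with placeholder project, location and cluster names:

    # Sketch: set the cluster's monitoring service to "none", which should be
    # the API-side equivalent of choosing Disabled in the console drop-down.
    from google.cloud import container_v1

    client = container_v1.ClusterManagerClient()
    update = container_v1.ClusterUpdate(desired_monitoring_service="none")
    client.update_cluster(
        name="projects/my-project/locations/us-central1-a/clusters/my-cluster",
        update=update,
    )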
The Logs ingestion page in the Logs Viewer tracks the volume of logs in your project. The Logs Viewer also gives you tools to disable all logs ingestion or exclude (discard) log entries you're not interested in, so that you can minimize any charges for logs over your monthly allotment.
Go to the Logs exports page, and follow this topic to manage "Logs Exports".
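If log ingestion (rather than custom metrics) turns out to be what you are charged for, exclusions can also be created programmatically. A hedged sketch, assuming the google-cloud-logging GAPIC client; the exclusion name and filter are placeholders to adapt:

    # Sketch: create a log exclusion so matching entries are discarded
    # before ingestion (and therefore not billed).
    from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
    from google.cloud.logging_v2.types import LogExclusion

    client = ConfigServiceV2Client()
    exclusion = LogExclusion(
        name="exclude-noisy-container-logs",  # hypothetical name
        # Assumption: drop sub-ERROR container logs; adapt to your needs.
        filter='resource.type="k8s_container" AND severity<ERROR',
    )
    client.create_exclusion(parent="projects/my-project", exclusion=exclusion)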
I use Google Kubernetes Engine (GKE) to deploy my service. In the cluster, I enabled Stackdriver Kubernetes Engine Monitoring instead of Legacy Stackdriver Logging and Legacy Stackdriver Monitoring. With the legacy monitoring, I could find a metric for the number of logs named "log entries". What is the corresponding metric name with Stackdriver Kubernetes Engine Monitoring?
If you go to Stackdriver Monitoring > Resources > Metrics Explorer and select "Kubernetes cluster" as the resource type, you can find and select a metric called "log_entry_count". This metric is also mentioned here.
So the metric you're asking about is still there, whether or not you create the cluster with Stackdriver Kubernetes Engine Monitoring enabled.
Furthermore, it will still collect data about the number of logs ingested.
To verify that the metric exists and actually works, I created a test cluster with a back-end service that generated some log entries, then used the "log entries" metric to count them; it worked as it should.
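To read the same metric outside Metrics Explorer, here is a minimal sketch assuming the google-cloud-monitoring Python client; the project id is a placeholder, and "k8s_cluster" is assumed to be the resource type behind the "Kubernetes cluster" selection:

    # Sketch: list the last hour of logging.googleapis.com/log_entry_count
    # time series for Kubernetes cluster resources.
    import time

    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
    )
    results = client.list_time_series(
        request={
            "name": "projects/my-project",  # assumption: your project id
            "filter": (
                'metric.type="logging.googleapis.com/log_entry_count" '
                'AND resource.type="k8s_cluster"'
            ),
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        print(series.resource.labels, len(series.points))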
I am looking to split the logs from the Stackdriver Agent (SDA) across multiple GCP projects (Stackdrivers) based on some filter. By default, the SDA targets the GCP project where it resides.
There is an SDA configuration option to set a different destination GCP project id, but only one.
As a FluentD wrapper, the SDA uses type google_cloud for the match section.
Does this mean that the only solution is to write a custom FluentD filter that relies on google_cloud and targets multiple GCP projects?
Thanks
First of all, you cannot split logs on the Stackdriver Logging agent so that they are sent to different GCP projects' Stackdrivers based on a filter. I understand that you went through the documentation [1] and want confirmation about the option "type google_cloud".
The configuration options there let you override LogEntry labels [2] and MonitoredResource labels [3] when ingesting logs into Stackdriver Logging, and "type google_cloud" applies to cloud resources of all types.
[1]: https://cloud.google.com/logging/docs/agent/configuration#label-setup
[2]: https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
[3]: https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource
If you write your own Stackdriver logger, you can do anything you want.
The Google Stackdriver logging agent (the driver) does not support streaming subsets of the logs to different Stackdriver Logs destinations (the service).
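To illustrate the "write your own logger" route, here is a hedged sketch that keeps one google-cloud-logging client per destination project and routes each record with a hypothetical filter; the project ids, logger name and routing field are all placeholders:

    # Sketch: fan records out to different projects' Stackdriver Logging from
    # your own code, since the agent only supports a single destination project.
    from google.cloud import logging as gcp_logging

    clients = {
        "team-a": gcp_logging.Client(project="project-a"),  # assumption
        "team-b": gcp_logging.Client(project="project-b"),  # assumption
    }

    def destination(record: dict) -> str:
        # Hypothetical filter: route by a "team" field in the record.
        return "team-a" if record.get("team") == "a" else "team-b"

    def write(record: dict) -> None:
        clients[destination(record)].logger("myapp").log_struct(record)

    write({"team": "a", "message": "goes to project-a"})
    write({"team": "b", "message": "goes to project-b"})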
I configured Google Stackdriver Logging on one of my GCE VMs and everything works except the log level. I used the parameter log_level in the file
/etc/google-fluentd/config.d/tomcat.conf
as described in http://docs.fluentd.org/articles/in_tail
but even then I can't view the logs at different levels in the Console Log Viewer. Is there any specific way to configure the fluentd agent for Google Cloud?
The Google Cloud output plugin for fluentd currently uses the severity field to indicate the log level. See this issue for details. There is an open request to make it configurable.
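In practice this means the record needs a field literally named severity. One hedged way to get it with the tail input is a capture group named severity, sketched below for a hypothetical Tomcat log format (the regex and paths are placeholders to adapt to your tomcat.conf):

    <source>
      @type tail
      # The google_cloud output plugin maps a record field named "severity"
      # to the LogEntry severity; a field named "log_level" is not recognized.
      # The regex and paths below are placeholders to adapt.
      format /^(?<time>\S+ \S+) (?<severity>[A-Z]+) (?<message>.*)$/
      path /var/log/tomcat/catalina.out
      pos_file /var/lib/google-fluentd/pos/tomcat.pos
      tag tomcat
    </source>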